This rant is about two common fallacies about quantum physics propagated by the orthodox community: wave/particle duality is all about position/momentum; quantum mechanics cannot be hemmed in with classical concepts.

The first does not describe wave-particle duality; rather, it describes the Uncertainty Principle as encapsulated in the mathematics of standard Quantum Theory (QT) and standard Quantum Field Theory (QFT), both of which have proven inadequate in many ways. Of course, the orthodox community will try to convince you that Bohr was right — quantum theory is a complete description of individual quanta, and Einstein was wrong — quantum theory is a descriptively incomplete ensemble (statistical) theory. They tend to confuse things even more by calling the Copenhagen Interpretation the “statistical” interpretation, where individual quanta exist in some “smeared-out” state prior to “measurement.” But statistical theories deal with ensembles, not individual entities, as Einstein well knew.

Now, Richard Feynman stated more than once that the entire mystery of Quantum Mechanics is contained in the double-slit experiment, and this is where both QT and QFT are inadequate: neither can reproduce the statistical pattern we see in the double-slit, nor can they account for the quantized momentum transfer between quanta and slit apparatus that yields said pattern.

Okay, now, the proof of wave-particle duality must be empirical, correct? As a Constructive Empiricist, I don’t believe we ever deal with 100% certainty in physics, but models and their explanatory force can be pretty convincing. With regard to wave-particle duality, contrary to the orthodox position, it was introduced by de Broglie to explain Bohr orbitals: constructive interference of the wave aspect is what prevents electrons from spiraling into the nucleus of atoms; hence, orbitals are quantized. This conjecture of de Broglie, that massive particles also have a wave aspect, was empirically supported shortly thereafter by electron scattering experiments and the Compton effect. From the Wikipedia page:

“The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton (who won the Nobel Prize for Physics in 1927),[8] established the wave–particle duality hypothesis which was a fundamental step in quantum theory.”

Okay, that’s pretty convincing proof that quantum entities have both wave and particle aspects, and this is what is so aggravating about the orthodox community. Because where are we today, in mainstream physics? The Spanish physicist Oliver Consa covers this quite well in a few papers, but especially in the conclusion to his Helical Solenoid Model of the Electron:

“Despite his initial objections, Pauli formalized the theory of spin in 1927 using the modern theory of QM as set out by Schrödinger and Heisenberg. Pauli proposed that spin, angular moment, and magnetic moment are intrinsic properties of the electron and that these properties are not related to any actual spinning motion. The Pauli Exclusion Principle states that two electrons in an atom or a molecule cannot have the same four quantum numbers. Pauli’s ideas brought about a radical change in QM. The Bohr-Sommerfeld Model’s explicit electron orbitals were abandoned and with them any physical model of the electron or the atom.”

The Helical Solenoid Model treats the electron as a superconducting LC circuit with a quantum of electric charge, e, and a quantum of magnetic flux, ϕ, with eϕ = h, h being Planck’s constant. The helical motion describes Schrödinger’s Zitterbewegung (trembling motion) and the geometry/topology explains the electron’s anomalous magnetic moment. Consa briefly summarizes the history of toroidal moments in physics and “in 1997, toroidal moment was experimentally measured in the nuclei of Cesium-133 and Ytterbium-174 [26].” In his reference [26] (not the exact paper, but probably even better), they measure the toroidal moment using state changes, and in the paper Magnetic Monopole Field Exposed by Electrons, the authors simulate a monopole field using a nano-needle with its tip poised over an aperture; they find that electrons interacting with the field actually change state, which should enable a measurement of Consa’s toroidal moment. Additionally, if you happen to be familiar with David Hestenes’ Zitter Model, Zitterbewegung as helical motion of a point charge, e, can explain a lot of the so-called quantum weirdness and provide a mechanism for quantized momentum exchange in diffraction (resonance).

Consa has recently extended his Helical Solenoid Model to a preon model in The Helicon: A New Preon Model, and has also described many problems with QED in Something is Wrong in the State of QED, while so-called anomalous heat exchange indicates something is wrong in the state of QCD; both of these situations are more than just a bit scandalous. But I find much of his preon work to be a bit conservative; Consa seems to have the same aversion to the superluminal phase velocities in the de Broglie construct as many in the orthodox community. Waves are generally constructed from groups of phase waves. The phase waves in these groups vary in amplitude, generating an envelope wave. It can be shown mathematically (see also the Feynman Lecture) that this envelope wave — the group — moves at the same velocity as the particle, while the phase waves themselves are superluminal.

Let v_w be the phase velocity and v_g the group velocity, then

v_w v_g = c^2

which makes sense given that all waves have frequency and wavelength with λν = c, i.e. the product of “space” and “time” is c. But now v_w = c^2/v_g > c and

(1/v_w)(v_w − v_g) = 1 − (v_g)^2/c^2

The square of the inverse Lorentz factor is right there in the de Broglie construct. This is related, in turn, to the relativistic entropy of the construct.
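These two relations are easy to sanity-check numerically. A minimal sketch (the function names are mine) that evaluates both sides of the identity for a subluminal group velocity:

```python
# Check of the de Broglie phase/group velocity relations:
#   v_w * v_g = c**2   and   (1/v_w)(v_w - v_g) = 1 - v_g**2/c**2

c = 299_792_458.0  # speed of light, m/s

def phase_velocity(v_g: float) -> float:
    """Superluminal phase velocity paired with group velocity v_g < c."""
    return c**2 / v_g

def check_identity(v_g: float) -> tuple[float, float]:
    """Return both sides of (1/v_w)(v_w - v_g) = 1 - v_g**2/c**2."""
    v_w = phase_velocity(v_g)
    return (v_w - v_g) / v_w, 1.0 - v_g**2 / c**2

v_g = 0.5 * c                 # a particle at half the speed of light
assert phase_velocity(v_g) > c  # the phase waves are superluminal
lhs, rhs = check_identity(v_g)
assert abs(lhs - rhs) < 1e-12   # the inverse-squared Lorentz factor appears
```

The check makes the point of the text concrete: for any v_g below c the paired phase velocity exceeds c, and the right-hand side is exactly 1/γ².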

Kevin Knuth is a highly regarded physicist who specializes in Bayesian Model Selection and MaxEnt methods. He has played a central role in creating what might be called Inference Theory, formally initiated by Edwin Jaynes, with his formulation of MaxEnt as a variational principle, and Richard Cox, with his derivation of Probability Theory as a calculus generalizing an algebra of implication. Knuth and his co-conspirators have extended these methods of Jaynes and Cox to general algebras, in many cases relevant to the foundations of physics. A good introduction to Knuth’s work in this vein is his paper, Information-Based Physics: An Observer-Centric Foundation. It was Knuth who introduced me to Hestenes’ Zitter model of fermions, and in The Problem of Motion: The Statistical Mechanics of Zitterbewegung (see a more involved treatment here) he explores the consequences of Schrödinger’s Zitterbewegung, which came about due to the velocity eigenvalues of the Dirac equation being ±c, the velocity of light. Knuth derives the relativistic velocity addition rule in 1 + 1 dimensions, developing a statistical mechanics of motion in the process. This leads to an entropy measure based on Helicity and Shannon Entropy:

S = −Pr(R) log Pr(R) − Pr(L) log Pr(L)

where Pr(X) denotes the probability of coming from direction X (Helicity). This can be expressed in relativistic terms as

S = log(2γ) − β log(1 + z)

where γ is the relativistic Lorentz factor γ = (1 − β^2)^{−1/2} and 1 + z is related to the redshift z by 1 + z = √(1 + β)/√(1 − β) for motion in the radial direction. As Knuth points out, with this relativistic entropy measure a particle at rest is MaxEnt, since Pr(L) = 1/2 = Pr(R), while a particle moving at the velocity of light minimizes entropy, with S = 0, because either Pr(L) = 1 or Pr(R) = 1 (see his short paper). In other words, Helicity disappears at the velocity of light. This is consistent with the helical models of Hestenes and Consa, where the electron is a point charge orbiting a center of mass at the speed of light with radius r = ℏ/mc. With translational velocity it traces out a helix in spacetime, the radius of the helix going to zero as the translational velocity goes to c, per the Pythagorean Theorem: c^2 = (v_t)^2 + (v_r)^2, where v_t is translational velocity and v_r rotational. This is a great candidate for explaining Ulf Klein’s result showing that the so-called Bohr Correspondence Principle does not hold in general; it doesn’t hold because ℏ → 0 as v_t → c, a rather elegant explanation.
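The two forms of S can be checked against each other numerically. A minimal sketch, assuming the identification Pr(R) = (1 + β)/2 and Pr(L) = (1 − β)/2 (the identification that makes the Shannon form and the relativistic form coincide; Knuth’s papers give the details):

```python
import math

# Knuth's relativistic entropy in 1+1 dimensions, assuming (my reading)
# Pr(R) = (1 + beta)/2 and Pr(L) = (1 - beta)/2.

def shannon_form(beta: float) -> float:
    """S = -Pr(R) log Pr(R) - Pr(L) log Pr(L)."""
    s = 0.0
    for p in ((1 + beta) / 2, (1 - beta) / 2):
        if p > 0:                      # convention: 0 * log 0 = 0
            s -= p * math.log(p)
    return s

def relativistic_form(beta: float) -> float:
    """S = log(2*gamma) - beta * log(1 + z)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    one_plus_z = math.sqrt(1.0 + beta) / math.sqrt(1.0 - beta)
    return math.log(2.0 * gamma) - beta * math.log(one_plus_z)

# A particle at rest is MaxEnt: S = log 2.
assert abs(shannon_form(0.0) - math.log(2.0)) < 1e-12

# The two forms agree away from the light-speed limit.
for beta in (0.1, 0.5, 0.9):
    assert abs(shannon_form(beta) - relativistic_form(beta)) < 1e-9

# S -> 0 as beta -> 1: Helicity disappears at the velocity of light.
assert shannon_form(1.0) == 0.0
```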

But then, given the above, these relativistic entropies go to zero, become minimum, precisely when v_w = c = v_g! And this is the key point! It is precisely these superluminal phase waves which motivate William Tiller’s deltron moiety, his dual-space reference frame, which separates a distance/time dependent domain (spacetime) from a frequency domain (wave domain), and his PsychoEnergetics. As Tiller points out, these phase waves contain all of the information about these de Broglie particle/pilot-wave constructs and, via the relation between information and entropy, made explicit by Claude Shannon in 1948, there is a thermodynamic free energy exchange going on here in apparent conflict with Einstein’s Special theory. Tiller resolves this with his deltron moiety, a coupling field with variable coupling strength which couples the spacetime domain to the wave domain, and it would seem a natural conjecture that this relativistic entropy is related to Tiller’s coupling field, i.e. the information resolution, hence the entropy, is a function of the velocity difference between the pilot wave and its phase waves! This is a true form of holography, and I think it has the potential to explain quantized momentum exchange in diffraction more completely than Hestenes’ resonance.

Hopefully this is clear. It probably isn’t without reading the linked papers. And both Consa and Hestenes are dealing with semi-classical models, i.e. “classical” concepts, whatever that even means. But none of it is made any clearer by the orthodox community, and they all do it, spewing forth the biggest bunch of nonsense.

 

by Wes Hansen

 

“Deconstruction was a basic device of Derrida’s post-structuralism. It asserts undecidability, the endless deferral of meaning. [T]his debilitating word-play is an attempt to defeat any notion of foundational truth or meaning; it trivializes any such pursuit as baseless.” 

—John Zerzan, Lévi-Strauss Revisited, Graffiti #8 [1] 

Technically speaking, undecidability relates to syntax – Proof Theory, not semantics – Model Theory. But if we accept the rather straightforward first-order proofs of Soundness – every formal sentence which has a proof is true in every model, and Completeness – every formal sentence true in every model has a proof, then syntax and semantics are, at the very least, complementary; there is a logical equivalence between proof and truth, at least on the first-order level. So, what, then, is the foundation of undecidability? Historically speaking, this would be Kurt Goedel’s infamous 1931 paper, On Formally Undecidable Propositions of Principia Mathematica and Related Systems [2], and it is easily shown that this paper, which is also the foundation for a 90-year-old Proof Theory, suffers from numerous logical gaps and outright omissions. In essence, it is a poorly argued, though creative, Platonist polemic directed at Ludwig Wittgenstein and the Logical Positivists. The irony here is that this Platonist polemic led rather directly to post-structuralism, and even informs, to a large degree, the debilitating secular humanism of post-modernism! This, the intellectual dishonesty surrounding Goedel’s infamous work, is the subject of my rant.

 In Section 1 of his infamous paper [2], Goedel states (page 38): 

“For metamathematical purposes it is naturally immaterial what objects are taken as basic signs, and we propose to use natural numbers for them.”

The key idea expressed here is that the metamathematical analysis, at least as it is constrained to Proof Theory, is wholly and entirely syntactical; hence, the basic signs need be nothing more nor less than distinguishable placeholders. Semantics – meaning – plays no role. Is this premise, taken for granted by many, true? Goedel presents no support whatsoever in his paper, and I can quite easily show that his paper, and the history since, resoundingly refute the premise.

Goedel’s “Proof” 

Goedel begins his formal argument by specifying his system P, which “is essentially the system obtained by superimposing on the Peano Axioms the logic of PM” (Principia Mathematica). He then, beginning on page 46 of [2], introduces “a parenthetic consideration having no immediate connection with the formal system P,” this being his definition of recursively defined number-theoretic functions (relations) and recursive number-theoretic functions (relations). On pages 47 and 48 he provides and discusses five Propositions related to recursive functions (relations) and uses these to confirm that each of the 1–45 functions (relations) listed on pages 49–55 is recursive. Relevant to the present discussion, item #31 on page 53, Sub(x (v, y)), which simply says to replace the free variable v in the formula x by the entity y, is recursive. But nowhere in his paper does he discuss the reflexivity of this function, i.e. Sub(x (v,┌x┐)), where ┌x┐ is the Goedel number encoding the formula x itself. Given the central role this function, and its reflexivity, play, not only in his “proof” of Proposition VI but in his entire project (it IS his undecidable Proposition), this is a very curious omission.
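To fix ideas, here is a toy, string-level stand-in for the substitution function. Goedel’s actual Sub operates on Goedel numbers by primitive recursion, so this sketch (all names are mine) only illustrates the diagonal move Sub(x (v,┌x┐)):

```python
# A toy stand-in for Goedel's Sub(x (v, y)): replace the free variable v
# in formula x by the term y. This is NOT Goedel's arithmetized Sub; it
# just illustrates the reflexive substitution discussed in the text.

def sub(x: str, v: str, y: str) -> str:
    """Replace every occurrence of variable v in formula x by y (one pass)."""
    return x.replace(v, y)

def quote(x: str) -> str:
    """Stand-in for the Goedel number of x: here, just a quoted copy."""
    return '"' + x + '"'

formula = "forall w: not Proof(w, v)"   # v free, w bound
diag = sub(formula, "v", quote(formula))
assert diag == 'forall w: not Proof(w, "forall w: not Proof(w, v)")'
```

Note that the quoted copy still contains an occurrence of v, which is exactly the feature the argument below turns on.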

In his “proof” of Proposition VI, the function Sub(y (19,┌y┐)) enters in relation 8.1, page 58 

Q(x, y) ≡ ¬(x B_c(Sub(y (19,┌y┐)))) 

and with very little discussion he claims that ¬(x B_c(Sub(y (19,┌y┐)))) is recursive; hence, per his discussion on page 56, there is a recursive 2-place relation sign q such that (his formulas 9 and 10, page 58)

¬(x B_c(Sub(y (19,┌y┐)))) → Bew_c(Sub(q (17,┌x┐)(19,┌y┐))); and,

x B_c(Sub(y (19,┌y┐))) → Bew_c(¬ Sub(q (17,┌x┐)(19,┌y┐))).

He then defines the recursive (yes, it is recursive provided Sub(y (19,┌y┐)) is recursive, see page 56, but that IS the question) 1-place class-sign (his formula 11, page 58) 

p = 17 Gen q. 

This is recursive provided his Sub(y (19,┌y┐)) is recursive per his item #15 on page 51. He next reduces Sub(y (19,┌y┐)) by p defining the recursive 1-place class-sign (his formula 12, page 58) 

r = Sub(q (19,┌p┐)). 

He then derives the all-important syntactical identity (his derivation 13, page 58) 

Sub(p (19,┌p┐)) = Sub((17 Gen q)(19,┌p┐)); 

= 17 Gen Sub(q (19,┌p┐)); 

= 17 Gen r. 

And this, then, is the formal sentence that he proves is undecidable from his limited class c using the system P.  

Okay, the problem really enters with his relation 8.1, because it includes Sub(y (19,┌y┐)), but we’ll look at Sub(p (19,┌p┐)), since p is specifically defined as 17 Gen q and, per Goedel’s own derivation, Sub(p (19,┌p┐)) is syntactically identical to 17 Gen r, the undecidable sentence. Goedel’s claim, quoted above, is that “for metamathematical purposes it is naturally immaterial what objects are taken as basic signs,” so, to test this, we’ll use p and 17 Gen q showing 

Sub(p (19,┌p┐)) = Sub((17 Gen q(17,19))(19,┌p┐));

= 17 Gen q(17, 17 Gen q(17, 17 Gen q(17,…))). 

And we see here that we INEVITABLY end up with two possible situations, if we give a Universal Turing machine Sub(p (19,┌p┐)) (equivalently, 17 Gen r) as input: 

Case 1: The Universal Turing Machine does not halt when given Sub(p (19,┌p┐)) as input, because it continuously finds free variables 19 that it replaces by a proposition including free variable 19. Then it leads immediately to a nested regress and its very existence in system P is impossible without supplementing system P with, say, Peter Aczel’s Anti-Foundation Axiom, i. e. it is not recursive, hence, does not exist in Goedel’s system P, as defined. 

Case 2: The Universal Turing Machine does halt when given Sub(p (19,┌p┐)) as input, because it doesn’t acknowledge the free variable 19 in the proposition being substituted. Then r = Sub(q (19,┌p┐)) is NOT a class-sign because it still contains two free variables, 17 and 19, which means 17 Gen r is not a sentence, in that IT contains the variable 19 free, and it makes no sense to discuss decidability without additional information [3].   
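The two cases can be played out with naive string substitution. This toy sketch (my own construction, not Goedel’s arithmetic) just shows the behavioral fork: re-scanning the output for the free variable never halts on a self-referential substitution, while a single pass halts but leaves the inner occurrence of the variable free:

```python
# Toy illustration of the two cases above, using naive string substitution.

def sub_once(x: str, v: str, y: str) -> str:
    """Case 2: one-pass substitution; halts, but inner v survives inside y."""
    return x.replace(v, y)

def sub_repeated(x: str, v: str, y: str, max_steps: int = 10) -> tuple[str, bool]:
    """Case 1: keep substituting while v reappears; report whether we halted."""
    for _ in range(max_steps):
        if v not in x:
            return x, True          # halted: no free v left
        x = x.replace(v, y)
    return x, False                 # still growing after max_steps: no halt

p = "17 Gen q(17, v)"               # v plays the role of free variable 19
result, halted = sub_repeated(p, "v", p)
assert not halted                   # self-substitution regresses without end

once = sub_once(p, "v", p)
assert once == "17 Gen q(17, 17 Gen q(17, v))"
assert "v" in once                  # the result still contains v free
```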

In both cases, Goedel’s “proof” is rendered, well, meaningless, and the FACT that this logical error has (cough, hack, spit) remained “undiscovered” for 90+ years calls into question the validity of Goedel’s methods in general, certainly his arithmetization of first-order logic, the very foundation of Proof Theory. Okay, let’s be honest, this is intellectual dishonesty on a global scale; certainly Derrida was guilty of it. I mean, it’s not like I just happen to be smarter than everyone else, Goedel included. And if you find this episode of intellectual dishonesty – Academy wide – profoundly disturbing, then you should read my rant about so-called “quantum computation.” I mean, what a scam! Everything funded by SPACs and taking place on “the cloud.”

  1. https://graffiti-magazine.com/
  2. https://monoskop.org/images/9/93/Kurt_G%C3%B6del_On_Formally_Undecidable_Propositions_of_Principia_Mathematica_and_Related_Systems_1992.pdf
  3. http://euclid.trentu.ca/math/sb/pcml/pcml-16.pdf

 

  

by Kevin Graves

The publisher wants rants, well here’s a rant. “Privileged White Guy Rant”, coming at you from the mean streets of Eugene, Oregon.

Why do guys drive those insanely jacked up trucks? Not the kind of truck one shows at a car show a few times a year, but I’m talking daily drivers. Do they not know that approximately 98% of the world looks at those trucks and assumes the driver has a small penis? It’s like having a giant bumper sticker that says, “I’M FUCKING COMPENSATING!!!”

And why, inevitably, do these same guys park facing out in their parking spaces, instead of facing in, like everyone else? Are they like Clark Griswold, wanting to make sure they’re the first one out at the end of the day while all those forward-parkers are fighting to get out of their normal, pedestrian, ordinary parking spots? The losers. I know, I know, some huge percentage of wrecks happen in parking lots, so you could maybe sorta kinda make a case that parking like an asshole is safer than what everyone normal does. But really, these guys are concerned with safety? In those giant, oversized death machines? Safety is not first.

And lastly, about these same tiny-dicked-backward-parking-assholes: why is it that these guys not only lift their trucks up like a kid in puberty wearing pants that are three inches off his shoes, but throw on million-dollar tires and rims that, BY DEFINITION, lower the gas efficiency of the truck? And then these same guys are the first and loudest ones bitching about gas prices!

Yeah, I know what you’re thinking, asking humans to be consistent is like asking a bear to not shit in the woods, but when I find myself being inconsistent, I either change my ways, or shut the fuck up. If I can do it, so can they.