Yesterday I saw with disappointment a new paper on the arXiv by Hossenfelder and Palmer, Rethinking Superdeterminism. There they argue that physics took a wrong turn when it summarily dismissed superdeterminism, which they claim is instead a solution to the conundrum of nonlocality and the measurement problem.
No. It’s not. It’s a completely sterile idea. I’ll show why, by fleshing out the calculations of the smoking and cancer example they quote in the paper, and then examining the case of the Bell test.
Let’s suppose you do the appropriate randomized trial, and measure the conditional probabilities1
\[ p(\text{cancer}|\text{smoke}) = 0.15\quad\text{and}\quad p(\text{cancer}|\neg\text{smoke}) = 0.01,\]a pretty damning result. A tobacco company objects to the conclusion, saying that the genome of the subjects was correlated with whether you forced them to smoke2, such that you put more people predisposed to have cancer in the smoking group.
It works like this: the law of total probability says that
\[ p(a|x) = \sum_\lambda p(\lambda|x)p(a|x,\lambda),\] where in our case $a \in \{\text{cancer},\neg\text{cancer}\}$, $x \in \{\text{smoke},\neg\text{smoke}\}$, and $\lambda \in \{\text{predisposed},\neg\text{predisposed}\}$ is the hidden variable, in this case the genome determining whether the person will have cancer anyway. The tobacco company says that your results are explained by the conspiracy $p(\text{predisposed}|\text{smoke}) = 0.15$ and $p(\text{predisposed}|\neg\text{smoke}) = 0$, from which we can calculate the actual cancer rates to be
\begin{gather*}
p(\text{cancer}|\text{smoke},\neg\text{predisposed}) = 0 \\
p(\text{cancer}|\neg\text{smoke},\neg\text{predisposed}) = 0.01,
\end{gather*}so the same data indicates that smoking prevents cancer! If you assume, though, that $p(\text{predisposed}|\text{smoke}) = p(\text{predisposed}|\neg\text{smoke})$, then the absurd conclusion is impossible.
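The tobacco company's arithmetic is easy to check with the law of total probability. Here is a minimal sketch in Python, assuming (as the conspiracy requires) that predisposed subjects get cancer with certainty, i.e. $p(\text{cancer}|x,\text{predisposed})=1$:

```python
# Law of total probability: p(a|x) = sum_λ p(λ|x) p(a|x,λ).
# Assumption (the conspiracy's): predisposed subjects always get cancer,
# i.e. p(cancer|x, predisposed) = 1 for both groups.
p_cancer_given = {"smoke": 0.15, "no_smoke": 0.01}   # measured rates
p_pred_given   = {"smoke": 0.15, "no_smoke": 0.0}    # the alleged conspiracy

results = {}
for x in ("smoke", "no_smoke"):
    p_pred = p_pred_given[x]
    # Solve p(cancer|x) = p_pred * 1 + (1 - p_pred) * p(cancer|x, not predisposed)
    results[x] = (p_cancer_given[x] - p_pred) / (1 - p_pred)
    print(x, results[x])  # smoke: 0.0, no_smoke: 0.01
```

Among the non-predisposed, the cancer rate is 0 for smokers and 0.01 for non-smokers, reproducing the absurd conclusion above.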
With this example I want to illustrate two points: first, that assuming $p(\lambda|x) \neq p(\lambda)$ is just a generic excuse to dismiss any experimental result that you find inconvenient, be it that smoking causes cancer or that Bell inequalities are violated. Second, that without assuming $p(\lambda|x) = p(\lambda)$ 3 you can’t conclude anything from your data.
In their paper, Hossenfelder and Palmer dismiss this example as merely classical reasoning that is not applicable to quantum mechanics. It’s not. One can always use the law of total probability to introduce a hidden variable to explain away any correlation, whether it was observed in classical or quantum contexts. Moreover, they claim that while $p(\lambda|x) = p(\lambda)$ is plausible in classical contexts, it shouldn’t be assumed in quantum contexts. This is laughable. I find it perfectly conceivable that tobacco companies would engage in conspiracies to fake results related to smoking and cancer, but to think that Nature would engage in a conspiracy to fake the results of Bell tests? Come on.
They also propose an experiment to test their superdeterministic idea. It is nonsense, as any experiment about correlations is without the assumption that $p(\lambda|x) = p(\lambda)$. Of course, they are aware of this, and they assume that $p(\lambda|x) = p(\lambda)$ would hold for their experiment, just not for Bell tests. Superdeterminism for thee, not for me. They say that when $x$ is a measurement setting, changing it will necessarily cause a large change in the state $\lambda$, but if you don’t change the setting, the state $\lambda$ will not change much. Well, but what is a measurement setting? That’s a human category, not a fundamental one. I can just as well say that the time at which the experiment is performed is the setting, and therefore repetitions of the experiment done at different times will probe different states $\lambda$, and again you can’t conclude anything about it.
Funnily, they say that “…one should make measurements on states prepared as identically as possible with devices as small and cool as possible in time-increments as small as possible.” Well, doesn’t this sound like a very common sort of experiment? Shouldn’t we have observed deviations from the Born rule a long time ago then?
Let’s turn to how superdeterministic models dismiss violations of Bell inequalities. They respect determinism and no action at a distance, but violate no conspiracy, as I define here. The probabilities can then be decomposed as
\[ p(ab|xy) = \sum_\lambda p(\lambda|xy)p(a|x,\lambda)p(b|y,\lambda),\]and the dependence of the distribution of $\lambda$ on the settings $x,y$ is used to violate the Bell bound. Unfortunately Hossenfelder and Palmer4 do not specify $p(\lambda|xy)$, so I have to make something up. It is trivial to reproduce the quantum correlations if we let $\lambda$ be a two-bit vector, $\lambda \in \{(0,0),(0,1),(1,0),(1,1)\}$, and postulate that it is distributed as
\[p((a,b)|xy) = p^Q(ab|xy),\] where $p^Q(ab|xy)$ is the correlation predicted by quantum mechanics for the specific experiment, and the functions $p(a|x,\lambda)$ and $p(b|y,\lambda)$ are given by
\[p(a|x,(a’,b’)) = \delta_{a,a’}\quad\text{and}\quad p(b|y,(a’,b’)) = \delta_{b,b’}.\] For example, if $p^Q(ab|xy)$ is the correlation maximally violating the CHSH inequality, we would need $\lambda$ to be distributed as
\[ p((a,b)|xy) = \frac14\left(1+\frac1{\sqrt2}\right)\delta_{a\oplus b,xy}+\frac14\left(1-\frac1{\sqrt2}\right)\delta_{a\oplus b,\neg(xy)}.\]The question is, why? In the quantum mechanical case, this is explained by the quantum state being used, the dynamical laws, the observable being measured, and the Born rule. In the superdeterministic theory, what? I have never seen this distribution even be mentioned, let alone justified.
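As a sanity check, the decomposition with this $\lambda$-distribution and the deterministic response functions above can be verified numerically to reproduce the Tsirelson value $2\sqrt2$. A minimal Python sketch (the encoding of settings and outcomes as bits is my own convention):

```python
from itertools import product
from math import sqrt

# λ is a two-bit vector (a', b'); p(λ|xy) is the distribution quoted above.
def p_lambda(lam, x, y):
    a, b = lam
    c = 1 / sqrt(2)
    return 0.25 * (1 + c) if (a ^ b) == (x & y) else 0.25 * (1 - c)

# Deterministic responses p(a|x,λ) = δ_{a,a'} and p(b|y,λ) = δ_{b,b'} make
# the decomposition p(ab|xy) = Σ_λ p(λ|xy) p(a|x,λ) p(b|y,λ) collapse to:
def p_joint(a, b, x, y):
    return sum(p_lambda((ap, bp), x, y)
               for ap, bp in product((0, 1), repeat=2)
               if ap == a and bp == b)

def correlator(x, y):
    return sum((-1) ** (a ^ b) * p_joint(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

S = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(S)  # 2√2 ≈ 2.8284, the maximal quantum violation, by construction
```

The "success" is entirely put in by hand: the λ-distribution simply copies the quantum correlations.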
More importantly, why should this distribution be such that the superdeterministic correlations reproduce the quantum ones? For example, why couldn’t $\lambda$ be distributed like
\[ p((a,b)|xy) = \frac12\delta_{a\oplus b,xy},\] violating the Tsirelson bound?5 Even worse, why should the superdeterministic distributions respect even no-signalling? What stops $\lambda$ being distributed like
\[ p((a,b)|xy) = \delta_{a,y}\delta_{b,x}?\]
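Nothing in the superdeterministic decomposition itself forbids either distribution, which is easy to check numerically. A minimal Python sketch (bit encoding of settings and outcomes is my own convention): the first distribution gives CHSH value 4, and the second makes Alice's marginal depend on Bob's setting, i.e. it signals.

```python
from itertools import product

def E(p, x, y):
    # Correlator E(x,y) = Σ_ab (-1)^(a⊕b) p(ab|xy)
    return sum((-1) ** (a ^ b) * p(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

# PR-box distribution: p((a,b)|xy) = 1/2 δ_{a⊕b, xy}
pr = lambda a, b, x, y: 0.5 if (a ^ b) == (x & y) else 0.0
# Signalling distribution: p((a,b)|xy) = δ_{a,y} δ_{b,x}
sig = lambda a, b, x, y: 1.0 if (a == y and b == x) else 0.0

S = E(pr, 0, 0) + E(pr, 0, 1) + E(pr, 1, 0) - E(pr, 1, 1)
print(S)  # 4.0, well beyond the Tsirelson bound of 2√2

# Alice's marginal under sig depends on Bob's setting y: signalling.
marg_a = lambda a, x, y: sum(sig(a, b, x, y) for b in (0, 1))
print(marg_a(0, 0, 0), marg_a(0, 0, 1))  # 1.0 0.0
```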
In their paper, Hossenfelder and Palmer define a superdeterministic theory as a local, deterministic, reductionist theory that reproduces quantum mechanics approximately. I’m certain that such a theory will never exist. Its dynamical equations would need to correlate 97,347,490 human choices with the states of atoms and photons in 12 laboratories around the planet to reproduce the results of the BIG Bell test. Its dynamical equations would need to correlate the frequency of photons emitted by other stars in the Milky Way with the states of photons emitted by a laser in Vienna to reproduce the results of the Cosmic Bell test. Its dynamical equations would need to correlate the bits of a file of the movie “Monty Python and the Holy Grail” with the state of photons emitted by a laser in Boulder to reproduce the results of the NIST loophole-free Bell test. It cannot be done.
Mateus,
First, let’s define superdeterminism.
In Bell’s paper:
On the Einstein Podolsky Rosen Paradox
J. S. Bell, Physics Vol. 1, No. 3, pp. 195-200
DOI:https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195
we read at the beginning of page 196:
“The vital assumption [2] is that the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b”
This is the so-called independence assumption, or “free-will” assumption. Superdeterminism is the denial of the independence assumption, in other words:
A deterministic theory that, when applied to a Bell-type experiment, implies that the hidden variable corresponding to particle A is not independent of detector B’s setting, and that the hidden variable corresponding to particle B is not independent of detector A’s setting, is a superdeterministic theory.
Now let me give you several refutations of your “Tobacco” argument.
Refutation A. Your argument is based on circular reasoning. You assume that superdeterminism is false in order to show that superdeterminism is unscientific.
Explanation: The point of superdeterminism is to reproduce QM’s predictions. There are two logical possibilities:
1. It is not possible to reproduce QM’s predictions with a superdeterministic theory.
2. It is possible to reproduce QM’s predictions with a superdeterministic theory.
If you think you can provide evidence for 1. I would be interested to see it. This evidence can only be in the form of a mathematical incompatibility between superdeterminism and QM’s formalism.
If you cannot provide evidence for 1., you need to address 2. The implication of option 2. is that our superdeterministic theory gives exactly the same predictions as QM. So, unless you want to argue that QM itself predicts that “the genome of the subjects was correlated with whether you forced them to smoke”, your argument is dead.
Conclusion: You assume that superdeterminism must be wrong (option 1.) and you conclude that such a wrong theory is unscientific.
Refutation B. Your argument is not sound.
Your argument has the following structure:
P1: Superdeterminism is true.
P2: If superdeterminism is true, then statistical data does not show that smoking causes cancer in some patients.
P3: Statistical data shows that smoking causes cancer in some patients.
So, if you agree that P3 is true (as I do), it follows that P1 is false.
Let’s reformulate P2 using the above provided definition:
P2′: If, in a Bell-type experiment, the hidden variable corresponding to particle A is not independent of detector B’s setting and the hidden variable corresponding to particle B is not independent of detector A’s setting, it follows that “the genome of the subjects was correlated with whether you forced them to smoke”.
I see no shred of evidence in your post linking the behavior of entangled elementary particles with the human genome and the tendency of smoking. I reject therefore P2, so the conclusion does not follow.
Refutation C. Bell’s independence assumption is false in any theory with long range forces, like classical EM, General Relativity, Newtonian gravity, etc. Any such theory that is also deterministic will be “superdeterministic” in Bell’s sense. All these theories imply correlations between distant systems as a result of those long range fields/forces. Such examples abound: planets in planetary systems, stars in a galaxy, electrons in a wire, atoms in a crystal, etc. So, if your argument is correct you need to make the claim that all the above enumerated theories are unscientific, which is absurd.
OK, let me now address explicitly some of your claims:
“p(λ|x)=p(λ) is plausible in classical contexts” – this is from the paper but this is false. In any classical theory with long range forces there will be some variables (not all) where this assumption is false (see also Refutation C).
“why should this distribution be such that the superdeterministic correlations reproduce the quantum ones?”
You are confusing a class of theories (superdeterministic ones) with specific implementations of such theories. It’s possible to invent superdeterministic theories that give different predictions from QM, just like it’s possible to invent field theories that give different predictions from GR or classical EM. Is this an argument against the class of field theories? I say no. The question is not whether any superdeterministic theory you can imagine gives QM’s predictions, but whether there exists at least one that does. Let me provide you with such a theory which, I think, has a good chance of being on the right track. It’s called stochastic electrodynamics (SED). You can find an introductory text here:
Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory
Timothy H. Boyer, Atoms 2019, 7(1), 29; https://doi.org/10.3390/atoms7010029
It’s on arxiv:
https://arxiv.org/pdf/1903.00996.pdf
“Its dynamical equations would need to correlate 97,347,490 human choices with the states of atoms and photons in 12 laboratories around the planet to reproduce the results of the BIG Bell test.”
I fail to see your argument here. This is true for all physical laws, like conservation laws. They apply everywhere irrespective of the perceived complexity of the system. GR does not fail because there are many stars, and QM does not fail because there are many atoms. I see here just an appeal to emotion.
In the end I have a question for you. How do you explain the perfect anti-correlation observed in an EPR-Bohm experiment (spin-entangled particles) when the measurements are performed with similar detector orientation? Important: I need a local explanation.
Dear Andrei,
No, it’s not. This is Bell’s locality definition. But it doesn’t matter what Bell wrote; I’m criticizing Hossenfelder and Palmer’s paper based on their own definitions. Read the paper. There they define (correctly) superdeterminism as the conjunction of determinism and violation of statistical independence, that is, failure of $p(\lambda|x)=p(\lambda)$. The assumption of determinism is uncontroversial, so I’m focusing my criticism here on the violation of statistical independence.
You haven’t understood my tobacco argument. First of all, I do state explicitly in the post that it is possible to reproduce QM’s predictions with a superdeterministic theory. Incidentally, the problem with superdeterminism is precisely that it can reproduce anything at all. I don’t see what this has to do with the tobacco argument, though.
The logic of the argument is as follows: if $p(\lambda|x) \neq p(\lambda)$, then you can’t conclude anything from the data about smoking and cancer. On the other hand, if $p(\lambda|x)=p(\lambda)$, then the data implies that smoking causes cancer.
Nonsense. None of these theories is superdeterministic (Newtonian gravity does violate Bell locality, as it is not relativistic).
I’m just illustrating the difficulty of the problem. How on Earth are you going to produce these conspiratorial correlations via a dynamical equation in such a complex system? Nobody has ever managed to do it. And nobody ever will.
Here is the explanation.
Dr Araujo has not read Sabine Hossenfelder’s and my paper quite carefully enough.
We do present a model where the Statistical Independence assumption is violated. To be specific, in Section 5.1 we consider a model (see reference 34 of our paper for mathematical details) which has the following properties:
1) Each entangled particle pair is labelled by a unique lambda. If you like, the lambda is like a passport number for that particle pair.
2) For a CHSH experiment on an ensemble of lambda values, if rho(lambda|XY) is non-zero, then rho(lambda|XY’) = rho(lambda|X’Y) = 0.
Here X’ = 0 if X = 1 and vice versa, where X (and Y) denote the binary choices for setting the CHSH polarisers. From 2), statistical independence is violated. Condition 2) is a nontrivial consequence of the fact that, in the model, Hilbert states are required to have rational squared amplitudes and phases.
From 1), if a particular lambda pair was measured with X and Y settings, then measuring it with X and Y’ settings, or with X’ and Y settings, is counterfactual. Hence condition 2) says nothing about the statistical independence of different ensembles of particles in the real world, subject, say, to different measurement settings. It is only a statement about whether certain specific counterfactual measurements are allowed by the model. That is to say, the issues about tobacco trials and so on are completely irrelevant to the way this particular model violates statistical independence.
It is for this reason that throughout the paper we go out of our way to try to distinguish free choice and causality from a purely space-time perspective, with free choice and causality from a counterfactual perspective. We are claiming that there are deep reasons to treat these approaches as inequivalent. Dr Araujo seems to have missed this key point about our paper.
In conclusion, by talking about tobacco trials and such like, Dr Araujo (and other commentators who dismiss superdeterminism so superficially) throw the baby out with the bathwater. We are claiming that one needs to be extremely careful when draining the bath: to repeat, in the model proposed the only place where violation of statistical independence is relevant is when considering very specific counterfactual measurements which occur when trying to interpret very specific quantum experiments (like, but not exclusively, CHSH).
Indeed, this conclusion answers the question often raised as to why counterfactual reasoning is so useful in general. The answer is that the specific types of counterfactuals rejected by this model (ones where Hilbert states might have irrational squared amplitudes or phases) only arise when trying to interpret these specific quantum experiments. The number-theoretic constraints have no relevance at all in more day-to-day counterfactual situations, e.g. in asking whether Bigfoot would have left a footprint in the snow, had he not been there earlier in the day.
Tim Palmer
Prof. Palmer has not read my blog post quite carefully enough.
I never claim you don’t present a model. What I do claim is that you don’t specify the distribution $p(\lambda|xy)$ (and the functions $p(a|x,\lambda)$ and $p(b|y,\lambda)$) that allows your superdeterministic model to violate a Bell inequality. If you can specify it, please do tell me, and I’ll update the blog post accordingly.
Indeed, you only violate statistical independence in the CHSH case, and I’m criticizing you specifically for doing that. You assume statistical independence when it is convenient, and violate it when it is not. Having an ad-hoc model is no excuse for that; I can also come up with an ad-hoc model to dismiss any correlation I want. I would take your model seriously if you could actually derive the violation of statistical independence in some situations but not others from a dynamical law. But you can’t.
Mateus,
“Nonsense. None of these theories is superdeterministic (Newtonian gravity does violate Bell locality, as it is not relativistic).”
Can you please provide a refutation of Andrei’s argument that all field style classical theories are superdeterministic? I have seen this repeated by Andrei several times, but have not seen anyone challenge it. Just calling it ‘nonsense’ is not a refutation…
Also, I think it would behoove all involved to stop with the sniping and concentrate on actual models to get to the bottom of whatever disagreements there actually are. Too much ego defending and not enough trying to come to a meeting of the minds… what are the actual factual disagreements…
Andrei,
“Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory -> https://arxiv.org/pdf/1903.00996.pdf”
You reference this as an example of an actual superdeterministic model that can explain the correlations that Mateus challenges… ie, the BIG Bell test… but a cursory glance at that paper does not indicate it has anything to do with Bell or superdeterminism or that test… what am I missing?
Is this just another instance of you claiming that all field theories are superdeterministic?
I think it would be good to actually focus on _real concrete models_ and precise definitions otherwise I am afraid that people are just talking past each other.
manyoso,
The burden of proof lies with him, not with me. The claim is so ridiculously wrong that it is not worth discussing. If you know the first thing about superdeterminism (and the classical theories) you know it’s nonsense.
I did explain it to Andrei here, but the explanation fell on deaf ears.
Tim Palmer,
In your paper you and Dr. Hossenfelder reference Chaitin’s Incompleteness Theorem. I do not understand the intent of this reference. My initial impression was that you were trying to say that for any universal theory (dynamical laws + initial conditions) that the uncomputability of Kolmogorov complexity means that we can never be sure there is not an equivalent reformulation that is simpler. But what does this have to do with whether the correlations are encoded in the initial conditions or the dynamical laws?
It seems to me that what everyone is really arguing about here is whether it is permissible to encode Bell violations into the initial conditions vs how it is done now via the dynamical laws.
From what I can tell… the superdeterminism skeptics are claiming that any universal theory (by this I mean one capable of predicting/computing all current QM results) where Bell violations are encoded in the initial conditions rather than given by the dynamical laws will NECESSARILY be:
a) More complex than any theory where they are encoded in the dynamical laws
b) Unscientific… since it is always possible to encode the meat of whatever problem you are trying to explain away into the initial conditions
And superdeterminism fans are claiming that this is not necessarily so, and it may be possible to find concrete superdeterministic models that are more parsimonious than encoding the Bell violations in the dynamical laws.
Frankly, I think that to resolve any of the above in a satisfactory manner, actual concrete superdeterministic models that can both reproduce QM predictions *and* explain Bell violations are needed. Mateus here has thrown down the gauntlet and said producing such a theory is impossible. I’m also skeptical, but not nearly so sure, and the best refutation would be to produce what he says is impossible.
Mateus, I get that you are frustrated, but I am _personally asking_ because even though you find it obviously ridiculous I’d like to see it spelled out. To be clear, I also feel it is a dubious claim, but have not been able to articulate *why* it is dubious.
Everyone here is intelligent and if we can buckle down the frustration and try in good faith where people disagree we might all _learn_ something. I have to believe that is still possible among intelligent people of good faith.
manyoso,
I did link you my comment with the explanation. What’s wrong with it?
Mateus, there was nothing wrong with it and I just went through and read more or less the whole thread. It helped me understand the difference between Andrei and you. In particular, it really helped me understand what Andrei is claiming. Namely, this:
“My claim was that CM is superdeterministic in the sense that the assumption of statistical independence required by Bell’s theorem contradicts the formalism of the theory.”
I am happy to provide more details about my model and to specify the distributions. But to make sense of these I need to give some background. Motivated by fractal attractors in nonlinear dynamical systems theory, the model is based on the concept that a state-space trajectory, or history, comprises, under suitable magnification, a helix of trajectories. Under further magnification, each of these more elemental trajectories itself comprises a further helix, and so on. Unravelling a piece of rope provides a simple analogy. Under interaction with the environment (decoherence) the trajectories of a helix unravel and cluster into distinct regimes (corresponding to measurement eigenstates). As discussed in Reference 34 (https://arxiv.org/abs/1804.01734), one can describe a helix of trajectories statistically using complex Hilbert vectors and tensor products where individual trajectories are labelled symbolically by the cluster to which they evolve. A critical feature of this Hilbert state representation of the helix is that the squared amplitudes and phases of the complex Hilbert states must be rational numbers.
Property 2) in my earlier email is a nontrivial consequence of this model. It arises from number-theoretic properties of trigonometric functions (ultimately due to the fact that if x is a rational not equal to zero, exp(x) is irrational). My claim is that these number theoretic properties are only ever pertinent when interpreting quantum no-go types of experiments. For example these number theoretic issues arise when interpreting in my theory, Bell and GHZ (see ref 34) and PBR (work unpublished) as well as simple complementarity properties of single quantum particles (ref 34 and see below).
I can simplify property 2) by writing Z=X+Y mod 2. Then 2) becomes
2) If rho(lambda|Z) = 1/N, where 1/N is a simple normalisation factor over an N-member ensemble, then rho(lambda|Z’) = 0. Here (since you ask me to specify the distribution) I am assuming a simple uniform measure on my finite ensemble of particles.
Once again, I want to emphasise that 2) is not an assumption; it’s a nontrivial consequence of number theory and the requirement in my theory that realistic Hilbert states must have rational squared amplitudes and phases.
With this, we can address the factorisation issue in the Bell Theorem, which for a deterministic theory (and I insist that we deal with a deterministic theory) can be written
3) A_(XY)(lambda)=A_X(lambda)
4) B_(YX)(lambda)=B_Y(lambda)
To see that 3) and 4) follow from 2), note that according to 2), any sample space Lambda can be divided into disjoint subsets Lambda(Z=0) and Lambda (Z=1) depending on the value of Z. If some specific lambda in 3) belongs, say, to Z=0, then knowing X and lambda determines Y and hence the Y value on the left-hand side of 3) is redundant; hence the equality. Similarly for 4).
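The logic of 2) implying 3) and 4) can be illustrated with a toy check (my own encoding, not from the paper): if rho(lambda|XY) is supported only on lambdas whose label Z equals X⊕Y, then on that support X together with lambda fixes Y, so the Y-dependence in 3) is indeed redundant.

```python
# Toy sample space: each λ carries a definite parity label Z.
lambdas = [{"id": i, "Z": i % 2} for i in range(8)]

def support(lams, X, Y):
    # Palmer's condition 2): rho(λ|XY) is non-zero only if Z(λ) = X ⊕ Y.
    return [lam for lam in lams if lam["Z"] == (X ^ Y)]

for X, Y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for lam in support(lambdas, X, Y):
        # On the support, knowing X and λ determines Y:
        assert (X ^ lam["Z"]) == Y
print("On every support, Y is fixed by X and lambda")
```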
A precise model for A_(XY)(lambda) and B_(XY)(lambda) is provided in Section 4.1 of https://arxiv.org/abs/1804.01734. It would take too much space to write it here.
The way I like to frame this (see https://arxiv.org/abs/1903.10537) is to say that Statistical Independence and Factorisation hold on the invariant set (which is the fractal space of trajectories). Put another way, Statistical Independence and Factorisation are only violated when considering hypothetical putative counterfactual trajectories which do not lie on the invariant set (and are described by Hilbert states with irrational squared amplitudes and phases).
To repeat, these considerations are not specific to CHSH. For example, they apply when applying a Hadamard to a single qubit, taking it, say, from a position basis to a momentum basis. In fact, my view is that these number-theoretic properties generically lie at the heart of complementarity – non-commuting observables in quantum theory. These number-theoretic properties are simply irrelevant when considering classical states (where the whole of Euclidean state space is ontic, and there is no ontic distinction between rational and irrational numbers). I can expand further on this using theory of p-adic numbers, but feel I have gone on long enough already.
The key physical point to make, however, is that it is simply wrong to critique this particular violation of Statistical Independence by referring to real-world ensembles of particles measured differently. You have taken a sledgehammer to demolish a delicately constructed theory. The violation of SI in my model only happens for certain specific counterfactuals which happen to be relevant to many quantum experiments, but are not relevant in classical physics.
Prof. Palmer,
You haven’t specified $p(\lambda|xy)$. One needs four functions, $p(\lambda|00)$, $p(\lambda|01)$, $p(\lambda|10)$, and $p(\lambda|11)$. What are they? I haven’t found an example of $A_{XY}(\lambda)$ and $B_{XY}(\lambda)$ in Section 4.1 (or 4.2) of https://arxiv.org/abs/1804.01734 either. Which equation do you have in mind?
I’m not asking for the general theory, I’m just asking for a single example of such functions for which a Bell inequality is violated.
Mateus,
“I’m criticizing Hossenfelder and Palmer’s paper based on their own definitions. Read the paper. There they define (correctly) superdeterminism as the conjunction of determinism and violation of statistical independence, that is, failure of p(λ|x)=p(λ).”
No problem here, I agree to use the definition in the paper, which, in plain English, is:
“the probability distribution of the hidden variables, ρ(λ), is not independent of the detector settings”
Let’s enumerate my previous points:
Refutation A.
You avoid circularity by admitting that:
” I do state explicitly in the post that it is possible to reproduce QM’s predictions with a superdeterministic theory.”
Great! The implication of this is that a hypothetical world described by such a superdeterministic theory would look identical to ours. Same chemistry, same biology, same medicine, and same statistics. So, even if I did not understand your “Tobacco” argument, as you claim, it’s still dead and buried, because it starts with the same observations but outputs contradictory results.
Refutation B. Your argument is not sound. (this was not addressed in your post so I’ll just paste it here using the new, agreed definition)
Your argument has the following structure:
P1: Superdeterminism is true.
P2: If superdeterminism is true, then statistical data does not show that smoking causes cancer in some patients.
P3: Statistical data shows that smoking causes cancer in some patients.
So, if you agree that P3 is true (as I do), it follows that P1 is false.
Let’s reformulate P2 using the above provided definition:
P2′: If, in a Bell-type experiment, the probability distribution of the hidden variables, ρ(λ), is not independent of the detector settings, it follows that “the genome of the subjects was correlated with whether you forced them to smoke”.
I see no shred of evidence in your post linking the behavior of entangled elementary particles with the human genome and the tendency of smoking. I reject therefore P2, so the conclusion does not follow.
Refutation C. Bell’s independence assumption is false in any theory with long range forces, like classical EM, General Relativity, Newtonian gravity, etc.
Your rebuttal here was:
“Nonsense. None of these theories is superdeterministic”, and (from your answer to manyoso):
“I did explain it to Andrei here, but the explanation fell on deaf ears.”
OK, let’s clearly formulate the problem in classical electromagnetism (CEM):
1. The polarisation of an EM wave depends on the specific way the electron accelerates.
2. The only way an electron can accelerate is via the Lorentz force.
3. The Lorentz force is given by the electric and magnetic field configuration at the locus of the emission.
4. The electric and magnetic field configuration at the locus of the emission does depend on the position/momenta of distant charges.
5. The detectors are composed of charged particles (electrons and quarks).
Conclusion: From 1-5 it follows that the hidden variable, λ, depends on the detectors’ states.
Let’s see how you addressed this argument in the 2018 post:
“It is true that in general changing some arrangement of charged particles will create some stray electromagnetic fields that will influence the position of other charged particles, but this does not make classical electrodynamics a superdeterministic theory.
First of all, in the loophole-free Bell tests, the generation of the detector settings was done with a space-like separation to the generation of the photon, so by relativity there couldn’t possibly be any stray electromagnetic fields perturbing each other.
This need not bother us, though, since CM is a deterministic theory: we could just locate the change in the states of S, A, and B in the intersection of their past light cones, so that relativity doesn’t pose any obstacle there.”
So, it seems to me that you admit that λ does depend on the detectors’ settings because the state at the locus of the emission (not just the position/momenta of the particles, but also the E-M fields) does depend on the charge distribution/momenta of the detectors. From this point you start talking about random number generators, computers, seeds and the like. All those concepts do not exist in CEM, they are just names we use for specific types of charge configurations. So you did not refute my argument at all. In other words you did not show that any of the above premises (1-5) is false in your setup. Computers are still composed of charged particles. My argument does not depend in any way on how the detectors are built, as long as they are built from charged particles.
“CM allows for arbitrarily good shielding of electromagnetic fields, so we could put S inside a Faraday cage, so that the fields that come from Alice and Bob’s computers are effectively zero.”
This assumes a continuous charge distribution, which in our universe is false. Sure, you can place the experiment inside a metal sphere but, at the atomic scale, that sphere would be mostly holes. In fact, I can prove that the field configuration at the source uniquely determines the charge distribution/momenta outside. Notice that the E and M fields at a certain point give you an equation that depends on the distance to each charge and its momentum. But the number of charges in the entire universe is finite, while the number of points in any infinitesimal region around the accelerating electron is infinite (space is continuous). So there will always be enough equations to completely “fix” the state of the outside world, Faraday cage or not.
You continue speaking about seeds and antennae, but I can’t see the relevance of all that. Please reformulate your rebuttal so that it is relevant in the context of CEM; in other words, show how your proposed circumstances imply that any of the premises 1-5 above is false!
“How on Earth are you going to produce these conspiratorial correlations via a dynamical equation in such a complex system?”
When I see an argument with some clearly formulated premises, I will address it. “How is such and such possible?” is not a valid argument, just an appeal to emotion.
In reply to my question:
“How do you explain the perfect anti correlation observed in an EPR-Bohm experiment (spin-entangled particles) when the measurements are performed with similar detector orientation?”
You posted a link to an MWI explanation. Let me formulate my argument here and we will see how MWI will answer it! Here it is:
Let’s take a look at EPR’s reality criterion:
“If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity.”
Let’s formulate the argument in a context of an EPR-Bohm experiment with spin 1/2 particles where the measurements are performed in such a way that a light signal cannot get from A to B:
1. It is possible to predict with certainty the spin of particle B by measuring particle A (QM prediction).
2. The measurement of particle A does not disturb particle B (locality).
3. From 1 and 2 it follows that the state of particle B after A is measured is the same as the one before A is measured (definition of the word “disturb”).
4. After A is measured, B is in a state of defined spin (QM prediction).
5. From 3 and 4 it follows that B was in a state of defined spin all along.
6. The spin of A is always found to be opposite from the spin of B (QM prediction)
7. From 5 and 6 it follows that A was in a state of defined spin all along.
Conclusion: QM + locality implies that the true state of A and B was a state of defined spin. The superposed, entangled state is a consequence of our lack of knowledge in regard to the true state. So, QM is either an incomplete (statistical) description of a local deterministic hidden variable theory or it is non-local.
Mateus,
“Indeed, you only violate statistical independence in the CHSH case, and I’m criticizing you specifically for doing that. You assume statistical independence when it is convenient, and violate it when it is not.”
Here the error of your argumentation is obvious. You assume that statistical independence (SI) is a physical principle that must hold in any situation, just like, say, momentum conservation. But such an assumption is obviously false. The positions of two charged particles orbiting each other are not independent. The positions of two stars in a galaxy are not independent. In fact, just as I’ve argued in Refutation 3, any system described by a field theory violates SI. So, Prof. Palmer is perfectly justified in accepting SI for medical tests and rejecting SI for Bell tests. The burden is on you to show that a Bell test is more like a medical experiment than a gravitational or EM one.
manyoso,
“You reference this as an example of an actual superdeterministic model that can explain the correlations that Mateus challenges… ie, the BIG Bell test… but a cursory glance at that paper does not indicate it has anything to do with Bell or superdeterminism or that test… what am I missing?
Is this just another instance of you claiming that all field theories are superdeterministic?”
You are correct. I do not intend to prove here that SED actually reproduces QM, only that it cannot be ruled out by Bell. In fact, getting a prediction from SED for a Bell test would be a computational nightmare, so directly explaining the Bell correlations is not, I think, possible. The only way would be to show equivalence in those situations that are computationally feasible (like atomic spectra, for example) and, ultimately, show that the theory gives you the QM formalism in some limit.
In order to escape this sort of no-go theorem it is enough to show that the independence assumption fails, and I think I’ve done that.
Andrei,
You are still missing the point. The observed data, $p(\text{cancer}|\text{smoke})$ and $p(\text{cancer}|\neg\text{smoke})$, is the same in both the real world and the superdeterministic world. The question is what the observed data implies about the unobserved data, $p(\text{cancer}|\text{smoke},\neg\text{predisposed})$ and $p(\text{cancer}|\neg\text{smoke},\neg\text{predisposed})$. In the superdeterministic world, the answer is not much. It might be that smoking causes cancer, it might be that smoking prevents cancer, we cannot know. In the real world, the implication is clear: smoking causes cancer.
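To make the arithmetic explicit, here is a little sketch of the total-probability calculation (the function name is mine, and I assume, as the conspiracy requires, that predisposed subjects get cancer with certainty):

```python
# p(cancer|x) = p(pre|x) * p(cancer|x, pre) + (1 - p(pre|x)) * p(cancer|x, ¬pre).
# With the conspiratorial assumption p(cancer|x, pre) = 1, solve for the
# unobserved cancer rate among the non-predisposed:
def implied_rate(p_cancer_given_x, p_pre_given_x):
    return (p_cancer_given_x - p_pre_given_x) / (1 - p_pre_given_x)

q_smoke = implied_rate(0.15, 0.15)   # conspiracy: p(pre|smoke) = 0.15
q_nosmoke = implied_rate(0.01, 0.0)  # p(pre|¬smoke) = 0
print(q_smoke, q_nosmoke)  # 0.0 and 0.01: "smoking prevents cancer"
```

The same observed data is thus compatible with a zero cancer rate among non-predisposed smokers: once statistical independence is dropped, the unobserved rates are unconstrained.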
P3 is true only if you assume statistical independence.
Your P2′ bears no relationship to my argument. In the tobacco case I’m only demonstrating that without Statistical Independence you can’t conclude anything.
Regarding classical electromagnetism: you still haven’t understood what superdeterminism is. It is not enough for some stray EM field from the setting to make a minute change to the state of the source. In any realistic situation this change won’t even be detectable. In a superdeterministic theory this change needs to be large, necessarily changing the result of the measurement, and unavoidable, such that it is not possible to make measurements on the same (or similar) state of the source with different settings. Nothing like this happens in EM.
Also, great job in completely ignoring my explanation of the Bell correlations.
By asking for an explicit function for rho etc it sounds like you want me to write down some formulae. Does this make sense? Does there exist a formula for the paths of three gravitationally bound bodies? No. Does that mean that the paths of three gravitationally bound bodies do not exist? No. Is there a formula for the weather? No. Does the weather exist? As I explained, my theory is motivated by the theory of fractal invariant sets in nonlinear dynamical systems theory. Can such sets be described by formulae? No. It therefore seems rather unlikely to me that I can give you the functions/formulae you crave. However, does that somehow mean my ideas are null and void? I don’t think so. I have come as close as I can to describing concisely why the rho, A and B, with certain specific properties, are mathematically consistent. These properties in turn explain why, in the model I am trying to propose, certain counterfactual states (e.g. corresponding to Hilbert states with irrational squared amplitudes and phases) are not real/physical/ontic. This explains why the model is superdeterministic.
To repeat, my key point is that in this regard, your arguments about tobacco trials and so on are completely irrelevant; requiring the subsamples of control and experimental volunteers to be statistically similar has no bearing at all on the issues I raise about whether certain counterfactuals are well defined or not. Do you at least accept that your arguments against Superdeterminism (with, frankly, a rather insulting title) do not cover this particular point?
I want to make sure you understand another point I am making, in response to an earlier comment of yours. First, to repeat, the violation of Statistical Independence only arises when considering counterfactual worlds which correspond to points in state space which lie in the fractal gaps of the invariant set (and correspond to Hilbert States which have irrational squared amplitudes or phases). These gaps exist for any finite value of the parameter N, which describes the number of iterated fractal pieces (here trajectory segments in a helix). However, in the singular limit at N=infinity (and only at this limit), these gaps vanish and all counterfactuals become ontic. This singular limit corresponds to classical theory. This is why, in my model, counterfactual reasoning is perfectly valid without restriction, in the classical limit. That is to say, there are perfectly sound mathematical reasons why the arguments I am using about counterfactual indefiniteness apply to the interpretation of quantum experiments but not to the interpretation of classical experiments. If you are not familiar with the notion of a singular limit, Michael Berry’s paper on maggoty apples is worth reading.
I have tried to describe my concern about your blog (and the title in particular) as carefully as I can. I have other things to do now. However, I would conclude by saying that if you insist that everything in science must be describable by explicit formulae (and perhaps this viewpoint reflects your background in a part of physics that is essentially linear), you will have a very limited perspective on science in general. (BTW please see Sabine’s and my paper as to why we may be being fooled by the linearity of the Schrödinger equation into thinking that the world is itself governed by linear theory.)
Prof. Palmer,
Your refusal to write down the function $p(\lambda|xy)$ is frankly ridiculous. Without doing that you can’t even claim that your model in fact violates a Bell inequality. Having a “non-linear” model is no excuse; you still need to make precise claims and provide proofs when working with it. Even if your model is too complicated for you to write down a function (which is news to me; in your previous comment you indicated that you could do it), at the very least you should provide a computer simulation to support your claims.
On the contrary, the law of total probability is valid in both cases, and in both cases you can postulate a correlation between the randomization and the measured variable in order to dismiss the inconvenient experimental result. Note that I didn’t use any counterfactuals in my argument (not that there is any problem with counterfactual reasoning).
Mateus,
“Your refusal to write down the function p(λ|xy) is frankly ridiculous.”
This is no way to communicate. It does nothing whatsoever to resolve whatever good faith disagreements you have and it is very disrespectful.
I think you have some good points, but they are being overshadowed by your hostile communication. One good faith question I have for you is what interpretation of QM do you place the most stock in?
Andrei,
I think you have a good point regarding field theories violating Statistical Independence. However, I would note that Mateus also has a good point in that you are _overstating_ this point. Yes, field theories are in your sense ‘superdeterministic’ in that they violate SI with their formalism… but just because they violate it via the formalism is not enough to show that the *particular* way in which they violate it is *responsible* for the Bell violations. In this case, I think Mateus is right even if he is using disrespectful language at times.
If your whole point is to show that the class of things that violate strict SI includes a whole lot of classical theories, sure I think I will grant that. However, this is different from showing how the lack of SI is used by some particular model to produce all the Bell violations that have been observed. So far I don’t think *any* model has shown this… mostly because to date NO actual superdeterministic model that can reproduce QM has actually been written down.
Tim Palmer,
I have been reading and watching all about Invariant Set Theory since I read your paper with Dr. Hossenfelder two days ago, and I think your ideas are novel and striking. I also think it completely unfair to criticize the fact that your ideas are not fleshed out enough to actually write down what Iv actually is. However, Mateus has a point. You might have shown how a superdeterministic theory (yet to be developed) *could* be used to violate Bell and still be local and real, but to date I don’t actually see such a theory. Which is fine and honestly expected!
I guess what I’m asking is… are you working on finding some toy model that gives the equations for generating Iv? Without that … it makes it very hard to actually figure out if you have something workable…
The proofs are in:
https://arxiv.org/abs/1804.01734
Your very last point hits the nail on the head. You say “Note that I didn’t use any counterfactuals in my argument.” That is precisely my point!
There is nothing of the sort in your paper. You haven’t even done any calculation in your section about Bell’s theorem. What would be needed to show that your model can indeed violate a Bell inequality is to describe how the experiment goes. You know, Alice and Bob did this experiment, described by QM as measuring the observables $A_0,A_1,B_0,B_1$ on the state $\ket{\psi}$. According to Invariant set theory, when they made measurements in the setting 00 they were actually making measurements on the hidden variables $\lambda_{00},\lambda_{00}’,\lambda_{00}”,\ldots$, obtaining results $A_0(\lambda_{00}),B_0(\lambda_{00}),A_0(\lambda_{00}’),B_0(\lambda_{00}’),A_0(\lambda_{00}”),B_0(\lambda_{00}”),\ldots$. When they made measurements in the setting 01 they were actually making measurements on the hidden variables $\lambda_{01},\lambda_{01}’,\lambda_{01}”,\ldots$, obtaining results $A_0(\lambda_{01}),B_1(\lambda_{01}),A_0(\lambda_{01}’),B_1(\lambda_{01}’),A_0(\lambda_{01}”),B_1(\lambda_{01}”),\ldots$, and so on and so on and so on.
From such a proof it would be trivial to extract the functions I’m demanding from you.
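To be clear about what such functions look like, here is a toy sketch (my own construction, nothing to do with Invariant Set Theory) of how a setting-dependent distribution $p(\lambda|xy)$ trivially reproduces the singlet correlations with local deterministic outcome functions; this is exactly why writing them down is the minimal requirement:

```python
import math

# Toy "superdeterministic" model (illustrative only): the hidden variable
# λ = (a, b) is just the pair of outcomes, and its distribution p(λ|x,y)
# depends on both settings so as to reproduce E(x,y) = -cos(θ_x - θ_y).
# The outcome functions are local and deterministic: A(x,λ) = a, B(y,λ) = b.
angles_A = [0.0, math.pi / 2]
angles_B = [5 * math.pi / 4, 3 * math.pi / 4]

def p_lambda(a, b, x, y):
    # Conspiratorial distribution: it depends on the settings x and y.
    return (1 - a * b * math.cos(angles_A[x] - angles_B[y])) / 4

def E(x, y):
    return sum(a * b * p_lambda(a, b, x, y) for a in (-1, 1) for b in (-1, 1))

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # ≈ 2.83 = 2√2, "violating" the CHSH bound of 2
```

Of course, this “model” explains nothing; it just relabels the quantum predictions. That is the point: once you violate Statistical Independence, reproducing any correlations is trivial, so the burden is on the model to show how its dynamics produces such a $p(\lambda|xy)$.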
manyoso,
The Many-Worlds interpretation is the way to go.
Mateus,
“Your P2′ bears no relationship to my argument. In the tobacco case I’m only demonstrating that without Statistical Independence you can’t conclude anything.”
It seems to me that your misunderstanding of what the status of statistical independence (SI) is in science runs much deeper than I thought.
SI is not a general physical principle, like momentum conservation. There are situations where SI holds (coin flips) and situations where it does not (stars in a galaxy, synchronized clocks, etc.). So, if your argument is of the form:
P1: If SI is violated in at least one situation it must be violated in all situations –
– it is clearly dead. I have already provided you with multiple examples of SI violations, so, even without mentioning Bell tests you need to conclude that you cannot prove that smoking causes cancer. In other words, there is no way to prove that smoking causes cancer if synchronized clocks exist. But, synchronized clocks do exist, right?
Your only chance to resurrect this argument is to show that the specific violation of SI in a Bell test somehow is relevant to smoking. Yet, your quote above shows that you are unable to do it.
Also, I don’t find your example too convincing. It might be the case that a gene that increases the incidence of cancer also causes an increased appetite for nicotine. So, tobacco companies might have a point after all. This also shows that blindly assuming SI in all cases is a bad scientific practice.
“Regarding classical electromagnetism: you still haven’t understood what superdeterminism is. It is not enough for some stray EM field from the setting to make a minute change to the state of the source. In any realistic situation this change won’t be even detectable.”
How are you supposed to detect the change in the orbit of the electron that undergoes a transition? This makes no sense. What you need to show is that it is possible to change (counterfactually) the orientation of one detector without changing the state of the source to a significant degree. You have provided no reason to think this can be done, so my argument stands. Let me help you here! Let’s say you move the detector very far away, so that the E-M fields at the location of the source are changed insignificantly. This seems to work until you realize that the time it takes the particle to reach the detector increases as well. As a result you are forced to allow that “insignificantly” changed state to evolve for a long time. Such an evolution may very well increase the deviation from the original state, as usually happens in chaotic processes.
“Also, great job in completely ignoring my explanation of the Bell correlations.”
I admit I do not understand MWI at all. For example, I do not understand the relationship between MWI’s ontology and spacetime, yet locality only makes sense in spacetime. The quantum state does not exist in spacetime. It’s not clear to me what actually exists in spacetime, according to MWI. If you help me understand this I would be more than willing to comment on your explanation.
manyoso,
“I think you have a good point regarding field theories violating Statistical Independence. ”
Thanks!
“However, I would note that Mateus also has a good point in that you are _overstating_ this point.”
No, he does not, I don’t overstate anything.
“Yes, field theories are in your sense ‘superdeterministic’ in that they violate SI with their formalism”
This is all I have ever claimed.
” just because they violate it via the formalism is not enough to show that the *particular* way in which they violate it is *responsible* for the Bell violations.”
I fully agree, and I have never claimed such a thing, nor am I required to do so. My claim is that superdeterminism is a valid scientific option, and showing that mainstream theories are superdeterministic is a definitive proof of this claim.
The only reason one should bother with Bell tests is if the theory under investigation can be ruled out in this way. If not, a Bell test becomes just another experiment for which the prediction cannot be calculated for computational reasons. It’s trivial to find examples like that for standard QM as well. Take the uranium spectrum, for example. We cannot calculate it with QM, so we don’t know if the QM prediction is correct. Is this a problem for QM? Why? Similarly, it is not possible to simulate a Bell test using classical electromagnetism, so I don’t expect to ever be able to extract a prediction from the theory. It does not matter. So, the reasonable way forward is to ignore Bell’s theorem and stick with simpler experiments where predictions can be calculated. In SED, for example, we have many good results (black body radiation, specific heat of solids, Lamb shift, Van der Waals forces and, recently, a classical explanation for the electron’s spin). The theory is now being tested for the hydrogen atom. The good part is that the SED atom is much more stable than the originally proposed classical atom. Yet, it is not stable enough, but there are ways to make progress. Regardless, my only point here is that one should not dismiss a theory like SED because of Bell’s theorem, as it is superdeterministic.
Andrei,
Statistical independence is a methodological principle. If you don’t have evidence of an actual conspiracy (that somebody was sequencing genomes and poisoning the random number generator, in the smoking case, or that some dynamical law imposes the conspiratorial correlation, in the physics case), you’re not allowed to violate it.
No you haven’t.
Which part of “randomized trial” and “you forced them to smoke” didn’t you understand?
It would be clearly a waste of my time, you have consistently displayed an obstinate refusal to learn anything.
Mateus,
Andrei: “I have already provided you with multiple examples of SI violations…”
“No you haven’t.”
OK, so you claim that the positions of two orbiting objects are independent physical parameters, right?
“Which part of “randomized trial” and “you forced them to smoke” didn’t you understand?”
Sorry, you are right here, I retract my statement. Your example is good enough, just like Tim Maudlin’s medical tests.
“It would be clearly a waste of my time, you have consistently displayed an obstinate refusal to learn anything.”
Really? You have failed to provide any evidence for any of your claims. You claim that SI holds for all systems, yet you failed to address any of the multiple examples I provided. You have failed to show that SI holds in electromagnetism. And you probably have no clue about how MWI relates to spacetime.
Andrei,
You still haven’t understood what superdeterminism is. Imagine a radio station transmits at 103.8 MHz, and you want to listen to it, so you set your tuner to 103.8 MHz. The action of setting your tuner changes the frequency the station is transmitting at to 106 MHz. You set your tuner to 106 MHz. This changes the frequency of the station to 110 MHz. This is superdeterminism.
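In code, the analogy looks like this (a caricature; the constant 2.2 MHz offset is purely illustrative, replacing the numbers above):

```python
# Caricature of a superdeterministic radio: whatever frequency you tune
# to, the station's transmission frequency shifts away from it, so the
# setting and the measured system are unavoidably correlated.
def station_frequency(tuner_setting):
    return tuner_setting + 2.2  # illustrative conspiratorial law

tuner = 103.8
for _ in range(3):
    station = station_frequency(tuner)
    print(f"tuner = {tuner:.1f} MHz, station = {station:.1f} MHz")
    tuner = station  # chase the station; it moves again
```

No matter how you choose the setting, you can never probe the station independently of it. That unavoidability, not mere correlation, is the failure of Statistical Independence.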
Mateus,
I asked you a very simple question:
“OK, so you claim that the positions of two orbiting objects are independent physical parameters, right?”
Is there something you do not understand about it? Why don’t you answer it? We agreed on what SD means and I see no reason to redefine it.
I didn’t answer that because the answer is obvious, they’re correlated. You’re assuming that this correlation implies superdeterminism, because you don’t understand what superdeterminism is.
Mateus,
“I didn’t answer that because the answer is obvious, they’re correlated.”
Great, so you admit, after all, that there exist situations when assuming SI is not justified. And this depends on our knowledge of the systems under observation. We know how gravity works and we do not expect positions of orbiting objects to be independent, even if there is no conspiracy there. You don’t need to fine-tune the initial state or tamper with the objects in any way. The correlation is a direct consequence of the physical laws describing the motions of those objects.
On the other hand we know enough about DNA to understand that simply selecting a subject in a randomized trial cannot possibly change it. The DNA is virtually unchanged for the entire life of a human. So, in the case of a randomized trial, assuming SI is justified.
So, your argument does not work. The fact that SI is not justified in one situation (orbiting objects) does not imply that it cannot be justified in a different scenario (randomized trials). So, by maintaining that SI being unjustified in a Bell test implies that it must be unjustified in randomized trials, you just assume what you want to prove.
“You still haven’t understood what superdeterminism is. Imagine a radio station transmits at 103.8 MHz, and you want to listen to it, so you set your tuner to 103.8 MHz. The action of setting your tuner changes the frequency the station is transmitting at to 106 MHz. You set your tuner to 106 MHz. This changes the frequency of the station to 110 MHz. This is superdeterminism.”
Your example is wrong for the following reason:
You assume you know that the station transmits at 103.8 MHz, so the fact that you tune to that frequency and it changes appears to be a conspiracy. This is unlike a Bell test, where you do not know what spin the entangled particles have prior to detection. In your example you expect to hear something at 103.8 MHz. In a Bell test there is no reason to expect the spin to be up or down in any run. A better example would be trying to catch an iron ball with a magnetic bar. The position and momentum of the ball at the moment you catch it depend on the specific way in which you manipulate the bar.
Andrei,
There are correlations everywhere. They don’t imply a failure of Statistical Independence. That would be an unavoidable correlation between the measured and the randomization variable.
Unless, of course, there is an actual conspiracy going on, with the tobacco company sequencing the genome of the participants and putting the ones predisposed to have cancer in the non-smoking group.
The relevant calculations for the Bell Theorem are not described in 4.2 of https://arxiv.org/abs/1804.01734 because they have already been performed in Sections 2 and 3. The crucial point is that the calculations that imply a violation of Statistical Independence in Invariant Set Theory – note I do not assume it a priori – arise from the particular form of my finite discretisation of the Bloch sphere (for a single qubit) in Section 2. Please read this Section; it exploits number-theoretic properties of trigonometric functions and is therefore nontrivial. The generic form for the A and B functions is given by equation 38 in Section 4.1.
Prof. Palmer,
I don’t mean the calculations about the violation of Statistical Independence, I’ll take your word for that; I mean the calculations about violating a Bell inequality.
Equation (38) in Section 4.1 describes a function from three integers to a bitstring; the functions A and B should take as input the setting and the hidden variable, and output the measurement result. Is the hidden variable perhaps the position in the bitstring, determining the measurement result?
It is unclear what the measurement setting is, though. In this section you describe how to encode the parameters of the quantum state in those three integers (equations (35) and (36)), for what seems to be a fixed measurement in the computational basis. But in equation (39) you describe how such a bitstring represents the singlet state, with the integers representing the relative orientation $\theta$.
Mateus,
“There are correlations everywhere. They don’t imply a failure of Statistical Independence. That would be an unavoidable correlation between the measured and the randomization variable.”
1. Can you point me to a reference where this definition of SI is used? It seems to me that you are now trying to redefine SI in order to save your argument.
2. What do you mean by “randomization” in the context of a deterministic theory? Let’s go back to classical EM. What would be your understanding of the concept of randomization in a world completely described by this theory? Do you still maintain that in classical EM the hidden variable (photon polarization) is independent of the detectors’ states? Why?
I am also very interested in your take on EPR (from my post at 19th December 2019, 06:56) from the point of view of MWI. What premise is false in MWI, and why?
Best regards!
Andrei,
Sure, this one for example. Why don’t you ask Hossenfelder and Palmer about it? I’m sure they love talking about superdeterminism. Since you’re now accusing me of lying about the definition, I don’t think it matters what I say anymore.
In a deterministic theory you replace “random” with “uncorrelated”. For example, it is common to use a (perfectly deterministic) pseudorandom number generator to determine the values of the randomization variable, because we believe there’s no correlation between the output of the pseudorandom number generator and the genome of the test subjects (in the smoking case) or the state of the photons (in the Bell case).
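As a sketch (all numbers hypothetical: a 5% gene frequency and a 50/50 assignment, with seeded PRNGs standing in for the deterministic processes):

```python
import random

# Two independent PRNG streams: one stands in for "nature" handing out
# the predisposition gene, the other for the trial's randomization.
# Nothing correlates them, so the gene frequency is the same in both groups.
nature = random.Random(1)
assigner = random.Random(2)

n = 100_000
predisposed = [nature.random() < 0.05 for _ in range(n)]
smokes = [assigner.random() < 0.5 for _ in range(n)]

def gene_rate(in_smoke_group):
    group = [p for p, s in zip(predisposed, smokes) if s == in_smoke_group]
    return sum(group) / len(group)

print(gene_rate(True), gene_rate(False))  # both close to 0.05
```

A superdeterministic conspiracy would be precisely a mechanism forcing these two streams to be correlated.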
Of course I do, because that’s what the theory predicts. Not that there’s anything “hidden” about photon polarization, by the way.
Both 3 and 4 are false, “From 1 and 2 it follows that the state of particle B after A is measured is the same as the one before A is measured (definition of the word “disturb”)”, and “After A is measured B is in a state of defined spin (QM prediction)”.
3 ignores that there’s no such thing as “the state of particle B” alone. This is true regardless of Many-Worlds. A and B are in an entangled state; you have to describe both together. The best you could do is assign B a reduced density matrix, but that doesn’t encode its correlations with A, which are precisely what is at issue here.
4 is the nonlocal collapse of the wavefunction. That doesn’t happen in Many-Worlds. After Alice makes her measurement, the wavefunction splits in two branches, one with Alice seeing spin up and another with Alice seeing spin down. Now because of the entangled state, the Alice that sees spin up will only ever meet the Bob that sees spin down, and the Alice that sees spin down will only ever meet the Bob that sees spin up. This is how the quantum mechanical prediction can be obtained in a completely local way. I explain this in detail in the post I linked to you.
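A minimal piece of branch bookkeeping makes this concrete (a sketch in the z basis, with outcomes labelled 'up' and 'down'):

```python
import math

# The singlet state written in the z basis: amplitudes for the joint
# (Alice, Bob) outcomes.
amp = {('up', 'down'): 1 / math.sqrt(2),
       ('down', 'up'): -1 / math.sqrt(2)}

# After both local measurements, each component is a branch with weight
# |amplitude|^2. In every branch the outcomes are opposite; no collapse,
# no signal between the labs.
branches = {outcomes: a * a for outcomes, a in amp.items()}
for (alice, bob), weight in branches.items():
    print(f"branch: Alice sees {alice}, Bob sees {bob}, weight ≈ {weight:.2f}")
```

The Alice who sees up can only ever compare notes with the Bob who sees down, because they live in the same branch; the anticorrelation is established locally when the records are brought together.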
Mateus Araújo,
“Since you’re now accusing me of lying about the definition, I don’t think it matters what I say anymore.”
I didn’t make such an accusation. I think you use a wrong definition (SI is not restricted to any “randomization variable”; it is a generic property that two or more data sets can have). But being wrong does not imply lying. I always assume that everybody I discuss with is honest.
“Of course I do, because that’s what the theory predicts. Not that there’s anything “hidden” about photon polarization, by the way.”
What evidence do you have for this claim? Where is your rebuttal of my argument – see below?
“OK, let’s clearly formulate the problem in classical electromagnetism (CEM):
1. The polarisation of an EM wave depends on the specific way the electron accelerates.
2. The only way an electron can accelerate is Lorentz force.
3. The Lorentz force is given by the electric and magnetic field configuration at the locus of the emission.
4. The electric and magnetic field configuration at the locus of the emission does depend on the position/momenta of distant charges.
5. The detectors are composed of charged particles (electrons and quarks).
Conclusion: From 1-5 it follows that the hidden variable, λ, depends on the detectors’ states.”
Which of the above premises is false? Does any of them become false because of the use of a “randomization” protocol?
“3 ignores that there’s no such thing as “the state of particle B” alone. This is true regardless of Many-Worlds. A and B are in an entangled state; you have to describe both together. The best you could do is assign B a reduced density matrix, but that doesn’t encode its correlations with A, which are precisely what is at issue here.”
I should have been more careful here. By “the state of particle B” I do not mean its quantum state. It’s a generic term for, let’s say, the situation in the lab where Bob is going to perform the measurement. This situation might be anything. It might be the case that there is no particle there at all, or that the particle is there but it does not have the spin property, whatever. My point is that whatever the situation in the B lab is, locality implies that it should not be changed by the measurement in the A lab.
“4 is the nonlocal collapse of the wavefunction.”
Not necessarily. Even in MWI, once A performs the measurement and finds “spin-up” he knows that B must get “spin-down”. MWI adds the story that there exists a world, containing a copy of A that got “spin-down”, in which B would get “spin-up”. But that world cannot be accessed anymore. From the point of view of the copy of A that found “spin-up”, that world does not exist.
So, by disregarding the branch I cannot access (which, by scientific standards, is non-existent), my argument still holds.
Andrei,
So you’re merely claiming that I, an active researcher in quantum foundations, am wrong about a basic definition? Look, what I’m stating is not controversial. Read the Hossenfelder and Palmer paper. Ask them, even they get it right.
I have already refuted your argument several times, you just refuse to listen. To quote myself: “Regarding classical electromagnetism: you still haven’t understood what superdeterminism is. It is not enough for some stray EM field from the setting to make a minute change to the state of the source. In any realistic situation this change won’t even be detectable. In a superdeterministic theory this change needs to be large, necessarily changing the result of the measurement, and unavoidable, such that it is not possible to make measurements on the same (or similar) state of the source with different settings. Nothing like this happens in EM.”
I should add that your direct-action argument is not even tenable in a relativistic theory, such as EM, as the emission event can be done with a space-like separation with regards to the setting of the detector (as is the case in Bell tests). You need a conspiratorial correlation in the past to get around that, but that’s not what you’re claiming.
“Situation” is too informal for my taste. If you define “situation” as the reduced density matrix in B then 3 is true, it is not changed by the measurement in A.
It is not a “story”, it is a prediction of the theory, and a completely local mechanism for the production of the Bell correlations.
Yeah, if you pretended that the other branches do not exist, then Many-Worlds doesn’t work. Who would have thought?
Let me ask you something, do you also claim that the region outside the observable universe is non-existing, since you cannot access it?
Mateus Araújo,
We have already agreed on what SD means (from the paper):
“the probability distribution of the hidden variables, ρ(λ), is not independent of the detector settings”
You claim that in classical EM the above is not true, in other words:
the probability distribution of the hidden variables, ρ(λ), is independent of the detector settings.
You present the following statements:
1. “It is not enough for some stray EM field from the setting to make a minute change to the state of the source.”
What you need to show here is that the “EM field from the setting” is not enough to change the polarization of the photon. I did not see any argument in this respect.
2. “In any realistic situation this change won’t even be detectable.”
This is irrelevant. It’s not important for you to be able to detect the change (you cannot measure the trajectory of an electron because of uncertainty), what’s important is for the change to significantly alter the hidden variable.
3. “In a superdeterministic theory this change needs to be large, necessarily changing the result of the measurement”
I agree with the above.
4. “…and unavoidable, such that it is not possible to make measurements on the same (or similar) state of the source with different settings.”
I also agree.
5. “Nothing like this happens in EM.”
This came out of the blue. No evidence presented! I know that this is what you believe, but you did not present any evidence.
“I should add that your direct-action argument is not even tenable in a relativistic theory, such as EM, as the emission event can be done with a space-like separation with regards to the setting of the detector (as is the case in Bell tests).”
My argument holds perfectly when “the emission event can be done with a space-like separation with regards to the setting of the detector”. Just give me something more than “the fields are too weak to matter”! I do not need a quantitative calculation, just some conditions you think are enough to make the change on the source vanish! Think about a counterfactual situation, a universe just like ours, where the source has a state that is not significantly different from that during the “real” experiment but the detector has a significantly different state! I will prove to you that such a situation is impossible!
“You need a conspiratorial correlation in the past to get around that, but that’s not what you’re claiming.”
No, nothing like that! I only need you to agree that the equations 21.1 at this page:
https://www.feynmanlectures.caltech.edu/II_21.html
hold in every case.
I have to go now, I’ll reply for the MWI part later!
Andrei,
So it is now my job to disprove your nonsense? No, I have better things to do. You’re the one claiming that EM is superdeterministic. Go on, write a paper about it.
Do you know what space-like separation means? It means that relativity forbids any signal from being transmitted from one event to the other, as it would have to travel faster than light.
What is the point of linking to Maxwell’s equations? They are not under dispute here.
Mateus Araújo,
“So it is now my job to disprove your nonsense?”
It’s your job to show that Bell’s “vital assumption”, that “the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b”, holds for classical EM. You have utterly failed to show that, and you have also failed to disprove my argument for the contrary.
The burden of proof is on you to show that Bell’s theorem applies to classical EM, that is, that Bell’s assumptions are true for this theory. You failed!
“Do you know what space-like separation means? It means that relativity forbids any signal from being transmitted from one event to the other, as it would have to travel faster than light.”
I know. Take a look at my argument and point out where it depends on faster than light travel. Classical EM is local, you should know at least that!
Andrei,
That’s Bell locality, not superdeterminism. In any case, the proof is trivial: EM is a relativistic theory. You’re welcome.
What does this even have to do with anything? Nobody disputes that.
If you can’t understand that there’s no hope for you. You’re banned from my blog.
Mateus and manyoso,
The response to Andrei is very simple:
I. You have a measuring apparatus and something to be measured in Region A.
II. You have another measuring apparatus and something else to be measured in Region B.
III. Regions A and B have a very nice spacelike separation throughout the experiment.
IV. Everything is classical electromagnetism, with of course retarded solutions to the field equations (no funny stuff, like Feynman-Wheeler).
Now, if you take a contrasting situation in which the measuring apparatus in Region B is somehow different (e.g., the pointer pointing at a different angle in a test of Bell’s inequality), can everything in Region A in this contrasting situation still be exactly the same as in the original situation?
The answer is, of course, obviously “Yes!” The different measuring apparatus in the second situation in Region B is obviously consistent with everything being the same in Region A in the two contrasting situations.
This follows simply from taking the charges, currents, and E and B fields in Regions A and B in the two contrasting situations and “running time backwards” in terms of Maxwell’s equations. It is a theorem in classical electromagnetism that you can always do that, assuming of course that your initial state (here, the final state in time) obeys the two divergence equations.
Now, Andrei will object that then the past is different between the two situations. Well, sure! If everything is deterministic, and something is different in the present, then of course something was different in the past. And the way light-cones work, something might therefore be different in the past light-cone of any other event in the present.
It does not matter, of course. The theorem is conclusive. By construction, whatever was different in the past does not change the charges, currents, and E and B fields in Region A.
This is a rigorous proof.
It will have zero impact on Andrei.
From our past interactions, I know that Andrei likes words, lots and lots and lots of words. And Andrei can argue interminably that I do not understand the meaning of the word “superdeterminism” or “correlation” or some other word that I have not used at all.
And Andrei does not understand physics, so I doubt he understands the theorem in classical EM that something can differ in the past without affecting at all the present charges, currents, and E and B fields in a particular region.
But the theorem is nonetheless true and Andrei is nonetheless wrong, although he would rather die than admit it!
I guess Andrei is not able to “admit” anything now, since he has been blocked.
A disappointing culture of discussion, IMHO.
I’m not maintaining a blog in order to suffer fools. I’ll ban anybody else who displays such ignorance and unwillingness to learn.
Georg wrote:
>I guess Andrei is not able to “admit” anything now, since he has been blocked.
>A disappointing culture of discussion, IMHO.
Oh, Andrei gets around! If he had any inclination to admit his error, he would do so elsewhere.
But he won’t.
If you’d interacted with Andrei as much as I have, you’d know that the one thing he cares about is never admitting he is wrong.
I have, incidentally, worked with or taken classes from five Nobel laureates as well as some prominent engineers (the inventor of TTL logic and of the Lange coupler). Really good STEM people can admit when they make a mistake.
I admit that I do not understand the proof above: to bring about the changed region B, “something” needs to be changed for all t, essentially back to the Big Bang, I would think. The region B past light cone then inevitably overlaps the region A past light cone at some time, and therefore A will also change.
Can you point me to that theorem?
Georg wrote to me:
> The region B past light cone then inevitably overlaps the region A past light cone at some time, and therefore also A will change.
> Can you point me to that theorem?
Georg: the theorem is there in what I wrote.
I’m afraid that you may not understand the underlying math and physics that I assume, and I do not know how to explain all that here in a comment thread: indeed, I do not know how to explain all of that in less than a few hundred pages!
If you really are interested, get a copy of J. David Jackson’s Classical Electrodynamics and master it all the way through, with special concentration on the sections concerning time-varying fields, Maxwell’s equations, and time reversal (this is specifically Chapter 6 in the second edition, but chapters are, I believe, relabeled in later editions).
I’ll try to give you hints as to what you need to understand. The basic points are:
First, the E and B fields and charges and the velocity of charges (and hence the currents) at one point in time (and everywhere in space) can be specified to be anything at all (provided they are consistent with the divergence equations) as the “initial conditions.”
Second, once you have done that, then Maxwell’s equations and the Lorentz force law uniquely determine the future or past values of the E and B fields and charges and currents.
A reasonable person would say, “Wait, what about non-electromagnetic forces on the charges??” but Andrei in his presentation of his views elsewhere has said he wants to rule those out, so I have followed his desire.
How do we know that just specifying E and B and charge locations and velocities at one point in time (and throughout space) suffice to determine future and past values of all of these things? It follows from basic facts about differential equations and specifically the fact that Maxwell’s equations are first order in the time derivatives of the E and B fields.
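For concreteness, here is a minimal sketch of the initial-value structure being described (in Gaussian units). The two curl equations are first order in time and evolve the fields:
\[ \frac{\partial \mathbf{B}}{\partial t} = -c\,\nabla\times\mathbf{E}, \qquad \frac{\partial \mathbf{E}}{\partial t} = c\,\nabla\times\mathbf{B} - 4\pi\mathbf{J}, \] while the two divergence equations,
\[ \nabla\cdot\mathbf{E} = 4\pi\rho, \qquad \nabla\cdot\mathbf{B} = 0, \] are constraints on the initial data only: if they hold at one instant and charge is conserved, the evolution preserves them at all times.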
What about your and Andrei’s concern about the light-cones?
The differential equations control everything. The differential equations say that I can alter values in one region of space and not another in the present and then run time backwards and produce earlier conditions that will produce exactly those specified results in the present.
Light-cone considerations are relevant when looking for what you might have to check to see what might influence your state in the present. They do not guarantee that everything in your past light-cone does influence your present state, and it is easy to come up with counter-examples: e.g., imagine a pulse of light that cuts through your past light-cone but happens not to hit you in the present but rather somewhere else in the present.
How do I know that you can run Maxwell’s equations and the Lorentz force law backwards in time? Well, that is just obvious mathematically to anyone who understands differential equations. Formally, there is a time-reversal invariance (see Table 6.1 in Jackson’s second edition): the one possible surprise is that you have to replace B by -B (basically because the B field is a so-called “axial vector”).
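The substitution itself is quick to check (a sketch in Gaussian units, not the full Table 6.1): under $t \to -t$ with $\mathbf{E} \to \mathbf{E}$, $\mathbf{B} \to -\mathbf{B}$, $\rho \to \rho$, $\mathbf{J} \to -\mathbf{J}$, Faraday’s law
\[ \nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t} \] is unchanged, since the sign flips of $\mathbf{B}$ and of $\partial/\partial t$ cancel, and in the Ampère–Maxwell law
\[ \nabla\times\mathbf{B} = \frac{4\pi}{c}\mathbf{J} + \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t} \] every term changes sign, so it too is unchanged.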
Again, what I have presented against Andrei’s views is not just a hand-waving, suggestive argument but rather a mathematical proof. However, an ordinary grade-school child cannot understand a completely valid mathematical proof involving calculus, simply because he does not know calculus.
Similarly, to understand my proof, you need to understand physics and math at an advanced undergrad physics major’s level.
I know this violates the Web’s “no background knowledge required / everything can be explained in 500 words” ethos.
But, alas, math and physics are not consistent with that ethos.
Dave
Dave,
I would not mind some mathematics, I am an electrical engineer and familiar with Maxwell’s equations (or was so, some years back;-) Therefore I was asking for a pointer to that theorem …
Georg
Georg wrote to me:
>Therefore I was asking for a pointer to that theorem …
I did more than that: I proved the theorem. And, if you have forgotten the background material, I told you where to brush up on that.
By the way, unless you underwent a much more rigorous introduction to Maxwell’s equations than any of the electrical engineers I have known who were educated in the US (and I have worked with a very large number of such guys), you were not exposed to the background material I described.
The problem with explaining the theorem is that most STEM people take it for granted: of course you can freely specify initial conditions at different points of space and change them in one region without having to change them everywhere! It hardly bears mentioning. Everyone — except our friend Andrei — just takes this for granted. So, yes, you will have trouble finding this theorem spelled out explicitly in textbooks, simply because the textbook author assumes that any student has enough brains to see that it is obvious.
But, since Andrei does not, I spelled the theorem out explicitly.
And, if my explanation is not clear to you, you really do not recall much about Maxwell’s equations: again, at least in the States, EEs learn very little about all this.
And, again, the point is so obvious that it should not require an explicit proof: I merely provided the proof for those who do not see that it is obvious. But the proof does require background in diff equs and Maxwell’s equations beyond the education of many EEs.
Dave
EDIT [Mateus]: Fixed your HTML tag.
If “superdeterminism is unscientific” I have to accept that all those years that I have done research were “unscientific years”. Well, nice to read! ;-)
However, I agree that “superdeterminism” is a bit over the top. There is determinism and there is causality. “Superdeterminism” and “supercausality” are unscientific terms.
The hypothesis that non-locality is the result of “superdeterminism” is correct if the paper describes the mathematics of the field structure that is responsible for the “superdeterminism”. I have scanned the paper very fast but I couldn’t recognize the conceptual framework. Nevertheless, I am glad that the authors have published their paper (and I agree with the idea that “superdeterminism” and non-locality are identical at a fundamental level).
With kind regards, Sydney
Indeed, they were all “unscientific years” if you spent them on superdeterminism. I’m glad to have helped.
Dave,
Thanks for the explanation. I am unable to comment on EE education styles, and I am not even sure if there is a US vs European (i.e. mine) way. I would expect that EEs get a more technical introduction to the subject than physicists, and this may blur some of the founding principles for us EEs.
Anyway, from what I understand, Andrei and you are debating different settings:
Andrei is referring to two (today) space-like separated regions, which according to the deterministic evolution of classical EM are correlated, since they inevitably share a past light cone (if not later, then at least going back to the Big Bang). Correct IMHO.
You are stating, that you can (today) change one of these regions (B) arbitrarily without immediately affecting the other (A). Also correct.
The apparent contradiction is resolved by recognizing that “your” change of region B cannot come about without changing conditions all the way back to the Big Bang – again according to classical, deterministic EM – but such that region A remains “unchanged” and region B takes on the “new” state.
But this is a counterfactual situation, by positing different initial conditions than in Andrei’s case and therefore does not remove the correlation between regions A and B.
It’s just a different correlation than in Andrei’s case.
No, Georg, it is not correct. Maybe you should look again what correlation means.
Georg wrote to me:
>Anyway, from what I understand, Andrei and you are debating different settings:
>Andrei is referring to two (today) space-like separated regions, which according to the deterministic evolution of classical EM are correlated, since they inevitably share a past light cone (if not later, then at least going back to the Big Bang). Correct IMHO.
No, Georg, not correct at all: you are wrong.
The key error that you make is:
>which according to the deterministic evolution of classical EM are correlated, since they inevitably share a past light cone
That is false.
In classical EM, lots of stuff inside your past light cone has no effect on you at all in the present.
Indeed, the only thing you can “see” at all via EM fields is what happens precisely on your past light cone: anything inside your past light cone but not on it cannot be “seen” at all.
This is supposed to be obvious to anyone who passed undergrad physics. In fact, I could remove the “scare quotes” around “see” and it would still be true. With your actual visual system — your eyes! — you can only see things on your past light cone, but nothing inside the light cone that is not on it.
This is just another way of saying that light moves “at the speed of light.”
Most of the “surface” of observer B’s light cone is not part of the “surface” of observer A’s light cone. So, most of the past events that observer B can “see” right now cannot be “seen” by observer A right now. Most of the surface of B’s light cone that has an EM effect on B at present has no effect on A at present at all. No “correlation,” nada.
I know that you will be quite sure that I am not saying what I seem to be saying. Surely I am not really saying that everything inside (but not on the surface) of A’s light cone has absolutely no EM effect at all on A right now!
But, yes, I am saying that because it is obviously true once you realize that EM effects travel at (and only at) the speed of light.
Really: get a piece of paper, draw your past light cone, put a point down for an event inside but not on the past light cone, and then try to draw a line from that event to you at the apex of the cone which represents the EM effect — such a line must be light-like, of course.
I think if you actually try to do this, the “light bulb will go on,” and you will see why what I am saying is not only true but obviously true.
I know that you and most STEM people think they were taught that anything inside the past light cone does in fact affect your present. But you misunderstood: that is not how it works.
By the way, there are formulations of EM (e.g., using the Feynman-Heaviside formulae) where this fact is obviously built into the math via a Dirac delta function restricted to the surface of the past light cone. But it is true regardless of which formulation you use, and this is supposed to be obvious when you understand what the “light cone” actually is.
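A sketch of that delta-function statement (Lorenz gauge, Gaussian units): the retarded Green’s function of the wave equation is
\[ G(\mathbf{x},t;\mathbf{x}',t') = \frac{\delta\!\left(t' - t + |\mathbf{x}-\mathbf{x}'|/c\right)}{|\mathbf{x}-\mathbf{x}'|}, \] so the retarded potential
\[ \phi(\mathbf{x},t) = \int \mathrm{d}^3x'\, \frac{\rho\!\left(\mathbf{x}', t - |\mathbf{x}-\mathbf{x}'|/c\right)}{|\mathbf{x}-\mathbf{x}'|} \] samples the sources only at the retarded time $t - |\mathbf{x}-\mathbf{x}'|/c$, i.e. exactly on the surface of the past light cone, never in its interior.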
I think part of the problem here is that the phrase “light cone” can be ambiguous in English. In technical math language, “light cone” refers only to the “surface” (technically “hyper-surface” since we are in 3+1 dimensions) of the light cone, but people often use it also to refer to the interior “volume” (technically, “hyper-volume”) inside the light cone.
And, thus people confuse themselves. Badly.
Funny how hard it is for people to overcome the erroneous conceptions they picked up from their initial physics classes!
Anyway, there is no deep subtlety here: what you and Andrei believe is unquestionably false. It is one of those common misconceptions that physics students pick up such as that an object released from circular motion continues on a curved path.
It seems almost impossible to correct this sort of misconception: as carefully as I have explained this here, I bet you yourself still do not believe that you are mistaken on this, do you? And I am quite sure Andrei will never believe it.
Dave
P.S. Apologies to everyone for failing to close an HTML tag and unintentionally bolding half of my previous comment — hopefully I’ll do better this time.
Superdeterminism solves the problems of QM in the same way a guillotine solves a headache.
Yes, you probably could. And like with any purported scientific theory, we ought to prefer those hidden-variable theories which posit the most plausible set of hidden variables to explain the observations. A superdeterministic theory that posits a new hidden variable for every observation is clearly less plausible than one that posits only a few and explains the rest from that smaller basis. Analogously, the burden would then be on tobacco companies to demonstrate that independence is violated, or at least that its violation is far more plausible; you know, how regular science works.
It boggles my mind that “unscientific” is a charge that’s taken seriously here. Literally nothing in the scientific process would be changed by adopting superdeterminism. Your description of the macroscopic implausibility of various correlations is simply not convincing as you’re making assumptions about how all of these correlations would play out in an attempt to non-scientifically dismiss a scientific theory.
Furthermore, Sabine is not the only one pursuing this avenue. ’t Hooft has been exploring superdeterminism with his cellular automata theory of QM for over 10 years now.
Have you read anything I wrote? The argument is very simple: superdeterminism is unscientific because it can explain anything at all. Allowing such a generic, empty, excuse would kill the scientific process. We wouldn’t be able to conclude anything anymore, because any correlation we obtain could be dismissed as a result of a hidden conspiracy.
The idea that ’t Hooft has been exploring superdeterminism is laughable. He doesn’t engage with it at all, he just says “superdeterminism!” to dismiss the fact that his cellular automata theory is destroyed by Bell’s theorem. Hossenfelder at least is tackling superdeterminism itself.
Using your argument, I must conclude that any formalization is unscientific, because you could just add any proposition you want to the axioms; therefore we should dismiss any formal argument as assuming the conclusion, and therefore as useless. This is clearly false.
Your mistake is implicitly asserting that we have no way to evaluate the plausibility of a specific set of hidden variables in the case of a superdeterministic theory; or analogously, that we have no way to evaluate the plausibility of any specific axiomatic basis. I suspect that you do not actually believe the latter, despite asserting the same in the former case.
Finally, correlations are not “dismissed” in superdeterministic theories, they are explained. Whether these explanations are satisfactory must always be decided on a case by cases basis, and drives the debate over a specific theory’s plausibility. There is simply no way to conclude that *all* superdeterministic theories are *necessarily* implausible, or *necessarily* less plausible than non-superdeterministic ones.
On the contrary, I’m explicit that we can evaluate the plausibility of a conspiracy involving Bell tests, and that it is ridiculous, and I’m also explicit that conspiracies involving tobacco companies are perfectly conceivable. Did you even read the post you’re criticizing? Let me quote myself to save you the trouble:
What makes superdeterminism unscientific is that a conspiracy is postulated to dismiss the results without any evidence that there’s an actual conspiracy going on. It is postulated just because the results are inconvenient. Same thing about the smoking and cancer experiment: very inconvenient for tobacco companies. No, you don’t get to postulate a conspiracy to dismiss the results just because you find them inconvenient. This is unscientific. You need to actually show there’s a conspiracy.
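To make the conspiracy arithmetic concrete, here is a minimal sketch in Python of the law of total probability $p(a|x) = \sum_\lambda p(\lambda|x)p(a|x,\lambda)$ applied to the tobacco example from the post. The probability tables are the ones used there; the additional assumption (labeled in the comments) that predisposed people get cancer with certainty is what lets the conspiracy reproduce the data:

```python
# Law of total probability: p(cancer|x) = sum over lambda of p(lambda|x) * p(cancer|x, lambda)
def p_cancer(x, p_pred_given_x, p_cancer_given_x_and_lambda):
    return sum(p_pred_given_x[lam] * p_cancer_given_x_and_lambda[(x, lam)]
               for lam in ("pred", "not_pred"))

# The tobacco company's conspiracy: predisposition is correlated with the group
p_pred_given_x = {"smoke":    {"pred": 0.15, "not_pred": 0.85},
                  "no_smoke": {"pred": 0.00, "not_pred": 1.00}}

# Assumption: predisposed subjects get cancer with certainty; among the
# non-predisposed, smoking "prevents" cancer (rate 0 vs the 0.01 baseline)
p_cancer_given_x_and_lambda = {("smoke", "pred"): 1.0, ("smoke", "not_pred"): 0.0,
                               ("no_smoke", "pred"): 1.0, ("no_smoke", "not_pred"): 0.01}

# Reproduces the observed data: p(cancer|smoke) = 0.15, p(cancer|no_smoke) = 0.01
for x in ("smoke", "no_smoke"):
    print(x, p_cancer(x, p_pred_given_x[x], p_cancer_given_x_and_lambda))
```

The same observed frequencies thus come out of a model in which smoking is protective, which is exactly why the independence assumption $p(\lambda|x)=p(\lambda)$ has to be put in by hand before the data can tell you anything.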
At the end of section Superdeterminism rescues “Many Worlds minus the Many Worlds” in my review of Arnold Neumaier’s thermal interpretation, I guessed that “acknowledging the implication of what he once wrote would have a better chance to succeed. (Just like S. Hossenfelder and T.N. Palmer in Rethinking Superdeterminism directly address objections raised by Tim Maudlin and Mateus Araújo in section “4.4 The Tobacco Company Syndrome”.)”
Since I just realized that this comment thread still seems to be open, it seems like an appropriate place to quote the relevant part of my review. Tim Maudlin and Mateus Araújo are mentioned explicitly, because they were both directly responsible for my understanding of the relevance of the difficult question and of Neumaier’s (old “inconvenient”) answer to it. However, because I read what Mateus wrote above, it is also somewhat dangerous. An irony is that he could just meet with Neumaier in person in Vienna, and I could just meet with Sandro in person in Ulm. But even without the pandemic it would somehow not happen, and even if it happened, it would somehow not change anything. And this comment will not change anything either, at least not towards the positive side. But anyway, here is my quote:
“My “inconvenient” opinion is that Neumaier’s “inconvenient” answer implicitly invokes (a valid form of emergent) superdeterminism, but still can’t prevent Many Worlds completely. His answer only seems to succeed in preventing Many Worlds for our world today, but doesn’t seem to exclude the possibility that the world initially split many times before our current macroscopic world emerged. Here is the translation of the relevant part of Neumaier’s “inconvenient” answer:
The implicit superdeterminism in this argument is that whenever we prepare a small system and measure it, the state of the measurement device together with the rest of the universe will be such that the measurement device ends up in a valid (i.e. non-superposed, neither coherent nor incoherent) macroscopic state. It is a valid form of emergent superdeterminism, because the macroscopic observables emerged such that they will never encounter superpositions from the evolution of the _one_ state of the universe.”
Hello. Thank you for this vivid and interesting discussion. I have been struggling with these concepts because I find them fascinating, although I have to preface this by saying I am NOT a physicist and the mathematics of it is way beyond me. I would humbly like to ask a few questions and give my understanding, and I would appreciate it if I could be directed to relevant literature about this topic.
I am a physician and cancer biologist and approach the “intricacies” of quantum physics more as a philosophical discussion than a mathematical/theorem “truth”. I understand it should be a mathematical discussion alone, although I am unsure if it can be, at the level we can experimentally approach the problem right now.
Let me make a few postulations and I would love to know where or how I am wrong, because I am sure I am missing like a million things I do not fully understand.
1) The measurement of a quantum state is the interaction of such state/event with the measurement device. I struggle a lot with this because from what I understand we could for example have particles (photons) in a vacuum (near vacuum, as far as we can achieve experimentally) and measure their position/angular momentum using some sort of technology (e.g., laser). When this measurement will be performed is determined “randomly”. Collapsing the wavefunction (measurement) gives us the state of these particles in a statistical distribution. Thus, I would define the “measurement” here as this interaction that thus alters the original state and collapses the “possible” to the real/observed.
2) However, if I define it as such, then any particle interacting with another particle would also be automatically collapsed and positioned in reality. That is, if I do not have a photon in a complete vacuum, but it interacts with any other particle whatsoever, it could not be in positions that would collide with the positions of other particles at the same time. If I extend this idea to macroscopic objects, the particles are being “measured” and are collapsed in reality by the mere fact of being in contact with other particles that are where they are, and not where they are not. I guess this would be a local realism interpretation or the cellular automaton?
3) If I understand it, “time” in physics does not exist as such but is the state of entropy of the Universe at any given time (beyond the human construct of time). Thus, since in chaotic systems even a minuscule variance in the initial conditions can produce vastly different results, let’s assume at a single-particle level, basically no experiment could ever be designed to test this. Each experiment can only be run once – it is not possible to design two devices that could have exactly identical conditions, occupy exactly the same local space in the Universe at the same exact entropic state (time).
4) For me, if the state of any given particle is “real” because it is collapsed by interactions with other particles/probabilistic functions that are, in effect, “where they really are” (just because they cannot be where other particles are), then any experiment that is run will be fully deterministic (within the same and only experiment). This is not helpful at all because it is a hypothesis that basically cannot be tested, because by definition it is not possible to replicate the same initial conditions at the same time due to entropy.
5) Maybe in an absolute vacuum, with no other forces such as gravity* nor any other particles/interactions whatsoever, two measurements could be obtained in exactly the same conditions, seeing that changes would be amplified in time even under the “exact” same conditions.
6) Thus, for me free will is difficult to understand, as being a brain scientist, the act of deciding when to make a measurement (neuronal connections and activity) or let the interacting electrons in a piece of CPU generate a “random” number assigned to a timepoint, or anything else, is deterministic. E.g., Laplace and we are just missing information about the “real” state of the Universe.
7) At the same time, it would make sense to me that at a quantum level there is enough oscillation of probability even when particles are close that the future would be somewhat undetermined, within the degrees of freedom these particles are able to interact and the positions they occupy.
Is there any experimental evidence to support one or the other, or both? From my reading I do not think the experimental evidence could “rule out” that any measurement is predetermined by the mere fact that it can only happen once, and that if the time of measurement is dependent on pre-existing conditions, it is an untestable hypothesis.
It is possible we don’t have the answer yet, and in that case, I am perfectly happy to hear “we don’t have the answer yet”. In the end the concepts of quantum mechanics can be used to propel new technology, and we as humans live as if there is in fact free will, so it does not really matter. But it “matters” because of the curiosity of human beings; we would love to know, hehe, as seen by the conversations in this thread.
Thank you very much.
Tom.
1-2) There’s no wavefunction collapse. This is one of the few things almost everybody in the foundations community agrees with (for different reasons). Two particles interacting does not qualify as a measurement. For a measurement to happen you need a particle to interact with a big, complex quantum system that amplifies some quantum property you are interested in (such as the position of a photon) into macroscopically distinguishable quantum states that are stable under decoherence.
3) Time definitely exists as such, both in general relativity and quantum field theory. What does not exist as such is the *direction* of time, which is given by the direction of increasing entropy. Chaos does not exist in reality. It’s a feature of classical theories that we know are wrong. In quantum mechanics there is no chaos.
4) Since the advent of quantum mechanics people have been trying to replace it with a deterministic theory. It turns out, it’s not easy. Merely postulating that it must be so is not helpful.
7) The idea has indeed always been to say the randomness is because of imperfectly known initial conditions. The problem is, we can quantify the randomness that should come from the variation of the initial conditions we know, and it turns out, the randomness of the measurement result does not come from there. The sequential Stern-Gerlach experiment is a great illustration. Furthermore, we can show that particles that are supposedly identical according to our theory are indeed identical. The Hong-Ou-Mandel experiment is the most dramatic illustration. The possibility remains that some ghostly property that works in a completely different way is responsible for the randomness. It is mathematically possible to conjure up such a thing, Bohmian mechanics does it. However, the theory is physically unsatisfactory, and forbids us from ever knowing what this ghostly property is.