Why I am unhappy about all derivations of the Born rule (including mine)

There are a lot of derivations of the Born rule in the literature. To cite just a few that I’m most familiar with, and that illustrate different strategies: by Everett, Deutsch, Wallace, Żurek, Vaidman, Carroll and Sebens, me, and Mandolesi. I’m intentionally leaving out frequentist derivations, whose authors should just be ashamed of themselves, and operationalist derivations, because I don’t think you can explain anything without going into ontology.

I’m not happy with any of them. Before going into detail about why, I need to say what my criteria are for a satisfactory derivation of the Born rule:

  1. It needs to use postulates that either come from the non-probabilistic part of quantum mechanics, or are themselves better motivated than the Born rule itself.
  2. The resulting Born rule must predict that if you make repeated measurements on the state $\alpha\ket{0}+\beta\ket{1}$ the fraction of outcome 0 will often be close to $|\alpha|^2/(|\alpha|^2+|\beta|^2)$.

The first criterion is hopefully uncontroversial: if the derivation of the Born rule depends on the probabilistic part of quantum mechanics it is circular, and if it uses more arcane postulates than the Born rule itself, why bother?

The second criterion tries to sidestep the eternal war about the definition of probability, focussing on the one point that everyone should agree on: probabilities must predict the frequencies. Should, but of course they don’t. Subjectivists insist that probabilities are the degrees of belief of a rational agent, leaving the connection with frequencies at best indirect. To the extent that this is a matter of semantics, fine, I concede the point, define probabilities to be subjective, but then I don’t care about them. I care about whatever objective property of the world that causes the frequencies to come out as they do. Call it propensity, call it chance, call it objective probability, doesn’t matter. This thing is what the Born rule is about, not what is going on in the heads of rational agents.

And there is such a thing, there is a regularity in Nature that we have to explain. If you pass 1000 photons through a polariser, then a half-wave plate oriented at $\pi/4$ radians, and then measure their polarisation, you will often find the number of photons with horizontal polarisation to be close to 500. To simulate this experiment, I generated 8000 hopefully random bits from /dev/urandom, divided them into sets of 1000, and counted the number of ones in each set, obtaining $504, 495, 517, 472, 504, 538, 498$, and $488$. I often got something close to 500. This is the regularity in Nature, this is the brute fact that a derivation of the Born rule must explain. Everything else is stamp collecting.
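If you want to try this at home, here is a minimal sketch of the simulation in Python, using os.urandom as a stand-in for reading /dev/urandom directly (the exact counts will of course vary from run to run):

```python
import os

# 1000 bytes = 8000 hopefully random bits from the OS entropy pool,
# standing in for the quantum coin flips of the polariser experiment.
bits = []
for byte in os.urandom(1000):
    bits.extend((byte >> k) & 1 for k in range(8))

# Split into 8 runs of 1000 "photons" each and count the ones.
counts = [sum(bits[i:i + 1000]) for i in range(0, 8000, 1000)]
print(counts)  # each count is often close to 500
```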

I’ve emphasized the second criterion because it’s where most derivations fail. Consider the most fashionable one, by Deutsch and Wallace, which I reviewed here. They have some axioms about what it means to be a rational agent, some axioms about what a measurement is, and some axioms about how rational agents should be indifferent to certain transformations done on the quantum state. From these axioms they conclude that rational agents should bet on the outcomes of a quantum experiment with probabilities given by the Born rule. Who cares? Should we really believe that the statistics of an experiment will be constrained by rationality axioms? And conversely, if you can show that the statistics of a quantum experiment follow the Born rule, doesn’t it become obvious that rational agents should bet that they do, making the whole decision-theoretic argument superfluous? It’s worth noting that this same criticism applies to my derivation, as it is just a cleaned-up version of the Deutsch-Wallace argument.

Let’s move on to Vaidman, Carroll, and Sebens. Their derivations differ on several important points, but I’m interested here in their common point: they passionately argue that probability is about uncertainty, that a genuine source of uncertainty in Many-Worlds is self-locating uncertainty, and that locality implies that your self-locating uncertainty must be given by the Born rule. Arguing about whether probability is uncertainty is a waste of time, but their second point is well-taken: after a measurement has been done and before you know the outcome, you are genuinely uncertain about which branch of the wavefunction you are in. I just don’t see how this could be of fundamental relevance. I can very well do the experiment with my eyes glued to the screen of the computer, so that I’m at first aware that all possible outcomes will happen, and then aware of what the outcome in my branch is, without ever passing through a moment of uncertainty in between. Decoherence does work fast enough for that to happen. What now? No probability anymore? And then it appears when I close my eyes for a few seconds? That makes sense if probability is only in my head, but then you’re not talking about how Nature works, and I don’t care about your notion of probability.

Żurek’s turn, now. His proof follows the pattern of all the ones I’ve talked about until now: it argues that probabilities should be invariant under some transformations on entangled states, and from a bunch of equalities gets the Born rule. An important difference is that he refuses to define what probabilities are, perhaps in order to avoid the intractable debate. I think that’s problematic for his argument, though. One can coherently argue about what agents can know about, or what they should care about, as the other authors did. But an undefined notion? It can do anything! Without saying what probability is, how can you say it is invariant under some transformation? Also, how can this undefined notion explain the regularity of the frequencies? Maybe it has nothing to do with frequencies, who knows?

Let’s go back to Everett’s original attempt. It is of a completely different nature: he wanted to derive a measure $\mu$ on the set of worlds, according to which most worlds would observe statistics close to what the Born rule predicted. I think this makes perfect sense, and satisfies the second criterion. The problem is the first. He assumed that $\mu$ must be additive on a superposition of orthogonal worlds, i.e., that if $\ket{\psi} = \sum_i \ket{\psi_i}$ then
\[ \mu(\ket{\psi}) = \sum_i \mu(\ket{\psi_i}). \] I have no problem with this. But then he also assumed that $\mu$ must depend only on the 2-norm of the state, i.e., that $\mu(\ket{\psi}) = f(\langle \psi|\psi\rangle)$. This implies that
\[ f\left( \sum_i \langle \psi_i|\psi_i\rangle \right) = \sum_i f(\langle \psi_i|\psi_i\rangle), \] which is just Cauchy’s equation. Assuming that $f$ is continuous or bounded is enough to imply that $\mu(\ket{\psi}) = \alpha \langle \psi|\psi\rangle$ for some positive constant $\alpha$, which can be taken equal to 1, and we have a measure that gives rise to the Born rule.
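For completeness, here is the standard route from Cauchy’s equation to linearity: additivity alone gives $f(qx) = q f(x)$ for every non-negative rational $q$, and continuity (or boundedness) extends this to all non-negative reals, so
\[ f(x) = f(x \cdot 1) = x f(1) = \alpha x, \]
with $\alpha = f(1) \geq 0$, since a measure cannot be negative.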

But why assume that $\mu$ must depend only on the 2-norm of the state? This unjustified assumption is what is doing all the work. If we assume instead that $\mu$ depends only on the $p$-norm of the state, the same argument implies that $\mu(\ket{\psi}) = \|\ket{\psi}\|_p^p$, an absolute-value-to-the-$p$ rule instead of an absolute-value-squared rule. Everett would have been better off simply postulating that $\mu(\ket{\psi}) = \langle \psi|\psi\rangle$. Had he done that, it would be clear that the important thing was replacing the mysterious Born rule, with its undefined probabilities, with the crystal-clear Born measure. Instead, the whole debate was about how proper his derivation of the Born measure itself was.

The last proof I want to talk about, by Mandolesi, agrees with Everett’s idea of probability as measure theory on worlds, and as such satisfies the second criterion. The difficulty is deriving the Born measure. To do that, he uses not only the quantum states $\ket{\psi}$, but the whole rays $R_\psi = \{c \ket{\psi}; c \in \mathbb C \setminus 0\}$. He then considers a subset $R_{\psi,U} = \{c \ket{\psi}; c \in U\}$ of the ray, for some measurable subset $U$ of the complex numbers, to which he assigns the Lebesgue measure $\lambda(R_{\psi,U})$. Given some orthogonal decomposition of a ray $c \ket{\psi} = \sum_i c \ket{\psi_i}$ – which is most interesting when the states $\ket{\psi_i}$ correspond to worlds – the fraction of worlds of kind $i$ is given by
\[ \frac{\lambda(R_{\psi_i,U})}{\lambda(R_{\psi,U})} = \frac{\langle \psi_i|\psi_i \rangle}{\langle \psi|\psi \rangle}, \] which does not depend on $U$, and coincides with the Born measure.
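To see why the dependence on $U$ drops out, note that the parametrisation $c \mapsto c\ket{\psi}$ scales Euclidean distances in the complex line by $\|\ket{\psi}\|$, and therefore areas by $\langle\psi|\psi\rangle$, so that
\[ \lambda(R_{\psi,U}) = \langle\psi|\psi\rangle\, \lambda(U), \]
and the factor $\lambda(U)$ cancels in the ratio above.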

I think he’s definitely using postulates more arcane than the Born measure itself. He needs not only the rays and their subsets, but also the Lebesgue measure on the complex line, for which his only justification is that it is natural. No argument here, it is very natural, but so is the Born measure! It is just an inner product, and is there anything more natural than taking inner products in a Hilbert space?

To conclude, I’m skeptical that there can even be a satisfactory derivation of the Born rule. It obviously doesn’t follow from the non-probabilistic axioms alone; we need an additional postulate anyway, and it’s hard to think of a more natural one than $\mu(\ket{\psi}) = \langle \psi|\psi\rangle$. Perhaps we should focus instead on showing that such an axiomatization already gives us everything we need? In this vein, there’s this curious paper by Saunders, which doesn’t try to derive the Born rule, but instead carefully argues that the mod-squared amplitudes play all the roles we require from objective probabilities. I find it curious because he never mentions the interpretation of the mod-squared amplitudes as the amounts of worlds with a given outcome, which is what makes the metaphysics intelligible.


15 Responses to Why I am unhappy about all derivations of the Born rule (including mine)

  1. Kelvin says:

We respond to your worry about self-locating uncertainty here:
    https://doi.org/10.1016/j.shpsb.2018.10.003

  2. jay says:

The 2-norm can be singled out among the $p$-norms by requiring that there be non-trivial unitaries: https://arxiv.org/abs/quant-ph/0401062

    This refines Everett’s argument, although one still needs to justify why the measure would depend only on the vector norm and not on its phase.

  3. Mateus Araújo says:

    Kelvin,

No, you don’t. You seem to concede Wallace’s point, which is the same as mine, that self-locating uncertainty is often absent, and focus instead on the situations where there is self-locating uncertainty. But no, I’m interested in when it is absent. Can’t we use probabilities anymore? Will the statistics somehow be different? And you haven’t touched on the main problem, that self-locating uncertainty is explicitly in your head; how on Earth do you connect that with statistics? Why should Nature care about your uncertainty? How does your uncertainty affect the decay of carbon-14 atoms?

Also, I find it rather rude to just post a 20-page paper without even saying how it addresses the problems.

  4. Mateus Araújo says:

    jay,

    Aaronson’s point is that if you want the dynamics of your theory to preserve some $p$-norm, then the only non-trivial dynamics you get are for $p=1$ and $p=2$, where they are given by stochastic and unitary matrices, respectively.

In Everett’s argument the dynamics are already assumed to be unitary, which does single out the 2-norm. That’s fine, 2-norms are quite natural; my complaint is that Everett does not justify this assumption (or even state it explicitly), although it is the essential one for his argument. Also, I find it just as natural to postulate directly that $\mu(\ket{\psi}) = \langle \psi|\psi \rangle$, and doing that has the benefit of sparing us the derivation.
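If you want to see this numerically, here is a small Python sketch (my own illustration, not something from Aaronson’s paper): a random unitary preserves the 2-norm but not, say, the 4-norm, while permutations with phases, the “trivial” dynamics, preserve every $p$-norm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(n):
        # QR of a complex Gaussian matrix yields a Haar-random unitary,
        # after fixing the phases of the diagonal of R.
        z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        q, r = np.linalg.qr(z)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    n = 3
    u = random_unitary(n)
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)

    for p in (2, 4):
        preserved = np.isclose(np.sum(np.abs(u @ psi) ** p),
                               np.sum(np.abs(psi) ** p))
        print(f"{p}-norm preserved by a random unitary: {preserved}")

    # A permutation with arbitrary phases preserves every p-norm: these
    # are the "trivial" dynamics that survive for p other than 1 and 2.
    v = np.eye(n)[[2, 0, 1]] * np.exp(2j * np.pi * rng.random(n))
    print(all(np.isclose(np.sum(np.abs(v @ psi) ** p),
                         np.sum(np.abs(psi) ** p)) for p in (1, 2, 3, 4)))
    ```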

  5. Davi Barros says:

    I’ll dare say one or two words just because this unsettling feeling also haunts me and I’m glad I’m not alone.

I always come back to the same two points when facing this issue:

1 – the chicken-versus-egg issue, or which comes first: p=2 in the probabilistic part, or the unitary nature of the non-probabilistic part of the theory?

2 – what probability is before going into quantum stuff. I’m a guy who is into ontology, and also not a patient enough guy to wait until infinite processes end to draw conclusions. Am I right to be lost in the frequentist-versus-subjectivist duality? Sometimes I feel that in the end one has to go back to Hume and understand the role of induction in all of this.

Honestly, I don’t know which of these is of utmost importance, or whether the solution to this dilemma would involve solving both issues at once.

  6. Mateus Araújo says:

    1 – I don’t see why there should be an ordering. They are different postulates, and both are needed for the theory.

    2 – I did try to make sense of probability before going into quantum stuff here, perhaps it will make you happy. And please, there is no duality with frequentism, frequentism is just a bad trip. The duality is between subjectivist and objectivist probability.

  7. gentzen says:

I once read in Dieter Zeh’s book “Physik ohne Realität: Tiefsinn oder Wahnsinn?” (“Physics without Reality: Profundity or Madness?”) that the many-worlds interpretation has to be “fixed” or “completed”. It was written in a way that gave you the impression that Dieter Zeh knew what that fix was, and that it was a rather “harmless” fix. But despite spending some effort, I never found out what it was that Dieter Zeh actually had in mind.
I appreciated it when Sean Carroll gave some “harmless” additional conditions for his derivation of the Born rule: “We base our derivation of the Born Rule on what we call the Epistemic Separability Principle (ESP), roughly: the probability assigned post-measurement/pre-observation to an outcome of an experiment performed on a specific system shouldn’t depend on the physical state of other parts of the universe (for a more careful discussion see [1]).”

I don’t know which “harmless” additional condition is right or wrong, or whether additional conditions are required at all. And I would know even less how to “convince” others in discussions about the foundations of quantum mechanics, even if I knew the answer. But just as I appreciated it when Sean Carroll openly admitted the need for his “harmless” additional condition, I thoroughly appreciate that you shared your personal thoughts on such additional conditions.

  8. Davi Barros says:

When people start to find natural justifications for the unitary nature of the dynamics from the inner product, or for the p=2 rule from unitarity, a precedence of issues becomes apparent, even if, logically, probability and dynamics are completely different concepts.

  9. Mateus Araújo says:

    Ah, sure, if we’re not trying to argue that one logically follows from the other, it makes sense to talk informally about precedence.

    Historically, at least, the answer is clear: Schrödinger developed his equation, implying unitary dynamics, before Born came up with the probability rule.

Leaving history aside, I think unitarity is anyway more fundamental. It follows from the idea that time evolution is generated by the Hamiltonian through infinitesimal time translations, which is pretty neat, and it makes the theory reversible. Also, the purely unitary part of quantum theory can already make all its deterministic predictions, which is nothing to scoff at.
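As a quick illustration of this textbook fact (a sketch, nothing specific to this discussion): exponentiating $-iHt$ for a Hermitian $H$ always produces a unitary, which preserves inner products and is undone by running time backwards.

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)

    # A random Hermitian "Hamiltonian".
    a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    h = (a + a.conj().T) / 2

    u = expm(-1j * 0.7 * h)  # time evolution for t = 0.7
    print(np.allclose(u.conj().T @ u, np.eye(4)))       # unitary: True
    print(np.allclose(expm(1j * 0.7 * h), u.conj().T))  # reversible: True
    ```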

The other direction seems strange: you’d have to start with a theory with probabilities given by the 2-norm, but without dynamics. I’ve seen people do that for the sake of the unitarity argument, but never for a serious theory that had any purpose in itself.

  10. Lucas Morton says:

    I’m a fan of the Bohmian approach to deriving the Born rule. They have an interesting two-step method:

    1. Show that if the initial probability distribution satisfies the Born rule, then the dynamical evolution will ensure that the final distribution also satisfies the Born rule (this effect is called ‘equivariance’).

    2. Argue that the initial satisfaction of the Born rule is inevitable (this is referred to as the ‘quantum equilibrium hypothesis’).

    I haven’t been persuaded by the arguments I’ve seen for step #2. I think an argument from maximum entropy (a la E. T. Jaynes) would probably cinch the matter though. (Roughly, that we should always assume a maximum entropy distribution unless we have evidence for a more specific distribution, and since we have yet to observe any systematic violations of the predictions made when using this assumption, then we are justified to keep assuming it.)

  11. Mateus Araújo says:

    Yeah, I’m familiar with this argument. It would be very impressive if it worked, but it doesn’t: if you start out of quantum equilibrium, you don’t always end up in it. For example, it is proven that a Gaussian wavepacket will not equilibrate. The only hope is that quantum equilibrium will be achieved in an approximate, coarse-grained way for more complex systems.

    Now, the problem is that Bohmian mechanics out of quantum equilibrium is a very different theory, with faster-than-light signalling and everything. It offends my sensibilities to have all these fireworks only approximately excluded; barring actual experimental evidence that they exist you really need them to be fundamentally out of the theory.
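For reference, the “equivariance” of step 1 is just the quantum continuity equation: the Bohmian velocity field transports the density $|\psi|^2$ into itself,
\[ \partial_t |\psi|^2 + \nabla \cdot \left( |\psi|^2 v_\psi \right) = 0, \qquad v_\psi = \frac{\hbar}{m} \operatorname{Im} \frac{\nabla \psi}{\psi}, \]
so a distribution that starts as $|\psi|^2$ remains $|\psi|^2$ at all times. The hard part is indeed justifying the initial condition.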

  12. Lukas says:

    Hi Mateus,
I’m a chemist but have some interest in quantum theory for computing molecules. I was recently told to take a look at the preprint
https://arxiv.org/abs/1905.03332
about the quantum Born rule. Here, a Russian guy seems to derive this rule, but I’m not good at math. What could you say about this? Is this bullsh***?

    Thanks beforehand,
    Lukas

  13. Mateus Araújo says:

    Yes, it is bullshit. I’m surprised that Proc. Royal Soc. even published this insanity.

  14. Abitbol says:

    Dear Mateus,

What about Gleason’s theorem? It asserts that any map assigning to any closed subspace of a Hilbert space a number between $0$ and $1$ satisfying some probabilistic postulates must be of the usual form we know from quantum mechanics.

So, one can think of it as stating: “if one wants reasonable probabilities, they should be given according to the Born rule”. How does it not answer your requirements?

    Moreover, isn’t your second requirement supposing something contained in the word “often”?

  15. Mateus Araújo says:

    I actually wrote a paragraph on Gleason’s theorem, but in the end did not include it, because it boiled down to saying it’s not a proof of the Born rule, and I wanted to write about things that are actually proofs of the Born rule.

    The point is, Gleason’s theorem doesn’t say anything about what probabilities you get from a specific quantum state. It only says that if you want to extract a set of non-negative numbers that sum to one from a set of projectors that sum to identity, and you want your function to be non-contextual, and the Hilbert space dimension is at least 3, then the only choice you have is to take the trace of the projectors with some density matrix. It doesn’t say anything about which density matrix you should use.
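In symbols, the finite-dimensional statement is: for $\dim \mathcal{H} \geq 3$, any map $\mu$ from projectors to $[0,1]$ such that $\mu(\mathbb{1}) = 1$ and $\mu\left(\sum_i P_i\right) = \sum_i \mu(P_i)$ for mutually orthogonal projectors $P_i$ must be of the form
\[ \mu(P) = \operatorname{tr}(\rho P) \]
for some density matrix $\rho$. The theorem is silent about which $\rho$.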

In other words, it egregiously fails my second criterion. It doesn’t even apply to dimension 2. Even if it did, it couldn’t tell me anything about a specific quantum state. Even if it did, it couldn’t tell me anything about relative frequencies.

    I didn’t understand your second question. I did expand what I meant by “often” a little below.
