The many-worlds interpretation of objective probability

Philosophers really like problems. The more disturbing and confusing the better. If there’s one criticism you cannot level at them, it’s that they are unwilling to tackle the difficult issues. I have argued with philosophers endlessly about Bell’s theorem, the Trolley problem, Newcomb’s paradox, Searle’s Chinese room, the sleeping beauty problem, etc. Which made me very surprised when I asked a couple of philosophers about objective probability, and found them strangely coy about it. The argument went along the lines of “objective probability is frequentism, frequentism is nonsense, subjective probability makes perfect sense, there’s only subjective probability”.

Which is a really bizarre argument. Yes, frequentism is nonsense, and yes, subjective probability makes perfect sense. But that’s all that is true about it. No, objective probability is not the same thing as frequentism, and no, subjective probability is not the only probability that exists. Come on, that’s denying the premise! The question is interesting precisely because we strongly believe that objective probability exists; either because of quantum mechanics, or more directly from the observation of radioactive decay. Does anybody seriously believe that whether some atom decays or not depends on the opinion of an agent? There even existed natural nuclear reactors, where chain reactions occurred long before any agent existed to wonder about them.

In any case, it seems that philosophers won’t do anything about it. What can we say about objective probability, though? It is easy to come up with some desiderata: it should be objective, to start with. The probability of some radioactive atom decaying should just be a property of the atom, not a property of some agent betting on it. Agents and bets are still important, though, as it should make sense to bet according to the objective probabilities. In other words, Lewis’ Principal Principle should hold: rational agents should set their subjective probabilities equal to the objective probabilities, if the latter are known1. Last but not least, objective probabilities should be connected to relative frequencies via the law of large numbers, that is, we need that
\[ \text{Pr}(|f_N-p|\ge\varepsilon) \le 2e^{-2N\varepsilon^2}, \] or, in words, the (multi-trial) probability that the frequency deviates more than $\varepsilon$ from the (single-trial) probability after $N$ trials goes down exponentially with $\varepsilon$ and $N$ 2.
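To get a feel for the numbers (the choice of $N$ and $\varepsilon$ here is just my own illustration, not part of the argument): with $N=1000$ trials and $\varepsilon=0.05$ the bound gives \[ \text{Pr}(|f_N-p|\ge 0.05) \le 2e^{-2\cdot 1000\cdot 0.05^2} = 2e^{-5} \approx 0.013, \] so already a thousand trials make a deviation of more than five percentage points from the single-trial probability quite unlikely.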

I think it is also easy to come up with a definition of objective probability that fulfills these desiderata, if we model objectively random processes as deterministic branching processes. Let’s say we are interested in the decay of an atom. Instead of saying that it either decays or not, we say that the world branches into several new worlds, in some of which the atom decays, and in some of which it does not. Moreover, we say that we can somehow count the worlds, that is, that we can attribute a measure $\mu(E)$ to the set of worlds where event $E$ happens and a measure $\mu(\neg E)$ to the set of worlds where event $\neg E$ happens. Then we say that the objective probability of $E$ is
\[p(E) = \frac{\mu(E)}{\mu(E)+\mu(\neg E)}.\] Now, before you dismiss this as nonsense on the grounds that the Many-Worlds interpretation is false, and that therefore we shouldn’t consider branching, let me introduce a toy theory where this deterministic branching is literally true by fiat. In this way we can separate the question of whether the Many-Worlds interpretation is true from the question of whether deterministic branching explains objective probability.

This toy theory was introduced by Adrian Kent to argue that probability makes no sense in the Many-Worlds interpretation. Well, I think it is a great illustration of how probability actually makes perfect sense. It goes like this: the universe is a deterministic computer simulation3 where some agents live. In this universe there is a wall with two lamps, and below each a display that shows a non-negative integer. This wall also has a “play” button which, when pressed, makes one of the lamps light up.

[Figure: Kent's universe]

The agents there can’t really predict which lamp will light up, but they have learned two things about how the wall works. The first is that if the number below a lamp is zero, that lamp never lights up. The second is that if the numbers below the left and right lamps are set to $n_L$ and $n_R$, respectively, and they press “play” multiple times, the fraction of times where the left lamp lights up is often close to $n_L/(n_L+n_R)$.

What is going on, of course, is that when “play” is pressed the whole computer simulation is deleted and $n_L+n_R$ new ones are initiated, $n_L$ with the left lamp lit, and $n_R$ with the right lamp lit. My proposal is to define the objective probability of some event as the proportion of simulations where this event happens, as this quantity fulfills all our desiderata for objective probability.
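Since Kent’s universe is finite and discrete, this can be made completely concrete by enumerating the simulations. Below is a minimal sketch of my own (the function names `branch` and `objective_probability` are mine, not Kent’s): it lists every simulation that exists after $N$ presses of “play” and computes the proportion in which a given event happens.

```python
from itertools import product

def branch(n_L, n_R, N):
    """Enumerate all (n_L + n_R)**N simulations that exist after pressing
    "play" N times. Each simulation is a string of 'L's and 'R's recording
    which lamp lit on each press."""
    # On every press, each existing simulation is deleted and replaced by
    # n_L copies in which the left lamp lit and n_R copies in which the
    # right lamp lit.
    single_press = ['L'] * n_L + ['R'] * n_R
    return [''.join(history) for history in product(single_press, repeat=N)]

def objective_probability(simulations, event):
    """Proportion of simulations in which `event` (a predicate on a
    simulation's history) happens."""
    return sum(event(history) for history in simulations) / len(simulations)

if __name__ == "__main__":
    n_L, n_R, N = 2, 3, 8     # small numbers: (2 + 3)**8 = 390625 simulations
    sims = branch(n_L, n_R, N)
    p_L = n_L / (n_L + n_R)   # 0.4
    # Single-trial event: the left lamp lit on the first press.
    print(objective_probability(sims, lambda h: h[0] == 'L'))    # exactly 0.4
    # Multi-trial event: the frequency of 'L' is within 0.2 of p_L.
    print(objective_probability(sims, lambda h: abs(h.count('L') / N - p_L) < 0.2))
```

With $n_L=2$ and $n_R=3$ the proportion of simulations in which the left lamp lit on the first press comes out to exactly $2/5$, as the definition requires.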

This clearly fulfills the “objectivity” desideratum, as a proportion of simulations is a property of the world, not some agent’s opinion. It also respects the “law of large numbers” desideratum. To see that, first notice that for a single trial the proportion of simulations where the left lamp lights up is
\[p(L) = \frac{n_L}{n_L+n_R}.\] Now the number of simulations where the left lamp lights up $k$ times out of $N$ trials is
\[ {N \choose k}n_L^kn_R^{N-k},\] so if we divide by the total number of simulations, $(n_L+n_R)^N$, we see that the proportion of simulations where the left lamp lit $k$ times out of $N$ is given by \[\text{Pr}(N,k) = {N \choose k}p(L)^k(1-p(L))^{N-k}.\] Since this is formally identical to the binomial distribution, it allows us to prove a theorem formally identical to the law of large numbers:
\[ \text{Pr}(|k/N-p(L)|\ge\varepsilon) \le 2e^{-2N\varepsilon^2}, \] which says that the (multi-trial) proportion of simulations where the frequency deviates more than $\varepsilon$ from the (single-trial) proportion of simulations after $N$ trials goes down exponentially with $\varepsilon$ and $N$.
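As a sanity check, here is a small script of my own (not from the post) that computes the left-hand side directly from the counting formula above and compares it with the $2e^{-2N\varepsilon^2}$ bound:

```python
from math import comb, exp

def deviation_proportion(n_L, n_R, N, eps):
    """Proportion of the (n_L + n_R)**N simulations in which the frequency
    k/N of the left lamp deviates from p(L) by at least eps."""
    p_L = n_L / (n_L + n_R)
    deviating = sum(comb(N, k) * n_L**k * n_R**(N - k)
                    for k in range(N + 1)
                    if abs(k / N - p_L) >= eps)
    return deviating / (n_L + n_R)**N

if __name__ == "__main__":
    n_L, n_R, eps = 2, 3, 0.1
    for N in (10, 100, 1000):
        exact = deviation_proportion(n_L, n_R, N, eps)
        bound = 2 * exp(-2 * N * eps**2)
        print(f"N={N:5d}  exact={exact:.2e}  bound={bound:.2e}  ok={exact <= bound}")
```

The exact proportion always stays below the bound and falls off quickly with $N$, as the inequality promises.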

Last but not least, to see that it fulfills the “Principal Principle” desideratum, we need to use the decision-theoretic definition of subjective probability: the subjective probability $s(L)$ of an event $L$ is the highest price a rational agent should pay to play a game where they receive $1$€ if event $L$ happens and nothing otherwise. In the $n_L$ simulations where the left lamp lit the agent ends up with $(1-s(L))$ euros, and in the $n_R$ simulations where the right lamp lit the agent ends up with $-s(L)$ euros. If the agent cares equally about all their future selves, they should agree to pay $s(L)$ as long as \[(1-s(L))n_L-s(L)n_R \ge 0,\] which translates to \[s(L) \le \frac{n_L}{n_L+n_R},\] so indeed the agent should bet according to the objective probability if they know $n_L$ and $n_R$4.
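The same break-even calculation in a few lines of code (again my own illustration, using exact fractions and $n_L=2$, $n_R=3$): the payoff summed over the successor simulations is zero exactly at the price $n_L/(n_L+n_R)$ and negative for any higher price.

```python
from fractions import Fraction

def total_payoff(price, n_L, n_R):
    """Payoff summed over all successor simulations when the agent pays
    `price` for a ticket worth 1 euro if the left lamp lights up."""
    return (1 - price) * n_L + (0 - price) * n_R

n_L, n_R = 2, 3
fair = Fraction(n_L, n_L + n_R)                         # 2/5
print(total_payoff(fair, n_L, n_R))                     #  0   -> break-even
print(total_payoff(fair + Fraction(1, 10), n_L, n_R))   # -1/2 -> a losing bet
```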

And this is it. Since it fulfills all our desiderata, I claim that deterministic branching does explain objective probability. Furthermore, it is the only coherent explanation I know of. It is hard to argue that nobody will ever come up with a single-world notion of objective probability that makes sense, but on at least one point such a notion will always be unsatisfactory: why would something be in principle impossible to predict? Current answers are limited to saying that quantum mechanics says so, or that if we could predict the result of a measurement we would run into trouble with Bell’s theorem. But that’s not really an explanation, it’s just saying that there is no alternative. Deterministic branching theories do offer an explanation, though: you cannot predict which outcome will happen because all of them will.

Now the interesting question is whether this argument applies to the actual Many-Worlds interpretation, and we can get a coherent definition of objective probability there. The short answer is that it’s complicated. The long answer is the paper I wrote about it =)


8 Responses to The many-worlds interpretation of objective probability

  1. Jacques Pienaar says:

    Nice post, but I can’t let you get away with straw-manning the subjective Bayesians. A subjective Bayesian does not “believe that whether some atom decays or not depends on the opinion of an agent”. It’s the other way around: a subjective Bayesian’s opinions about the atom (i.e. about their future interactions with the atom or ones like it) depend on whether the atom decays. When you put it in the correct order, it sounds fairly reasonable.

    A subjective Bayesian might also agree that nuclear reactions happened before agents were around. They would merely disagree that there were any probabilities around at that time. Any probabilities assigned to those events would just represent an agent’s beliefs right now about what happened back then.

    A question: when you reject the idea of things being in principle unpredictable, do you really mean non-deterministic? Because there are reasons why things might be in-principle unpredictable even in a deterministic universe, eg, physical constraints on the computing power of the would-be predictor.

  2. Mateus Araújo says:

    Hi Jacques,

    Thanks for your comment. I don’t think I’m straw-manning the subjective Bayesians; I wrote “Does anybody seriously believe that whether some atom decays or not depends on the opinion of an agent?”, a rhetorical question implying that I don’t think anybody does. But you are saying that there were no probabilities around at that time. What was around then? The atoms were decaying or not, but not probabilistically? How then?

    And I don’t reject the idea of things being in principle unpredictable; on the contrary, I’m saying that they are in principle unpredictable, and I want to know why. I’m struggling to make your idea of in-principle unpredictability due to computational limitations more concrete. Are you suggesting that there would be an algorithm to predict whether a certain atom will decay in the next hour, but that it would be in principle impossible to run this algorithm? Why? Couldn’t I just wait longer, and add more memory to the computer? It is immaterial whether the algorithm would only finish after the hour had passed; since it is deterministic, the answer couldn’t be influenced by whether the atom in fact decayed.

  3. Jacques Pienaar says:

    Maybe an analogy would help. Imagine you said “the ocean smells nice” and someone asked you “did it smell nice before anyone was around to smell it?”. The answer depends on whether you think “smelling nice” is a property of the ocean in itself, or is relative to someone or something with the ability to smell and to categorize smells as nice or stinky. If it is relative, then it is hard to see how one could talk about the smell of things prior to the existence of smelling organisms or people. You could do it via a counterfactual, i.e. if someone had been there, the ocean would have smelled nice to them — but you have to posit someone who is doing the smelling. The subjective Bayesian views claims like “X is more likely” as on the same footing as “X smells nice”. The statement is objective insofar as it is widely agreed upon and independent of our wishes. Notice that you can’t make poo smell good by any effort of the will; nor can you make yourself think it unlikely to rain when you see ominous grey clouds filling the sky. Probability is a relative property of how the world appears to beings that have a “nose for uncertainty”.

    About the physical limits of predictors, keep in mind that you don’t necessarily have all the time you need — eventually the universe will either thermalize, freeze or collapse. Another more speculative limitation would arise if some laws of physics are uncomputable, i.e. if being able to predict the state following any initial condition would require a computer that could solve the halting problem. In either case, these limits on prediction don’t exclude determinism per se, so the concepts are distinct.

  4. Mateus Araújo says:

    The smell example is quite a useful one; I grant you that it is subjective, but it is based on something that was there before the agent came to feel it, namely whatever odor molecules were in the air. Subjective probabilities are subjective (duh), but what I’m interested in is what was there before an agent with a “nose for uncertainty” came along to assign them.

    To put it another way: in those natural nuclear reactors one can ascertain that chain reactions occurred when the concentration of uranium got pretty close to the critical mass. Afterwards, the fraction of radioactive isotopes that did decay is pretty close to what their half-life predicts. Why did this happen? How did this happen? We both agree that it doesn’t make sense to say that the critical mass and the half-life are what they are because the subjective probability is what it is. But then what determines the critical mass or half-life? What do their numbers even mean?

    As for the limited predictor, I’m still very skeptical that you can make a remotely plausible theory out of it. If you couldn’t predict whether an atom would decay because the computation would take longer than the lifetime of the universe, then somehow the decay law would need to depend on the lifetime of the universe. And you would have a bunch of atoms decaying, each of which somehow encodes such a difficult problem, and the problems would need to be so distributed that the results turn out to approximate very well the half-life of that element?

    Its computability version doesn’t really help either. Keep in mind that it is still a matter of fact whether a given program will halt or not; the problem being uncomputable merely means that no single algorithm can decide it for every program, not that particular cases can’t be settled. The prototypical example, the program that lists all proofs in ZF and halts if it finds a proof of its consistency, will obviously never halt.

  5. Jacques Pienaar says:

    “…it is based on something that was there before the agent came to feel it…”
    If you keep insisting that things have properties before they are measured, you’re going to run into trouble when you get to EPR and Bell! A QBist might be willing to say that something existed before experience, but all substantive claims must be understood as shorthand for anticipations about future experiences conditioned on past ones. Saying there were molecules around with a certain shape before humans is, to a QBist, just a short-hand for the expectation that, eg, if you dig up samples from mines or ice cores, then you will find such molecules right there next to the dinosaur bones (or something like that). When you say “the moon is there when I don’t look at it”, you’re really saying that you expect to see it there whenever you do look. To ask “what is there when nobody is looking” is as much a non-sequitur to the QBist as asking “which outcome really happens” to a many-worlds-er. Measuring and happening are inextricable from each other.

    So the “half-life of a Uranium atom” is a statement about my expectation that a given atom will decay in a given time interval, or that an atom taken at random from a sample will be found to have already decayed. You ask: how do I explain this? For a QBist, explaining the half-life of Uranium means explaining why you have come to believe that Uranium atoms will decay as a certain function of time (characterized by that half-life). The explanation means giving an account of how you have updated your beliefs in light of new evidence, starting from some prior, and how your belief is consistent with other beliefs that you hold. The whole mode of `explanation’ for a Bayesian focuses on the inter-relations and coherence between expectations conditioned on actual measurement events. Models of the world often play a role in this, as short-hand that captures a whole bundle of expectations about hypothetical measurements, but these models are never mistaken as descriptions of “the world in itself”.

    You might say that doesn’t account for why the atom’s decay is spontaneous and random. But for a QBist, spontaneity and “intrinsic uncertainty” of the world is an axiom, not something to be explained. Indeed, to a QBist, “the world” refers to that part of experience that doesn’t conform to our wishes and that defies our expectations. If the world were not spontaneous, we wouldn’t call it “the world”! A QBist takes for granted that the outcomes of our interactions with the world are intrinsically random, and this applies as well to observations of Uranium atoms. What is of interest is the non-trivial inferences we can make about them, using models based on accumulated experience of interactions with such atoms.

    About the limits of prediction, I think we’re talking past each other. My original question was motivated by what you said was an unsatisfactory feature of single-world objective probability interpretations, namely, that they don’t explain why something would be “in principle impossible to predict”. I just wanted to know if you really meant “in principle non-deterministic”. I’m not defending any thesis about the plausibility of physical predictors. I’m just pointing out that “determinism” and “unpredictability” are distinct concepts. The extent to which they coincide is only the extent to which you believe a “perfect predictor” is physically realizable within a deterministic universe. Maybe you think it is realizable, which is fine, but you could save yourself a lot of arguments with people who don’t think so, just by talking about whether the world is “deterministic”, if indeed that’s what you really meant.

  6. Mateus Araújo says:

    “If you keep insisting that things have properties before they are measured, you’re going to run into trouble when you get to EPR and Bell!”

    No, I won’t. This is a common misconception. Nothing about EPR and Bell disproves the idea of an objective reality. You can’t have determinism or local causality, in the most common interpretations, but that’s it.

    I’m afraid you might find this a bit rude, but I don’t see any explanation in the three paragraphs you wrote stating the QBist position. Yes, I know that the QBists have this deeply agent-centric view of the world. I’m not interested in that. I’m interested in what is actually happening, independently of any agent’s expectations. If QBism can’t provide such an objective explanation of what is going on in atomic decay, then I’m just not interested.

    About the limits of prediction, no, I really meant “in principle unpredictable”, not “in principle non-deterministic”. If you could actually pull off a deterministic theory that has in principle unpredictable outcomes in a single world, I would be satisfied. If you could pull off an explanation of in principle non-determinism in a single world, I would also be satisfied.

  7. Jacques Pienaar says:

    I don’t think it’s rude of you to dismiss my account of “explanation” — I expected you would. Besides, philosophers disagree about what it means, so we might as well join in the fun :).

    I did think it was rude to accuse me of making a “common misconception” about Bell’s theorem. I think you slipped up in equating my phrase “things have properties before they are measured” with the weaker notion of “objective reality”, by which I guess you mean any model that purports to represent the world as independent and external to us. I was not by any means claiming that Bell/EPR causes trouble for such a notion (much less that it “disproves” anything! Quit putting words in my mouth). What I meant was: if you assume there is some local property or `propensity’ of a system prior to its measurement that serves to determine the probabilities of the measurement outcomes, you will encounter “trouble”, by which I mean non-locality.

    Possibly with all your talk about `what is actually there before the agent comes along’ you instead meant something like the wavefunction of the universe, as in many-worlds. My critique of that interpretation is that it hardly does a good job of telling you `what is actually happening’. As far as I understand it, unitary evolution of the universal wavefunction doesn’t describe anything actually happening, except maybe in our minds. What does actually happen is that we hear a detector go `click’. But maybe I am just talking past you again, because as with `explanation’, you won’t agree with me about what it means for something “to actually happen”…

  8. Mateus Araújo says:

    Ah, ok, so you simply meant non-locality. I find it much less troubling than things not having properties before they are measured. And as you know, in Many-Worlds violation of a Bell inequality does not imply non-locality.

    Also, I’m not equating “what is actually there before the agent comes along” with a universal wavefunction, even though the latter does a good job of providing an agent-free description. Agent-free descriptions were the norm in science for centuries before quantum mechanics came along, so we definitely know what this concept means independently of quantum mechanics. Note also that the subject of this blog post is how a decidedly non-quantum theory, Kent’s universe, can account for objective probability.
