The many-worlds interpretation of objective probability

Philosophers really like problems. The more disturbing and confusing, the better. If there’s one criticism you cannot level at them, it is that they are unwilling to tackle the difficult issues. I have argued with philosophers endlessly about Bell’s theorem, the Trolley problem, Newcomb’s paradox, Searle’s Chinese room, the sleeping beauty problem, etc. So I was very surprised when I asked a couple of philosophers about objective probability and found them strangely coy about it. The argument went along the lines of “objective probability is frequentism, frequentism is nonsense, subjective probability makes perfect sense, so there’s only subjective probability”.

Which is a really bizarre argument. Yes, frequentism is nonsense, and yes, subjective probability makes perfect sense. But that’s all that is true about it. No, objective probability is not the same thing as frequentism, and no, subjective probability is not the only probability that exists. Come on, that’s denying the premise! The question is interesting precisely because we strongly believe that objective probability exists, either because of quantum mechanics or, more directly, from the observation of radioactive decay. Does anybody seriously believe that whether some atom decays or not depends on the opinion of an agent? There even existed natural nuclear reactors, where chain reactions occurred long before any agent existed to wonder about them.

In any case, it seems that philosophers won’t do anything about it. What can we say about objective probability, though? It is easy to come up with some desiderata: it should be objective, to start with. The probability of some radioactive atom decaying should just be a property of the atom, not a property of some agent betting on it. Agents and bets are still important, though, as it should make sense to bet according to the objective probabilities. In other words, Lewis’ Principal Principle should hold: rational agents should set their subjective probabilities equal to the objective probabilities, if the latter are known[1]. Last but not least, objective probabilities should be connected to relative frequencies via the law of large numbers, that is, we need that
\[ \text{Pr}(|f_N-p|\ge\varepsilon) \le 2e^{-2N\varepsilon^2}, \] or, in words, the (multi-trial) probability that the frequency deviates more than $\varepsilon$ from the (single-trial) probability after $N$ trials goes down exponentially with $\varepsilon$ and $N$[2].
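As a purely illustrative check (not part of the argument), here is a small Python sketch that samples ordinary Bernoulli trials with a known single-trial probability and compares the estimated deviation probability with the bound $2e^{-2N\varepsilon^2}$ above; the function name and the parameter values are mine and arbitrary.

```python
import math
import random

def deviation_probability(p, N, eps, runs=20_000):
    """Estimate Pr(|f_N - p| >= eps) for N independent trials with probability p."""
    hits = 0
    for _ in range(runs):
        f_N = sum(random.random() < p for _ in range(N)) / N
        if abs(f_N - p) >= eps:
            hits += 1
    return hits / runs

p, N, eps = 0.3, 200, 0.1
print("estimated deviation probability:", deviation_probability(p, N, eps))
print("bound 2*exp(-2*N*eps**2):       ", 2 * math.exp(-2 * N * eps**2))
```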

I think it is also easy to come up with a definition of objective probability that fulfills these desiderata, if we model objectively random processes as deterministic branching processes. Let’s say we are interested in the decay of an atom. Instead of saying that it either decays or not, we say that the world branches into several new worlds, in some of which the atom decays and in some of which it does not. Moreover, we say that we can somehow count the worlds, that is, that we can attribute a measure $\mu(E)$ to the set of worlds where event $E$ happens and a measure $\mu(\neg E)$ to the set of worlds where event $\neg E$ happens. Then we say that the objective probability of $E$ is
\[p(E) = \frac{\mu(E)}{\mu(E)+\mu(\neg E)}.\] Now, before you dismiss this as nonsense on the grounds that the Many-Worlds interpretation is false, so we shouldn’t consider branching, let me introduce a toy theory where this deterministic branching is literally true by fiat. In this way we can separate the question of whether the Many-Worlds interpretation is true from the question of whether deterministic branching explains objective probability.

This toy theory was introduced by Adrian Kent to argue that probability makes no sense in the Many-Worlds interpretation. Well, I think it is a great illustration of how probability actually makes perfect sense. It goes like this: the universe is a deterministic computer simulation[3] in which some agents live. In this universe there is a wall with two lamps, and below each lamp a display that shows a non-negative integer. The wall also has a “play” button that, when pressed, makes one of the lamps light up.

[Figure: Kent's universe]

The agents there can’t really predict which lamp will light up, but they have learned two things about how the wall works. The first is that if the number below a lamp is zero, that lamp never lights up. The second is that if the numbers are set to $n_L$ and $n_R$, respectively, and they press “play” multiple times, the fraction of times the left lamp lights up is often close to $n_L/(n_L+n_R)$.

What is going on, of course, is that when “play” is pressed the whole computer simulation is deleted and $n_L+n_R$ new ones are initiated, $n_L$ with the left lamp lit, and $n_R$ with the right lamp lit. My proposal is to define the objective probability of some event as the proportion of simulations where this event happens, as this quantity fulfills all our desiderata for objective probability.
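To make the branching rule concrete, here is a minimal Python sketch of it (my own illustration, not code from Kent’s paper; the names `press_play` and `objective_probability` are made up):

```python
# Each simulation is represented by the tuple of lamp outcomes it has seen so
# far. Pressing "play" deletes every current simulation and starts n_L
# successors where the left lamp lit and n_R successors where the right lamp lit.

def press_play(simulations, n_L, n_R):
    """Replace each simulation with n_L 'left' successors and n_R 'right' successors."""
    return [history + (lamp,)
            for history in simulations
            for lamp in ("L",) * n_L + ("R",) * n_R]

def objective_probability(simulations, event):
    """Proportion of simulations in which `event` (a predicate on histories) holds."""
    return sum(event(h) for h in simulations) / len(simulations)

worlds = [()]                                  # one initial simulation, no history yet
worlds = press_play(worlds, n_L=2, n_R=3)      # press "play" once
print(objective_probability(worlds, lambda h: h[-1] == "L"))   # 2/5 = 0.4
```

With these (arbitrary) values, a single press produces five successor simulations, two of which have the left lamp lit, so the proportion is $2/5$.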

This clearly fulfills the “objectivity” desideratum, as a proportion of simulations is a property of the world, not some agent’s opinion. It also respects the “law of large numbers” desideratum. To see that, first notice that for a single trial the proportion of simulations where the left lamp lights up is
\[p(L) = \frac{n_L}{n_L+n_R}.\] Now the number of simulations where the left lamp lights up $k$ times out of $N$ trials is
\[ {N \choose k}n_L^kn_R^{N-k},\] so if we divide by the total number of simulations, $(n_L+n_R)^N$, we see that the proportion of simulations where the left lamp lit $k$ times out of $N$ is given by \[\text{Pr}(N,k) = {N \choose k}p(L)^k(1-p(L))^{N-k}.\] Since this is formally identical to the binomial distribution, it allows us to prove a theorem formally identical to the law of large numbers:
\[ \text{Pr}(|k/N-p(L)|\ge\varepsilon) \le 2e^{-2N\varepsilon^2}, \]which says that the (multi-trial) proportion of simulations where the frequency deviates more than $\varepsilon$ from the (single-trial) proportion of simulations after $N$ trials goes down exponentially with $\varepsilon$ and $N$.
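For small $N$ this counting argument can be checked by brute force, since there are only $(n_L+n_R)^N$ simulations to enumerate. A sketch along the lines of the previous one (again my own illustration, with arbitrary parameter values):

```python
# Enumerate all (n_L+n_R)**N simulations after N presses, count how many saw
# the left lamp light up k times, and compare with C(N,k) * n_L**k * n_R**(N-k)
# and with the binomial proportions Pr(N,k).
from math import comb

def press_play(simulations, n_L, n_R):
    """Same branching rule as in the previous sketch."""
    return [h + (lamp,) for h in simulations for lamp in ("L",) * n_L + ("R",) * n_R]

n_L, n_R, N = 2, 3, 6
worlds = [()]
for _ in range(N):
    worlds = press_play(worlds, n_L, n_R)

p = n_L / (n_L + n_R)
total = (n_L + n_R) ** N                       # 5**6 = 15625 simulations
for k in range(N + 1):
    count = sum(1 for h in worlds if h.count("L") == k)
    assert count == comb(N, k) * n_L**k * n_R**(N - k)
    print(k, count / total, comb(N, k) * p**k * (1 - p)**(N - k))
```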

Last but not least, to see that it fulfills the “Principal Principle” desideratum, we need to use the decision-theoretic definition of subjective probability: the subjective probability $s(L)$ of an event $L$ is the highest price a rational agent should pay to play a game where they receive $1$€ if event $L$ happens and nothing otherwise. In the $n_L$ simulations where the left lamp lit, the agent ends up with $(1-s(L))$ euros, and in the $n_R$ simulations where the right lamp lit, the agent ends up with $-s(L)$ euros. If the agent cares equally about all their future selves, they should agree to pay $s(L)$ as long as \[(1-s(L))n_L-s(L)n_R \ge 0,\] which translates to \[s(L) \le \frac{n_L}{n_L+n_R},\] so indeed the agent should bet according to the objective probability if they know $n_L$ and $n_R$[4].
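As a concrete illustration, with the same hypothetical values $n_L=2$ and $n_R=3$ as in the sketches above, the break-even price is $2/5$; the snippet below just evaluates the inequality, nothing more:

```python
# Sketch of the betting argument: an agent who values all of its future copies
# equally should pay at most the price s at which the summed payoff across
# branches, (1 - s)*n_L - s*n_R, is still non-negative.

def total_payoff(s, n_L, n_R):
    """Summed payoff over the n_L + n_R successor simulations when paying price s."""
    return (1 - s) * n_L - s * n_R

n_L, n_R = 2, 3
break_even = n_L / (n_L + n_R)                    # 0.4, the objective probability
print(total_payoff(break_even, n_L, n_R))         # 0.0 at the break-even price
print(total_payoff(break_even + 0.1, n_L, n_R))   # negative: paying more is a bad deal
print(total_payoff(break_even - 0.1, n_L, n_R))   # positive: paying less is a good deal
```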

And this is it. Since it fulfills all our desiderata, I claim that deterministic branching does explain objective probability. Furthermore, it is the only coherent explanation I know of. It is hard to argue that nobody will ever come up with a single-world notion of objective probability that makes sense, but on at least one point such a notion will always be unsatisfactory: why would something be in principle impossible to predict? Current answers are limited to saying that quantum mechanics says so, or that if we could predict the result of a measurement we would run into trouble with Bell’s theorem. But that’s not really an explanation; it’s just saying that there is no alternative. Deterministic branching theories do offer an explanation, though: you cannot predict which outcome will happen because all of them will.

Now the interesting question is whether this argument applies to the actual Many-Worlds interpretation, and whether we can get a coherent definition of objective probability there. The short answer is that it’s complicated. The long answer is the paper I wrote about it =)
