The sleeping beauty problem: a foray into experimental metaphysics

One of the most intriguing consequences of Bell’s theorem is the idea that one can do experimental metaphysics: to take some eminently metaphysical concepts such as determinism, causality, and free will, and extract from them actual experimental predictions, which can be tested in the laboratory. The results of said tests can then be debated forever without ever deciding the original metaphysical question.

It was with such ideas in mind that I learned about the Sleeping Beauty problem, so I immediately thought: why not simply do an experimental test to solve the problem?

The setup is as follows: you are the Sleeping Beauty, and today is Sunday. I’m going to flip a coin and hide the result from you. If the coin falls on heads, I’m going to give you a sleeping pill that will make you sleep until Monday, and terminate the experiment after you wake up. If it falls on tails instead, I’m also going to give you the pill that makes you sleep until Monday, but after your awakening I’m going to give you a second pill that erases your memory and makes you sleep until Tuesday. At each awakening I’m going to ask you: what is the probability[1] that the coin fell on tails?

There are two positions usually defended by philosophers:

  1. $p(T) = 1/2$. This is defended by Lewis and Bostrom, roughly because before going to sleep the probability was assumed to be one half (i.e. that the coin is fair), and by waking up you do not learn anything you didn’t know before, so the probability should not change.
  2. $p(T) = 2/3$. This is defended by Elga and Bostrom, roughly because the three possible awakenings (heads on Monday, tails on Monday, and tails on Tuesday) are indistinguishable from your point of view, so you should assign all of them the same probability. Since in two of them the coin has fallen on tails, the probability of tails must be two-thirds (the counting is sketched in code below).
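
The counting behind the $2/3$ answer is easy to check by brute force: flip a fair coin many times, generate one awakening for heads and two for tails, and see which fraction of the resulting awakenings happen under tails. Whether awakening-counting is the right way to operationalise the question is of course exactly what is under dispute; the sketch below (Python, purely for illustration, with names of my own choosing) only shows that the counting itself comes out as two thirds.

```python
import random

def fraction_of_tails_awakenings(runs=100_000, seed=0):
    """Simulate the protocol with a fair coin and count awakenings.
    Heads produces one awakening (Monday); tails produces two
    (Monday and Tuesday, the second with erased memory)."""
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(runs):
        if rng.random() < 0.5:      # heads: a single awakening
            total_awakenings += 1
        else:                       # tails: two awakenings
            total_awakenings += 2
            tails_awakenings += 2
    return tails_awakenings / total_awakenings

print(fraction_of_tails_awakenings())   # ~0.667
```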

Well, seems like the perfect question to answer experimentally, no? Give drugs to people, and ask them to bet on the coin being heads or tails. See who wins more money, and we’ll know who is right! There are, however, two problems with this experiment. The first is that it is not so easy to erase people’s memories. Hitting them hard on the head or giving them enough alcohol usually does the trick, but it doesn’t work reliably, and I don’t know where I could find volunteers who thought the experiment was worth the side effects (brain clots or a massive hangover). And, frankly, even if I did find volunteers (maybe overenthusiastic philosophy students?), these methods are just too grisly for my taste.

Luckily a colleague of mine (Marie-Christine) found an easy solution: just demand that people place their bets in advance. Since they are not supposed to be able to know which of the three awakenings they are in, it makes no sense for them to bet differently in different awakenings (in fact, they should even be unable to bet differently in different awakenings without access to a random number generator; whether they have one in their brains is another question). So if you decide to bet on heads, and then “awake” on Tuesday, too bad, you have to make the bad bet anyway.

With that solved, we get to the second problem: it is not rational to ever bet on heads. If you believe that the probability is $1/2$ you should be indifferent between heads and tails, and if you believe that the probability is $2/3$ you should definitely bet on tails. In fact, if you believe that the probability is $1/2$ but have even the slightest doubt that your reasoning is correct, you should bet on tails anyway just to be on the safe side.

This problem is easily solved by biasing the coin a bit towards heads, such that the probability of heads (if you believe in $1/2$) is now slightly above one half, while the probability of tails (if you believe in $2/3$) is still above one half. To calculate the exact numbers we use a neat little formula from Sebens and Carroll, which says that the probability that you are the observer labelled $i$ within a set of observers with identical subjective experiences is
\[ p(i) = \frac{w_i}{\sum_j w_j}, \]
where $w_i$ is the Born-rule weight of your situation, and the $w_j$ are the Born-rule weights of all observers in the subjectively-indistinguishable situation.
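
In code, the formula is little more than a normalisation; here is a trivial sketch (the function name and the toy weights are mine, just to fix the notation):

```python
def sebens_carroll(weights, i):
    """Probability of being the observer labelled i, given the Born-rule
    weights of all observers in subjectively indistinguishable situations."""
    return weights[i] / sum(weights.values())

# Toy example with two indistinguishable observers:
print(sebens_carroll({'a': 0.3, 'b': 0.7}, 'a'))   # 0.3
```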

Let’s say that the coin has an (objective, quantum, given by the Born rule) probability $p$ of falling on heads. The probability of being one of the tail observers is then the sum of the Born-rule weight of the Monday tail observer (which is simply $1-p$) and the Born-rule weight of the Tuesday tail observer (also $1-p$), divided by the sum of the Born-rule weights of all three observers ($1-p$, $1-p$, and $p$), so
\[ p(T) = \frac{2(1-p)}{2(1-p) + p}.\]
For elegance, let’s make this probability equal to the objective probability of the coin falling on heads, so that both sides of the philosophical dispute will bet on their preferred solution with the same odds. Solving $p = (2-2p)/(2-p)$, which is just the quadratic $p^2-4p+2=0$, and taking the root that lies between 0 and 1, then gives us
\[ p = 2-\sqrt{2} \approx 0.58,\]
which makes the problem quantum, and thus on topic for this blog, since it features the magical $\sqrt2$.[2]
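
If you don’t trust the algebra, the check takes a few lines; this is only the equation above rewritten in code (the function name p_tails is mine):

```python
from math import sqrt

def p_tails(p):
    """Thirder (Sebens-Carroll) probability of tails at an awakening,
    for a coin with objective probability p of falling on heads."""
    return 2 * (1 - p) / (2 * (1 - p) + p)

# p = (2 - 2p)/(2 - p) is equivalent to p^2 - 4p + 2 = 0,
# whose root between 0 and 1 is 2 - sqrt(2).
p = 2 - sqrt(2)
print(p)             # 0.5857...
print(p_tails(p))    # the same number: p is indeed a fixed point
print(p_tails(0.5))  # 0.6666..., recovering the fair-coin thirder answer
```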

With all this in hand, time to do the experiment. I gathered 17 impatient, hungry physicists in a room, and after explaining all of this to them, I asked them to bet on either heads or tails. The deal was that the bet was a commitment to buy, in each awakening, a ticket that would pay them 1€ in case they were right. Since the betting odds were set to be $0.58$, the price of each ticket was 0.58€.

After each physicist committed to a bet, I ran my biased quantum random number generator (actually just the function rand from Octave with the correct weighting), and cashed the bets (once when the result was heads, twice when the result was tails).
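
For concreteness, the “correct weighting” amounts to nothing more than comparing a uniform draw against the threshold $2-\sqrt{2}$. A Python stand-in for the Octave call would look something like this (purely illustrative, and of course no more quantum than Octave’s rand):

```python
import random

def biased_flip(p_heads=2 - 2 ** 0.5):
    """Return 'heads' with probability p_heads (~0.586), 'tails' otherwise."""
    return 'heads' if random.random() < p_heads else 'tails'

result = biased_flip()
awakenings = 1 if result == 'heads' else 2   # the bet is cashed once or twice
```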

There were four possible situations:

  1. The person bet on tails and the result was tails: they paid me 1.16€ for the tickets and got 2€ back, netting 0.84€ (this happened 4 times).
  2. The person bet on heads and the result was tails: they paid me 1.16€ again, but got nothing back, netting -1.16€ (this happened 2 times).
  3. The person bet on tails and the result was heads: they paid me 0.58€ for the ticket and got nothing back, netting -0.58€ (this happened 4 times).
  4. The person bet on heads and the result was heads: they paid me 0.58€ for the ticket and got 1€ back, netting 0.42€ (this happened once).

So on average the people who bet on tails profited 0.13€, while the people who bet on heads lost 0.61€. The prediction of the $2/3$ theory was that they should profit nothing when betting on tails, and lose 0.16€ when betting on heads. The prediction of the $1/2$ theory was the converse: whoever bets on tails loses 0.16€, while whoever bets on heads breaks even. In the end the match was not that good, but still the data clearly favours the $2/3$ theory. Once again, physics comes to the rescue of philosophy, solving experimentally a long-standing metaphysical problem!
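
A quick sketch of those predicted numbers, assuming they are read as expected gains per ticket (i.e. per awakening): a ticket costs 0.58€ and pays 1€ with whatever credence your theory assigns to the relevant awakening, so the expected gain per ticket is just credence minus price.

```python
from math import sqrt

price = 0.58        # ticket price used in the experiment
p = 2 - sqrt(2)     # objective probability of heads, ~0.586

# Halfer credences per awakening: p(H) = p, p(T) = 1 - p.
# Thirder credences per awakening: p(H) = p/(2-p), p(T) = 2(1-p)/(2-p).
thirder = {'tails': 2 * (1 - p) / (2 - p) - price, 'heads': p / (2 - p) - price}
halfer  = {'tails': (1 - p) - price,               'heads': p - price}

print(thirder)   # tails ~ +0.006 (break even), heads ~ -0.166 (the ~0.16€ loss)
print(halfer)    # tails ~ -0.166, heads ~ +0.006
```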

Speaking more seriously, of course the philosophers have known, since the first paper on the subject, that the experimental results would be like this, and that is why nobody bothered to do the experiment. They just thought that this was not a decisive argument, as the results are determined by how you operationalise the Sleeping Beauty problem, and the question was always about what the correct operationalisation is (or, in other words, what probability is supposed to be). Me, I think that whatever probability is, it should be something with a clear operational meaning. And since I don’t know of any natural operationalisation that will give the $1/2$ answer, I’m happy with the $2/3$ theory.