One of the most intriguing consequences of Bell’s theorem is the idea that one can do *experimental metaphysics*: to take some eminently metaphysical concepts such as determinism, causality, and free will, and extract from them actual experimental predictions, which can be tested in the laboratory. The results of said tests can then be debated forever without ever deciding the original metaphysical question.

It was with such ideas in mind that I learned about the Sleeping Beauty problem, so I immediately thought: why not simply do an experimental test to solve the problem?

The setup is as follows: you are the Sleeping Beauty, and today is Sunday. I’m going to flip a coin, and hide the result from you. If the coin falls on heads, I’m going to give you a sleeping pill that will make you sleep until Monday, and terminate the experiment after you wake up. If it falls on tails instead, I’m also going to give you the pill that makes you sleep until Monday, but after your awakening I’m going to give you a second pill that erases your memory and makes you sleep until Tuesday. At each awakening I’m going to ask you: what is the probability that the coin fell on tails?

There are two positions usually defended by philosophers:

- $p(T) = 1/2$. This is defended by Lewis and Bostrom, roughly because before going to sleep the probability was assumed to be one half (i.e. that the coin is fair), and by waking up you do not learn anything you didn’t know before, so the probability should not change.
- $p(T) = 2/3$. This is defended by Elga and Bostrom, roughly because the three possible awakenings (heads on Monday, tails on Monday, and tails on Tuesday) are indistinguishable from your point of view, so you should assign all of them the same probability. Since two of them have the coin fallen on tails, the probability of tails must be two-thirds.

Well, seems like the perfect question to answer experimentally, no? Give drugs to people, and ask them to bet on the coin being heads or tails. See who wins more money, and we’ll know who is right! There are, however, two problems with this experiment. The first is that it is not so easy to erase people’s memories. Hitting them hard on the head or giving them enough alcohol usually does the trick, but it doesn’t work reliably, and I don’t know where I could find volunteers who thought the experiment was worth the side effects (brain clots or a massive hangover). And, frankly, even if I did find volunteers (maybe overenthusiastic philosophy students?), these methods are just too grisly for my taste.

Luckily a colleague of mine (Marie-Christine) found an easy solution: just demand that people place their bets in advance. Since they are not supposed to be able to know in which of the three awakenings they are, it makes no sense for them to bet differently in different awakenings (in fact, they should even be unable to bet differently in different awakenings without access to a random number generator; whether they have one in their brains is another question). So if you decide to bet on heads, and then “awake” on Tuesday, too bad, you have to make the bad bet anyway.

With that solved, we get to the second problem: it is not rational to ever bet on heads. If you believe that the probability is $1/2$ you should be indifferent between heads and tails, and if you believe that the probability is $2/3$ you should definitely bet on tails. In fact, if you believe that the probability is $1/2$ but have even the slightest doubt that your reasoning is correct, you should bet on tails anyway just to be on the safe side.

This problem can be easily solved, simply by biasing the coin a bit towards heads, such that the probability of heads (if you believed in $1/2$) is now slightly above one half, while keeping the probability of tails (if you believed in $2/3$) still above one half. To calculate the exact numbers we use a neat little formula from Sebens and Carroll, which says that the probability of you being the observer labelled by $i$ within a set of observers with identical subjective experiences is

\[ p(i) = \frac{w_i}{\sum_j w_j}, \]

where $w_i$ is the Born-rule weight of your situation, and the $w_j$ are the Born-rule weights of all observers in the subjectively-indistinguishable situation.

Let’s say that the coin has a (objective, quantum, given by the Born rule) probability $p$ of falling on heads. The probability of being one of the tail observers is then simply the sum of the Born-rule weight of the Monday tail observer (which is simply $1-p$) with the Born-rule weight of the Tuesday tail observer (also $1-p$), divided by the sum of the Born-rule weights of all three observers ($1-p$, $1-p$, and $p$), so

\[ p(T) = \frac{2(1-p)}{2(1-p) + p}.\]

For elegance, let’s make this probability equal to the objective probability of the coin falling on heads, so that both sides of the philosophical dispute will bet on their preferred solution at the same odds. Solving $p = (2 - 2p)/(2-p)$ then gives us

\[ p = 2-\sqrt{2} \approx 0.58,\]

which makes the problem quantum, and thus on topic for this blog, since it features the magical $\sqrt2$.
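For the skeptical reader, here is a quick numerical sanity check (a Python sketch, not part of the original derivation) that $p = 2-\sqrt{2}$ indeed makes the Sebens-Carroll probability of tails equal to the objective probability of heads:

```python
from math import sqrt, isclose

# Objective (Born-rule) probability of heads for the biased coin.
p = 2 - sqrt(2)

# Sebens-Carroll weights at an awakening: Monday-tails and Tuesday-tails
# each have weight 1 - p, Monday-heads has weight p.
p_tails = 2 * (1 - p) / (2 * (1 - p) + p)

print(p)        # ≈ 0.586
assert isclose(p_tails, p)
```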

With all this in hand, time to do the experiment. I gathered 17 impatient hungry physicists in a room, and after explaining all of this to them, I asked them to bet on either heads or tails. The deal was that the bet was a commitment to buy, in each awakening, a ticket that would pay them 1€ in case they were right. Since the betting odds were set to be $0.58$, the price for each ticket was 0.58€.

After each physicist committed to a bet, I ran my biased quantum random number generator (actually just the function rand from Octave with the correct weighting), and cashed the bets (once when the result was heads, twice when the result was tails).

There were four possible situations: if the person betted on tails and the result was tails, they paid me 1.16€ for the tickets and got 2€ back, netting 0.84€ (this happened 4 times). If the person betted on heads and the result was tails, they paid me 1.16€ again, but got nothing back, netting -1.16€ (this happened 2 times). If the person betted on tails and the result was heads, they paid me 0.58€ for the ticket and got nothing back, netting -0.58€ (this happened 4 times). Finally, if the person betted on heads and the result was heads, they paid 0.58€ for the ticket and got 1€ back, netting 0.42€ (this happened once).

So on average the people who betted on tails profited 0.13€, while the people who betted on heads lost 0.61€. The prediction of the $2/3$ theory was that they should profit nothing when betting on tails, and lose 0.16€ when betting on heads. The prediction of the $1/2$ theory was the converse: whoever bets on tails loses 0.16€, while whoever bets on heads breaks even. In the end the match was not that good, but still the data clearly favours the $2/3$ theory. Once again, physics comes to the rescue of philosophy, solving experimentally a long-standing metaphysical problem!
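If you don’t trust a sample of 17 physicists, a Monte Carlo sketch gives the averages the two strategies converge to (Python instead of Octave, using the rounded values above; these are per-experiment averages, so the exact figures depend on how you normalise per bet):

```python
import random

random.seed(0)
P_HEADS = 0.58   # bias of the coin, rounded as in the post
TICKET = 0.58    # price of a ticket that pays 1€ if the guess was right

def average_gain(bet, trials=200_000):
    """Average net gain per run of the experiment for a fixed bet."""
    total = 0.0
    for _ in range(trials):
        tails = random.random() >= P_HEADS
        awakenings = 2 if tails else 1          # one ticket bought per awakening
        correct = (bet == "tails") == tails
        total += (awakenings if correct else 0) - awakenings * TICKET
    return total / trials

print(average_gain("tails"))   # roughly zero: tails bettors break even
print(average_gain("heads"))   # clearly negative: heads bettors lose money
```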

Speaking more seriously, of course the philosophers have known, since the first paper on the subject, that the experimental results would be like this, and that is why nobody bothered to do the experiment. They just thought that this was not a decisive argument, as the results are determined by how you operationalise the Sleeping Beauty problem, and the question was always about what is the correct operationalisation (or, in other words, what probability is supposed to be). Me, I think that whatever probability is, it should be something with a clear operational meaning. And since I don’t know any natural operationalisation that will give the $1/2$ answer, I’m happy with the $2/3$ theory.

Any betting argument is fallacious, because it is unclear whether you should accept only one, or conditionally one or two, wagers. But there is a way to answer the question unambiguously.

Pick four volunteers instead of one. Use just one coin flip, on Sunday Night. Use the same pills to make the volunteers each wake up at least once, and maybe twice, but on different schedules:

SB1 will stay asleep on Tuesday, if Heads was flipped.

SB2 will stay asleep on Tuesday, if Tails was flipped.

SB3 will stay asleep on Monday, if Heads was flipped.

SB4 will stay asleep on Monday, if Tails was flipped.

Keep them in separate rooms, and ask each “What is the probability (or your credence), *now*, that you will be wakened twice during this experiment?” (Note: this reverses the question from what Lewis, Elga, and Bostrom asked.)

It is trivial to see that SB1’s schedule, and the question she is asked, are completely equivalent to the original Sleeping Beauty experiment. It is also trivial to see that the other three experience a functionally equivalent experiment that must have the same answer.

But this one is trivial to answer: Any volunteer who finds herself awake knows that she is one of exactly three who are awake at the moment. She knows that exactly one of those three will be wakened only once, and two will be wakened twice. And, she knows that each of the three has the same information upon which to base a probability/credence.

The answer is 2/3.

Come on, my argument is not fallacious, I made it crystal clear that you need to bet at each awakening or, as you put it, conditionally once or twice.

But I’m afraid I did not understand your setup. What exactly do you mean by “SB1 will stay asleep on Tuesday, if Heads was flipped”? Will she be awakened twice in case of tails, and in case of heads she will be awakened once on Monday, and afterwards be left sleeping forever? And how about the people who will stay asleep on Monday?

Your betting argument proves that the answer is 1/2. To see that, consider what happens with the following assumptions:

1. If the coin is tails I will bet twice. If the coin is heads I will bet once.

2. I will bet the same thing both times.

Now you wake up. Suppose you believe the probability of tails is 2/3. But you also know that if and only if the coin is tails you get paid off double. Combining those separate pieces of information leads you to an incorrect bet. The correct bet results from a probability of 1/2 for tails and the knowledge that *if* it is tails you get paid double.

The thing that confuses people is that the probability is clearly 2/3 with infinitely many trials. That is to say, one Sunday followed by infinitely many flip/monday/tuesday. How can that be? The answer is that, with infinitely many trials, any particular coin tells you nothing about the total distribution. With infinite trials 2/3 of the days are tails, even if you happen to wake up on a heads day. So you say “I believe there is a 2/3 probability of tails” and that’s the end of it. It doesn’t allow you to infer anything new about the distribution.

If you want to argue against 1/2 being the clear, unambiguous answer then you’ll need to dispute the two assumptions. The first can’t be disputed. The second certainly can, but note that 2/3 would never be the answer. If we dispute the second assumption, we are allowing observable probabilistic distinctions between the days. For example SB may know that she sneezes with probability .15 when waking up. The problem would be impossible. It is only possible with the assumption of reliably indistinguishable awakenings, and the answer is 1/2.

I don’t see how your conclusion follows from the two assumptions you stated. Care to present an argument? Also, what is the incorrect bet you do if you believe that the probability of tails is 2/3?

Note that it is not accurate to say that you get paid double if the result was tails, since you need to pay for the bet again. So I would say you bet twice when the result is tails.

Let’s say heads pays \$3 and tails pays \$1 per correct guess.

If you wake up and believe there is a 2/3 probability of tails, on top of knowing that if you correctly guess tails you are literally guaranteed (by assumption) to guess tails twice, then:

1. The expected value of guessing heads is 3(1/3) = \$1

2. The expected value of guessing tails is 1(2/3)(2) = \$1.33

Of course, this is incorrect. It is better to guess heads in this situation. So a probability of 2/3 for tails is not consistent with the assumptions. Only a probability of 1/2 is consistent with them.

That brings us back to the assumption “I will bet the same thing both times.” I think most people accept that assumption, or something similar enough, and therefore should be answering 1/2 for the probability.

You are forgetting to take into account the price of the ticket. Let’s say you pay $x$ to bet on heads and $y$ to bet on tails. Then the expected value of guessing heads is $3(1/3)-x=1-x$, and the expected value of guessing tails is $(1\cdot(2/3)-y)\cdot 2 = 1.33-2y$.

Now the correct price of the ticket is precisely that which makes the expected value zero, so the ticket for heads should cost \$1 and the ticket for tails should cost \$0.66.
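To make the pricing explicit, here is the same calculation written out with exact fractions (a sketch assuming the $3/$1 payoffs above, a fair coin, and the outside view where the number of bets is fixed by the coin):

```python
from fractions import Fraction

half = Fraction(1, 2)  # the coin itself is fair

def ev_heads(x):
    # Heads (prob 1/2): one ticket bought, win $3.
    # Tails: forced to buy two tickets, win nothing.
    return half * (3 - x) + half * (-2 * x)

def ev_tails(y):
    # Heads: lose the single ticket. Tails: two tickets, two $1 payoffs.
    return half * (-y) + half * (2 * (1 - y))

assert ev_heads(Fraction(1)) == 0       # fair price for heads: $1
assert ev_tails(Fraction(2, 3)) == 0    # fair price for tails: $2/3 ≈ $0.66
```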

I’m not dismissing your argument, I’m saying that those who want the answer to the original problem to be 1/2 will find a way to argue against it. A way that seems (to them) to be better reasoned than yours, and you will not be able to change their minds about it. The fallacy I mentioned is that comparing a wager’s outcome to a probability requires one measurement of the wager, and this problem really has 3/2 of an observation.

Here’s my setup, said a different way:

1) Put four volunteers, named SB1, SB2, SB3, and SB4, to sleep on Sunday.

2) After they are asleep, flip a coin.

3) On Monday, waken SB1 and SB2. If the coin landed Tails, also waken SB3 and leave SB4 asleep; otherwise, waken SB4 and leave SB3 asleep.

4) Ask each of the three awake volunteers for her confidence that she will be wakened twice before the experiment is over.

5) After they answer, wipe their memories and put them back to sleep.

6) On Tuesday, waken SB3 and SB4. If the coin landed Tails, also waken SB1 and leave SB2 asleep; otherwise, waken SB2 and leave SB1 asleep.

7) Ask the same question.

On each day, three volunteers are awake. Two will be awake on another day, and one will not. Each can only say the answer to the question is 2/3.
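The counting here can be sketched explicitly (a Python illustration of the four schedules above, with a fair coin): over all awakenings, count how often the awake volunteer is one who gets wakened twice.

```python
import random

random.seed(2)
woken_twice = 0   # awakenings belonging to volunteers wakened twice
awakenings = 0    # all awakenings over both days
for _ in range(100_000):
    tails = random.random() < 0.5
    # Awakenings per volunteer: SB1 sleeps through Tuesday on heads,
    # SB2 on tails; SB3 sleeps through Monday on heads, SB4 on tails.
    counts = [2, 1, 2, 1] if tails else [1, 2, 1, 2]
    for c in counts:
        awakenings += c
        if c == 2:
            woken_twice += c

print(woken_twice / awakenings)   # 2/3: on each day, 2 of the 3 awake wake twice
```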

But SB1 is asked for the exact same confidence you described.

Ok, now I understand your scenario, and I agree that the question is equivalent to the original sleeping beauty one. I do not follow, however, your argument that the probability must be 2/3. Would you mind putting that into equations (you can use normal LaTeX in the comments)? Maybe I will then agree it is trivial ;p

And I disagree with your postulate that comparing a wager’s outcome to a probability requires one measurement of the wager. There is simply no such restriction in the (Bayesian) definition of probability! If one would take this restriction seriously, one would dismiss the Sleeping Beauty problem as meaningless, instead of arguing that it is 1/2.

“I do not follow, however, your argument that the probability must be 2/3.” You don’t seem to recognize that I am solving the equivalent problem, not the original.

“Would you mind putting that into equations?” I have no idea how, since it is too simple. To see what I mean, will you please put into equations why the probability of rolling a 1 on a standard six-sided die is 1/6? I say the reason is that there is no information that allows it to be different than any other of the six total results, so the probability is (#cases of interest)/(# cases). This is called the Principle of Indifference.

In my version of Sleeping Beauty, if I am an awake volunteer:

A) I know that there are exactly three who are awake.

B) I know that exactly two of these three will be awake on the other day.

C) I know that the Principle of Indifference applies.

So the answer is (#cases of interest)/(# cases)=2/3.

“I disagree with your postulate that comparing a wager’s outcome to a probability requires one measurement of the wager.”

I’m not postulating that. I’m saying that there are no established definitions that allow you to “prove” that one, or two, such measurements represent a correct probability. You are making assumptions to support your claim, and Anti-Joker is making different ones to support his. You need to address why one assumption is correct, and one incorrect, before it means anything.

But if there is only one measurement, both of you would agree.

I’m saying just to put it precisely, in terms of conditional probabilities. It doesn’t need to be complicated. For example, Elga’s original argument is that

1. p(monday|tails) = p(tuesday|tails), from the principle of indifference.

2. p(heads|monday) = p(tails|monday), since this is just the probability of the coin, which is assumed to be fair.

3. Multiplying both sides of the first equation by p(tails) gives you p(monday & tails) = p(tuesday & tails).

4. Multiplying both sides of the second equation by p(monday) gives you p(heads & monday) = p(tails & monday).

5. Combining these last two equations with the fact that the three probabilities must sum to one gives you p(monday & tails) = p(tuesday & tails) = p(monday & heads) = 1/3.

6. Summing p(monday & tails) and p(tuesday & tails) gives you p(tails) = 2/3.
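One can also check Elga’s conclusion by brute force: over many repetitions of the experiment, count what fraction of all awakenings happen on a tails run. A minimal sketch (Python, just to illustrate):

```python
import random

random.seed(1)
tails_awakenings = 0
all_awakenings = 0
for _ in range(300_000):
    tails = random.random() < 0.5     # fair coin
    n = 2 if tails else 1             # tails: Monday and Tuesday; heads: Monday only
    all_awakenings += n
    if tails:
        tails_awakenings += n

print(tails_awakenings / all_awakenings)   # ≈ 2/3
```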

Using your own calculation, you have a higher expected value for guessing tails if the guesses are free. So if the experimenter decided to make guesses free (granted, they will lose money doing that, but that isn’t SB’s concern), you would guess tails.

And that would be the wrong decision. Heads will make more money.

So it looks like you agree with me that a probability of 2/3 is inconsistent with my assumptions. If you accept my assumptions (which your original argument makes it sound like you do) you should be answering 1/2 for the probability.

“I’m saying just to put it precisely, in terms of conditional probabilities.” And I’m saying that the equivalent problem, as I posed it, is not a conditional probability problem. So I can’t put it, precisely, in terms of conditional probabilities.

There are three awake volunteers. For two of them, the answer to the question “will you be wakened twice during this experiment” is “yes.” None of the three possess information that distinguishes whether or not they are one of the two. So the answer to this non-conditional probability problem is 2/3.

But because this non-conditional probability problem is equivalent to the conditional probability problem you posed, the answers must be the same.

This is nonsense. There isn’t such a thing as a “non-conditional probability problem”. If your argument is at all correct – and I believe it is – then it can be formulated precisely.

I’ll do it myself when I have time.

Actually, thinking more about it, my calculation just showed what the price of the tickets should be if the probabilities of the coin were (2/3, 1/3). Your point that in this situation it would be better to bet on tails was correct.

The actual problem is the difference between the probability of the coin (which is what you are using) and the credence that you have at each awakening, which I identify with the price of the ticket.

Maybe it is enlightening to analyse the problem from this outside view that you are considering. The expected gain of somebody who bets on heads is the probability that the coin comes out heads times the gain when it is heads (which is 1€ minus the cost of the ticket $c_H$), plus the probability that the coin comes out tails times the gain when it is tails, which is just minus twice the cost of the ticket, since you don’t win anything. It is $(1/2)(1-c_H) - (1/2)(2c_H) = 1/2 - 3c_H/2$, which implies that the fair price is $c_H=1/3$, which I identify with your credence that the coin was heads.

The same analysis for tails shows that your gain is $(1/2)(-c_T) + (1/2)(2-2c_T) = 1 - 3c_T/2$, which gives $c_T = 2/3$.
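These fair-price conditions can be verified with exact fractions (a small sketch of the two equations above, with a fair coin from the outside view):

```python
from fractions import Fraction

half = Fraction(1, 2)

def gain_heads(c):
    # heads: 1€ minus the ticket; tails: pay for two tickets, win nothing
    return half * (1 - c) + half * (-2 * c)

def gain_tails(c):
    # heads: lose one ticket; tails: two tickets, two 1€ payoffs
    return half * (-c) + half * (2 - 2 * c)

assert gain_heads(Fraction(1, 3)) == 0   # c_H = 1/3
assert gain_tails(Fraction(2, 3)) == 0   # c_T = 2/3
```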

Three women are locked in separate rooms. Each has a light outside her room. A six-sided die is rolled. On a 1 or a 2, the first room’s light is turned on. On a 3 or a 4, the second room’s light is turned on. And on a 5 or a 6, the third’s is.

Each woman is asked for her credence that her light is off. Each knows that there are three lights, that exactly two are off, and that none of the women possess information that distinguishes whether or not she is one of the two.

The answer is 2/3. This is a non-conditional probability problem because no information about any of the random variables is provided. It is just like my variation of the Sleeping Beauty Problem.

Honestly, I can’t fathom what it is you are objecting to.

I’m objecting to lack of rigour, that’s all. But I think I nailed your argument down: the key thing is assuming that since all the sleeping beauties have the same probabilities, you might as well pretend you don’t know who you are and take a uniform distribution over all beauties. Then you calculate that, in the situation where the day is monday and the coin was tails, the probability that you will be awakened twice in this experiment is

\[p(w|\text{monday}, \text{tails}, \text{awake}) = \sum_i p(w|\text{monday}, \text{tails},\text{awake},\text{SB}_i)p(\text{SB}_i|\text{monday}, \text{tails}, \text{awake})\]

where you take $p(\text{SB}_i|\text{monday}, \text{tails}, \text{awake})=1/3$ for $\text{SB}_1$, $\text{SB}_2$, and $\text{SB}_3$, even though you know you are $\text{SB}_1$. This implies that \[p(w|\text{monday}, \text{tails}, \text{awake}) = 2/3\]and since the same result follows for the other three situations (monday and heads, tuesday and tails, and tuesday and heads) you get finally that $p(w|\text{awake}) = 2/3$.

Interestingly, you can run the same argument without conditioning on being awake, and the result is as expected $p(w) = 1/2$.

(Replying to the reply below)

“You might as well pretend you don’t know who you are and take a uniform distribution over all beauties.”

No need to pretend. Knowing “who you are” means knowing the values of {Day,Coin} that are assigned to you. But knowing this is significant to a solution if, AND ONLY IF, you have some information about what values exist when you are asked the question. You don’t, so knowing “who you are” is insignificant to answering the question.

Similarly, say 5 red beans, and 5 blue beans, are put in a bag. The first 10 audience members at a game show each take a bean from the bag at random. Then a coin is flipped: if it lands Heads, people with red beans will get to participate in a game. If Tails, blue beans will.

What you are saying in your reply, is tantamount to saying that if I am one of the audience members with a bean, I can’t answer the question “What are my chances to play a game” unless I first look at my bean, and use the color in the calculation.

(Well, I thought it would appear below. Each site does this differently)

No need to split hairs. I’m just saying that you know which of the sleeping beauties you are, that is, which setup will apply to you. Not that it matters, because as I said the probabilities are the same for everyone, so you might as well use this average probability, or simply change your setup and not tell which sleeping beauty will be awakened under which conditions.

Just to clarify, I’m saying that your argument is sound and that I agree with it. I’m sorry if my language implied otherwise. With the sentence

“You might as well pretend you don’t know who you are and take a uniform distribution over all beauties.”

I was merely highlighting the part of the argument I had found counter-intuitive.

Remember, I’m claiming that the credence upon waking is 1/2, exactly the same as the probability on Sunday. To me, your calculations are not an “outside view”. I believe SB can make those same calculations when she wakes up inside the experiment.

The question is, do you agree? Do you think SB can make those calculations when she wakes up? If so, and if you accept my earlier assumptions, then you must agree that her credence is 1/2.

On the other hand, if you disagree, then how do you think SB calculates expected value *after* waking up? How does she decide what to bet?

Of course the sleeping beauty can make those same calculations when she wakes up, it’s just that the 1/2 that appears there is not her credence, but the probability of the coin.

But calculating the expected value after waking up is the original question, one gains $p_T – c_T$ when betting on tails and $p_H – c_H$ when betting on heads. Then to calculate $p_T$ in this case you can use for example Elga’s original argument, that I reproduced in comment 67.

There is a subtlety, however, if you want to compare the absolute amount of money won: in the inside view there is a single bet going on, but on the outside view one and half bets happen, so you need to multiply the gain on the inside view by 1.5 for the two to match.

Suppose I offer you a chance to bet on a coin flip. Betting is free. Heads wins 1, tails wins 2. After explaining the bet, I flip the coin and hide it under a cup. Even though your credence is 1/2 that the coin is heads, you still prefer to bet on tails, right? Of course, it has a bigger prize.

So we have established that you might choose to bet on one thing over another even if your credence is identical for each being right.

Now suppose heads wins 1, tails wins 1, but I tell you that you get to bet twice if you correctly guess tails the first time. Again, your credence is 1/2 that the coin is heads. The coin is right there under a cup, and you believe there is 1/2 probability that it is heads. But you still prefer to bet on tails. Because if it is tails you will automatically get to bet again for a sure win.

The question with betting on the Sleeping Beauty problem is whether it is a situation like the ones above, where we all agree that your credence is 1/2, yet you prefer to bet on tails anyway. Or if the set-up makes it something else.

And that’s where the assumptions I wrote come into play:

1. If the coin is tails I will bet twice. If the coin is heads I will bet once.

2. I will bet the same thing both times.

My suggestion is that if you accept these assumptions then it becomes like the above situations. When you wake up, even if you have a credence 1/2 that the coin, under the cup, shows heads, you will still prefer to bet on tails.

Here is an example to see why this doesn’t work: Suppose we have infinitely many people and order them by the natural numbers. None of them can see the numbers. Then we group them together so there are always two even and one odd number per group.

Group 1: 1,2,4

Group 2: 3,6,8

Group 3: 5,10,12

Everyone is put to sleep and Group N is woken up on day N. You wake up. What is your credence that you are even?

You know you are one of exactly three who are awake at the moment. You know that exactly one of those three is odd, and two are even. You know that each of the three has the same information on which to base a probability/credence.

And yet… all three of you will answer 1/2, right? You already knew you were going to be put in a group of two even and one odd. Waking up gives no information to update your credence.

But an outside observer would have a 2/3 credence that you are even! To an outsider, all that matters is that there are 3 of you and 2 are even.

This is also true for your sleeping beauties. If I walk by and observe one of them waking up, I believe there is a 2/3 chance they will wake up twice. But that is not their credence about themselves. Each of them has credence 1/2, and that’s the answer to the sleeping beauty problem.

Non sequiturs like “And yet… all three of you will answer 1/2, right?” are why you don’t use infinite lists to answer such questions. It doesn’t follow that the probability is (#evens)/(#people), since there is no definition of those “numbers,” or what division means.

Said another way, since a countably infinite set can be put into 1:1 correspondence with a strict subset of itself, it follows that there cannot be a definition of the proportion of members in two such sets. So you can’t base a probability on such a proportion, as you just tried to do.

But you can solve the problem you posed as a limit. Suppose only the first 4N people are considered for such groups. Those whose number is greater than 4N are turned away, but so are the last N whose number is odd. If you are not turned away, you will say your chances are based on the 2N even numbers kept, and only N odd numbers kept.

Now, take the limit as N approaches infinity. Nobody is turned away, but once you are identified as a member of a group, your chances of being in that group and being “even” are 2/3. Even with an infinite number of groups.
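In fact, with the group construction given earlier (group k containing the odd number 2k−1 and the even numbers 4k−2 and 4k), the proportion is exactly 2/3 at every finite cutoff, not just in the limit. A quick sketch of the counting:

```python
def group(k):
    # Group k: one odd number and two even numbers, as in the lists above.
    return [2 * k - 1, 4 * k - 2, 4 * k]

for n in (10, 1000, 100_000):
    members = [m for k in range(1, n + 1) for m in group(k)]
    evens = sum(1 for m in members if m % 2 == 0)
    assert evens * 3 == len(members) * 2   # exactly 2/3 even for every n
```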

Nobody would disagree with the examples you gave. They lack, however, the thing that makes the Sleeping Beauty problem interesting: you are awake now, and I ask you: what is your credence that the coin is tails?

One could do that with your second example: just make the person forget whether they have already bet or not; then you should really believe that the coin is more likely to be tails, because this situation will happen more often.

And note that you didn’t give an argument for deriving the 1/2 probability from your assumptions.

I’ll repeat my initial opinion: no betting argument proves anything. But that was a one-sided view. Let me explain what I mean by that.

Because of the One Bet/Two Bet dichotomy, you can always construct a betting argument that matches your preconceived idea of what the answer should be. The trick in the SBP is to not enter it with a preconceived answer. You didn’t seem to notice that Mateus’ blog post was not a betting argument *for* either answer. It was a betting argument that demonstrated why 1/2 can’t be the solution. So if 1/2 and 2/3 are the only choices, 2/3 has to be the correct one. I apologize to Mateus for not making this clearer in my previous replies.

You have proven that you either cannot, or refuse to try to, address the problem without the preconceived answer of 1/2. You have proven it by only offering superficial variations of betting arguments that produce the 1/2 answer, and summarily dismissing (just as defensible, which means equally superficial) betting arguments that produce the 2/3 answer. My point is that neither answer can be proven with a betting argument.

Mateus’ point seems to be that one can be disproven, and he is correct.

You proved it also, by attempting to disprove my 2/3 solution (derived without preconception, and without a betting argument) with an argument based on the proportions of even and odd integers in the set of all integers. Anybody who understands the cardinality of infinite sets can see the flaw in that argument – that the proportion depends only on how your groups are constructed, not on a parameter within that construction.

You’re right of course. I actually was aware of that (which you might believe based on the way I constructed the groups), but was sloppy. I guess I could say everyone flips a coin to decide even or odd, but there is no way for that to work as a limit of real experiments.

Just to be clear, I would much rather have a clear, convincing understanding of the problem, and all the various quirks it can be given, regardless of the answer, than to have the answer be any particular way. That’s why I’m always interested in understanding the way other people do it, rather than just repeating how I do it, and that’s why I’ve been spending time grappling with your (admittedly attractive) solution.

To go to the more realistic example, suppose the numbers 1,2,3,4 are assigned at random to 4 people on Sunday. They can’t see the numbers. They are told that:

Group 1: 1, 2, 4

Group 2: 3, 2, 4

They are told that group N will wake on day N with no memory of other wakings. When you wake up, what is your credence of being even?

On Sunday everyone knows they will wake up in a group of two even and one odd. At that time we all have credence 1/2.

When I wake up, I not only know that I am in a group of 2 evens and one odd, I also know that I am in one “today”, and that the other 2 people in the group have the same information I do.

Based on that I conclude that we will all answer the same thing. I conclude that an outsider observing this would have credence 2/3 for each of us. But I don’t think I, myself, am indifferent. That’s because there is still a difference between “me” and the others. I think I would conclude that all of us will answer 1/2.

For comparison, if “monday” is always revealed to the participants, then each would have credence 2/3 on hearing it. I could say “I am one of three in the monday group, of which 2 are even”. But I don’t think it works with “today.” When I say “I am one of three in today’s group”, my use of “today” is related to my number.

In other words, if “monday” is always named, and I hear “monday”, it is equivalent to asking “am I in the monday group?” and getting a positive answer. And that allows an update to 2/3. Information being asked for in different ways, or given without asking, affects probability.

But “am I in the today group?” doesn’t make sense. I can only say “today” to reference myself, and can’t be indifferent to others (even though I know they are thinking the same thing).
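The two counting conventions at stake here can be made concrete with a small enumeration (my framing, not either commenter’s official argument): counting per awakening versus counting per person gives exactly the 2/3 versus 1/2 split.

```python
from itertools import permutations

# Group 1 = numbers {1, 2, 4} wakes on day 1; group 2 = {3, 2, 4} wakes on day 2,
# as in the four-volunteer setup above.
group = {1: [1, 2, 4], 2: [3, 2, 4]}

per_awakening = []  # one entry per (assignment, day, awake volunteer)
per_person = []     # one entry per (assignment, volunteer)

# Enumerate every assignment of the numbers 1-4 to the four volunteers.
# (The assignment is symmetric, so this just mirrors the random draw.)
for numbers in permutations([1, 2, 3, 4]):
    for person_number in numbers:
        per_person.append(person_number % 2 == 0)
    for day in (1, 2):
        for n in group[day]:
            per_awakening.append(n % 2 == 0)

# Fraction of awakenings belonging to an even number: 2/3.
print(sum(per_awakening) / len(per_awakening))
# Fraction of volunteers holding an even number: 1/2.
print(sum(per_person) / len(per_person))
```

Whether the volunteer’s credence should track the awakening count or the person count is, of course, exactly the point under dispute.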

Well, I’ll keep thinking about it.

About your counterexample: There are two events we can name RM and RT, for “today is revealed as Monday” and “today is revealed as Tuesday.” Your assertion, “if ‘monday’ is always revealed to the participants, then each would have credence 2/3 on hearing it,” is saying that Pr(Even|RM) = 2/3. But similar logic says Pr(Even|RT) = 2/3.

If only one day is to be revealed, no matter how that day is decided, then non-revelation is equivalent to the revelation of the other day. That is, events NRM and NRT. Since the situations are symmetric, we don’t need to know which day it is when it isn’t revealed, just that it isn’t. Pr(Even|NRM ∪ NRT) = Pr(Even|NRM) = Pr(Even|NRT) = 2/3.

Since the answer is 2/3 regardless of which revelation or non-revelation occurs, we can say the answer is 2/3 before the revelation would be made, or not made.

I agree that this is a non-rigorous argument. The point is that it is much more rigorous than yours (the only difference between “me” and the others is which unknown values among a symmetric set of unknown values apply).

My point is that you again supplied an argument whose only basis is an unsupportable assertion. The fact that my random variable “today” has a different distribution than the others’, *but only when considered in the context of a realization of one of those values*, is meaningless. Because we don’t know that value. The same probability space applies to each of the participants.

I thought of another way to re-cast this problem. One that more closely models what I think is the correct solution to the original, and points out the flaw in Lewis’.

Use the same four volunteers and procedures I described before, with one exception: don’t tell them what their schedules are. On either day of the experiment, put the three who are awakened into the same room, and have them discuss their credence for the events that it is Monday after Heads (MonH), Tuesday after Heads (TueH), Monday after Tails (MonT), and Tuesday after Tails (TueT).

Lewis’ argument is perfectly valid in their discussion. All three volunteers are indifferent about whether it is Monday or Tuesday, about whether the coin landed Heads or Tails, and about how the two events relate to each other. So each credence, for each volunteer, is 1/4. And in fact, the four events I described constitute a sample space for the unknowns.

Anti-Joker’s argument above, that each of the three awake volunteers can say “there is still a difference between ‘me’ and the others,” can only be interpreted (more below) as saying that each feels that the probabilities for the events in her sample space are different than those of the other two. The flaw in this argument is that no such difference can be identified.

Now take each volunteer back to her room, and reveal her schedule to her. This does give her “new information,” in the classic sense that Lewis assumes applies when, having been told her schedule on Sunday, she reaches this exact same experiment state. This may be the difference Anti-Joker sensed; but if it is, he applied it the wrong way. It can’t affect the credences of the other three events in the sample space. It can only affect a volunteer’s “sleep” event. Specifically, the one who was supposed to sleep through Tuesday after Heads can now eliminate TueH from her sample space. She can now say, with confidence, that Pr(T|~TueH) = [Pr(MonT)+Pr(TueT)]/[Pr(MonH)+Pr(MonT)+Pr(TueT)] = 2/3.
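The conditioning step above can be checked mechanically. A minimal sketch, using the four equiprobable events from the discussion and eliminating TueH:

```python
from fractions import Fraction

# Sample space from the discussion: each event has prior 1/4.
prior = {e: Fraction(1, 4) for e in ("MonH", "TueH", "MonT", "TueT")}

# The volunteer scheduled to sleep through Tuesday-after-Heads
# can eliminate TueH and renormalize over what survives.
surviving = {e: p for e, p in prior.items() if e != "TueH"}
total = sum(surviving.values())

p_tails = (surviving["MonT"] + surviving["TueT"]) / total
print(p_tails)  # prints 2/3
```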

This illustrates the flaw in Lewis’ argument: he implicitly assumes that “Tuesday” and “Heads” cannot happen together; that is, that their intersection is outside of consideration the same way “14” is outside of consideration for the sum of two six-sided dice, as opposed to “12” being outside when you have the information that one die is not a 6.

“Awake” cannot happen in conjunction with those two, but the two can still happen together with “~Awake.” And an awake Sleeping Beauty, in the original experiment, can infer the same things.

> since I don’t know any natural operationalisation that will give the 1/2 answer, I’m happy with the 2/3 theory.

Here’s the most natural operationalisation I can think of for the 1/2 answer:

Suppose that, on each awakening, S.B. is asked to make a bet on heads or tails. Specifically, at the beginning of the experiment, she buys a ticket costing 50c that will pay her 1€ at the end of the experiment if she guesses correctly (and we are assuming a fair coin). On each awakening, her guess is recorded, and at the end of the experiment, the recorded guess(es) are then used to determine her payout.

In the case of the coin landing heads, she only produces one guess, so she is paid 1€ if this guess is correct. If the coin landed tails, then she produces two guesses, one for Monday and one for Tuesday, which could differ from each other. To resolve this, we throw the two guesses into a hat and draw one of them, and pay her 1€ if the selected guess is correct. I think you will agree that SB’s expected net payout in this experiment is zero, regardless of her guessing strategy. So this supports the 1/2 theory.
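The claim that no guessing strategy beats the hat-draw scheme can be checked by simulation. A sketch (the function names are mine, purely illustrative):

```python
import random

def run_trial(strategy, rng):
    """One run of the hat-draw payout scheme described above.

    strategy(day) -> 'H' or 'T' is SB's guess on a given awakening.
    Returns net payout in euros (ticket costs 0.50, correct guess pays 1.00).
    """
    coin = rng.choice(["H", "T"])
    if coin == "H":
        guesses = [strategy("Mon")]
    else:
        guesses = [strategy("Mon"), strategy("Tue")]
    drawn = rng.choice(guesses)  # draw one recorded guess from the hat
    return (1.0 if drawn == coin else 0.0) - 0.5

rng = random.Random(0)
n = 200_000
for name, strategy in [("always tails", lambda d: "T"),
                       ("always heads", lambda d: "H"),
                       ("random", lambda d: rng.choice(["H", "T"]))]:
    mean = sum(run_trial(strategy, rng) for _ in range(n)) / n
    print(name, round(mean, 3))  # all close to 0
```

Since the coin is fair and only one guess ever settles, every strategy averages out to zero, as the comment asserts.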

Obviously, the point at issue is whether it is ‘natural’ to combine SB’s two guesses into one, when the coin landed tails. The argument is this: even though SB produces two guesses, these guesses were made by the same observer, albeit at different times. There is thus no reason to treat the guesses as though they came from two different observers. Imagine if a voter accidentally turned in a ballot paper with a tick next to the names of both candidates: should we really count this as two separate votes? Not at all: instead we should take it as a single vote that favours both candidates equally. Similarly, we should take the totality of SB’s two guesses from Monday and Tuesday and combine them to determine her payout as if for a single guess, as occurs in the above operationalisation.

I’m not saying that I prefer this operationalisation to yours, but just pointing out that what makes one operationalisation more ‘natural’ than another is quite contestable! It is precisely this question of ‘naturalness’ that lies at the bottom of all puzzles like this one, which is what makes them so intriguing.

> Suppose that, on each awakening, S.B. is asked to make a bet on heads or tails.

Any betting argument needs to address how the different paths, with different numbers of betting opportunities, fit within the theory. Every betting argument I have seen, to date, ignores this issue and seems to choose the result that the author wants to achieve. Yours turns two opportunities into one. The opposing betting argument treats them separately. I know no reason to support either.
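The divergence between the two accounting conventions is easy to exhibit. A sketch (my framing, not either author’s): at even odds, settle an always-tails bettor’s tails-path guesses either as one combined bet or as two separate bets, and compare the expected net payout.

```python
import random

def simulate(combine, n=200_000, seed=1):
    """Expected net payout for always guessing tails, at even odds
    (stake 0.50 per settled bet, payout 1.00 per correct settled bet).

    combine=True:  the two tails-path guesses settle as one bet.
    combine=False: each awakening is a separately settled bet.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        coin = rng.choice(["H", "T"])
        bets = 1 if (coin == "H" or combine) else 2
        wins = bets if coin == "T" else 0
        total += wins * 1.0 - bets * 0.5
    return total / n

print(round(simulate(combine=True), 3))   # ≈ 0.0: even odds are fair (the 1/2 view)
print(round(simulate(combine=False), 3))  # ≈ 0.25: tails is profitable (the 2/3 view)
```

Neither simulation settles which settlement rule is the right one; it only shows that the chosen rule, not the probability theory, is what drives each answer.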

> I’m not saying that I prefer this operationalisation to yours, but just pointing out that what makes one operationalisation more ‘natural’ than another is quite contestable!

I agree. That’s why a solution that removes the issue, rather than skirts it, is preferable: one that applies to the state Sleeping Beauty finds herself in *now*, without combining the other possible day’s events in unsupportable ways.

I gave such a solution, and the answer is 2/3.

As of yesterday, I think you agree there is an operationalisation where the answer is 1/2:

On awakening, SB is given an ice-cream, or something else leading to an immediate pleasurable experience, if she guesses the coin correctly. SB is of the philosophy that an unremembered pleasure is worthless. Therefore she only cares about what happens on the final morning. For her, it’s 50/50.

Thanks, this is a nice operationalisation to get 1/2.

I don’t think it is the most natural, as usually in these betting scenarios the reward is a monetary one, and you get to keep it. But one could argue that she gets the money, and immediately decides to spend it on a pleasurable experience, which she will then forget. So I can’t reject it as wrong.

This reflects a more general problem with subjective probability: if we are to accept that it is in fact subjective, we can’t really reject somebody’s probabilities as incorrect. And if we were to argue that there is a unique, correct answer, then that starts sounding a lot like an objective probability.