Your question about quantum computing hits close to home for me. In the end, the ultimate test of whether an interpretation is accepted by the physics community is not whether it is right or wrong in some abstract philosophical way, but whether it can naturally explain the things we find in our laboratories. I don’t think QBism is likely to be proven wrong, absurd, or inconsistent on purely logical grounds — it is too sophisticated for that. The real danger to QBism is actually of an empirical nature (a fact that our critics seem unable to grasp): if quantum theory breaks down at some scale, then QBism will be unable to explain that, and would be thrown out. And even if quantum theory holds at all scales, QBism might still fail, if it cannot provide intuitive explanations for important quantum phenomena. For without that, it would never attract the attention of most of the physics community and would eventually be forgotten.

The question of why quantum computers give an advantage over classical computers is exactly the kind of thing we would hope to be able to answer by appealing to the ontology of quantum theory. On this particular issue, many-worlds seems to have the upper hand so far: folklore has it that David Deutsch was inspired to invent quantum computing because the parallel universes suggested to him an analogy with parallel processing in computation. There are subtleties with the analogy (we can’t simply access those parallel worlds without restrictions), but many people do see this as a point in favour of many-worlds. Can QBism answer that?
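To make the ‘interference, not just parallel access’ subtlety concrete, here is a minimal Python sketch of Deutsch’s algorithm, the simplest case where a single quantum query plus interference decides something that classically needs two queries. (This is a toy numerical simulation, not anyone’s official implementation; the phase-oracle encoding is a standard textbook simplification.)

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with a
    single oracle query, using interference on one qubit.
    The oracle is taken in phase form: |x> -> (-1)^f(x) |x>."""
    state = np.array([1.0, 0.0])                      # start in |0>
    state = H @ state                                 # superpose both inputs
    state = np.array([(-1) ** f(0), (-1) ** f(1)]) * state  # one oracle query
    state = H @ state                                 # interfere the branches
    # All amplitude ends up on |0> iff f(0) == f(1)
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```

The point of the example is that the answer comes from the relative phase between the two branches recombining at the final Hadamard, which is why ‘reading out the parallel worlds’ is not an unrestricted resource.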

I don’t know yet, but I think QBism can definitely give us a different perspective on computation in general. We are used to thinking of computation as something akin to a property of a physical system. But QBists would say, along with Turing, that computation is primarily a human activity. Turing’s famous machine was not supposed to be an abstraction of a physical system — it was supposed to be an abstraction of a particular kind of human activity. It was Deutsch who placed the emphasis on physical systems. Of course, he would not have made a distinction between ‘human activity’ and ‘a physical system obeying natural laws’ — that distinction would only make sense to a QBist. But that is precisely my point: for a QBist, the way that we think about computation would have to be revised to bring it more in line with Turing’s original conception. Then the way in which QBism would update the idea of a ‘Turing machine’ in light of quantum theory might take us down a different route than the one followed by Deutsch. I don’t know if that route would explain the quantum speedup, but I definitely think it would be interesting to pursue.
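Turing’s original conception is easy to make concrete: the machine idealizes a person computing with pencil and paper, who reads one symbol, writes one symbol, moves one cell, and changes their state of mind. A bare-bones sketch (the rule table, which flips the bits of the input, is my own toy example):

```python
def run_tm(tape, rules, state="start", head=0, max_steps=1000):
    """Minimal Turing machine: read a symbol, write a symbol,
    move one cell left or right, change internal state."""
    tape = dict(enumerate(tape))          # sparse tape; '_' is blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy rule table: walk right, flipping each bit, halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("0110", flip))  # -> 1001
```

Nothing in this abstraction refers to a physical substrate; that is the sense in which the QBist reading stays closer to Turing than to Deutsch.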

After reading the article, I learned that what counts as a measurement for Maudlin is the interaction itself between the smallest system (which is observed) and the measurement device. (This part I guessed wrong in my previous comment.) I also learned that the quantum state Maudlin intends to talk about is the quantum state of the middle system. (This part I guessed right in my previous comment.)

I agree that the article itself is clear and very carefully formulated. The article also contains two explicit versions of the trilemma, but only the third point, about measurements, is formulated significantly more carefully than in your formulation. (The main difference between the two explicit versions is in this third point, which is probably the reason why Maudlin tried to formulate it very carefully.) Here are Maudlin’s two explicit versions of his trilemma:

The following three claims are mutually inconsistent.

1. The wave-function of a system is complete, i.e. the wave-function specifies (directly or indirectly) all of the physical properties of a system.

2. The wave-function always evolves in accord with a linear dynamical equation (e.g. the Schrödinger equation).

3. Measurements of, e.g., the spin of an electron always (or at least usually) have determinate outcomes, i.e., at the end of the measurement the measuring device is either in a state which indicates spin up (and not down) or spin down (and not up).

Formally, the following three claims are mutually inconsistent:

1. The wave-function of a system is complete, i.e. the wave-function specifies (directly or indirectly) all of the physical properties of a system.

2. The wave-function always evolves in accord with a deterministic dynamical equation (e.g. the Schrödinger equation).

3. Measurement situations which are described by identical initial wave-functions sometimes have different outcomes, and the probability of each possible outcome is given (at least approximately) by Born’s rule.

Independent of whether those two explicit versions would be enough to identify the measurement part as the weak point of the caricature, reading the article itself makes it clear that this would be Maudlin’s conclusion (which would be spot on). His analysis and proofs, however, don’t apply directly to the caricature. Still, they do provide a good starting point for an analysis which does apply to the caricature.

I don’t see the point you’re trying to make. If you do go for option 2, you can solve the measurement problem; this is well known. Whether your particular proposal actually does so, or suffers from some other problem, is beside the point.

three nested systems:

Many attempts to interpret quantum mechanics do so by looking at three nested systems. The largest system is essentially the universe or the environment. The smallest system is the one being observed and following the laws of the theory, and the middle system contains the measurement device or the observer.

So there are at least three different states which must be distinguished:

(1) The “known” quantum state of the smallest system which is observed. This one does have a collapse, so it does not (always) evolve linearly.

(2) The “unknown” quantum state of the middle system containing the smallest system and the measurement device (but probably not the observer), for which at least the “relevant” laws governing how the “unknown” quantum state evolves should still be postulated. This one has an evolution that should be approximately linear (no longer a collapse, just a weak coupling to an environment).

(3) Both the state and the evolution of the environment are unknown. Maybe gravity or something else induces some non-linear evolution, maybe not. However, they are mostly “irrelevant” for predictions concerning the smallest system.

The measurement happens on the measurement device in the middle system. Somehow this measurement device has some equivalence classes of macroscopic states which are distinguishable with near certainty. It also has states that don’t fall into one of those equivalence classes, but as long as it is in such an unclear state (like a superposition of a dead and an alive cat), it does not yet make sense to talk of a measurement.
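The worry behind superpositions like the dead-and-alive cat can be sketched in a few lines of Python: a toy von Neumann measurement, with a single qubit standing in for the macroscopic pointer (of course nothing like a real device, but it shows why exactly linear evolution never selects one outcome):

```python
import numpy as np

# CNOT acts as the measurement interaction: it copies the system's
# basis state into the pointer, entirely by linear (unitary) evolution.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

system = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)
pointer = np.array([1.0, 0.0])              # pointer in its 'ready' state
joint = np.kron(system, pointer)            # unentangled initial state

after = CNOT @ joint  # linear 'measurement' interaction
print(after)          # (|00> + |11>)/sqrt(2): both pointer readings at once
```

The final state is a superposition of the two macroscopically distinct pointer readings; only the approximate linearity conceded above leaves room for anything else.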

This description has not given up (3) “the assumption that measurements have a single outcome”, because it only talks about measurement if there actually was a single macroscopic measurement result with near certainty.

It neither affirmed nor explicitly denied (2) “that the quantum state evolves linearly”. OK, it denied it for the quantum state of the smallest system, retreated to “approximately linear” for the middle system, and claimed to stay agnostic for the environment. So it basically gave up the assumption that the “relevant quantum states evolve exactly linearly”.

It claimed that the quantum state of the middle system (1) “encodes everything about the relevant physics”, where “relevant” has to be understood in an appropriate way.

Now you might claim that this analysis proves that Maudlin’s trilemma applies to this caricature, because point (2) did apply. But this misses the actual weak points of the caricature: (a) The “equivalence classes of macroscopic states which are distinguishable with near certainty” are subjective. The equivalence classes are not given objectively by themselves. (b) No indication is given why a measurement should actually occur, i.e. why the measurement device should evolve into one of the “macroscopic states which are distinguishable with near certainty”.

The only implicit assumption he makes is that there *are* real properties. If you deny that, sure, you escape his trilemma. But then you’re also a solipsist.

Maudlin assumes that the quantum state is (directly or indirectly) a representation of a system’s physical properties. He takes this for granted and never acknowledges that it is an assumption. We physicists are all raised on this dogma: starting from the measured properties of a system, we are supposed to infer its present ‘state’. We then apply the laws of physics to predict the future state, and thereby predict its future measurable properties. Okay, so physics begins and ends with measured properties, and those at least correspond to reality. The notion of ‘states following laws’ simply tells us how to get from presently measured reality to future measurable reality. But does that mean the states themselves represent anything real? There are ways to make the exact same predictions without using the notion of ‘states following laws’ at all.

To put it bluntly, in QBism, the state is not a representation of a system’s real properties — not directly or indirectly, not completely or partially. We don’t need a ‘state’ to be able to talk about reality. QBism is not susceptible to Maudlin’s trilemma because it rejects the premises on which the trilemma is founded.

From your explanation I am able to understand a little better how QBism views ‘multiple possibilities’ and that it considers them as not being real. In the case of the coin-flip example, it is clear.

But how does this view explain situations where interference is actively exploited, such as in Shor’s algorithm? The result and the measurement of the output of the algorithm can be understood in terms of an agent and its experience. But how do we explain, with QBism, the underlying reality that makes it possible for a quantum computer to perform those computations?

- give up the assumption that the quantum state encodes everything about the relevant physics (e.g. by postulating hidden variables).
- give up the assumption that the quantum state evolves linearly (e.g. by having a physical collapse).
- give up the assumption that measurements have a single outcome (i.e. going Many-Worlds).

When you ask ‘what happens’ to the unrealized possibilities, it sounds like you are presuming that, in some sense, these possibilities ‘exist’ even before they are measured. But a QBist would say ‘possible events’ do not exist, so it does not make sense to ask ‘what happens to them’ when some event occurs. When something happens, it is real, but we cannot say that it was real before it happened.

Think of a coin. Before you flip it, you say that there are two possibilities: either it will be heads, or tails. If you believe in an objective interpretation of probability (like Mateus does), then this is a statement about the physical properties of the coin itself: ‘the possibility of heads’ and ‘the possibility of tails’ correspond to something that really exists in the coin.

It might be that you think there is a variable, unknown to you, that determines what the outcome will be, and that this variable exists even before you flip the coin, so that its value is the only real possibility. This would be similar to a hidden-variables interpretation. Or it might be that you think the coin has an intrinsic ‘tendency’ to land either heads or tails, and this tendency is an objective physical property of the coin. When you flip it and get, say, heads, then this property physically changes into ‘heads’, and the possibility of tails is ‘destroyed’ in the process. This is similar to an ‘objective collapse’ interpretation. Finally, you might think that ‘tails’ is not destroyed, but simply happens in a parallel part of the multiverse, to an exact copy of you. This would be similar to the ‘many worlds’ interpretation. All of these options treat the ‘possibilities’ as describing something real about the coin.

QBism rejects all of those interpretations. QBism says that before we flip the coin, the possibilities of heads and tails simply do not exist. A ‘possibility’ just means something we think might happen, but it does not correspond to anything actual in the world. The possibility of heads or tails before you flip the coin does not say anything about the coin itself: it only says something about what you think might happen in the future, and nothing more.

When you flip the coin, something just spontaneously happens, and whatever happens is real in the moment that it happens. However, it did not come from something that existed before. It was created in the moment that the coin was observed.

We like to think of it as similar to the Big Bang. In some cosmological models, the Big Bang was not caused by anything that existed previously. For QBism, every time you flip a coin and get a result, you have made a ‘little bang’ (Chris Fuchs calls it a QBoom). This little bang brings something new into reality that never existed before. By flipping the coin and observing the outcome, you have participated in creating a tiny piece of the universe.

Notice that I say ‘participated’. That is because although you played a part in it, you do not get to say what actually happens. That is up to the world. As Anton Zeilinger once said, ‘There are two fundamental freedoms: our freedom to define which measurement apparatus to use and thus to determine which quality can become reality; and Nature’s freedom to give the answer she likes’.

Your question is spot on. There isn’t a satisfactory answer, though, precisely because of what I’ve been complaining about all this time: QBism refuses to say anything about objective reality.

Since QBists insist that the collapse is only subjective, and the probabilities are only subjective, I think what would make sense is to say that the experience that was instantiated was pre-determined by a hidden variable, and that the probabilities just reflect ignorance of which value the hidden variable actually took. Just as in Bohmian mechanics.

You can be sure, though, that QBists will emphatically say that this is not the case, that theirs is not a hidden-variables theory. But they won’t say what is going on; they won’t go for real collapse, Many-Worlds, or hidden variables.
