\[ t=0:00 \]

**SIMPLICIO**: These quantum gravity people! Always claiming that the world is fundamentally discrete! It’s so stupid!

**INGENUO**: Humm why is it stupid? They do have good reasons to think that.

**SIMPLICIO**: But come on, even the most discrete thing ever, the qubit, already needs continuous parameters to be described!

**INGENUO**: Well, yes, but it’s not as if you can take these parameters seriously. You can’t really access them with arbitrary precision.

**SIMPLICIO**: What do you mean? They are continuous! I can make any superposition between $\ket{0}$ and $\ket{1}$ that I want, there are no holes in the Bloch sphere, or some magical hand that will stop me from producing the state $\sin(1)\ket{0} + \cos(1)\ket{1}$ as precisely as I want.

**INGENUO**: Yeah, but even if you could do it, what’s the operational meaning of $\sin(1)\ket{0} + \cos(1)\ket{1}$? It’s not as if you can actually measure the coefficients back. The problem is that if you estimate the coefficients by sampling $n$ copies of this state, the number of bits you get only grows like $\frac12\log_2(n)$. And this is just hopeless. Even if you have some really bright source that produces $10^6$ photons per second and you do some black magic to keep it perfectly stable for a week, you only get something like 20 bits. So operationally speaking you might as well write

\[ 0.11010111011010101010\ket{0} + 0.10001010010100010100\ket{1}\]
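Ingenuo’s arithmetic is easy to check. A sketch in Python, using the dialogue’s hypothetical week-long $10^6$-photons-per-second source:

```python
import math

def bits_of_precision(n_samples: int) -> float:
    """Estimating a probability from n samples gives a standard error of
    order 1/sqrt(n), i.e. roughly (1/2) * log2(n) bits of the amplitude."""
    return 0.5 * math.log2(n_samples)

# One week of a 10^6-photons-per-second source:
n = 10**6 * 7 * 24 * 3600
print(round(bits_of_precision(n)))  # about 20 bits
```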

**SIMPLICIO**: Pff, operationally. Operationally it also makes no difference whether the remains of Galileo are still inside Jupiter or not. It doesn’t mean I’m going to assume they magically disappeared. Same thing about the 21st bit. It’s there, even if you can’t measure it.

**INGENUO**: I would take lessons from operational arguments more seriously. You know, Einstein came up with relativity by taking seriously the idea that time is what a clock measures.

**SIMPLICIO**: ¬¬. So you are seriously arguing that there might be only 20 bits in a qubit.

**INGENUO**: Yep.

**SIMPLICIO**: Come on. Talk is cheap. If you want to defend that you need to come up with a toy theory that is not immediately in contradiction with experiment where the state of a qubit is literally encoded in a finite number of bits.

**INGENUO**: Hmmm. I need to piss about it. (*Goes to the bathroom*)

\[t = 0:10\]

**INGENUO**: Ok, so if we have $b$ bits we can encode $2^b$ different states. And as long as $b$ is large enough and these states are more-or-less uniformly spread around the Bloch sphere, we should be able to model any experiment as well as we want. So we only need to find some family of polyhedra with $2^b$ vertices that tends to a sphere in the limit of infinite $b$, and we have the qubit part of the theory!
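One way (of many) to spread $2^b$ points quasi-uniformly on a sphere is a Fibonacci lattice; this is just an illustrative construction, not something the dialogue commits to:

```python
import cmath
import math

def fibonacci_bloch_states(b: int):
    """Place 2**b quasi-uniform points on the Bloch sphere via a Fibonacci
    lattice and return each as qubit amplitudes (a0, a1), meaning
    |psi> = a0|0> + a1|1> with a0 = cos(theta/2), a1 = e^{i phi} sin(theta/2)."""
    n = 2 ** b
    golden = (1 + math.sqrt(5)) / 2
    states = []
    for k in range(n):
        z = 1 - 2 * (k + 0.5) / n       # z-coordinates spread evenly in (-1, 1)
        theta = math.acos(z)            # polar angle on the Bloch sphere
        phi = 2 * math.pi * k / golden  # azimuth stepped by the golden ratio
        states.append((math.cos(theta / 2),
                       cmath.exp(1j * phi) * math.sin(theta / 2)))
    return states

states = fibonacci_bloch_states(4)  # 16 states for b = 4 bits
```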

**SIMPLICIO**: Hey, not so fast! How about the transformations that you can do on these states? Surely you cannot allow unitaries that would map one of these $2^b$ states to some state not encoded in your scheme.

**INGENUO**: Ok…

**SIMPLICIO**: So you have some set of allowed transformations that is not the set of all unitaries. And this set of allowed transformations clearly must satisfy some basic properties, like you can compose them and you do not get outside of the set, and it must always be possible to invert any of the transformations.

**INGENUO**: Yeah, sure. But what are you getting at?

**SIMPLICIO**: Well, they must form a group. A subgroup of $U(2)$, to be more precise. And since we don’t care about the global phase, make it a subgroup of $SU(2)$, for simplicity.
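Simplicio’s closure and inverse requirements can be sanity-checked numerically for a candidate set. Here is a sketch for a cyclic family of $z$-rotations in $SU(2)$; the step of $4\pi/N$ (rather than $2\pi/N$) is my choice, because $R_z(2\pi) = -\mathbb{1}$ in $SU(2)$ (double cover), so steps of $2\pi/N$ would not close on themselves:

```python
import cmath
import math

def rz(alpha):
    """SU(2) rotation by angle alpha about the z axis, as nested tuples."""
    return ((cmath.exp(-1j * alpha / 2), 0), (0, cmath.exp(1j * alpha / 2)))

def matmul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def close(a, b, tol=1e-9):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

N = 8
group = [rz(4 * math.pi * k / N) for k in range(N)]

# Closure: every product of two elements lands back in the set.
closed = all(
    any(close(matmul(g, h), f) for f in group) for g in group for h in group
)
# Invertibility: every element has an inverse in the set.
has_inverses = all(
    any(close(matmul(g, h), rz(0)) for h in group) for g in group
)
```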

**INGENUO**: Oh. Well, we just need to check what the finite subgroups of $SU(2)$ are; surely we’ll find something that works. (*Both start reading Wikipedia.*)

\[t=0:20\]

**SIMPLICIO**: Humm, so it turns out that the finite subgroups of $SO(3)$ (which is what the finite subgroups of $SU(2)$ project down to) are rather lame. You either have the symmetry groups of the platonic solids, which are too finite, or two families that can get arbitrarily large, the cyclic and the dihedral groups.

**INGENUO**: Argh. What are these things?

**SIMPLICIO**: The cyclic group is just the rotations of the sphere by multiples of $2\pi/n$ around a fixed axis, and the dihedral group is the cyclic group together with rotations by $\pi$ around axes perpendicular to that one. So you can put your states either in the vertices of a polygon inscribed in the equator of the Bloch sphere, or in the vertices of a prism.
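In Python, the two constructions Simplicio allows might look like this (the prism’s polar angle `theta` is a free parameter I chose for illustration):

```python
import cmath
import math

SQRT2 = math.sqrt(2)

def polygon_states(n):
    """n states on the equator: (|0> + e^{2 pi i k/n}|1>)/sqrt(2)."""
    return [(1 / SQRT2, cmath.exp(2j * math.pi * k / n) / SQRT2)
            for k in range(n)]

def prism_states(n, theta):
    """2n states: two n-gons at polar angles theta and pi - theta."""
    states = []
    for t in (theta, math.pi - theta):
        for k in range(n):
            states.append((math.cos(t / 2),
                           cmath.exp(2j * math.pi * k / n) * math.sin(t / 2)))
    return states
```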

**INGENUO**: Ugh. They are not nearly as uniform as I hoped. So I guess the best one can do is put the states in the vertices of a dodecahedron.

**SIMPLICIO**: Beautiful. So instead of 20 bits you can have 20 states. Almost there!
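For what it’s worth, Simplicio’s 20 states are the 20 vertices of a regular dodecahedron (a regular icosahedron has only 12 vertices). Using the standard golden-ratio coordinates, all of which sit at distance $\sqrt{3}$ from the origin:

```python
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def dodecahedron_states():
    """The 20 dodecahedron vertices (+-1,+-1,+-1), (0,+-1/phi,+-phi) and
    cyclic permutations, mapped to qubit states on the Bloch sphere."""
    verts = [(sx, sy, sz) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
    for a in (1 / PHI, -1 / PHI):
        for b in (PHI, -PHI):
            verts += [(0, a, b), (a, b, 0), (b, 0, a)]
    states = []
    for x, y, z in verts:
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(z / r)
        phi = math.atan2(y, x)
        states.append((math.cos(theta / 2),
                       cmath.exp(1j * phi) * math.sin(theta / 2)))
    return states
```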

\[t=0:21\]

This post is really fun to chew over. I guess the catch is that Simplicio is doing a sneaky substitution in the third sentence. When QG people say the universe is fundamentally discrete, I think they mean that the Hilbert space of the whole universe is finite dimensional, not that the amplitudes are discrete.

Hmm, I guess that also suggests their toy model might run into problems with the probability interpretation? After all, those amplitudes are supposed to correspond to probabilities. So making the amplitudes discrete seems to imply that actual probabilities can only take values from a discrete set of rational numbers, which I think is a non-standard axiom. But I wonder whether that is really so different from normal probability theory?

That’s the advantage of using stupid characters: I can use them to say stupid things without them being attributed to me. But what Simplicio has in mind there is that if one could actually access the amplitudes with infinite precision, like you can with the position of a classical particle, then these amplitudes would effectively give you an infinite-dimensional Hilbert space.

But as far as I know you don’t run into much trouble by restricting normal probability theory to this finite version. It boils down to a choice of sigma-algebra. You will not be able to assign a probability to arbitrary measurable sets, and you will not be able to consider infinite sequences of events, but all this can be taken into account; nothing tragic happens. Or can you imagine worse problems?

Great post. Recently I was talking to a colleague about state tomography, who was trying to convince me that we should be using a region estimator instead of a point estimator. He said that because a point estimate is a set of measure zero, it’s almost guaranteed to be wrong. I took from the conversation that he imagines tomography as a game where you have to guess the exact density matrix, and if you are correct in the first 20 bits but wrong on the 21st bit, you are wrong, just as wrong as if you had been wrong in the first bit.
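A toy illustration of the colleague’s point, under assumptions of my own choosing (true probability $\sin^2(1)$, $10^5$ samples): a point estimate from finite data is always a rational number, so it can never exactly equal an irrational truth, however many leading bits it gets right.

```python
import math
import random
from fractions import Fraction

# Hypothetical setup: the "true" probability is irrational, as for the
# state sin(1)|0> + cos(1)|1> measured in the computational basis.
p_true = math.sin(1) ** 2

random.seed(0)  # reproducible toy experiment
n = 10**5
k = sum(random.random() < p_true for _ in range(n))
p_hat = Fraction(k, n)  # the point estimate is always rational

# A rational number never equals the irrational p_true exactly, so the
# point estimate is guaranteed to "miss" in the measure-zero sense.
exact_hit = float(p_hat) == p_true
```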