Meaningful Bell inequalities part 2: examples

Here I’ll show examples of the meaningful Bell inequalities I talked about in the previous post, where they are rephrased as nonlocal games and the gap between the local bound and the Tsirelson bound is made as large as possible.

First we have to talk about notation. Instead of writing Bell inequalities like
\[ \sum_{abxy} M^{ab}_{xy}p(ab|xy) \le L(M),\]I’m going to focus on the coefficients $M^{ab}_{xy}$. Since the meaningful Bell inequalities are nonlocal games, the coefficients have a very specific form,
\[M^{ab}_{xy} = \mu(x,y)V(a,b,x,y),\] where $\mu(x,y)$ is the probability that the referee asks questions $x$ and $y$ to Alice and Bob, and $V(a,b,x,y)$ is the probability that the referee accepts answers $a$ and $b$ to questions $x$ and $y$. Usually $V(a,b,x,y)$ is restricted to be either 0 or 1, so that it simply determines which answers win or lose the game. Unfortunately we cannot make this restriction, as it would make it impossible to transform every Bell inequality into a nonlocal game. This might seem obvious, but curiously we couldn’t find any proof in the literature, so we had to prove it ourselves in our paper, and it wasn’t easy. Even in the cases where a Bell inequality can be transformed into a nonlocal game with such a deterministic predicate, the result will not necessarily be optimal, as we’ll see below in the CGLMP case, so there’s really no getting around probabilistic predicates.

Braunstein-Caves inequalities

The first example I’d like to show are the Braunstein-Caves inequalities, known to computer scientists as the Odd Cycle game (even though it works just as well for even cycles)1. Since they can be formulated as an XOR game, which is a particular case of unique games, their optimal version is just the XOR game, as I mentioned in the previous post. The game is very simple: it has $m$ inputs and 2 outputs per party, the inputs are uniformly distributed over integers $x,y \in \{0,\ldots,m-1\}$ that are either equal or differ by one mod $m$, and Alice and Bob win if they give equal outputs, except when $x=y=0$, in which case they must give different outputs. More formally, the game has distribution over inputs
\[\mu(x,y) = \frac1{2m}([x=y]+[x=y+1 \mod m]),\] where $[\Pi]$ is 1 if the proposition $\Pi$ is true and 0 otherwise, and predicate
\[V(a,b,x,y) = [a\oplus b=\delta_{x0}\delta_{y0}],\]where $\delta_{ij}$ is the usual Kronecker delta.
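To make the definition concrete, here’s a minimal brute-force check in Python (a sketch; the function name `local_bound_bc` is just mine): it enumerates all deterministic local strategies and recovers the local bound $1-\frac1{2m}$ stated below.

```python
from itertools import product

def local_bound_bc(m):
    """Brute-force the local bound of the Braunstein-Caves game with m inputs."""
    # Allowed question pairs: x == y, or x == y+1 (mod m); each has weight 1/(2m).
    pairs = [(x, x) for x in range(m)] + [((y + 1) % m, y) for y in range(m)]
    best = 0.0
    # A deterministic local strategy is a function {0,...,m-1} -> {0,1} per party.
    for a in product((0, 1), repeat=m):
        for b in product((0, 1), repeat=m):
            wins = sum((a[x] ^ b[y]) == (1 if (x, y) == (0, 0) else 0)
                       for x, y in pairs)
            best = max(best, wins / (2 * m))
    return best

print([local_bound_bc(m) for m in (2, 3, 4)])  # matches 1 - 1/(2m)
```

The parity of the XOR constraints makes it impossible to satisfy all $2m$ of them, and any $2m-1$ can be satisfied, which is where $1-\frac1{2m}$ comes from.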

The local bound of this nonlocal game, the maximal probability of winning it with local hidden variables, is
\[\omega_\ell = 1-\frac1{2m},\]and its Tsirelson bound, the supremal probability of winning it with quantum mechanics, is
\[\omega_q = \cos^2\left(\frac\pi{4m}\right).\] At first sight this is very disappointing, as $\omega_q-\omega_\ell$ decreases monotonically with $m$, so it would seem that the best game is the one for $m=2$, where it reduces to the CHSH game. What matters, though, is not really the gap $\chi$ between the local and Tsirelson bounds, but the expected $p$-value you get after playing $n$ rounds of the nonlocal game. The expected $p$-value is upper bounded by $(1-\chi^2)^n$, which reinforces the impression that the Braunstein-Caves game sucks, but a more refined upper bound is
\[p \le \left(\sqrt{\omega_\ell \omega_q}+\sqrt{(1-\omega_\ell)(1-\omega_q)}\right)^{2n},\] which is actually minimized for $m=4$. So yes, the Braunstein-Caves game is actually better than the CHSH game for rejecting local hidden variables! The $p$-value is not all that matters, though; even though the expected $p$-value is larger, the Braunstein-Caves game for larger $m$ is still useful for self-testing and proving measurement incompatibility.
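The claim that $m=4$ is optimal is easy to check numerically. Here is a quick sketch (the name `pvalue_base` is just illustrative) that evaluates the per-round factor of the refined bound:

```python
import math

def pvalue_base(m):
    """Per-round factor of the refined p-value bound for the Braunstein-Caves game."""
    wl = 1 - 1 / (2 * m)                      # local bound
    wq = math.cos(math.pi / (4 * m)) ** 2     # Tsirelson bound
    return (math.sqrt(wl * wq) + math.sqrt((1 - wl) * (1 - wq))) ** 2

# The bound after n rounds is pvalue_base(m)**n, so minimizing the base minimizes it.
best_m = min(range(2, 21), key=pvalue_base)
print(best_m)  # 4
```

The difference between $m=4$ and $m=5$ is tiny (about $5\times10^{-5}$ in the base), but $m=4$ does win.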

$I_{mm22}$ inequalities

The second example I’d like to show are the $I_{mm22}$ inequalities. Like the Braunstein-Caves inequalities, they have 2 outputs and $m$ inputs per party, but unlike them, they are facets of the local polytope, so we could expect the $I_{mm22}$ inequalities to be better in all aspects. Well, not for the gap. We only have numerics here, as the Tsirelson bounds are not known exactly, and the $I_{mm22}$ inequalities cannot be transformed into a unique game or a CGLMP game, the only cases where we know the optimal form analytically, but the numbers are pretty disheartening:

$m$ $\omega_\ell$ $\omega_q$ $\chi$
2 0.7500 0.8536 0.1036
3 0.8000 0.8502 0.0502
4 0.8333 0.8654 0.0321
5 0.8571 0.8795 0.0224

In this case the small gap is not misleading: the expected $p$-value does go down monotonically with $m$, and thus the best $I_{mm22}$ inequality for rejecting local hidden variables is in fact the one for $m=2$, where it also reduces to the CHSH. Again, the $p$-value is not everything: the $I_{3322}$ inequality is violated by quantum states that do not violate the CHSH inequality, and its Tsirelson bound violation is probably attained only with infinite-dimensional quantum states, so it is still a pretty interesting inequality.

The example of the $I_{3322}$ inequality is also a good illustration of the arbitrariness in how you write a Bell inequality: pretty much all papers that talk about it formulate it differently, and it is rather tedious to switch from one form to another. To solve this problem we formulated it as a somewhat nice nonlocal game with the bounds mentioned above: if Alice and Bob are given inputs $(0,1)$, $(1,0)$, $(0,2)$, or $(2,0)$, which happens with probability $1/10$ each, they have to give different answers. If they are given instead inputs $(1,2)$ or $(2,1)$, which again happens with probability $1/10$, they have to give equal answers. The last case is when they get inputs $(0,0)$ or $(1,1)$, which happens with probability $1/5$ each. In this case they have to answer anything other than $(0,0)$. We couldn’t find even mildly nice nonlocal games for higher $m$, but since the solution is far from unique hopefully somebody else will manage to do it.
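Since this game is so small, its local bound can be checked directly by enumerating all $2^3 \times 2^3$ deterministic strategies. The sketch below (names are mine) recovers the $\omega_\ell = 0.8$ from the table above:

```python
from itertools import product

# The I_3322 game described above: question pairs, their probabilities,
# and the corresponding winning conditions.
DIFFER = [(0, 1), (1, 0), (0, 2), (2, 0)]   # answers must differ, prob 1/10 each
EQUAL  = [(1, 2), (2, 1)]                   # answers must be equal, prob 1/10 each
NOT_00 = [(0, 0), (1, 1)]                   # answers must not be (0,0), prob 1/5 each

def win_probability(a, b):
    """Winning probability of the deterministic strategy pair (a, b)."""
    p = sum(0.1 for x, y in DIFFER if a[x] != b[y])
    p += sum(0.1 for x, y in EQUAL if a[x] == b[y])
    p += sum(0.2 for x, y in NOT_00 if (a[x], b[y]) != (0, 0))
    return p

local = max(win_probability(a, b)
            for a in product((0, 1), repeat=3)
            for b in product((0, 1), repeat=3))
print(round(local, 4))  # 0.8
```

One optimal strategy is simply Alice always answering 1 and Bob always answering 0: it wins all the different-answer and not-$(0,0)$ cases and loses the equal-answer ones.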

CGLMP inequalities

The third example I’d like to show are the CGLMP inequalities. Unlike with the Braunstein-Caves and $I_{mm22}$ inequalities, the gap finally gets larger than CHSH’s. Apparently it is easier to get a larger gap by increasing the number of outputs than by increasing the number of inputs. In any case, that’s what CGLMP does: it has 2 inputs and $d$ outputs per party. Using the form presented here, it’s easy to massage them into the nonlocal game with input distribution $\mu(x,y) = 1/4$ and predicate
\[V(a,b,x,y) = \sum_{k=0}^{d-1} \left(1-\frac{k}{d-1}\right)\left[a-b = (-1)^{x\oplus y}k + xy \mod d\right],\]which as mentioned before is already the optimal one.

Curiously, this predicate is probabilistic: if the answers satisfy the condition $\left[a-b = (-1)^{x\oplus y}k + xy \mod d\right]$ for some $k$, the referee accepts them with probability $\left(1-\frac{k}{d-1}\right)$. Even more curiously, it is always possible to make this predicate deterministic, at the cost of having a non-optimal nonlocal game. The best deterministic predicate we found2 is
\[ V(a,b,x,y) = \left[ (a-b)(-1)^{x\oplus y} \ge xy \right],\] which reduces the size of the gap by a factor of $(d-1)/d$.

The optimal version has a local bound of $\omega_\ell = 3/4$ for all $d$. The Tsirelson bounds are pretty much known: the lower bounds found here match the upper bounds from the NPA hierarchy within numerical precision, so in all probability they are the actual Tsirelson bounds. There’s no proof, however, except for the $d=2$ case, which is as usual just the CHSH inequality. For $d=3$ there’s an almost nice expression, $\frac{1}{24}(15+\sqrt{33})$, but for higher $d$ the lower bound is just the norm of a $d \times d$ matrix with irrational coefficients, so there’s little hope for a simple form. The limit is known, though: for large $d$ the Tsirelson bound converges to 1.

Here are some values for small $d$:

$d$ $\omega_\ell$ $\omega_q$ $\chi$
2 0.7500 0.8536 0.1036
3 0.7500 0.8644 0.1144
4 0.7500 0.8716 0.1216
5 0.7500 0.8770 0.1270
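The local bound $\omega_\ell = 3/4$ can also be brute-forced: for each $(a,b,x,y)$ exactly one $k$ satisfies the condition in the predicate, so the predicate evaluates to $1-k/(d-1)$ for that $k$. A sketch (function names are mine):

```python
from itertools import product

def cglmp_local_bound(d):
    """Brute-force the local bound of the optimal CGLMP game with d outputs."""
    def predicate(a, b, x, y):
        # Exactly one k in {0,...,d-1} satisfies a - b = (-1)^(x XOR y) k + x*y (mod d);
        # the referee then accepts with probability 1 - k/(d-1).
        sign = -1 if (x ^ y) else 1
        k = (sign * (a - b - x * y)) % d
        return 1 - k / (d - 1)

    # Deterministic strategies: a function {0,1} -> {0,...,d-1} per party.
    return max(sum(predicate(ax[x], by[y], x, y) for x in (0, 1) for y in (0, 1)) / 4
               for ax in product(range(d), repeat=2)
               for by in product(range(d), repeat=2))

print([round(cglmp_local_bound(d), 6) for d in (2, 3, 4)])  # [0.75, 0.75, 0.75]
```

The reason the bound is $3/4$ for every $d$ is a parity argument: the four values of $k$ always sum to $-1 \bmod d$, so their total is at least $d-1$, giving a score of at most $1-\frac{d-1}{4(d-1)} = 3/4$, which is achievable.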

CHSH-$d$ game

The fourth example I’d like to show is the CHSH-$d$ game, introduced here. It looks rather innocent: it has $d$ inputs and $d$ outputs per party, with uniform distribution over the inputs $\mu(x,y) = \frac{1}{d^2}$ and predicate
\[V(a,b,x,y) = [a+b=xy \mod d],\]
so it is a straightforward generalisation of the CHSH game. It is also a unique game, so this form is already the optimal one. Seems simple, no? Nothing could be further from the truth. Neither the local bound nor the Tsirelson bound is known; the only analytical result is an upper bound to the Tsirelson bound,
\[ \omega_q \le \frac1d + \frac{d-1}{d\sqrt{d}},\] valid only when $d$ is a prime number or a power thereof, and known not to be tight in general. It was first proved here, but I find the later proof much easier to understand. A fascinating variant of CHSH-$d$ for which the above expression is in fact the Tsirelson bound was found here.

The local bound is known only for small $d$, found here together with good lower bounds on the Tsirelson bound:

$d$ $\omega_\ell$ $\omega_q$ $\chi$
2 0.7500 0.8536 0.1036
3 0.6667 0.7124 0.0457
5 0.4800 0.5375 0.0575
7 0.3878 0.4587 0.0709
11 0.3058 0.3325 0.0267
13 0.2781 0.2987 0.0206

We see that the gap is maximal for $d=2$, with a smaller peak at $d=7$, and seems to go only down afterwards. So for all its beauty, the CHSH-$d$ game is useless for the bare purpose of rejecting local hidden variables.
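For reference, here’s a sketch comparing the analytic upper bound above with the numerical lower bounds on $\omega_q$ from the table (names are mine): it is tight at $d=2$, where both give $\approx 0.8536$, and strictly above the best known lower bounds for larger $d$.

```python
import math

def tsirelson_upper(d):
    """Analytic upper bound on the CHSH-d Tsirelson bound (d a prime power)."""
    return 1 / d + (d - 1) / (d * math.sqrt(d))

# Numerical lower bounds on omega_q, copied from the table above.
lower = {2: 0.8536, 3: 0.7124, 5: 0.5375, 7: 0.4587, 11: 0.3325, 13: 0.2987}
for d, lb in lower.items():
    print(d, round(tsirelson_upper(d), 4), lb)
```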

Tailored inequalities

Introduced here, these are an ingenious generalisation of the Braunstein-Caves inequalities combined with a variant of the CGLMP inequalities, with $m$ inputs and $d$ outputs per party. They are tailored for the purpose of achieving maximal violation with the maximally entangled state of dimension $d$, and have the very convenient feature that the local and Tsirelson bounds are known exactly for all $m$ and $d$.

At first sight they look quite complicated, but when reformulated as a nonlocal game they are actually rather straightforward: the distribution over the inputs is the same as for the Braunstein-Caves inequalities,
\[\mu(x,y) = \frac1{2m}([x=y]+[x=y+1 \mod m]),\]and like in the CGLMP case the predicate is probabilistic, being given by
\[V(a,b,x,y) = \sum_{k=0}^{d-1} \alpha_k [ a-b + (-1)^{\delta_{xy}+1}k + \delta_{x0}\delta_{y0} = 0 \mod d],\]and the $\alpha_k$ are where all the fun is happening:
\[\alpha_k = \frac{g(k)-g(d-1)}{g(0)-g(d-1)},\] where
\[g(k) = \cot\left(\pi\frac{2mk+1}{2md}\right).\]Luckily this game shares the key property of the CGLMP game that makes this form already the optimal one: the predicate depends only on $a-b \mod d$, so we don’t need to do any numerics.

The non-signalling bound is simply 1 (unlike in the paper, where it is a horrible expression), the local bound is
\[\omega_\ell = 1-\frac1{2m},\]the same as for the Braunstein-Caves inequalities (and also unlike in the paper, where it is another horrible expression), and the Tsirelson bound is
\[\omega_q = \frac{d g(\frac{d-1}{2m})-g(d-1)}{g(0)-g(d-1)},\]which I must admit is more complicated than the expression in the paper. Well, we cannot have everything. For fixed $d$ and large $m$ the Tsirelson bound converges to 1, and for fixed $m$ and large $d$ it converges to $\frac1{2m} + \frac{2m-1}{4m^2}\pi \cot(\frac{\pi}{2m})$.
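As a sanity check of these expressions, the following sketch (function names are mine) verifies numerically that for $d=2$ the Tsirelson bound reduces to the Braunstein-Caves value $\cos^2(\pi/4m)$:

```python
import math

def g(k, m, d):
    """The cotangent function from the tailored inequalities; k need not be an integer."""
    return 1 / math.tan(math.pi * (2 * m * k + 1) / (2 * m * d))

def omega_q(m, d):
    """Tsirelson bound of the tailored inequality, using the expression above."""
    return ((d * g((d - 1) / (2 * m), m, d) - g(d - 1, m, d))
            / (g(0, m, d) - g(d - 1, m, d)))

# For d = 2 the tailored inequality is Braunstein-Caves, so the bounds must agree.
for m in (2, 3, 4, 5):
    assert abs(omega_q(m, 2) - math.cos(math.pi / (4 * m)) ** 2) < 1e-9
```

The $d=2$ reduction can also be done by hand with the double-angle formula for the cotangent, which is reassuring given how different the two expressions look.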

The gap here decreases monotonically as a function of both $d$ and $m$, but not the expected $p$-value: it is minimised at $d=2$ and $m=4$, when the tailored inequality reduces to Braunstein-Caves. For a fixed $m$ increasing $d$ only decreases the $p$-value, but for a fixed $d$ the minimum seems to be attained at some $m$ that increases with $d$. If one wants a larger gap one could set $\alpha_k = (1-k/(d-1))$, so that the tailored inequalities strictly generalise CGLMP, but that would defeat both points of having a maximal violation for a maximally entangled state and having a simple expression for the Tsirelson bound.

Żukowski, Gisin, and Tsirelson

These are two closely related XOR games with the curious property that the measurement setting is a continuous variable, $x,y\in[0,1]$. One might hope that one can do better than CHSH by allowing this extravagance, but alas, it is not the case. I find them interesting anyway, because these games make it impossible to make the usual mistake of doing the experiment by estimating each individual conditional probability appearing in the Bell inequality. There are infinitely many! You actually need to do the experiment by playing the game: sampling the inputs according to the probability distribution $\mu(x,y)$, and checking whether you won each round by looking at the predicate $V(a,b,x,y)$.

The first variant was introduced by Żukowski in 1993, and the second by Gisin in 1999. Żukowski proved the local bound for his variant, but not the Tsirelson bound. Gisin proved neither, and Tsirelson proved both bounds for Gisin’s variant.

Let’s start with Gisin’s version, the simpler one. Here the probability density over the inputs is uniform, $\mu(x,y) = 1$, and the predicate is given by
\[V(a,b,x,y) = [a\oplus b = [x \ge y]].\] The local bound is $\omega_\ell = 3/4$, and the Tsirelson bound is $\omega_q = 1/2+1/\pi$. The gap is thus $1/\pi-1/4 \approx 0.068$, smaller than CHSH’s.

In Żukowski’s version the probability density of the inputs is nontrivial,
\[\mu(x,y) = \frac1{8\pi}|\sin(2\pi(x+y))|,\]and the predicate is a bit weird3,
\[V(a,b,x,y) = [a\oplus b = [x+y \ge 1/2 \mod 1]],\] but that pays off with a slightly larger gap: the local bound is $\omega_\ell = 1/2+1/\pi$, and the Tsirelson bound satisfies $\omega_q \ge 1/2+\pi/8$, so the gap is at least $\pi/8-1/\pi \approx 0.074$. Also, this nonlocal game has a nice generalisation to multiple parties, which in some sense is stronger than the usual WWŻB inequalities4.

That’s it! I plan to use this post as a catalogue of the meaningful forms of cool Bell inequalities, so if you know some that I missed please let me know. I’m especially interested in Bell inequalities for which the gap increases with increasing number of inputs; I’m a bit surprised that I couldn’t find a single example of this happening in the physics literature, as the computer science literature is full of them.
