DIQKD is here!

This has been a momentous week for quantum information science. The long-awaited experimental demonstration of device-independent quantum key distribution (DIQKD) is finally here! And not only one demonstration, but three in a row. First the Oxford experiment came out, which motivated the München and Hefei groups to get their data out quickly to make it clear they had done it independently.

To give a bit of context, for decades the community had been trying to do a loophole-free violation of a Bell inequality. To the perennial criticism that such an experiment was pointless, because there was no plausible physical model that exploited the loopholes in order to fake a violation, people often answered that a loophole-free Bell test was technologically relevant, as it was a pre-requisite for DIQKD.1 That was finally achieved in 2015, but DIQKD had to wait until now. It’s way harder: you need less noise, higher detection efficiency, and much more data in order to generate a secure key.

Without further ado, let’s look at the experimental results, summarized in the following table. $\omega$ is the probability with which they win the CHSH game, distance is the distance between Alice and Bob’s stations, and key rate is the key rate they achieved.

Experiment   $\omega$   Distance   Key rate
Oxford       0.835      2 m        3.4 bits/s
München      0.822      700 m      0.0008 bits/s
Hefei        0.756      220 m      2.6 bits/s


I’ve highlighted the München and the Hefei key rates in red because they didn’t actually generate secret keys, but rather estimated the rate they would achieve in the asymptotic limit of infinitely many rounds. This is not really a problem for the Hefei experiment, as they were performing millions of rounds per second, and could thus easily generate a key. I suspect they simply hadn’t done the key extraction yet, and rushed to get the paper out. For the München experiment, though, it is a real problem. They were doing roughly 44 rounds per hour. At this rate it would take years to gather enough data to generate a key.
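To get a feeling for how $\omega$ translates into key, here is a minimal sketch of the standard asymptotic bound against collective attacks from Acín et al., PRL 98, 230501 (2007). The rate is per round, not per second, and the QBER value below is an illustrative guess of mine, not a number from the papers; the Hefei analysis in particular uses a modified protocol (see the EDIT below).

```python
# Asymptotic DIQKD key-rate bound per round against collective attacks,
# r >= 1 - h(Q) - h((1 + sqrt((S/2)^2 - 1))/2), from Acin et al., PRL 98, 230501 (2007).
# omega is the CHSH winning probability, Q the quantum bit error rate of the
# key-generation measurement. The Q value is my guess, not taken from the papers.
from math import log2, sqrt

def h(p):
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def key_rate_bound(omega, Q):
    S = 8 * omega - 4                                # CHSH value corresponding to omega
    if S <= 2:                                       # no Bell violation, no device-independent key
        return 0.0
    eve_info = h((1 + sqrt((S / 2) ** 2 - 1)) / 2)   # bound on Eve's information
    return max(0.0, 1 - h(Q) - eve_info)

for omega in (0.835, 0.822, 0.756):
    print(omega, key_rate_bound(omega, Q=0.05))      # ~0.41, ~0.26, 0.0 bits per round
```

With a guessed 5% error rate the Oxford and München values of $\omega$ leave comfortable room for key, while a bare $\omega = 0.756$ gives nothing from this particular bound, which is consistent with Hefei needing the modified analysis discussed below.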

Why is there such a drastic difference between Hefei and München? It boils down to the experimental technique they used to get high enough detection efficiency. Hefei used the latest technology in photon detectors, superconducting nanowire single-photon detectors,2 which allowed them to reach 87% efficiency. München, on the other hand, used a completely different technique: they did the measurement on trapped atoms, which has an efficiency of essentially 100%. The difficulty is to entangle the atoms. To do that you make the atoms emit photons, and do an entangling measurement on the photons, which in turn entangles the atoms via entanglement swapping. This succeeds with very small probability, and is what makes the rate so low.
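To see why the efficiency is so critical, here is a back-of-the-envelope sketch, assuming a maximally entangled state, ideal CHSH measurements, independent losses with efficiency $\eta$ on each side, and the convention of assigning outcome ‘1’ whenever a photon is not detected (no post-selection). With these assumptions the violation disappears below $\eta = 2/(1+\sqrt{2}) \approx 82.8\%$; partially entangled states can tolerate somewhat lower efficiencies, but the ballpark is the same.

```python
# CHSH winning probability as a function of detection efficiency eta, for a
# maximally entangled state with ideal measurements, assigning outcome '1'
# to every no-click event. A toy model, not a description of either experiment.
from math import sqrt

def chsh_win_prob(eta):
    w_both = (2 + sqrt(2)) / 4   # both photons detected: Tsirelson value, ~0.854
    w_one  = 1 / 2               # one lost: the fixed '1' outcome is uncorrelated
    w_none = 3 / 4               # both lost: identical outcomes win on settings 00, 01, 10
    return (eta**2 * w_both
            + 2 * eta * (1 - eta) * w_one
            + (1 - eta)**2 * w_none)

print(chsh_win_prob(0.87))       # ~0.772, above the local bound of 3/4
print(chsh_win_prob(0.80))       # ~0.736, below it: no violation at all
print(2 / (1 + sqrt(2)))         # critical efficiency, ~0.828
```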

What about Oxford? Their experimental setup is essentially the same as München’s, so how did they get a rate orders of magnitude higher? Just look at the distance: in Oxford Alice and Bob were 2 metres apart, and in München 700 metres. The photon loss grows exponentially with distance, so this explains the difference very well. That’s cheating, though. If we are two metres apart we don’t need crypto, we just talk.

One can see this decay with distance very well in the Hefei paper: they did three experiments, with a separation of 20, 80, and 220 metres, and key rates of 466, 107, and 2.6 bits/s. In the table I only put the data for 220 metres separation because that’s the only relevant one.
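As a sanity check, these three key rates are indeed roughly consistent with an exponential decay in the fibre length. This is just my own curve through three points, not an analysis from their paper:

```python
# Fit an exponential decay, rate = r0 * exp(-d / L), to the three Hefei data points.
# Purely illustrative curve-fitting; the decay length is not a number from the paper.
import numpy as np

distance = np.array([20.0, 80.0, 220.0])   # metres of fibre between Alice and Bob
rate     = np.array([466.0, 107.0, 2.6])   # reported key rates in bits/s

slope, intercept = np.polyfit(distance, np.log(rate), 1)
print(f"decay length ~ {-1 / slope:.0f} m")          # roughly 40 m with these numbers
for d, r in zip(distance, rate):
    print(d, r, round(np.exp(intercept + slope * d), 1))
```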

It seems that the Hefei experiment is the clear winner then, as the only experiment achieving workable key rates over workable distances. I won’t crown them just yet, though, because they haven’t done a standard DIQKD protocol, but added something called “random post-selection”, which should be explained in a forthcoming paper and in the forthcoming Supplemental Material. Yeah, when those appear I’ll be satisfied, but not before.

EDIT: In the meantime the Hefei group did release the Supplemental Material and the paper explaining what they’re doing. It’s pretty halal. The idea is to use the full data for the Bell test as usual, as otherwise you’d open the detection loophole and compromise your security, but for the key generation only use the data where both photons have actually been detected. This gets you much more key, as the data where one or both photons were lost is pretty much uncorrelated.

There’s an interesting subtlety: they can’t simply discard all the data where a photon has been lost, because they only have one photodetector per side; Alice (or Bob) simply assigns outcome ‘0’ to the photons that came to this photodetector, and ‘1’ to the photons that didn’t arrive there. Now if there was no loss at all, the ‘1’ outcomes would simply correspond to the photons with the other measurement result. But since there is loss, they correspond to a mixture of the other measurement result and the photons that have been lost, and there is no way to distinguish them. Still, they found it’s advantageous to discard some of the data with outcome ‘1’, as this improves the correlations.
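Here is a toy version of that bookkeeping, just to illustrate the effect. It’s my own cartoon, with perfectly correlated ideal outcomes, an assumed detection efficiency, and an assumed keep-probability for the ‘1’ outcomes; it is not the Hefei protocol or its security analysis. The point is simply that randomly discarding part of the ‘1’ data (for key generation only, never for the Bell test) improves the agreement of the raw key, at the price of keeping fewer rounds.

```python
# Toy illustration of the one-detector outcome assignment and random post-selection.
# eta and q are assumptions of mine, not parameters from the Hefei experiment.
import random

eta, q, n = 0.87, 0.2, 200_000   # detection efficiency, keep-probability for '1', rounds
kept = kept_agree = agree = 0

for _ in range(n):
    bit = random.randint(0, 1)                   # ideal, perfectly correlated outcome
    a = bit if random.random() < eta else 1      # no click -> Alice records outcome '1'
    b = bit if random.random() < eta else 1      # no click -> Bob records outcome '1'
    agree += (a == b)
    # each side locally keeps '0' outcomes always and '1' outcomes with probability q
    keep_a = (a == 0) or (random.random() < q)
    keep_b = (b == 0) or (random.random() < q)
    if keep_a and keep_b:
        kept += 1
        kept_agree += (a == b)

print("agreement, all rounds: ", agree / n)           # ~0.89 with these numbers
print("agreement, kept rounds:", kept_agree / kept)   # ~0.95, from fewer rounds
```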

Now they don’t have a full security proof for this new protocol with random post-selection; they only examined the simplified scenario where the source emits the same state in each round and the eavesdropper makes a measurement in each round independently. I suppose this is just a matter of time, though. Extending the security proof to the general case is hard, but usually boils down to proving that the eavesdropper can’t do anything better than attacking each round independently.

EDIT2: It turns out the Hefei experiment didn’t actually use a random setting for each round, as is necessary in DIQKD, but just did blocks of measurements with fixed settings. It’s straightforward to change the setup to use randomized settings: the standard method is to use a Pockels cell to change the setting electronically (rather than mechanically) at a very high rate. However, Pockels cells are nasty devices, which use a lot of power and even need active cooling, and are bound to increase the noise in the setup. They also cause bigger losses than regular waveplates. It’s hard to estimate how much, but it’s safe to assume that the key rate of the Hefei experiment will go down when they do it.


15 Responses to DIQKD is here!

  1. Daniel Harvey says:

    But the Hefei experiment did it all on the same table, how does it help with loopholes?

  2. Mateus Araújo says:

    The locality loophole is not relevant for DIQKD: we need to assume anyway that the laboratories are shielded, so we don’t gain anything by doing Alice and Bob’s measurements with space-like separation. It is relevant for Alice and Bob to be distant, so that doing crypto is not pointless. Now the Hefei experiment simulated this distance by having a lot of optical fibre spooled inside their laboratory. That doesn’t matter: the photon loss is the same whether the 220 metres of fibre are spooled up or laid out straight, so it does show that you can generate the key at a distance.

    The München experiment made an effort to physically put Alice and Bob far away, which I find cute but not really relevant. Note that the geodesic distance between their labs is 400 metres; the 700 metres that I quoted is the distance along the fibre, and I think the latter is what matters.

  3. Scott Glancy says:

    I’m sorry to be late to this discussion. I just found it from the arXiv’s trackback link.

    Even if one is willing to assume that laboratories are shielded and therefore leak no information to eavesdroppers, I think that there is at least one good reason to impose space-like separation. In the absence of space-like separation, mundane experimental errors can produce data that violates a Bell inequality, even when no entanglement is present. For example, I have heard of experiments in which back-reflection can travel from Alice to Bob, and the back-reflection can depend on Alice’s polarization setting. Of course one can engineer optics to prevent such back-reflections, but that engineering will be device dependent. Requiring spacelike separation provides absolute assurance that no experimental errors, no matter how complicated, subtle, or insidious, are affecting the experiment.

    In my opinion, spacelike separation should be required for a truly device-independent QKD system.

  4. Mateus Araújo says:

    I think this level of paranoia is only appropriate for Bell tests, not DIQKD.

    First of all, could this back-reflection actually help to increase the probability of victory in the CHSH game? Not in some crazy hidden-variable model, but in actual physics? I really doubt it. I’d love to be proven wrong, though.

    Secondly, such a back-reflection is only plausible if Alice and Bob are inside the same laboratory, like in the Oxford and Hefei experiments. But in any real DIQKD deployment they will be in separate laboratories, like in the München experiment. Now how could this back-reflection travel from Alice to Bob? It seems especially strange because they’re doing it via entanglement swapping, not an all-optical setup.

  5. Scott Glancy says:

    Yes back-reflection types of errors can increase the probability of victory in the CHSH game. We have seen such effects in the Bell Test at NIST Boulder, with measurement stations separated by ~180 meters, when spacelike separation is not enforced. The amount of genuine Bell inequality violation is very small, and the whole experiment is calibrated and optimized to increase the inequality violation. It can be surprisingly easy to trick yourself.

    I agree that back-reflection errors seem implausible in the entanglement swapping experiments, but I don’t know enough about those experiments to be certain that spurious communication between measurement stations is absolutely impossible.

    The great promise of device independent quantum key distribution is that, given the input and output statistics and the locations and times of events, one can certify that the key is secure without ANY other information about the physics or engineering of the devices. Relying on what effects are “plausible” or not, based on the construction of the devices, misses the point of device independence. We would like the key to be secure even if completely implausible things are happening and even if the devices were constructed by an adversary.

    The paranoia is the point of DIQKD.

  6. Mateus Araújo says:

    "Yes back-reflection types of errors can increase the probability of victory in the CHSH game. We have seen such effects in the Bell Test at NIST Boulder, with measurement stations separated by ~180 meters, when spacelike separation is not enforced."

    Sorry, I can’t think of a polite way to put it: I don’t believe you. Show me data, show me calculations. Are you telling me that some photons travel back from Alice’s laboratory to Bob’s laboratory, that these photons carry information about Alice’s setting, and that these photons make Bob’s detector more likely to give the same outcome as Alice when the settings are 00, 01, and 10, but more likely to give the opposite outcome when the settings are 11? Nope, I’m not buying that.

    "We would like the key to be secure even if completely implausible things are happening and even if the devices were constructed by an adversary."

    That’s just not possible. If your devices were constructed by an adversary they could just broadcast the bits of your key. And if completely implausible things are happening then this broadcast device is built out of only a few atoms and broadcasts through neutrinos, so you’re never going to find it.

    You need a lot of information about physics and engineering to do secure crypto anyway. And if you’re more worried about lack of space-like separation than about somebody bugging your lab then paranoia is not getting you security, it’s just a mental disease.

  7. Scott Glancy says:

    Yes, I am telling you that very few photons can travel from Alice’s laboratory to Bob’s laboratory, carrying information about Alice’s setting. The optics that implement different measurement settings have slightly different reflectivities depending on the measurement setting, and there is a very small probability that a photon reflecting off of Alice’s optics can find its way to Bob’s detector.

    I’m sorry, but we have not published the data showing spurious violation of a Bell Inequality. At the time, we considered it to be an error, and not worth publishing. However, given recent interest we have been considering replicating the situation and writing a new paper about it.

    I should have been more specific about what exactly I meant when I wrote that “We would like the key to be secure even if completely implausible things are happening and even if the devices were constructed by an adversary.”. The DIQKD security model requires complete isolation and security of the devices that produce the measurement settings choices and the devices that process and store the keys. However, even if the entanglement sources, measurement settings devices, and photon detectors (or analogous equipment for atomic or solid-state systems) are all constructed by an adversary, the DIQKD key is secure, provided that all of the Bell-Test-related loopholes are closed.

    I have to agree that the particular paranoia that motivates DIQKD is completely impractical and not useful for any application that I know of. I’m not arguing that one needs to close all loopholes to have practical security; I’m only arguing that one should close all possible loopholes in order to claim that DIQKD has been achieved.

  8. Scott Glancy says:

    I should add that, even if you don’t believe that we have seen the back-reflection error in an experiment, that and other kinds of errors that exploit the spacelike separation loophole are possible in principle, and they should motivate us to close that loophole when the goal is device independence.

  9. Mateus Araújo says:

    "Yes, I am telling you that very few photons can travel from Alice’s laboratory to Bob’s laboratory, carrying information about Alice’s setting. The optics that implement different measurement settings have slightly different reflectivities depending on the measurement setting, and there is a very small probability that a photon reflecting off of Alice’s optics can find its way to Bob’s detector."

    No, that I find plausible. What I don’t believe, and you still haven’t answered, is that this can actually increase the probability of winning the CHSH game.

    "However, even if the entanglement sources, measurement settings devices, and photon detectors (or analogous equipment for atomic or solid-state systems) are all constructed by an adversary, the DIQKD key is secure, provided that all of the Bell-Test-related loopholes are closed."

    I’m familiar with the DIQKD slogan. I’m just saying that it is hogwash. What is true is that you can’t fake a Bell violation, even if your devices are built by your adversary. But this doesn’t imply that you have a secure key, unless you also assume that your adversary is playing nice. Which is of course not an acceptable assumption in crypto.

  10. Scott Glancy says:

    Yes, back-reflection types of errors did increase the probability of victory in the CHSH game in our laboratory. When I wrote that earlier, you responded with “I don’t believe you.”. If you need more evidence than my words, you will have to watch the arXiv. Maybe we will write a paper about it some day.

    I am curious about why you don’t believe me. Do you think that laws of physics prevent back-reflection types of errors from ever increasing the probability of victory in the CHSH game? Or do you think that our experimental apparatus prevented any back reflection, and when we falsely thought that the back-reflection was causing CHSH victory, in fact it was only caused by the measurement of the genuine entangled photons? Or do you think that although back-reflected photons were present, they would necessarily decrease the CHSH victory probability, and we made mistakes when we estimated the victory probability? Or do you think that back-reflection photons may be present, and they may increase victory probability, but that the amount of increase cannot be statistically significant? Or something else? (I’ve been following your example of using the “CHSH victory” vocabulary, but we actually tested a different Bell inequality.)

    When I wrote the sentence you quote above “However, even if … the DIQKD key is secure, provided that all of the Bell-Test related loopholes are closed.”, I was not being sufficiently careful. I was assuming that in the context of this conversation, all other security requirements of DIQKD had been met and we were only concerned with the closure of the Bell-Test spacelike separation loophole. I also find many of the DIQKD slogans to be hogwash, but I believe that the detailed security models in the DIQKD theory papers are correct.

  11. Mateus Araújo says:

    Let’s assume, as you claimed, that the reflectivity of the setup is slightly different depending on the setting. Say the reflectivity is higher if Alice’s setting is 0. Thus Bob’s detector has a higher probability of clicking if Alice’s setting is 0. How does that help? To win the CHSH game they must both click or both not click if the settings are 00, 01, and 10, and if the setting is 11 one must click and the other must not. You’re not changing these correlations or anti-correlations at all with your higher reflectivity.

    Perhaps the issue is that you are using the CH inequality; it is not equivalent to the CHSH inequality if the correlations are signalling, which is the case here. The expression in your paper is
    \[ p(00|00)-p(01|01)-p(10|10)-p(00|11) \le 0.\]
    Now if Bob’s probability of clicking is higher when Alice’s setting is 0 (let’s say a click is outcome 0 and a non-click is outcome 1), then the first term increases, while the other three terms decrease. If you had a sum equal to zero before, then a slight increase is indeed capable of producing a spurious violation.

    This is the argument I was asking you to make. In any case, ok, I changed my mind: back-reflection can indeed produce a spurious violation (the toy model at the end of this comment makes it concrete).

    I bet, though, that if you get the same data that produced the spurious violation and plug it into the CHSH inequality instead you won’t get a spurious violation. Could you do that?

    Also, please don’t use the CH inequality at all. Even if the correlations are non-signalling, the same data will produce a smaller p-value for the CHSH inequality than it will for the CH inequality. I’ve proven this here.
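    To make this concrete, here is a toy model, my own construction rather than anything from the NIST experiment: a mixture of two deterministic local strategies sitting exactly at the local bound, plus a small probability eps of a forced click on Bob’s side whenever Alice’s setting is 0. The CH expression above picks up a spurious “violation” of eps/2, while the CHSH winning probability stays at exactly 3/4.

    ```python
    # Toy signalling model: two deterministic local strategies mixed 50/50, plus a
    # setting-dependent forced click on Bob's side (probability eps when Alice's
    # setting is 0). My own construction, not data from any experiment.
    from itertools import product

    eps = 0.01

    def p(a, b, x, y):
        """Signalling distribution p(a,b|x,y) of the toy model."""
        total = 0.0
        for lam in (0, 1):                        # hidden variable, probability 1/2 each
            a0 = lam                              # Alice outputs lam regardless of x
            b0 = lam if y == 0 else 1 - lam       # Bob outputs lam for y=0, flips for y=1
            if x == 0:                            # back-reflection: forced click (outcome 0)
                pb = (1 - eps) * (b == b0) + eps * (b == 0)
            else:
                pb = 1.0 * (b == b0)
            total += 0.5 * (a == a0) * pb
        return total

    ch = p(0, 0, 0, 0) - p(0, 1, 0, 1) - p(1, 0, 1, 0) - p(0, 0, 1, 1)
    chsh_win = sum(p(a, b, x, y)
                   for x, y in product((0, 1), repeat=2)
                   for a, b in product((0, 1), repeat=2)
                   if (a ^ b) == (x & y)) / 4

    print("CH value:", ch)                        # eps/2 > 0: spurious violation
    print("CHSH winning probability:", chsh_win)  # exactly 3/4: no violation
    ```

    Of course this is just one possible error model; whether your actual data behaves the same way is exactly what I’m asking above.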

  12. Scott Glancy says:

    The back-reflection error was only observed when the time-window for the measurement outcomes was allowed to be excessively large, during some setup and calibration tests. It was easy to fix the error by limiting the time-window so that the choices and the ends of the measurement windows were spacelike separated. Enforcing spacelike separation is safer than allowing timelike separation and finding an inequality, such as the CHSH inequality, that is not sensitive to the particular back-reflection error that we saw. Without the spacelike separation one can never be absolutely certain that some even more contrived and implausible error (or adversary) isn’t causing the violation of whatever inequality one might be using.

    Our data did not violate the CHSH inequality at all, so using the CHSH inequality would surely produce a larger p-value than using the CH inequality. If I understand your paper correctly, it shows that if data is near the Tsirelson bound, then the p-value will be smaller when the gap between the Tsirelson bound and the local bound is larger. However, the data from our optical experiment was (and is) just barely outside the local bound when using the CH inequality. Using the CHSH inequality, as you advise, should be more useful for atomic and NV-center experiments that can get much closer to the Tsirelson bound. (I only read over what seemed to be the most relevant section of your paper, so please correct me if I missed something. It looks like there are several other interesting results in that paper, so I should spend some more time with it. Thanks for the recommendation!)

  13. Mateus Araújo says:

    "Our data did not violate the CHSH inequality at all,"

    No no, that shouldn’t happen. I’m certain your data is non-signalling to a very good approximation, so you should violate the CHSH inequality if and only if you violate the CH inequality. I took the data from your supplemental material, and indeed, it got a value of $3/4 + 5.1 \times 10^{-6}$ in the CHSH inequality. I also put it into the CH inequality, renormalized to make the violation comparable, and got $3/4 + 3.5 \times 10^{-6}$, so you actually got a larger violation of CHSH than CH. In this form a larger violation does imply a smaller $p$-value.

    I’m glad you appreciated my paper. It’s not about data near the Tsirelson bound, though; I wrote in terms of the Tsirelson bound because I was interested in the smallest possible $p$-value that a given inequality can give you, but $\omega_q$ there can be any probability of victory that you expect from quantum mechanics. The theorem is true for any $\omega_q$: putting it into the CHSH form always decreases the $p$-value, for any non-signalling data. The intuitive idea is to make the inequality blind to spurious signalling, which increases the statistical distinguishability of any two non-signalling points (of course, it’s not possible to make the inequality blind to adversarial signalling).
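    Just to be explicit about what I mean by the value in the CHSH inequality: it is the probability of winning the CHSH game, estimated from the full behaviour,
    \[ \omega_{\mathrm{CHSH}} = \frac14 \sum_{x,y \in \{0,1\}} \sum_{a \oplus b = xy} p(ab|xy), \]
    and the CH number is the analogous quantity for the renormalised CH game.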

  14. Scott Glancy says:

    Apparently, my memories of these events are a lot hazier than I realized. As you said, our published data does violate both the CHSH and the CH inequality, and it violates CHSH more than CH. When I said that it did not violate CHSH, I incorrectly believed that violating CHSH required greater than 83% efficiency, whereas our experiment only had ~75% efficiency. My friends Yanbao Zhang and Peter Bierhorst reminded me that this is not completely true, and that old works that proved this 83% requirement were based on a particular model of a CHSH experiment that does not describe the modern Bell tests. (Someone should update the “Loopholes in Bell tests” Wikipedia page.)

    To our (nonsignaling) loophole-free data (in Table S-III of arXiv:1511.03189), Peter applied his p-value calculation method using both the CH and the CHSH inequality and obtained a much lower p-value using the CH inequality.

    I do not know if the data with back-reflection errors that appeared when we allowed timelike connection violated the CHSH inequality or not. Judging from the physics of the error mechanism, it probably did violate the no-signaling constraints.

    Your use of the CHSH inequality to remove the influence of signaling when the possibility of causal connection cannot be avoided is very interesting. It reminds me of work by people studying contextuality (for example Ehtibar Dzhafarov’s arXiv:2108.05480), who decompose distributions into a signaling part, a nonsignaling contextual part, and a noncontextual part.

  15. Mateus Araújo says:

    Indeed, the Wikipedia page is completely wrong. It’s based on an old work by Jan-Åke where he proposed a way to test the CHSH while postselecting on coincidences, by estimating the detection efficiency and modifying the inequality. Nope, you just don’t post-select on coincidences if you want a loophole-free test.

    I’m very surprised that Bierhorst got the opposite result from me. To be precise, my statistical test consists of simply counting the number of victories in the CHSH (or CH) game, and he is doing something more complicated, so it is possible that the results would flip. But still I would expect the general intuition to hold true, that if you are sensitive to signalling you are just polluting your data with noise and reducing the statistical significance of your violation.
