## Do not project your relative frequencies onto the non-signalling subspace

It happens all the time. You run an experiment on nonlocality or steering, and you want to test whether the data you collected is compatible with hidden variables. You plug the data into the computer and the answer is no, it is not. You examine it a bit more closely, and you see that it is also incompatible with quantum mechanics, because it is signalling. After a bit of cold sweating, you realize that it is very close to non-signalling; all the trouble happened because the computer needs it to be exactly non-signalling. You then relax, project it onto the non-signalling subspace, and call it a day.

Never do this. Experimental data is sacred. You can’t arbitrarily chop it off to fit your Procrustean bed.

First of all, remember that even if your probabilities are strictly non-signalling, the probability of obtaining relative frequencies that respect the no-signalling equations exactly is effectively zero. There’s nothing wrong with “signalling” frequencies. On the contrary, if some experimentalist reported relative frequencies that were exactly non-signalling I’d be very suspicious. What you should get in a real experiment are frequencies that are very close to non-signalling, but not exactly1.

“That doesn’t help me”, you reply. “I can accept signalling frequencies all day long, but the computer still needs them to be non-signalling in order to test hidden variable models.”

Sure, but what the computer needs are non-signalling probabilities, that you should infer from the signalling frequencies.

“Exactly, and to infer non-signalling probabilities I just project the frequencies onto the non-signalling subspace.”

No! Inferring probabilities from frequencies is the oldest problem in statistics. People have studied this problem to death, and come up with several respectable methods. There’s no point in reinventing the wheel. And if you do insist on reinventing the wheel, you’d better be damn sure that it’s round.

To make it clear that this projection technique is a square wheel, I’ll examine in detail a toy version of the problem of getting non-signalling probabilities. The simplest case of the real problem involves getting from a 12-dimensional space of frequencies to an 8-dimensional non-signalling subspace, which is too much to do by hand for even the most dedicated PhD students2. Instead I’ll go for the minimal scenario, a 2-dimensional space of frequencies that goes down to a 1-dimensional subspace.

Consider then an experiment with 3 possible outcomes, 0, 1, and 2, where our analogue of the no-signalling assumption is that $p_1 = 2p_0$. The possible relative frequencies we can observe lie in the triangle bounded by $p_0 \ge 0$, $p_1 \ge 0$, and $p_0 + p_1 \le 1$. The possible probabilities are just the line $p_1 = 2p_0$ inside this triangle. Again, if we generate data according to these probabilities they will almost surely not fall on the line $p_1 = 2p_0$. Let’s say we observed $n_0$ outcomes 0, $n_1$ outcomes 1, and $n_2$ outcomes 2. What is the probability $p_0$ we should infer from this data?
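The “almost surely off the line” claim is easy to check numerically. Here is a small Python sketch (the sample size, number of trials, and the specific distribution are arbitrary choices of mine): we draw multinomial data from a distribution satisfying $p_1 = 2p_0$ exactly, and see how often the observed counts do.

```python
import numpy as np

rng = np.random.default_rng(42)
p = np.array([0.2, 0.4, 0.4])   # satisfies p1 = 2*p0 exactly
n, trials = 300, 10_000

counts = rng.multinomial(n, p, size=trials)  # shape (trials, 3)

# How often do the observed counts satisfy the constraint exactly?
exactly_on_line = np.mean(counts[:, 1] == 2 * counts[:, 0])
# And how big is the typical violation in the relative frequencies?
typical_violation = np.mean(np.abs(counts[:, 1] - 2 * counts[:, 0]) / n)

print(f"fraction of runs exactly on the line: {exactly_on_line:.3f}")
print(f"typical size of |f1 - 2*f0|: {typical_violation:.3f}")
```

Even though the underlying distribution obeys the constraint perfectly, only a few percent of the runs land exactly on the line, while the violation is small, of order $1/\sqrt{n}$.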

Let’s start with the projection technique. Compute the relative frequencies $f_0 = n_0/n$ and $f_1 = n_1/n$, and project the point $(f_0,f_1)$ onto the line $p_1 = 2p_0$. Which projection, though? There are infinitely many. The most natural one is an orthogonal projection, but that already weirds me out. Why on Earth are we talking about angles between probability distributions? They are vectors of real numbers, sure, we can compute angles, but we shouldn’t expect them to mean anything. Doing it anyway, we get that
$p_0 = \frac15(f_0 + 2f_1)\quad\text{and}\quad p_1 = \frac25(f_0 + 2f_1),$

which do not respect positivity: if $f_0=0$ and $f_1=1$ we have that $p_0+p_1 = 6/5$, which implies that $p_2 = -1/5$.3 What now? Arbitrarily make the probabilities positive? Invent some other method, such as minimizing the distance from the point $(f_0,f_1)$ to the line $p_1 = 2p_0$? Which distance then? Euclidean? Total variation? No, it’s time to admit that it was a bad idea to start with and open a statistics textbook.
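To make the positivity failure concrete, here is a minimal Python sketch of the orthogonal projection (the function name is mine):

```python
def project_onto_line(f0, f1):
    """Orthogonal projection of (f0, f1) onto the line p1 = 2*p0.

    The line has direction (1, 2), so the projection coefficient is
    <(f0, f1), (1, 2)> / |(1, 2)|^2 = (f0 + 2*f1) / 5.
    """
    c = (f0 + 2 * f1) / 5
    return c, 2 * c

# The worst case mentioned in the text: every observed outcome was 1.
p0, p1 = project_onto_line(0.0, 1.0)
p2 = 1 - p0 - p1
print(p0, p1, p2)  # p0 = 2/5, p1 = 4/5, and a negative "probability" p2 = -1/5
```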

You’ll find there a very popular method, maximum likelihood. We write the likelihood function
$L(p_0) = p_0^{n_0} (2 p_0)^{n_1} (1-3p_0)^{n_2},$

which is just the probability of the data given the parameter $p_0$, and maximize it, finding
$p_0 = \frac13(f_0 + f_1)\quad\text{and}\quad p_1 = \frac23(f_0+f_1).$

Now maximum likelihood is probably the shittiest statistical method one can use, but at least the answer makes sense. The resulting probabilities are normalized, and they mean something: they are the ones that assign the highest probability to the observed data. My point is that even the worst statistical method is better than arbitrarily chopping off your data. Moreover, it’s very easy to do, so there’s no excuse.
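The maximum-likelihood estimate is easy to verify numerically. A small Python sketch (the counts are arbitrary example values of mine) compares the closed-form answer to a brute-force grid search over the log-likelihood:

```python
import numpy as np

n0, n1, n2 = 40, 35, 25        # arbitrary example counts
n = n0 + n1 + n2

# Closed-form maximum-likelihood estimate: p0 = (f0 + f1)/3.
p0_mle = (n0 + n1) / (3 * n)

# Brute force: maximize log L(p0) = n0*log(p0) + n1*log(2*p0) + n2*log(1 - 3*p0)
# over the allowed range 0 < p0 < 1/3.
grid = np.linspace(1e-6, 1 / 3 - 1e-6, 100_001)
loglik = (n0 + n1) * np.log(grid) + n1 * np.log(2) + n2 * np.log(1 - 3 * grid)
p0_grid = grid[np.argmax(loglik)]

print(p0_mle, p0_grid)  # both ≈ 0.25 for these counts
```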

If you want to do things properly, though, you have to do Bayesian inference. You multiply the likelihood function by the prior, normalize that, and compute the expected $p_0$ from the posterior in order to obtain a point estimate. It’s a bit more work, but in this case it is still easy, and for a flat prior it gives
$p_0 = \frac13\frac{n_0 + n_1+1}{n+2}\quad\text{and}\quad p_1 = \frac23\frac{n_0 + n_1+1}{n+2}.$

Besides getting a more sensible answer and the ability to change the prior, the key advantage of Bayesian inference is that it gives you the whole posterior distribution. It naturally provides you with a credible region around your estimate, the beloved error bars any experimental paper must include. It’s harder to do, sure, but none of you got into physics because it was easy, right?
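For the curious, here is a minimal Python sketch of the Bayesian computation, assuming a flat prior for $p_0$ over its allowed range $[0,1/3]$ and arbitrary example counts. It evaluates the posterior on a grid, takes its mean as the point estimate, and reads off a 95% credible interval from the posterior CDF:

```python
import numpy as np

n0, n1, n2 = 40, 35, 25        # arbitrary example counts
n = n0 + n1 + n2

# Unnormalized log-posterior on a grid over (0, 1/3); with a flat prior it is
# just the log-likelihood (up to the constant n1*log(2)).
grid = np.linspace(1e-9, 1 / 3 - 1e-9, 200_001)
log_post = (n0 + n1) * np.log(grid) + n2 * np.log(1 - 3 * grid)
post = np.exp(log_post - log_post.max())   # subtract max to avoid underflow
post /= post.sum()                          # normalize (uniform grid)

# Posterior mean as the point estimate.
p0_mean = np.sum(grid * post)

# A 95% credible interval from the posterior CDF.
cdf = np.cumsum(post)
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
print(p0_mean, (lo, hi))
```

Everything here is a plain Riemann sum on a uniform grid; for anything fancier one would reach for a proper statistics library, but the point is that the whole computation fits in a dozen lines.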


## Redistribution

Stuck at home with corona, I decided to try my hand at writing science fiction to pass the time. The result was not science fiction at all, but I think it’s still fun to read, so I’m posting it here.

Trevor Norquist, the world’s first trillionaire, died in a fiery explosion. His private jet was hit by a Stinger missile as it was taking off from the Köln/Bonn airport. Panic was immediate and widespread: the entire EU closed its airspace in fear of another terrorist attack. Germany erected roadblocks in the area around the airport, searching every single car, and generating monstrous traffic jams.

Videos from the attack made it easy to pinpoint where the missile was fired from: a hunter’s watchtower in the nearby Königsforst. The police were there half an hour after the attack, but the assassin was long gone. He had abandoned the missile launcher there, and nothing else. Forensics went over it with zeal, but couldn’t find anything. No fingerprints, not even a drop of sweat. Clearly they were dealing with a professional.

—/—/—

“How the fuck did these fucking eco-terrorists get their hands on a fucking Stinger missile?!” – exclaimed Ernst Dieter, investigator of the Bundeskriminalamt. He was in a terrible mood. Just a week before, the eco-terrorists had dynamited the iconic Bagger 288. He was already working overtime to coordinate security for Norquist’s visit. With the threat escalation, the work started cutting into his sleep. He had to get reinforcements from nearby Hessen. Did anybody know how many forms he needed to sign to get police from another Bundesland to come? And all that because the stubborn prick wouldn’t accept postponing his visit. Norquist’s only concession to reality was giving up on visiting the mines themselves. Still, that left Dieter with the problem of escorting him from the airport to the hotel through thousands of protesters. When he saw the jet taking off he finally relaxed a bit, and dared to dream that he would take the rest of the day off and sleep. “Norquist is not my problem anymore!”, he celebrated, only to have his stomach drop when he saw a bloody Stinger missile hitting the jet.

“We shouldn’t jump to conclusions” – said Robert Weil, his partner – “It doesn’t fit the style of Vergeltung der Klimaopfer, that you insist on calling eco-terrorists. They have never killed anyone before. They insist they are saboteurs, not terrorists”.
“And do you believe their propaganda now? Get serious. They hated the guy more than anything!” – countered Dieter angrily.
“Then why destroy Bagger 288 just before? This only served to increase security” – Weil pointed out.
“Distraction manoeuvre.” – answered Dieter – “It focused our attention on the ground, on the mines, when they knew that the real danger was in the air”
“You are really overestimating their competence.” – dismissed Weil – “We caught the clowns responsible for Bagger 288 in less than 2 days. Both by tracing the explosives they used and the IP address from which they posted the manifesto.”
“They are several people, Rob.” – replied Dieter – “They let the idiots handle Bagger 288, and got the real pros to get Norquist, which was the target that actually mattered.”
“I’m just asking you to keep an open mind, Ernst, plenty of people wanted Norquist dead. It could also be the Montenegrins.” – conjectured Weil.
At this moment, both their phones vibrated at the same time. This could only mean something from work, and indeed, it was an email from forensics about the Stinger launcher. It had been tracked to a batch of Stingers that Italy had provided to Ukraine as military help against Russia.
“I knew it!” – Dieter allowed half a smile to flicker through his face – “The fucking Russians gave them the missile.”
“The Russians? Not the Ukrainians?” – asked Weil.
“All the Stingers we gave the Ukrainians are accounted for.” – replied Dieter – “Either safely in storage, used in the war, or captured by the Russians. Guess which ones could end up here?”
“Fair enough, but I doubt the Russians would deal directly with Vergeltung der Klimaopfer” – countered Weil – “Doesn’t fit their ideology, and besides, why not just put the Stingers in the black market? They make a neat profit and don’t get involved in any messy affair.”
“They are involved, and they will pay for this! Selling Stingers to terrorists puts the blood on their hands!” – raged Dieter.
“I’m not sure how we could make them pay. There’s nothing left to sanction.” – replied Weil – “In any case, how many Stingers did they capture in Ukraine?”
After pausing a bit to think, Dieter answered slowly – “Fourteen.”
“Scheiße”.
“Ja, Scheiße.”

—/—/—

After a week of interrogating Vergeltung der Klimaopfer members, Ernst Dieter had to admit they probably had nothing to do with the death of Trevor Norquist. His interrogations were tough and produced results, or so he would say. Others would say that he was nothing but a torturer.
“Nonsense,” – he thought – “Torture is illegal, and I’m strictly following the new counter-terrorism law, which allows for enhanced interrogation of terrorism suspects.”
But all the enhancement was for nothing, he still could get no information out of them. “Maybe they really don’t know anything.” – he thought as he released another traumatised student.
“Maybe they are indeed the poorly-organised students that can barely afford legal explosives that they claim to be.” – he thought – “A black-market Stinger must cost millions of euros. And they aren’t point-and-click like an automatic camera; one needs training to handle them. No, we are after a wealthy, well-organised terrorist group with military background.”

Reluctantly, he turned to the other suspects Robert Weil had mentioned, the Montenegrins. That was harder work, as their criminal records were clean, so he had to restrict himself to “gentle” interrogations. Still, they were even worse fits for the terrorists behind Norquist’s assassination than Vergeltung der Klimaopfer. To start with, there weren’t many of them. Even in Montenegro itself, there were fewer than a million Montenegrins. In Köln he managed to find 10 who had joined the anti-Norquist protests. Some of them knew each other, but they were not an organisation in any way, shape, or form, just families making a living. Most had come to Köln a long time ago, during the Yugoslav wars, and a couple more had arrived after Norquist’s rise to power. Besides being elated by Norquist’s death, the only thing they had in common was a lack of money. The rich Montenegrins were those that stayed in Montenegro and profited from Norquist’s regime.

“No,” – he thought – “I have to look at the wealthy people that wanted Norquist dead.” There was no lack of those either. In fact, it was hard to find someone who didn’t want Norquist dead. Even he himself couldn’t shed a tear for Norquist, he was only angry at the assassination because it had been his job to keep him alive. “Maybe his ex-wives and children?” – he considered. Norquist had 9 children by 4 different women, who had already started an epic fight for the inheritance. – “Plenty of people would kill for hundreds of billions of euros, but kill their own father? Just to get the inheritance a bit sooner? No, that’s too absurd. Besides, his family could just poison or stab him. Using a Stinger indicates an external enemy.”

—/—/—

Dieter started considering Norquist’s trajectory from the start. He had been a mostly unknown investor, mainly busy with multiplying the couple of billions he had inherited, until he saw a golden opportunity in fossil fuel divestment. As the planet warmed, more and more big banks decided that the damage to their reputation was not worth the profit, and so cut off financing to fossil fuel projects. To Norquist this meant he could charge them higher interest rates, as they didn’t have much of a choice. He had the capital to spare and no use for a reputation, so he slowly specialized in financing coal power plants and oil drilling in Africa and Asia. The more polluting the better; not because Norquist wanted to pollute the planet per se, it’s just that the especially dirty projects had the most trouble finding financing, and thus he could charge the highest interest rates.

And so began his hostile takeover of Montenegro. The country’s GDP was only 15 billion euros, or about 5% of his wealth. Money was therefore not a problem. The difficulty lay in that Montenegro was not for sale, and Norquist had the curious idiosyncrasy that he always respected the letter of the law. He started by buying the aluminium smelter in Podgorica and making a major expansion. The investment didn’t seem to make much sense as electricity in Montenegro was not particularly cheap or reliable, but the government of Montenegro was anyway overjoyed with the huge investment. So much so that it didn’t think twice about allowing Norquist to build a massive extension of the port of Bar, under his private ownership, necessary to export the increased production of aluminium. Or about allowing Norquist to buy the Podgorica-Bar railway, in exchange for a renovation. At this point Norquist was in a position similar to Nokia in Finland: he was so important for the economy of the country that when he asked for something, the government listened. And what he asked for was so little: a mere reform of the campaign finance laws, so that anybody could donate as much as they wanted to any politician, without having to make the donation public. In other words, legalized bribery.

After this law went through, things went much quicker. The media market was completely deregulated, and Norquist promptly acquired all the major newspapers, radio stations, and television channels. This ensured his portrayal as the saviour of Montenegro, the man that had doubled the country’s GDP overnight, and the silencing of his critics. The next step was privatising the whole electricity generation system of the country, coupled with complete deregulation, allowing Norquist to deny power to whoever he wanted for any reason. Legalized extortion. This was followed by abrogating the international treaties Montenegro had entered to fight global warming, allowing fossil fuel power plants to be built again. Norquist then built massive coal and oil power plants to power his aluminium smelter. With the price of coal and oil so low, he managed to produce aluminium at a price that even Iceland couldn’t beat. He didn’t stop there: Bitcoin mining, hydrogen electrolysis, Norquist started any energy-intensive industry that he could think of, and built more fossil power plants for them.

By the time the Montenegrins realised that they were losing their country, it was too late. Anybody that dared to oppose Norquist’s plans found themselves subject to a barrage of negative media coverage, highlighting real or imaginary corruption affairs. The recalcitrant ones were forced into economic ruin by strategically timed power cuts. Social media became tightly controlled, and protest was criminalised. Montenegro became a Singapore-style “democracy”: elections were still held, and votes were still counted with strict correctness. It’s just that the government openly retaliated against those that voted against it, ensuring that it always won with huge majorities. Norquist had no taste for fake elections or murdering the opposition like in Russia. No, it was vital to him that the rule of law was strictly respected.

The final touch was a radical reform of the tax code. Norquist had an almost religious opposition to income or property taxes, and changed the state to rely only on consumption taxes, effectively eliminating his own tax burden. This proved wildly popular with the global super-rich, who parked their wealth in Montenegro en masse. The country quickly overtook Switzerland as the country of choice for tax avoidance, and thus became a massive financial centre, rivalling London, New York, and Tokyo. This sudden influx of cash would easily have wrecked the currency of such a small country, but Montenegro had the unique advantage of using the euro without being a member of the European Union. It thus enjoyed the stability of the currency without having to obey any of the European Union’s regulations.

And thus it came to pass that Montenegro rose from poverty and stagnation to become one of the wealthiest and most dynamic nations in the world. It also achieved such an astounding air pollution that one often couldn’t even see the Sun, a feat that had only been achieved before by China in 2012. Internationally, it was a scandal. The brazen disregard for international norms caused it to be hit with sanction after sanction after sanction. It was slowly becoming as isolated as Russia. There was even talk of war. Norquist didn’t care, he had already made his profit. Après moi, le déluge.

He had been yet another billionaire asshole, and now became the most hated person in the entire world. He wore his infamy with relish. He always bragged about being a self-made trillionaire, having started as a mere billionaire. To those that accused him of being unethical, he always replied that he had never broken any law. Deep down he believed that making money was the only measure of morality that mattered. Since he was so rich he must have been doing it right. It seemed to be working, until Norquist flew to Germany to make a deal to buy their brown coal at negative prices, and got hit by a Stinger.

—/—/—

Dieter was proven wrong when another Stinger hit a private jet. This time the victim was Zhang Shaopeng, a Chinese electric-vehicle tycoon, who was landing in Warsaw to close a deal to convert the city’s bus fleet from diesel to electric. Now Zhang wasn’t a nice person – the reason his vehicles were consistently cheaper than the competition was his passion for using Uyghur forced labour – but he was hardly a prime target for environmentalists. His electric buses alone were responsible for cutting oil demand significantly, and he was the first manufacturer of electric cars who managed to produce them at a lower cost than comparable fossil cars. This was a critical point in the transition away from oil: fossil cars became rich people’s toys, and electric cars the financially sensible choice.

“Scheiße!” – exclaimed Dieter – “Not again!”
“It was bound to happen” – said Weil sombrely – “I told the Kanzler that we couldn’t reopen the airspace before we recovered the thirteen Stingers on the loose, or caught the terrorists. But no, the planes must keep flying! Everybody believed that Norquist was the only target. That was just wishful thinking.”
“Fools!” – concurred Dieter – “There’s nothing we can do to defend civilian aircraft, the whole idea is that there will be nobody shooting at them!”
“Indeed.” – added Weil – “Civilian aircraft are sitting ducks. They broadcast their position, don’t have flares, can’t do high-g manoeuvres…”
“Yeah no shit Sherlock.” – interrupted Dieter – “Instead of blathering about military tactics, tell me what the Polish found out.”
Impervious to Dieter’s rudeness after long years of working together, Weil answered calmly – “As you know, everybody has been watching the forests near airports like crazy, and Las Kabacki was no exception. The Stinger was not fired from there, but from a communal garden, Kępa Służewiecka, right next to the Chopin airport. As before, the launcher was abandoned on the spot, but this time somebody saw the assassin.”
“Aha!” – exclaimed Dieter – “So watching the forests was not in vain!”
“Indeed it wasn’t” – agreed Weil – “A Polish pensioner was trimming his hedge when he saw somebody climbing on the roof of a shed a few blocks away. He was a bit surprised, people in the communal garden are usually too old for that. He was even more surprised when the guy put something on his shoulder and stared directly at the airport. He thought it was a TV camera. Then the “TV camera” spat a Stinger and he saw the private jet exploding. Then he got really scared and hid inside the hedge. Got a few scratches from that.”
“I don’t care about his scratches!” – interrupted Dieter – “Do we have a description of the assassin?”
“I was getting there” – complained Weil – “White, tall, strong, short dark hair. The Polish couldn’t get more out of the pensioner, he wasn’t very close and his eyesight isn’t particularly good.”
“That describes half of Europe” – grumbled Dieter – “Doesn’t help much”.
“It does exclude the other half” – Weil pointed out – “You know how Bild has been making noises about Zombie ISIS being behind it.”
“Bild can write whatever nonsense they want” – Dieter replied curtly – “We’re in charge of the investigation, not them”.
“There’s more” – Weil closed his eyes and breathed deeply, his patience wearing thin – “While among the bushes, the pensioner heard a petrol engine starting up and going away.”
“That does make things easier, petrol engines are getting quite uncommon” – Dieter got a tiny bit happier.
“This is Poland we’re talking about, Ernst.” – countered Weil – “Petrol engines are still the majority there. But it allows us to exclude the environmentalists as the culprits.”
“You’ve got to be joking. You think some eco-terrorists would go as far as requiring their assassinations to be carbon-free?” – replied Dieter incredulously.
“I’m dead serious.” – replied Weil – “They are downright religious about being carbon-free. Have you forgotten that time when Partigiani Padani tried to hijack a cruise ship, but were quickly turned into minced meat by the police? It turns out that they were using a carbon-free alternative to gunpowder, which meant that their guns failed more often than not.”
“Meh. It’s not as if the eco-terrorists would be after Zhang anyway.” – replied Dieter – “I’m still struggling to see any connection between him and Norquist. Do they have any enemies in common at all? Why would anyone want both of them dead?”
At this point an assistant barged in: “Ernst, Robert, you’ve got to see this. They posted a manifesto.”
“Where? How?” – asked Dieter.
“They uploaded a torrent to ThePirateBay, and have been posting links to it all over social media: Twitter, Facebook, Reddit… it’s spreading like wildfire. Here is the file” – the assistant showed in his tablet.
“For fuck’s sake, it’s 200 pages! And it’s written in German, French, English, and Spanish! Self-important lunatics.” – exclaimed Dieter.
A while later, Weil started summarising:
“They’re promising to “eradicate billionairism” in Europe. They’ve started with the “worst offenders”, but they emphasize that nobody with more than a billion euros is safe. They write that they don’t want to kill anybody, just redistribute wealth, so anyone can stop being a target by giving money away. Then there’s some blah blah about class war, media brainwashing people, democracy being a tool of oppression, and taking direct action. Lots of pages lamenting the death of the staff in the private jets, praising their “heroic sacrifice”, and warning anybody working for billionaires to get away.”
“Communists!” – cursed Dieter – “We’re in 2041 and have to deal with communist terrorists?! What is this, a 20th century revival?”
“It does give a hint about their next target: the richest man in Europe is now Jules Hermet, a French banker.” – replied Weil. He turned to the assistant: “Call Mr. Hermet. His security is not our responsibility, but we can help with intelligence.”
“Come on, the target will certainly not be Hermet!” – interjected Dieter – “Norquist and Zhang were caught by surprise, but how could they possibly get Hermet when everybody is expecting them?”
“They could have started with no-name billionaires instead of Norquist and Zhang. It would have been easier, but they went for the spectacle instead.” – countered Weil – “For terrorists it’s only the spectacle that matters. And what would be more spectacular than getting Hermet now?”
“Maybe.” – grumbled Dieter – “But they must also hit the poorest billionaires at some point. If only the richest one is in danger there is no terror. And without terror they have nothing, there’s no way they’ll manage to kill hundreds of billionaires.”
“Maybe.” – concurred Weil – “I wonder what they are hoping to achieve. Do they seriously expect billionaires to give money away?”
Dieter laughed – “Not even these lunatics can believe that. They did specify that they’ll only kill in Europe, so they’re probably expecting them to flee to the Caribbean or their bunkers in New Zealand or whatever.”
“That makes sense.” – said Weil pensively – “As the number of billionaires here dwindles, the terror increases among the remaining ones, so they might really believe they can make Europe free of billionaires.”
The assistant chimed in again – “Just got a message from the linguistics analyst. She says that the German version of the manifesto has some grammatical mistakes that are typical of native French speakers. She is now trying to find experts in English and Spanish to see if the pattern repeats”.
“French communists, hä!” – exclaimed Dieter – “We are after a French communist soldier. Someone that probably has experience with Stingers, and is no longer active in the military.”
“There can’t be so many.” – concurred Weil – “Let’s ask the French military for a list. Emphasis on those that are working in private security after leaving the military, it’s the perfect cover for a terrorist.”

—/—/—

“Ah these loudmouths. They couldn’t resist posting the damn manifesto.” – thought Pierre Barère, annoyed – “At least they agreed to wait until I got the second target down. Otherwise the mission would certainly be a failure. Now it will just probably be a failure.”
His thoughts were interrupted by the train announcing the stop at the station of Lille.
“Argh how I hate trains. Stuck here with all this noise and all these peasants. With an airplane at least the torture is over quickly. But no, the airspace is closed. Motherfuckers.” – he chuckled as he realised the non sequitur – “Ok, no, that’s my own fault.” – he laughed out loud. The feeling of power was good. His good mood quickly evaporated as his thoughts turned to the mission ahead.
“It had to be this fucking country! I had to show my real passport to board the train, and can’t even bring any weapons.” – cursed Barère mentally – “Knives! I have to terminate the target using bloody knives like a street thug. And I don’t even get a car. My “getaway car” is the bloody metro. Argh. Just want to get this over with and move on to the next target, that will be a proper operation with glamour.”

—/—/—

“Two days! Two days to get us the damn list! The bloody French are as stubborn and cranky as always. They act as if they’re doing us a big favour, when we’re the ones trying to save their asses!” – complained Dieter.
Weil ignored the whining and read the report from the IT specialist:
“It’s very promising. There are fewer than a hundred former soldiers employed in private security, we can tail them all. Most have social media accounts, and of these the vast majority have expressed right-wing opinions. Excluding these, we’re left with 27 suspects. One of them is employed as a bodyguard by the French Communist Party!”
“The French Communist Party?!” – snorted Dieter – “Do they still exist? I thought La France Insoumise had taken over the French far-left completely.”
“Not completely.” – answered Weil – “The communists refused to join. Maybe they still dream about the power they had back in the 20th century.”
“Let’s interrogate this… ” – Dieter looked at the file – “Pierre Barère then.”
“He took the Eurostar to London a couple of days ago.” – replied Weil – “and Jules Hermet is in France. We should focus on the other suspects now. We can always get Barère when he comes back to Schengen.”

—/—/—

“Doesn’t this freak ever leave his house!?” – thought Barère, more annoyed than ever. He had installed a microcamera across the street from Frederic Hoyle’s house, to establish his routine. It turned out, there wasn’t one. He saw a man coming every morning and leaving every evening, presumably his butler, but that was it. No sign of Hoyle. Lots of packages being delivered, though.
“I’ve never seen such a low-key billionaire.” – thought Barère. “He owns a detached house in London, so he obviously has millions, but there’s nothing else! Where are the armies of servants, the fancy cars, the women, the parties? Is he trying hard to pretend to be poor, or is he just a freak?”
From the satellite pictures he could see a pool in the backyard, but that was pretty much it, no hidden helipad or similar extravagances. The roof was mildly interesting: it consisted entirely of solar panels. Such a large roof must generate a serious amount of power. Barère didn’t think much of it, and assumed that Hoyle was just a clean energy freak.
“This guy is supposed to be the richest man in Europe, the secret identity of Satoshi Nakamoto.” – thought Barère dismissively – “At least that’s what the Party says. Not my problem. They gave me a job, I’ll dispatch it, and go back to having fun. The first two targets were a lot of fun, I can’t complain.” – he basked in the memories of firing Stingers for a while, and slowly let his mood sour again as he came back to the reality of being holed up in a hotel in London with no end in sight.
“Am I going to have to break into his house?” – he thought with a tinge of desperation – “The Party couldn’t get the blueprints, or any information about the security system. That would be just foolish. Maybe I’ll catch the butler and beat some info out of him? That’s still dangerous, I’ll just have one night before Hoyle discovers that something is up and vanishes.”
At this moment, his camera showed what he had almost given up on seeing, Hoyle leaving the house.
“That’s my lucky break!” – Barère thought – “He’s alone, I might even manage to end this tonight.”
He ran out of his room, and calmly walked out of his hotel, which was strategically situated between Hoyle’s house and the nearest underground station.
“He’s walking straight towards me!” – Barère couldn’t believe his luck, and tightened the grip on the knife in his pocket.
“This is going to be so easy! Nobody on the street either, even the escape will be easy” – Barère could almost sing, until he noticed that Hoyle had noticed him. First Hoyle slowed, then stopped, and then started walking back to his house.
“What a paranoid freak! Who turns around just because they saw someone looking at them on the street?!” – cursed Barère. He kept walking towards Hoyle at the same pace. Until Hoyle looked back discreetly, and quickened his pace. Barère also quickened his pace. Hoyle looked back again, and started running at full speed.
“Merde! It’s now or never!” – exclaimed Barère, and started running at full speed as well.
Hoyle was no match for Barère’s fitness, but he had a dozen metres of advantage, with the result that both reached Hoyle’s house at the same time. Barère had jumped with his knife out, aiming for Hoyle’s heart through his back, when the scene became visible to Hoyle’s security cameras. In less than a millisecond Hoyle’s AI correctly deduced that Hoyle was in mortal danger, and fired the house’s weapons.

Those were laser effectors that Rheinmetall had developed for the German Navy, but never used in combat. It turned out that atmospheric dispersion limited their range too much, and the power supply for a 60 kW pulse was too bulky. Neither was a problem for a house. Laser effectors also had the critical advantage that they were unknown, and their components passed for research lasers to untrained eyes. As such Hoyle had no problem smuggling them through British customs, which were extremely effective at blocking explosives and firearms.

The lasers instantly vaporised Barère’s head, while making Hoyle’s skin only slightly warmer. In this respect they were completely successful. The fatal problem was completely unforeseen. While photons do carry momentum, despite being massless, the momentum of even a 60 kW laser pulse was still too small to make any measurable difference to Barère’s headless body, which drove the knife into Hoyle’s heart purely out of inertia.

Posted in Uncategorised | 13 Comments

## Thou shalt not optimize

It’s the kind of commandment that would actually impress me if it were in the Bible, something obviously outside the knowledge of a primitive people living in the Middle East millennia ago. It’s quite lame to have a book that was supposedly dictated by God, but only contains stuff that they could come up with themselves. God could just dictate the first 100 digits of $\pi$ to demonstrate that people were not just making shit up. There’s a novel by Iain Banks, The Hydrogen Sonata, that has this as a key plot point: as a prank, an advanced civilization gives a “holy book” to a primitive society from another planet, containing all sorts of references to advanced science that the latter hadn’t developed yet. The result is that when the primitives eventually develop interstellar travel, they become the only starfaring civilization in the galaxy that still believes in God.4

Why not optimize, though? The main reason is that I have carefully optimized some code that I was developing, only to realize that it was entirely the wrong approach and delete the whole thing. It wasn’t the first time this happened. Nor the second. Nor the third. But somehow I never learn. I think optimization is addictive: you have a small puzzle that is interesting to solve and gives you immediate positive feedback: yay, your code runs twice as fast! It feels good, it feels like progress. But in reality you’re just procrastinating.

Instead of spending one hour optimizing your code, just let the slow code run during this hour. You can then take a walk, relax, get a coffee, or even think about the real problem, trying to decide whether your approach makes sense after all. You might even consider doing the unthinkable: checking the literature to see if somebody has already solved a similar problem and whether their techniques can be useful for you.

Another reason not to optimize is that most code I write is never going to be reused, I’m just running it a couple of times myself to get a feeling about the problem. It then really doesn’t matter whether it takes ten hours to run or you could optimize it to run in ten minutes. Just let it run overnight.

Yet another reason not to optimize is that this really clever technique for saving a bit of memory will make your code harder to understand, and possibly introduce a bug.2 What you should care about is having code that is correct, not fast.

On the flipside, there are optimizations that do make your code easier to understand. I once saw code written by somebody else that was doing matrix multiplication by computing each matrix element as $c_{i,j} = \sum_k a_{i,k}b_{k,j}$. That’s slow, hard to read, and gives plenty of opportunity for bugs to appear. You should use the built-in matrix multiplication instead, but the optimization is incidental: the real point is to avoid bugs.
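To make this concrete, here’s a sketch in Python with NumPy (the function name is mine) contrasting the element-wise loop with the built-in multiplication:

```python
import numpy as np

def matmul_by_hand(a, b):
    """Matrix multiplication via explicit loops: slow, hard to read, bug-prone."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for l in range(k):
                c[i, j] += a[i, l] * b[l, j]
    return c

a = np.random.rand(20, 30)
b = np.random.rand(30, 10)
# the built-in version is one line, much faster, and leaves no room for index bugs
assert np.allclose(matmul_by_hand(a, b), a @ b)
```

The built-in `a @ b` dispatches to an optimized BLAS routine, but again, the speed is a bonus; the readability is the point.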

There are two exceptions to the “thou shalt not optimize” commandment that I can think of: the first is if your code doesn’t run at all. Sometimes this indicates that you’re doing it wrong, but if you really can’t think of something better, yeah, optimizing it is the only way to get an answer. The second is if your paper is already written, the theorems are already proven, you’re certain your approach is the correct one, and you’re about to publish the code together with your paper. Then if you optimize it you are a great person and I’ll love you forever. Just remember to use a profiler.

Posted in Uncategorised | Comments Off on Thou shalt not optimize

## Tsirelson Memorial Workshop

(thanks to Flavio Baccari for the photo)

After more than two years of the world conspiring against me and Miguel, we finally managed to pull it off: the workshop in honour of Boris Tsirelson happened! In my opinion, and that of the participants I asked, it was a resounding success: people repeatedly praised the high level of the talks, and were generally happy with finally having a conference in person again. An often-overlooked but crucial part of every conference is the after-hours program, which is rather lame in online conferences if it happens at all. I didn’t really take part here as I have a young child, but the participants told me that it was quite lively. We did have problems because of the recent corona wave in Austria and the war in Ukraine3, but still, it happened.

Initially we had planned to have a small workshop, but we ended up with 78 registered participants (and some unregistered ones). It was great having so much interest, but it did create problems. The idea was to have a small number of long talks, where the authors could go through their results in depth, and people would have plenty of time for discussion. We kept the talks long (45 minutes), but we ended up with a completely packed schedule (9 invited talks + 22 contributed). We thought this wouldn’t be a problem, as people could simply skip the talks they were not interested in and use that time for discussion. That didn’t work. It turns out students felt guilty about skipping talks (I never did), and there wasn’t a good place for discussion in our conference venue. We apologize for that. Another issue is that we had to review a lot of contributions (44); thanks to a small army of anonymous referees we managed to get through them, but we still had to do the painful job of rejecting good contributions for lack of space.

A curious piece of feedback I got from some participants is that the talks were too long. The argument was that if you are not familiar with the topic already you won’t be able to understand the technical details anyway, so the extra time we had to go through them was just tiresome. I should do some polling to determine whether this sentiment is widespread. In any case, several long talks on the same day are indeed tiresome; perhaps we could reduce the time to 30 minutes. What I won’t ever do is organize a conference with 20-minute talks (which unfortunately happens often); most of the time is spent introducing the problem, the author can barely state what their result was, and there’s no chance of explaining how they did it.

There were two disadvantages of organizing a conference that I hadn’t thought of: first, even during the conference I was rather busy solving problems, and couldn’t pay much attention to the talks; and second, I couldn’t present my own paper, which I had written specially for it.

As for the content of the talks, there were plenty that I was excited about, like Brown’s revolutionary technique for calculating key rates in DIQKD, Ligthart’s impressive reformulation of the inflation hierarchy, Farkas and Łukanowski’s simple but powerful technique for determining when DIQKD is not possible, Plávala’s radically original take on GPTs, Wang’s tour de force on Bell inequalities for translation-invariant systems, Sajjad’s heroic explanation of the compression technique for a physics audience, among others. But I wanted to talk a bit more about Scarani’s talk.

He dug out an obscure unpublished paper of Tsirelson, where he studied the following problem: you have a harmonic oscillator with period $\tau$, and do a single measurement of its position at a random time, either at time 0, $\tau/3$, or $2\tau/3$. What is the probability that the position you found is positive? It turns out that the problem is very beautiful, and very difficult. Tsirelson proved that in the classical case the probability is at most $2/3$, but ironically enough couldn’t find out what the bound is in the quantum case. He showed that one can get up to 0.7054 with a really funny Wigner function with some negativity, but as for an upper bound he only proved that it is strictly below 1. Scarani couldn’t find the quantum bound either; he developed a finite-dimensional analogue of the problem that converges to the harmonic oscillator in the infinite limit and got some tantalising results using it, but no proof. The problem is still open, then. If you can prove it, you’ll be to Tsirelson what Tsirelson was to Bell.

Posted in Uncategorised | Comments Off on Tsirelson Memorial Workshop

## The horrifying world of confidence intervals

We often see experimental results reported with some “error bars”, such as saying that the mass of the Higgs boson is $125.10 \pm 0.14\, \mathrm{GeV/c^2}$. What do these error bars mean, though? I asked some people what they thought it was, and the usual answer was that the true mass was inside those error bars with high probability. A very reasonable thing to expect, but it turns out that this is not true. Usually these error bars represent a frequentist confidence interval, which has a very different definition: it says that if you repeat the experiment many times, a high proportion of the confidence intervals you generate will contain the true value.

Fair enough, one can define things like this, but I don’t care about hypothetical confidence intervals of experiments I didn’t do. Can’t we have error bars that represent what we care about, the probability that the true mass is inside that range? Of course we can, that is a Bayesian credible interval. Confusingly enough, credible intervals will coincide with confidence intervals in most cases of interest, even though they answer a different question and can be completely different in more exotic problems.
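To see the frequentist definition in action, here is a little simulation (my own toy example, nothing to do with the Higgs measurement): we repeatedly draw samples from a Gaussian with a known true mean, build the textbook 95% confidence interval each time, and check how often the interval contains the truth:

```python
import math
import random

random.seed(1)
true_mu, sigma, n, reps = 10.0, 2.0, 100, 2000
covered = 0
for _ in range(reps):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    mean = sum(sample) / n
    # standard 95% interval: mean +/- 1.96 standard errors
    half = 1.96 * sigma / math.sqrt(n)
    if mean - half <= true_mu <= mean + half:
        covered += 1
print(covered / reps)  # close to 0.95: the proportion of intervals containing the truth
```

The statement is about the procedure across hypothetical repetitions, not about the probability that any single reported interval contains the true value.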

Let’s focus then on the Bayesian case: is the intuitive answer people gave correct then? Yes, it is, but it doesn’t help us define what the credible interval is, as there will be infinitely many intervals that contain the true value with probability (e.g.) 0.95. How do we pick one? A nice solution would be to demand that the credible interval be symmetric around the estimate, so that we could have the usual $a\pm b$ result. But think about the most common case of parameter estimation: we want to predict the vote share that some politician will get in an election. If the poor candidate was estimated to get 2% of the votes, we can’t have error bars of $\pm$4%. Even if we could do that, there’s no reason why the interval should be symmetric: it’s perfectly possible that a 3% vote share is more probable than a 1% vote share.

A workable, if more cumbersome, definition is the Highest Posterior Region: it is a region where all points inside it have a posterior density larger than the points outside it. It is well-defined, except for some pathological cases we don’t care about, and is also the smallest possible region containing the true value with a given confidence. Great, no? What could go wrong with that?

Well, for starters it’s a region, not an interval. Think of a posterior distribution that has two peaks: the highest posterior region will be two intervals, each centred around one of the peaks. It’s not beautiful, but it’s not really a problem, the credible region is accurately summarizing your posterior. Your real problem is having a posterior with two peaks. How did that even happen!?

But this shows a more serious issue: the expectation value of a two-peaked distribution might very well be between the peaks, and this will almost certainly be outside the highest posterior region. Can this happen with a better-behaved posterior, one with a single peak? It turns out it can. Consider the probability density
$p(x) = (\beta-1)x^{-\beta},$ defined for $x \ge 1$ and $\beta > 2$. To calculate the highest posterior region for some confidence $1-\alpha$, note that $p(x)$ is monotonically decreasing, so we just need to find $\gamma$ such that
$\int_1^\gamma \mathrm{d}x\, (\beta-1)x^{-\beta} = 1-\alpha.$ Solving that we get $\gamma = \frac1{\sqrt[\beta-1]{\alpha}}$. As for our estimate of the (fictitious) parameter we take the mean of $p(x)$, which is $\frac{\beta-1}{\beta-2}$. For the estimate to be outside the credible interval we then need that
$\frac{\beta-1}{\beta-2} > \frac1{\sqrt[\beta-1]{\alpha}},$ which is a nightmare to solve exactly, but easy enough if we realize that the mean diverges as $\beta$ gets close to 2, whereas the upper boundary of the credible interval grows to a finite value, $1/\alpha$. If we then choose $\beta$ such that the mean is at least $1/\alpha$, it will always be outside the credible interval!
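We can check this numerically. Picking $\alpha = 0.05$ and choosing $\beta$ so that the mean equals $1/\alpha = 20$:

```python
alpha = 0.05
beta = 39 / 19  # chosen so that (beta-1)/(beta-2) = 1/alpha = 20

mean = (beta - 1) / (beta - 2)      # mean of the density (beta-1)*x**(-beta)
gamma = alpha ** (-1 / (beta - 1))  # upper end of the HPD interval [1, gamma]

print(mean, gamma)  # 20.0 vs about 17.2: the mean lies outside the interval
assert mean > gamma
```

So the mean sits at 20, while the 95% highest posterior region ends at about 17.2.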

A possible answer is “deal with it, life sucks. I mean, there’s a major war going on in Ukraine, estimates lying outside the credible interval is the least of our problems”. Fair enough, but maybe this means we chose our estimate wrong? If we take our estimate to be the mode of the posterior, then by definition it will always be inside the highest posterior region. The problem is that there’s no good justification for using the mode as the estimate. The mean can be justified as the estimate that minimizes the mean squared error, which is quite nice, but I know of no similar justification for the mode. Also, the mode is rather pathological: if our posterior again has two peaks, but one of them is very tall and has little probability mass, the mode will be there but will be a terrible estimate.

A better answer is that sure, life sucks, we have to deal with it, but note that the probability distribution $(\beta-1)x^{-\beta}$ is very pathological. It will not arise as a posterior density in any real inference problem. That’s fine, it just won’t help against Putin. Slava Ukraini!

Posted in Uncategorised | 5 Comments

## What happened to QIP?

QIP is one of the most famous and prestigious conference series in quantum information. It has been going on since 1998, and getting a paper accepted by them is considered very good for your career. I have often attended, and can confirm it’s a good conference. This year I didn’t submit a paper because it will be in the United States and getting a visa is a huge pain in the ass.

Some friends did, though, and were quite shocked by the registration fees they are charging. The cheapest student fee available is 475 dollars! If you’re not a student you have to pay at least 575 dollars, and if you’re late the fee goes up to 850 dollars! This is for attending in person. If you’re a student attending online it’s free, fair enough, but if you’re not a student attending online costs 250 dollars!

In contrast, last year’s QIP was in München, completely online, and completely for free. What happened? Did QIP suddenly become a for-profit organization? They even list some deep-pocketed sponsors like Amazon, IBM, and Microsoft. Where is all this money going?

Of course an in-person conference must be more expensive than an online conference, but this is completely out of the ordinary. I’m organizing an in-person conference now, the Tsirelson Memorial Workshop, and we are not even charging a conference fee, just 50€ for dinner. We are counting on the generous support of IQOQI, but QIP apparently has much richer sponsors. Our conference is also much smaller, but the price should be a concave function of the number of participants, not convex!

EDIT: A friend of mine was at the budget talk of QIP and reported the details to me. The answer is clear: QIP just doesn’t give a shit about wasting the participants’ money. The total income was 790 k\$, with 460 k\$ coming from the sponsors, and 330 k\$ from registration fees. The total amount of expenses was 750 k\$, with the breakdown being 220 k\$ for “Freeman”, 220 k\$ for “Convention center”, 150 k\$ for the conference dinner with the rump session, 80 k\$ for “Virtual platform”, and 80 k\$ for other stuff.

Now this “Freeman” is a logistics company, and maybe logistics do cost a lot in California, so I’m not going to criticize that. But 220 k\$ for using the convention centre for one week? That’s insane. They booked the “Pasadena Civic Auditorium”, which is a really nice place, but come on. Couldn’t Caltech offer the Beckman Auditorium for a reasonable price? It is also gigantic and really beautiful. I paid roughly 2000€ for booking the conference venue for 80 people for one week, including the coffee break. QIP had roughly 800 participants, so 20 k\$ would be reasonable for the costs, with 200 k\$ being just profit. And 150 k\$ for the conference dinner? So roughly 187 \$ per person? That’s a seriously fancy dinner. Now 80 k\$ for the “Virtual platform” is just stupid. They could have done like every conference in the world and used Discord, which costs peanuts. But no, they insisted on paying a huge amount of money for developing their own platform, which the participants tell me was a piece of shit. Well done. Another cost that was not detailed in the breakdown was renting the auditorium of the Hilton Hotel for streaming the Zoom talks. Sounds expensive, and a bit insane.

I can afford it, I have plenty of grant money for travelling to conferences. But I’m not going to set it on fire like this. If the next QIP doesn’t come back to reality I’m not going to attend.

Posted in Uncategorised | Comments Off on What happened to QIP?
## A satisfactory derivation of the Born rule

Earlier this year I wrote a post complaining about all existing derivations of the Born rule. A couple of months later yet another derivation, this time by Short, appeared on the arXiv. Purely out of professional duty I went ahead and read the paper. To my surprise, it’s actually pretty nice. The axioms are clean and well-motivated, and it does make a connection with relative frequencies. I would even say it’s the best derivation so far.

So, how does it go? Following Everett, Short wants to define a measure on the set of worlds, informally speaking a way of counting worlds. From that you can do everything: talk about probability, derive the law of large numbers, and so on. Let’s say your world branches into a superposition of infinitely many worlds2, indexed by the natural numbers $\mathbb{N}$:
$\ket{\psi} = \sum_{i\in \mathbb{N}} \alpha_i \ket{\psi_i}.$
Then the probability of being in the world $i$ is understood as the fraction of your future selves in the world $i$, the relative measure
$p_{\ket{\psi}}(i) = \frac{\mu_{\ket{\psi}}(i)}{\mu_{\ket{\psi}}(\mathbb{N})}.$
The law of large numbers states that most of your future selves see frequencies close to the true probability. Mathematically, it is a statement like
$p_{\ket{\psi}^{\otimes n}}\left(|f_i - p_{\ket{\psi}}(i)| > \varepsilon \right) \le 2e^{-2n\varepsilon^2},$
which you can prove or disprove once you have the measure2.

Now, to the axioms. Besides the ones defining what a measure is, Short assumes3 that if $\alpha_i = 0$ then $\mu_{\ket{\psi}}(i) = 0$, and that if a unitary $U$ acts non-trivially only on a subset $S$ of the worlds, then $\mu_{U\ket{\psi}}(S) = \mu_{\ket{\psi}}(S)$. The first axiom is hopefully uncontroversial, but the second one demands explanation: it means that if you mix around some subset of worlds, you just mix around their measures, but do not change their total measure.
It corresponds to the experimental practice of assuming that you can always coarse-grain or fine-grain your measurements without changing their probabilities. I think it’s fine, I even used it in my own derivation of the Born rule. It is very powerful, though. It immediately implies that the total measure of any quantum state only depends on its 2-norm. To see that, consider the subset $S$ to be the entire set $\mathbb{N}$; the second axiom implies that you can apply any unitary to your quantum state $\ket{\psi}$ without changing its measure. Applying then $U = \ket{0}\bra{\psi}/\sqrt{\langle \psi|\psi\rangle} + \ldots$ we take $\ket{\psi}$ to $\sqrt{\langle \psi|\psi\rangle}\ket{0}$, so for any quantum state $\mu_{\ket{\psi}}(\mathbb{N}) = \mu_{\sqrt{\langle \psi|\psi\rangle}\ket{0}}(\mathbb{N})$.

It also implies that if a unitary $U$ acts trivially on a subset $S$ then we also have that $\mu_{U\ket{\psi}}(S) = \mu_{\ket{\psi}}(S)$, because $U$ will act non-trivially only on the complement of $S$, and
$\mu_{U\ket{\psi}}(S) + \mu_{U\ket{\psi}}(\mathbb{N} \setminus S) = \mu_{U\ket{\psi}}(\mathbb{N}) = \mu_{\ket{\psi}}(\mathbb{N}) = \mu_{\ket{\psi}}(S) + \mu_{\ket{\psi}}(\mathbb{N} \setminus S).$

We can then see we don’t need to consider complex amplitudes. Consider the unitary $U_{i_0}$ such that $U_{i_0}\ket{i_0} = \frac{\alpha_{i_0}^*}{|\alpha_{i_0}|}\ket{i_0}$ for some $i_0$, and acts as identity on the other $\ket{i}$. The second axiom implies that it doesn’t change the measure of $i_0$. Repeating the argument for all $i_0$, we map $\ket{\psi}$ to $\sum_i |\alpha_i|\ket{i}$ without changing any measures.

Now we shall see that to compute the measure of any world $i$ in any state $\ket{\psi}$, that is, $\mu_{\ket{\psi}}(i)$, it is enough to compute $\mu_{\alpha_i\ket{0}+\beta\ket{1}}(0)$ for some $\beta$. Consider the unitary
$U = \Pi_i + \frac{\ket{i+1}\bra{\psi}(\id-\Pi_i)}{\sqrt{\bra{\psi}(\id-\Pi_i)\ket{\psi}}} + \ldots,$
where $\Pi_i$ is the projector onto world $i$.
It maps any state $\ket{\psi}$ into
$U\ket{\psi} = \alpha_i \ket{i} + \beta\ket{i+1},$
where $\beta=\sqrt{\bra{\psi}(\id-\Pi_i)\ket{\psi}}$, and we have that $\mu_{\ket{\psi}}(i) = \mu_{U\ket{\psi}}(i)$. Now consider the unitary
$V = \ket{0}\bra{i} + \ket{i}\bra{0} + \ldots$
It takes $U\ket{\psi}$ to
$VU\ket{\psi} = \alpha_i\ket{0} + \beta\ket{i+1}.$
It acts trivially on $i+1$, so $\mu_{VU\ket{\psi}}(i+1) = \mu_{U\ket{\psi}}(i+1)$. Since the total measures of $U\ket{\psi}$ and $VU\ket{\psi}$ are equal, we have that
$\mu_{VU\ket{\psi}}(0) + \mu_{VU\ket{\psi}}(i+1) = \mu_{U\ket{\psi}}(i) + \mu_{U\ket{\psi}}(i+1),$
so $\mu_{VU\ket{\psi}}(0) = \mu_{U\ket{\psi}}(i)$. Doing the same trick again to map $i+1$ to $1$ we reduce the state to $\alpha_i\ket{0}+\beta\ket{1}$, as we wanted.

This reduction does all the heavy lifting. It implies in particular that if two worlds have the same amplitude, they must have the same measure, so if we have for example the state $\ket{\psi} = \alpha\ket{0} + \alpha\ket{1}$, then $\mu_{\ket{\psi}}(0) = \mu_{\ket{\psi}}(1)$. Since $\mu_{\ket{\psi}}(\mathbb{N}) = \mu_{\ket{\psi}}(0) + \mu_{\ket{\psi}}(1)$, we have that
$p_{\ket{\psi}}(0) = p_{\ket{\psi}}(1) = \frac12.$

A more interesting case is the state $\ket{\psi} = \alpha\sqrt{p}\ket{0} + \alpha\sqrt{q}\ket{1}$ for positive integers $p,q$. We apply to it the unitary $U$ such that
$U\ket{0} = \frac1{\sqrt p}\sum_{i=0}^{p-1} \ket{i}\quad\text{and}\quad U\ket{1} = \frac1{\sqrt q}\sum_{i=p}^{p+q-1} \ket{i},$
taking $\ket{\psi}$ to $\alpha\sum_{i=0}^{p+q-1}\ket{i}$. Now all amplitudes are equal, and therefore all the measures are equal, call it $x$. Then the total measure is $(p+q)x$, the measure of the original world 0 is $px$, and the measure of the original world 1 is $qx$. Therefore the probability of the original world 0 is
$p_{\ket{\psi}}(0) = \frac{p}{p+q}.$
Since the argument is valid for all $\alpha$, we have proven the Born rule for all worlds where the ratio between the amplitudes is the square root of a positive rational.
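A quick numerical sanity check of this step (a sketch with NumPy; the particular completion of $U$ to a full unitary is my choice, any completion works): spreading $\ket{0}$ over $p$ worlds and $\ket{1}$ over $q$ worlds indeed makes all amplitudes equal, and the original world 0 carries weight $p/(p+q)$:

```python
import numpy as np

p, q = 3, 2
d = p + q
alpha = 1 / np.sqrt(p + q)  # normalizes alpha*(sqrt(p)|0> + sqrt(q)|1>)

# first two columns of U: spread |0> over the first p worlds, |1> over the next q
U = np.zeros((d, d))
U[:p, 0] = 1 / np.sqrt(p)
U[p:, 1] = 1 / np.sqrt(q)
# complete U to a unitary by orthonormalizing the remaining columns
Q = np.linalg.qr(np.hstack([U[:, :2], np.eye(d)]))[0]
U[:, 2:] = Q[:, 2:d]

psi = np.zeros(d)
psi[0], psi[1] = alpha * np.sqrt(p), alpha * np.sqrt(q)
out = U @ psi

assert np.allclose(out, alpha)                        # all amplitudes now equal
assert np.isclose(np.sum(out[:p] ** 2), p / (p + q))  # weight of original world 0
```

Of course the code checks the amplitudes, not the measure axioms themselves; the point of the proof is that the axioms force the measure to follow these squared amplitudes.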
Since such amplitudes are dense in the set of amplitudes, we only need a continuity argument to get the complete Born rule. Normally I don’t care about the continuity argument, as one usually needs a separate postulate to get it, and the continuum is just a convenient fiction anyway. Here the situation is a bit more interesting, because the axioms we have are already strong enough to get it; there’s no need for an extra continuity axiom. Unfortunately I couldn’t find an elegant proof, so I’ll refer you to the original paper for that.

To conclude, I’m still skeptical about this business of proving the Born rule, in the sense of replacing it with a better set of axioms to be included in the axioms of quantum mechanics. I don’t think we’ll ever get something better than simply postulating the measure of worlds to be $\mu(\ket{\psi}) = \langle\psi|\psi\rangle$. It’s a similar situation with the other axioms: there are tons of good arguments why one should use complex numbers, or tensor products, or unitary evolution. But when it comes down to writing down the axioms of quantum mechanics, nobody uses the arguments, they write the axioms directly. If what you want is an argument why we should use the Born rule, though, then this is a pretty good one.

Posted in Uncategorised | 10 Comments

## Violating the Tsirelson bound

I started writing the previous post as an introduction to another subject, but it got too long and I decided to publish it separately. What I actually wanted to talk about is the following question: what happens if you are doing device-independent quantum key distribution (DIQKD) experimentally, and you violate the Tsirelson bound? I don’t mean in the sense of violating quantum mechanics, but doing it in the way quantum mechanics allows. If you do the experiment perfectly, then the probability of winning each round of the CHSH game is exactly the Tsirelson bound $\omega = \frac{2+\sqrt{2}}{4}$.
Then the probability of getting a number of victories $v$ out of $n$ rounds of the CHSH game such that
$\frac{v}{n} > \frac{2+\sqrt{2}}{4}$
is given by
$\sum_{v = \lceil n \omega \rceil}^n \binom{n}{v} \omega^v(1-\omega)^{n-v}.$
This is not small at all: it is equal to $\omega$ for $n=1$, and goes to 1/2 for large $n$. So yeah, it’s perfectly possible to violate the Tsirelson bound, and it is not a matter of experimental error or doing too few rounds4. On the contrary, experimental error is precisely what makes it very unlikely to win the CHSH game too often. This is very unsatisfactory, though: we are relying on experimental error to sweep the problem under the rug. Clearly DIQKD must also work in the ideal case.

Even if you only care about the realistic case, there’s a different scenario where this matters: as proposed by Brown and Fawzi$^{\otimes 2}$, one can use an estimate of the whole probability distribution to do DIQKD instead of only the probability of winning the CHSH game. This makes it harder for the eavesdropper to cheat, and thus gives us better key rates. The problem is that we’re now dealing with a high-dimensional object instead of a one-dimensional parameter, and we need the estimates of all the parameters to land in the quantum-allowed region. The probability that at least one falls outside is appreciable. It’s hard to give a precise statement about this, because it will depend on the quantum state and the measurements you are doing, but the fact of the matter is that experimentalists routinely get estimates outside of the quantum-allowed region2. For simplicity, though, we’ll focus on the one-dimensional case.

Why wouldn’t it work, though? What is the problem with violating the Tsirelson bound?
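Before answering, a quick numerical check of the tail probability above (a sketch; `p_exceed` is my own name for it):

```python
from math import ceil, comb, sqrt

omega = (2 + sqrt(2)) / 4  # Tsirelson bound for the CHSH game

def p_exceed(n):
    """Probability that the observed winning fraction v/n strictly exceeds omega,
    when each round is won independently with probability exactly omega."""
    vmin = ceil(n * omega)  # n*omega is irrational, so v/n > omega iff v >= ceil(n*omega)
    return sum(comb(n, v) * omega**v * (1 - omega) ** (n - v) for v in range(vmin, n + 1))

print(p_exceed(1))     # equals omega, about 0.854
print(p_exceed(1000))  # already close to 1/2
```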
The idea of DIQKD is that Alice and Bob play the CHSH game, calculate the frequency of winning $\frac{v}n$, and do an optimization over all quantum states with winning probability equal to $\frac{v}n$, picking the worst one, that is, the one that gives Eve the most information about the key they’re trying to generate. Well, if $\frac{v}n > \frac{2+\sqrt{2}}{4}$ there’s just no quantum state with this winning probability, so you can’t figure out how much information Eve can have in this way.

What can we do then? One obvious solution is to say that the winning probability $p$ is equal to $\frac{2+\sqrt{2}}{4}$. After all, this is the closest we can get to the frequency $\frac{v}n$ while staying in the range allowed by quantum mechanics. That’s not a good idea, though. We would be assuming that Eve has no information whatsoever about Alice and Bob’s key, while it is perfectly possible that $p$ is slightly smaller than $\frac{2+\sqrt{2}}{4}$, which would give her a bit of information. In fact, the probability that $p$ is exactly $\frac{2+\sqrt{2}}{4}$ is zero, just because $p$ is a continuous parameter. It is very likely, on the other hand, that $p$ is close to $\frac{2+\sqrt{2}}{4}$. This is what you should assume. And this is not even related to violating the Tsirelson bound. Even if you find that $\frac{v}n = 0.8$, it would be stupid to assume that $p=0.8$, as it’s almost certainly not. Assuming a flat prior over the quantum-allowed region, the probability density of $p$ is given by
$f(p|v,n) = \frac{p^v(1-p)^{n-v}}{\int_{1-\omega}^{\omega}\mathrm{d}q\, q^v(1-q)^{n-v}},$
for $p \in [1-\omega,\omega]$ and zero otherwise.

Which finally brings us to the DIQKD papers I mentioned in the previous post. How did they deal with this problem? It turns out they did something completely different. They set some expected winning probability $p_0$ and some tolerance $\delta$, and if the measured frequency $v/n$ is at least $p_0-\delta$, they assume that the actual probability $p$ is also at least $p_0-\delta$.
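As an aside, the flat-prior posterior above is easy to evaluate numerically; a sketch (the function name and the midpoint-rule normalization are my choices):

```python
import math

omega = (2 + math.sqrt(2)) / 4  # Tsirelson bound

def posterior_density(p, v, n, grid=2000):
    """f(p|v,n) under a flat prior on the quantum-allowed region [1-omega, omega]."""
    lo, hi = 1 - omega, omega
    if not lo <= p <= hi:
        return 0.0

    def log_kernel(q):
        return v * math.log(q) + (n - v) * math.log(1 - q)

    # midpoint rule for the normalization, in log space to avoid underflow at large n
    h = (hi - lo) / grid
    logs = [log_kernel(lo + (i + 0.5) * h) for i in range(grid)]
    m = max(logs)
    norm = h * sum(math.exp(l - m) for l in logs)
    return math.exp(log_kernel(p) - m) / norm

# with 80 wins out of 100 rounds the density peaks near 0.8, and vanishes outside
print(posterior_density(0.80, 80, 100) > posterior_density(0.75, 80, 100))
print(posterior_density(0.90, 80, 100))  # 0.0, outside the quantum region
```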
I find that very strange: they are not using the measured frequency for anything other than this test, the key rate is calculated solely based on $p_0-\delta$. This is not wrong, I must emphasize: the probability that the winning probability is at least $p_0-\delta$ given that the frequency is at least $p_0-\delta$ is indeed very high, and they have a proper security proof. I just find it bizarre that they are discarding valuable information; using the measured frequency can give you a much better idea of what the actual winning probability is. For example, if the measured frequency is very close to $p_0-\delta$, then the probability that the winning probability is at least $p_0-\delta$ is close to 1/2. Not as high as we’d like. On the other hand, if the measured frequency is much higher than $p_0-\delta$, the winning probability is likely much higher, and you’re needlessly lowering your key rate by being so pessimistic.

Posted in Uncategorised | 6 Comments

## DIQKD is here!

This has been a momentous week for quantum information science. The long-awaited experimental demonstration of device-independent quantum key distribution (DIQKD) is finally here! And not only one demonstration, but three in a row. First, the Oxford experiment came out, which motivated the München and the Hefei experiments to get their data out quickly to make it clear they did it independently.

To give a bit of context, for decades the community had been trying to do a loophole-free violation of a Bell inequality. To the perennial criticism that such an experiment was pointless, because there was no plausible physical model that exploited the loopholes in order to fake a violation, people often answered that a loophole-free Bell test was technologically relevant, as it was a pre-requisite for DIQKD.3 That was finally achieved in 2015, but DIQKD had to wait until now. It’s way harder: you need less noise, higher detection efficiency, and much more data in order to generate a secure key.
Without further ado, let’s look at the experimental results, summarized in the following table. $\omega$ is the probability with which they win the CHSH game, distance is the distance between Alice and Bob’s stations, and key rate is the key rate they achieved.

| Experiment | $\omega$ | Distance | Key rate |
|---|---|---|---|
| Oxford | 0.835 | 2 m | 3.4 bits/s |
| München | 0.822 | 700 m | 0.0008 bits/s |
| Hefei | 0.756 | 220 m | 2.6 bits/s |

I’ve highlighted the München and the Hefei key rates in red because they didn’t actually generate secret keys, but rather estimated that this is the rate they would achieve in the asymptotic limit of infinitely many rounds. This is not really a problem for the Hefei experiment, as they were performing millions of rounds per second, and could thus easily generate a key. I suspect they simply hadn’t done the key extraction yet, and rushed to get the paper out. For the München experiment, though, it is a real problem. They were doing roughly 44 rounds per hour. At this rate it would take years to gather enough data to generate a key.

Why is there such a drastic difference between Hefei and München? It boils down to the experimental technique they used to get high enough detection efficiency. Hefei used the latest technology in photon detectors, superconducting nanowire single-photon detectors,2 which allowed them to reach 87% efficiency. München, on the other hand, used a completely different technique: they did the measurement on trapped atoms, which has efficiency of essentially 100%. The difficulty is to entangle the atoms. To do that you make the atoms emit photons, and do an entangled measurement on the photons, which in turn entangles the atoms via entanglement swapping. This succeeds with very small probability, and is what makes the rate so low.

What about Oxford? Their experimental setup is essentially the same as München’s, so how did they get a rate orders of magnitude higher? Just look at the distance: in Oxford Alice and Bob were 2 metres apart, and in München 700 metres.
The photon loss grows exponentially with distance, so this explains the difference very well. That’s cheating, though: if we are two metres apart we don’t need crypto, we just talk. One can see this decay with distance very well in the Hefei paper: they did three experiments, with separations of 20, 80, and 220 metres, and key rates of 466, 107, and 2.6 bits/s. In the table I only put the data for the 220-metre separation because that’s the only relevant one.

It seems that the Hefei experiment is the clear winner then, as the only experiment achieving workable key rates over workable distances. I won’t crown them just yet, though, because they haven’t done a standard DIQKD protocol, but added something called “random post-selection”, which should be explained in a forthcoming paper and in the forthcoming Supplemental Material. Yeah, when it appears I’ll be satisfied, but not before.

EDIT: In the meanwhile the Hefei group did release the Supplemental Material and the paper explaining what they’re doing. It’s pretty halal. The idea is to use the full data for the Bell test as usual, as otherwise you’d open the detection loophole and compromise your security, but for the key generation only use the data where both photons have actually been detected. This gets you much more key, as the data where one or both photons were lost is pretty much uncorrelated. There’s an interesting subtlety: they can’t simply discard all the data where a photon has been lost, because they only have one photodetector per side; Alice (or Bob) simply assigns outcome ‘0’ to the photons that came to this photodetector, and ‘1’ to the photons that didn’t arrive there. Now if there were no loss at all, the ‘1’ outcomes would simply correspond to the photons with the other measurement result. But since there is loss, they correspond to a mixture of the other measurement result and the photons that have been lost, and there is no way to distinguish them.
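To make the bookkeeping concrete, here is a toy Python sketch of this outcome assignment. The function names, the 50/50 outcome distribution, and the detection efficiency `eta` are my own illustrative assumptions, not the Hefei group’s actual analysis:

```python
import random

def recorded_outcome(true_outcome, detected):
    # One photodetector per side: a click is recorded as '0', no click as '1'.
    # A recorded '1' therefore mixes genuine outcome-1 photons with lost photons.
    return 0 if (true_outcome == 0 and detected) else 1

def lost_fraction_of_ones(n_rounds, eta, seed=0):
    # Among the rounds recorded as '1', what fraction were actually lost photons?
    rng = random.Random(seed)
    ones = lost_ones = 0
    for _ in range(n_rounds):
        outcome = rng.randint(0, 1)      # 50/50 measurement result (toy model)
        detected = rng.random() < eta    # photon survives with probability eta
        if recorded_outcome(outcome, detected) == 1:
            ones += 1
            lost_ones += not detected
    return lost_ones / ones
```

In this toy model, at the Hefei detection efficiency of 87% roughly a quarter of the recorded ‘1’s come from lost photons, which is why that part of the data is so weakly correlated.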
Still, they found it’s advantageous to discard some of the data with outcome ‘1’, as this improves the correlations. Now, they don’t have a full security proof for this new protocol with random post-selection: they only examined the simplified scenario where the source emits the same state in each round and the eavesdropper makes a measurement in each round independently. I suppose this is just a matter of time, though. Extending the security proof to the general case is hard, but usually boils down to proving that the eavesdropper can’t do anything better than attacking each round independently.

EDIT2: It turns out the Hefei experiment didn’t actually use a random setting for each round, as is necessary in DIQKD, but just did blocks of measurements with fixed settings. It’s straightforward to change the setup to use randomized settings: the standard method is to use a Pockels cell to change the setting electronically (rather than mechanically) at a very high rate. However, Pockels cells are nasty devices, which use a lot of power and even need active cooling, and are bound to increase the noise in the setup. They also cause bigger losses than regular waveplates. It’s hard to estimate by how much, but it’s safe to assume that the key rate of the Hefei experiment will go down when they do it.

Posted in Uncategorised | 15 Comments

## Floating-point arithmetic and semidefinite programming

Another day working with MATLAB, another nasty surprise. I was solving an SDP that was already working well, did a minor change, and suddenly it started taking minutes to reach a solution, when usually it took seconds. After a long investigation, I realized that the problem was that my input was not exactly Hermitian anymore. I had switched from writing the components of the matrix manually to defining the matrix as $A = \ket{\psi}\bra{\psi}$ for a complex vector $\ket{\psi}$. Now how could that possibly be a problem?
It is a very simple theorem that the outer product of a vector with itself is always Hermitian. Well, not with MATLAB. I can’t figure out why, but probably MATLAB uses some smart algorithm for the outer product that ruins this property due to floating-point arithmetic. What I could determine is that if you compute the outer product the naïve way, then even with MATLAB the result will be exactly Hermitian. Also, with Python the output is exactly Hermitian.

To solve this problem, one can simply redefine $A$ as $(A+A^\dagger)/2$, which is very fast and gives an exactly Hermitian output. I don’t like doing that, though, as $(A+A^\dagger)/2$ is always exactly Hermitian, even when $A$ is not even close to Hermitian. If I fucked up the definition of $A$ this will hide my mistake, and make debugging harder. Instead, what I did was write my own outer product function, computing it as $\ket{\psi}\otimes\bra{\phi}$. It is slower than whatever black magic MATLAB is doing, sure, but it is fast enough, and leaves my mistakes easily visible.

It dawned on me that this subtlety was probably the source of many bugs and slowdowns in my codes over the years. I decided to go hunting, and find out the fascinating properties of MATLAB algebra that don’t preserve Hermiticity where they should. It turns out that if $A$ and $B$ are exactly Hermitian, then both $A+B$ and $A \otimes B$ are exactly Hermitian, as they should be. The problem is really when you do matrix multiplication. Which shouldn’t produce further problems, right? After all, $AB$ is in general not Hermitian, so we don’t have anything to worry about. Except, of course, that the Hilbert-Schmidt inner product $\operatorname{tr}(A^\dagger B)$ is real for Hermitian $A,B$, and this is not true in MATLAB. Argh! $\operatorname{tr}(A^\dagger B)$ appears very often in the objective of an SDP, and it really needs to be a real number, as you can’t minimize a complex function.
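Here is a pure-Python illustration of the two options (MATLAB-free, so it only demonstrates the floating-point argument; the function names are mine). The naïve entrywise outer product is exactly Hermitian because `fl(a * conj(b))` equals `conj(fl(b * conj(a)))` term by term, while the symmetrization $(M+M^\dagger)/2$ is exactly Hermitian no matter what garbage you feed it:

```python
def outer(psi, phi):
    """Naive outer product |psi><phi|: entry (i, j) is psi[i] * conj(phi[j]).
    Computed entrywise, outer(psi, psi) is exactly Hermitian in floating point."""
    return [[a * b.conjugate() for b in phi] for a in psi]

def symmetrize(M):
    """(M + M^dagger)/2: exactly Hermitian even when M isn't close to Hermitian,
    which is precisely why it can hide bugs."""
    n = len(M)
    return [[(M[i][j] + M[j][i].conjugate()) / 2 for j in range(n)]
            for i in range(n)]

def is_exactly_hermitian(M):
    # Bitwise equality, not approximate: M[i][j] == conj(M[j][i]) for all i, j.
    return all(M[i][j] == M[j][i].conjugate()
               for i in range(len(M)) for j in range(len(M)))
```

For example, `outer(psi, psi)` passes `is_exactly_hermitian` for any complex `psi`, and `symmetrize` turns even a blatantly non-Hermitian matrix into an exactly Hermitian one.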
It turns out that this is not MATLAB’s fault; it is a fundamental problem of floating-point arithmetic. A well-known but often-forgotten fact is that floating-point addition is not associative. Let $a=10^{20}$ and $b=10^{-20}$. Then with floating-point numbers we see that $b+(a-a) = b$ but $(b+a)-a = 0$. This is the issue with the Hilbert-Schmidt inner product: we get stuff like $z+\bar{z}+w+\bar{w}+\ldots$, but not in that order. $z+\bar{z}$ is of course real, and $w+\bar{w}$ as well, but $z+(\bar{z}+w)+\bar{w}$? Nope, not in floating-point arithmetic.

Here I don’t think a proper solution is possible. One can of course write $\Re[\operatorname{tr}(A^\dagger B)]$, but that will hide your mistakes when $A$ and $B$ are not in fact Hermitian. A better idea is to vectorize $A$ and $B$ to avoid matrix multiplication, writing $\Re[\langle A|B\rangle]$. This only helps with speed, though; it doesn’t touch the real problem.
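The non-associativity is easy to reproduce in any language with IEEE 754 doubles; a quick Python illustration:

```python
# Floating-point addition is not associative:
a, b = 1e20, 1e-20
assert b + (a - a) == b      # b survives when a cancels first
assert (b + a) - a == 0.0    # b is absorbed when added to a first

# So the value of a sum depends on the order of its terms:
assert sum([1e20, 1.0, -1e20]) == 0.0   # 1.0 absorbed by 1e20
assert sum([1e20, -1e20, 1.0]) == 1.0   # same terms, different order

# The same mechanism applies to tr(A^dagger B) for Hermitian A, B: the sum
# contains each product together with its complex conjugate, but in an order
# where the imaginary parts need not cancel exactly, leaving a tiny residue.
```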

Posted in Uncategorised | 6 Comments