They will have trace non-identical rotational spectra. Brightspec.com offers superlative chirped-pulse microwave spectrometers. Everything else is synthesis, optimization, and transition line internal self-calibration. Be clever when claiming the impossible is trivially observable.

Sourcing baryogenesis with chiral anisotropic vacuum requires abandoning “beauty” and “naturalness” for matrices like landfills. I’m not the guy who writes theory. Thanks for getting back to me.

http://www.mazepath.com/uncleal/EquivPrinFail.pdf

… The worst it can do is succeed, healing everything. Look.

A better analogy would be the notion of absolute space in Newton’s theory. It is a convenient ontological overcommitment, in the sense that if somebody tried to interpret absolute velocity with respect to physical reality, I would tell him that it does not need to be interpreted (similar to the internal probabilities in my first comment). But the notion of Euclidean space is independent of Newton’s theory, and one might want to define it, in the sense of giving one or more “explicit models” of it, which reduce it to something more basic.

This is what Descartes did in a certain sense. He reduced the n-dimensional Euclidean space to n one-dimensional coordinates, i.e. n real numbers. The real numbers can be reduced by Dedekind cuts to rational numbers, and rational numbers are a sufficiently clear (basic) notion that no further reduction is required. (Dedekind cuts alone without rational numbers risk being circular, and could produce the surreal numbers instead of the real numbers.)
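To make this chain of reductions concrete, here is a minimal Python sketch of a Dedekind cut, encoding a real number by its lower set: a membership predicate on the rationals, stated purely in rational arithmetic. The names (`sqrt2_lower`, `approximate`) and the bisection routine are my own illustration, not anything from the comment.

```python
from fractions import Fraction

# A Dedekind cut represented by a membership predicate on the rationals:
# lower(q) is True iff the rational q lies below the real being defined.
def sqrt2_lower(q: Fraction) -> bool:
    # q < sqrt(2) iff q < 0 or q*q < 2 -- no real numbers needed
    return q < 0 or q * q < 2

def approximate(lower, lo: Fraction, hi: Fraction, steps: int) -> Fraction:
    """Bisect between a rational inside the cut (lo) and one outside it (hi)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if lower(mid):
            lo = mid
        else:
            hi = mid
    return lo

approx = approximate(sqrt2_lower, Fraction(1), Fraction(2), 30)
print(float(approx))  # close to 1.41421356...
```

Note that the predicate for sqrt(2) mentions only rationals, which is exactly why the construction is non-circular once the rationals are taken as given.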

The Cartesian product also occurs for probabilities, where it models independence. This is a very important notion, which can often be interpreted independently of the probabilities themselves. But after that it gets really difficult, since now you need to come up with an intuitive model in which sigma-algebras (and probability measures) make sense, while still describing everything in terms of single events (more or less).
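The point that the product construction encodes independence can be checked directly on finite spaces. A small sketch (the die-and-coin spaces are my own example): build the product measure on the Cartesian product of two sample spaces, and independence of events drawn from the two factors holds by construction.

```python
from fractions import Fraction

# Two finite probability spaces, given as {outcome: probability}.
die  = {k: Fraction(1, 6) for k in range(1, 7)}
coin = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

# The product measure on the Cartesian product of the sample spaces.
joint = {(d, c): pd * pc for d, pd in die.items() for c, pc in coin.items()}

def prob(measure, event):
    return sum(p for outcome, p in measure.items() if event(outcome))

# Events: "die shows an even number" and "coin shows heads".
A = lambda dc: dc[0] % 2 == 0
B = lambda dc: dc[1] == "H"
both = lambda dc: A(dc) and B(dc)

# Independence holds by construction: P(A and B) == P(A) * P(B).
print(prob(joint, both) == prob(joint, A) * prob(joint, B))  # True
```

On finite spaces no sigma-algebra machinery is needed; the difficulty the comment points to only appears for infinite sequence spaces.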

Alternatively, if they could show that it does have a well-defined value, even if it does not equal the probability (as is the case when one considers the relative frequencies of finite sequences), then they would be proposing new axioms for probability theory.
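The parenthetical remark about finite sequences can be illustrated in a few lines: for every finite prefix of a sequence of fair coin flips, the relative frequency of heads is perfectly well-defined, yet for no finite n does it have to equal the probability 1/2. A quick sketch (the seed and sample size are arbitrary choices of mine):

```python
import random

random.seed(0)
flips = [random.randint(0, 1) for _ in range(10_000)]

# The relative frequency over each finite prefix is always well-defined...
freqs = []
heads = 0
for n, x in enumerate(flips, start=1):
    heads += x
    freqs.append(heads / n)

# ...but at any finite n it need not equal the probability 1/2.
print(freqs[9], freqs[99], freqs[9999])
```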

The only problem with what they’re doing is that it does not work.

Avoiding interpretation as often as possible is not a bad strategy. However, if the outputs of your model are predictions involving probabilities (like the daily weather forecast, or a geological survey on the chance that an earthquake of magnitude 6.7 or greater will occur before the year 2030 in the San Francisco Bay Area), then avoiding interpretation is no longer an option. The same is true if the inputs to your model include probabilities. However, this does not extend to internal probabilities. The fact that an infinite sequence does not have a probability (or that it has probability 0 if you insist on assigning one), or that there are non-measurable sets of infinite sequences to which it is impossible to assign any probability, does not need to be interpreted (i.e. connected to reality).

This is what you get if you make repeated measurements on the same particle, but I’m talking about preparing many copies of the entangled state and doing one measurement in each copy.

This is probably too stupid a question, as I am not sophisticated enough in QM, but I don’t see how you get your statistics from measuring the proper density matrix 2n times along the Z axis. You prepared an entangled state and are ready to do a sequence of 2n local measurements. The first measurement collapses the wave function, so the 2n-1 remaining measurements should give the same eigenvalue. Half of the time you would get 0,0,…,0 and half of the time 1,1,…,1. Disregarding noise, what am I overlooking here?
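The difference between the two protocols in this exchange can be sketched with a toy simulation of the Z-measurement statistics for a Bell pair (|00⟩+|11⟩)/√2. This is not a full quantum simulator, just the resulting outcome statistics; function names and sample sizes are my own:

```python
import random

random.seed(1)

def measure_fresh_copy() -> int:
    """Prepare a fresh Bell pair and measure one qubit in Z:
    outcome 0 or 1, each with probability 1/2, independent across copies."""
    return random.randint(0, 1)

def measure_same_pair_repeatedly(n: int) -> list:
    """Measure the SAME pair n times in Z: the first measurement collapses
    the state to |00> or |11>, so every later outcome repeats the first."""
    first = random.randint(0, 1)
    return [first] * n

fresh = [measure_fresh_copy() for _ in range(1000)]
print(sum(fresh) / len(fresh))  # roughly 0.5: genuine statistics across copies

repeated = measure_same_pair_repeatedly(1000)
print(len(set(repeated)))       # 1: all outcomes identical after collapse
```

So repeated measurement of one pair yields a single bit repeated 2n times, exactly as the question says, while one measurement on each of many copies is what produces the 50/50 statistics.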
