Claims made recently in the CERN Courier that string theory can be applied to Quantum Information Theory (see here) are being followed up with a new paper entitled Four-qubit entanglement from string theory which appears to claim that, despite what some might think, string theory is falsifiable since it makes experimentally testable predictions about Quantum Information Theory:

Falsifiable predictions in the fields of high-energy physics or cosmology are hard to come by, especially for ambitious attempts, such as string/M-theory, to accommodate all the fundamental interactions. In the field of quantum information theory, however, previous work has shown that the stringy black hole/qubit correspondence can reproduce well-known results in the classification of two and three qubit entanglement. In this paper this correspondence has been taken one step further to predict new results in the less well-understood case of four-qubit entanglement that can in principle be tested in the laboratory.

Previous papers along these lines about the three-qubit case involved some algebra that I referred to as “remarkably obscure”, a comment that “was like waving a red flag in front of a bull” as far as John Baez was concerned, leading him to some expository comments about the subject in his latest This Week’s Finds in Mathematical Physics. About the string theory claims he comments:

Unfortunately, Duff gets a bit carried away. For example, he says that string theory “predicts” the various ways that three qubits can be entangled. Someone who didn’t know physics might jump to the conclusion that this is a prediction whose confirmation lends credence to string theory as a description of the fundamental constituents of nature. It’s not!
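The three-qubit result in question is standard quantum information theory: under SLOCC (the SL(2,**C**)^{3} action), genuinely entangled three-qubit states fall into exactly two classes, GHZ and W, and Cayley's 2×2×2 hyperdeterminant separates them, being nonzero on the GHZ class and vanishing on the W class. A minimal numerical check (the function `hyperdet` and the script are illustrative, not taken from any of the papers):

```python
import numpy as np

def hyperdet(a):
    """Cayley's 2x2x2 hyperdeterminant of the amplitude tensor a[i,j,k].

    Nonzero exactly on the GHZ class; zero on the W class and on all
    less-entangled states, which is how SLOCC distinguishes the two
    genuinely three-qubit entanglement classes.
    """
    # squares of the four "diagonal" products
    d = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
         + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    # minus twice each pairwise product of those diagonals
    d -= 2 * (a[0,0,0]*a[1,1,1] * (a[0,1,1]*a[1,0,0] + a[1,0,1]*a[0,1,0] + a[1,1,0]*a[0,0,1])
              + a[0,1,1]*a[1,0,0] * (a[1,0,1]*a[0,1,0] + a[1,1,0]*a[0,0,1])
              + a[1,0,1]*a[0,1,0] * a[1,1,0]*a[0,0,1])
    # plus four times the two "off-diagonal" quartic terms
    d += 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0] + a[1,1,1]*a[1,0,0]*a[0,1,0]*a[0,0,1])
    return d

ghz = np.zeros((2, 2, 2)); ghz[0,0,0] = ghz[1,1,1] = 1/np.sqrt(2)
w   = np.zeros((2, 2, 2)); w[0,0,1] = w[0,1,0] = w[1,0,0] = 1/np.sqrt(3)

print(4 * abs(hyperdet(ghz)))  # 3-tangle of GHZ: 1.0
print(4 * abs(hyperdet(w)))    # 3-tangle of W:   0.0
```

The quantity 4|Det| is the 3-tangle, an entanglement measure: it is 1 for GHZ and 0 for W, even though both states are genuinely three-qubit entangled.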

Unlike the three-qubit papers, this latest one sticks to mathematics that is not particularly obscure. The mathematics invoked is the quite beautiful subject of the classification of nilpotent orbits in a Lie algebra. I’ve been trying to learn more about some related topics in recent months, having to do with the role of nilpotent orbits in representation theory. Part of this story involves what are now known as “finite W-algebras”, and these have a BRST definition. I’ve been curious about the relation of this to the BRST/Dirac Cohomology relationship I’ve been working on.

The mathematical problem at issue here is that of classifying SL(2,**C**)^{4} orbits on the four-fold tensor product of **C**^{2}. For an exposition of this problem aimed at mathematicians, see these lecture notes by Nolan Wallach. In the new paper, the authors claim that the Kostant-Sekiguchi theorem implies that this classification is the same as that of orbits of SO(4,**C**) on its Lie algebra, and that this latter classification also classifies certain sorts of black holes in supergravity, but I haven’t checked the details. It’s a complete mystery to me why using the Kostant-Sekiguchi theorem to relate the straightforward mathematics used in QIT to a black hole classification problem should somehow turn string theory into falsifiable, experimentally testable science.
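For readers who want the statements rather than the details, here is a sketch of the two ingredients; both are standard facts about the general setup, not claims specific to the paper:

```latex
% The SLOCC (stochastic local operations and classical communication)
% classification problem for four qubits: orbits of the local group
\[
|\psi\rangle \;=\; \sum_{i,j,k,l \in \{0,1\}} a_{ijkl}\,|ijkl\rangle
\;\in\; (\mathbf{C}^2)^{\otimes 4},
\qquad
G_{\mathrm{SLOCC}} \;=\; SL(2,\mathbf{C})^{4}.
\]
% The Kostant--Sekiguchi theorem, in its general form: given a real
% semisimple Lie algebra $\mathfrak{g}_0$ with Cartan decomposition
% $\mathfrak{g}_0 = \mathfrak{k}_0 \oplus \mathfrak{p}_0$, and writing
% $K$ for the complexification of the maximal compact subgroup and
% $\mathfrak{p}$ for the complexification of $\mathfrak{p}_0$, there
% is a bijection
\[
\{\text{nilpotent } G_0\text{-orbits in } \mathfrak{g}_0\}
\;\longleftrightarrow\;
\{\text{nilpotent } K\text{-orbits in } \mathfrak{p}\}.
\]
```

The paper's claim is that applying this bijection to the appropriate real form matches the four-qubit SLOCC orbits to the supergravity black hole classification.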

Despite the article in the CERN Courier, the seminar at CERN was attended almost exclusively by string theorists.

Pete, sorry if OT. From the June 2010 Scientific American issue:

http://www.scientificamerican.com/article.cfm?id=interactive-12-events

anonymous,

The one of SciAm’s 12 “events that will change everything” that has to do with particle theory is the possible discovery of extra dimensions, which is assigned a probability of 50% by 2050. It’s true that this would be a huge event in the history of science, but I think most theorists would put the probability one or more orders of magnitude below 50%.

The article mistakenly claims that supersymmetry requires 10 dimensions, mixing up supersymmetry and string theory (string theory isn’t mentioned at all).

There’s a quote from Arkani-Hamed:

“So while extra dimensions would be a terrific discovery, at a deeper level, conceptually they aren’t particularly fundamental.”

In some sense I agree, although perhaps with a different point of view. Extra dimensions don’t solve any fundamental problem that we care about; much more interesting would be a conceptual breakthrough that would solve such problems.

Extra dimensions are the last thing we need. It’s hard enough trying to explain the existence of the first four!

Does the inverse scattering transform method for the KdV equation make falsifiable predictions from the Schrödinger equation, and hence validate the basic equation of quantum mechanics? Is this the major success of quantum mechanics?

Thanks for your answer, Pete.

Although the article must be judged by its genre (journalistic counterfactual science forecasting?), when reading it I had the impression that it presents physics as much more open than I think it is.

For instance, the 50% issue: I’m not even sure it is correct to assign probabilities to such (or similar) events at all. If it is, 50% must be interpreted as meaning we have no information whatsoever about whether the universe has 3+1 dimensions or more; but as you say, “most theorists would put the probability one or more orders of magnitude below 50%”. This issue is not a fair coin; it’s clearly biased!

On the other hand, despite Arkani-Hamed’s comment, I still find it highly unsatisfactory that the SM is compatible with extra dimensions, if only on grounds of parsimony.

In any case, congratulations on your blog!

@anonymous.

The assignment of probabilities to those events is perfectly well founded if they are interpreted as Bayesian probabilities incorporating whatever theoretical prior beliefs one holds. The biggest difficulty is that (assuming that if extra dimensions exist they will be detectable by 2050) it seems almost all relevant physicists either believe theories that say there are extra dimensions, with a tiny chance of being wrong, or believe on parsimony grounds that there aren’t extra dimensions, with a tiny chance of being wrong. So all this probability really represents is what the writer (SciAm, Peter) thinks the relative proportions of the two groups are.
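The point in the comment above can be put in one line of arithmetic (the numbers and the function `aggregate` are purely illustrative, not from any source):

```python
# Purely illustrative: a fraction p of relevant physicists are nearly
# certain (credence 1 - eps) that extra dimensions exist; the rest are
# nearly certain they don't. A poll-style aggregate credence is then:
def aggregate(p, eps=0.01):
    return p * (1 - eps) + (1 - p) * eps

# A 50/50 split of experts yields ~50%: the headline number tracks the
# relative sizes of the two camps, not evidence about nature itself.
print(aggregate(0.5))   # ~0.5
print(aggregate(0.05))  # ~0.06
```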

bane,

The problem is not just prior theoretical prejudices. It is that the “large extra dimensions” scenarios are not developed theories at all, but guesses about what might possibly be achieved if the apparently enormous problems involved in developing theories to support these scenarios can somehow be overcome.

Any reasonable estimate of their chance of being true must take into account: (a) the magnitude and complexity of these difficulties; and (b) the odds that, even if those are overcome, the original guesses will turn out to have been pretty much correct — something which is hardly assured given the depth of the unresolved problems.

The 50% figure is preposterous from this point of view.
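The argument in the two preceding paragraphs is just the product rule for a conjunction; with invented, deliberately generous numbers:

```python
# Invented, deliberately generous numbers, for illustration only.
p_overcome = 0.3     # (a) the enormous theoretical difficulties get resolved
p_guess_right = 0.3  # (b) given that, the original guesses survive roughly intact

# Both (a) and (b) must hold for the scenario to be correct.
p_extra_dims = p_overcome * p_guess_right
print(p_extra_dims)  # ~0.09, already far below 50%
```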

bane,

It is not that I want to revive the debate over interpretations of probability, but I’m classical or orthodox on this: reserve probabilities for real or theoretical systems that fit the Kolmogorov axioms.

In this sense, the possibility of assigning probabilities to such “universe property” events as the number of dimensions seems tied to belief in some kind of multiverse or sequential toss, in which 50% of the universes have extra dimensions and 50% do not. This makes no sense if the universe has unique properties (i.e. if the universe’s properties come out the same after every sequential or parallel toss).

On the other hand, the sentence “I think most theorists would put the probability one or more orders of magnitude below 50%” seems clear enough.