The write-up of Larry McLerran’s summary talk at Quark Matter 2006 has now appeared. This talk created a bit of a stir since McLerran was rather critical of the way string theorists have been overhyping the application of string theory to heavy-ion collisions.
In the last section of his paper, McLerran explains the main problem: N=4 supersymmetric Yang-Mills is quite a different theory from QCD. He lists the ways in which the two theories differ, then goes on to write:
Even in lowest order strong coupling computations it is very speculative to make relationships between this theory and QCD, because of the above. It is much more difficult to relate non-leading computations to QCD… The AdS/CFT correspondence is probably best thought of as a discovery tool with limited resolving power. An example is the eta/s computation. The discovery of the bound on eta/s could be argued to be verified by an independent argument, as a consequence of the de Broglie wavelength of particles becoming of the order of mean free paths. It is a theoretical discovery but its direct applicability to heavy ion collisions remains to be shown.
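For readers unfamiliar with the bound McLerran is referring to: the AdS/CFT result is the Kovtun-Son-Starinets value, and the independent kinetic-theory argument he alludes to can be sketched roughly as follows (my sketch, not taken from his paper):

```latex
% For a gas of quasiparticles with number density n, typical momentum
% \bar p, and mean free path \lambda, kinetic theory estimates
\eta \sim n\,\bar p\,\lambda , \qquad s \sim n\,k_B .
% Quantum mechanics requires the mean free path to be no shorter than
% the de Broglie wavelength, \lambda \gtrsim \hbar/\bar p, so
\frac{\eta}{s} \;\sim\; \frac{\bar p\,\lambda}{k_B} \;\gtrsim\; \frac{\hbar}{k_B} .
% The AdS/CFT computation sharpens this order-of-magnitude statement
% to the conjectured bound
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi\,k_B} .
```

The heuristic only fixes the order of magnitude; the factor of 1/4π comes from the strong-coupling computation itself.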
McLerran goes on to make a more general and positive point about this situation:
The advocates of the AdS/CFT correspondence are shameless enthusiasts, and this is not a bad thing. Any theoretical physicist who is not, is surely in the wrong field. Such enthusiasm will hopefully be balanced by commensurate skepticism.
I think he’s got it about right: shameless enthusiasm has a legitimate place in science (as long as it’s not too shameless), but it needs to be counterbalanced by an equal degree of skeptical thinking. If shameless enthusiasts are going to hawk their wares in public, the public needs to hear an equal amount of informed skepticism.
Another shamelessly enthusiastic string theorist, Barton Zwiebach, has been giving a series of promotional lectures at CERN entitled String Theory For Pedestrians, which have been covered over at the Resonaances blog.
Zwiebach’s lectures are on-line (both transparencies and video), and included much shameless enthusiasm for the claims about AdS/CFT and heavy-ion physics that McLerran discusses. His last talk includes similar shameless enthusiasm for studying the Landscape and trying to get particle physics out of it. He describes intersecting D-brane models, making much of the fact that, after many years of effort, people finally managed to construct contrived (his language, not mine, see page 346 of his undergraduate textbook) models that reproduce the Standard Model gauge groups and choices of particle representations. Besides the highly contrived nature of these models, one problem with this is that it’s not even clear one wants to reproduce the SM particle structure. Ideally one would like to get a slightly different structure, predicting new particles that would be visible at higher energies such as will become available at the LHC. Zwiebach does admit that these contrived constructions don’t even begin to deal with supersymmetry-breaking and particle masses, leaving all particles massless.
He describes himself as not at all pessimistic about the problems created by the Landscape, with the possibility that there are vast numbers of models that agree to within experimental accuracy with everything we can measure, thus making it unclear how to predict anything, as only “somewhat disappointing”. He expects that, with input from the LHC and Cosmology, within 10 years we’ll have “fully realistic” unified string theory models of particle physics.
The video of his last talk ran out in the middle, just as he was starting to denounce my book and Lee Smolin’s, saying that he had to discuss LQG for “sociological” reasons, making clear that he thought there wasn’t a scientific reason to talk about it. I can’t tell how the talk ended; the blogger at Resonaances makes a mysterious comment about honey…
Finally, it seems that tomorrow across town at Rockefeller University, Dorian Devins will be moderating a discussion of Beyond the Facts in Sciences: Theory, Speculation, Hyperbole, Distortion. It looks like the main topic is shameless enthusiasm amongst life sciences researchers, with one of the panelists the philosopher Harry Frankfurt, author of the recent best-selling book with a title that many newspapers refused to print.
Update: Lubos brings us the news that he’s sure the video of the Zwiebach lectures was “cut off by whackos” who wanted to suppress Zwiebach’s explanation of what is wrong with LQG.
Update: CERN has put up the remaining few minutes of the Zwiebach video.
And, for the N’th time, the problem has nothing to do with the alleged complexity of the individual vacua in question.
“If the landscape consisted of an infinite number of vacua, labeled by a discrete parameter (say an integer), and we could calculate all physical observables easily in terms of that integer, there would be a landscape, but no landscape problem. … An infinite number of vacua is not inherently a problem, the problem is whether you are able to calculate things and match to experiment.”
In your alleged non-example, one can calculate, once and for all, the values of all physical observables in these different vacua as a closed-form function of a single (or several) integer(s).
If that’s your definition of “simple,” then I’m afraid that there are no simple theories of any physical relevance to the real world.
Some quantities (super-renormalizable couplings, like the cosmological constant) will vary wildly between different vacua, even if (as Nima and others have argued) other, renormalizable, couplings do not vary appreciably.
There are plenty of field theory examples (much simpler than the Standard Model) that exhibit this behavior, when coupled to gravity.
I’m sorry that this observation causes you such psychological distress.
You are completely devoted to missing the point here. Obviously my example was over-simplified, but the fundamental point is simply that of whether or not you can calculate things and compare them to experiment, i.e. do science.
In the SM you can, extremely successfully. In the string theory landscape people have failed utterly to calculate anything that can be compared to experiment in any way. Examining why that is might be worthwhile, claiming there’s no real difference is absurd.
Thanks for your concern about my psychological well-being; I assure you that I'm feeling just fine. The only psychological distress comes from mild annoyance at wasting my time trying to have a serious discussion with someone who doesn't seem interested in one.
While I don’t want to disagree with the general statement that there are no useful predictions yet, the above statement is not quite strictly true.
One thing that is rather easily determined for most vacua is for instance the gauge group of the effective 4-dimensional theory. That can be compared to experiment.
And that’s one of the main properties that people in string model building (= landscape point finding) check. They discard lots of vacua with the wrong gauge group and find vacua with gauge groups that come closer to the real thing by passing from simple to more involved vacua.
It is true that those vacua with the right gauge group that have been found so far are rather involved, as far as their "algorithmic complexity" goes (i.e., how many pages you have to fill to define them). That does not preclude (nor suggest, of course) that some complicated-looking Rube Goldberg vacuum might later be realized to be "algorithmically non-complex" in that it is, say, the unique one with a certain property.
Please note well: I am not discussing whether or not we should be happy with string theory and its space of solutions. We all know each other's opinions on that already. All I want to address here is a technical point, namely whether or not people have managed to compute, from a given string vacuum, something that can be compared with the real world.
You’re well aware that one can get just about any gauge group one wants out of the landscape. When I wrote about comparing theory with experiment, I was referring to the standard scientific notion of testing one’s theory by making experimental predictions and checking them, something that is impossible in the landscape situation.
Urs, would you say that a negative, or at least non-positive, cosmological constant is a falsifiable prediction of AdS/CFT?
Let me recall what I was commenting on:
Peter described how to proceed with a physical theory that has more than one solution
Then he remarked that
I was just pointing out that this is not quite true.
Lots of string solutions are known, lots of properties have been computed for them, like gauge group, number of generations, particle content in general, number of large dimensions.
Most choices one has tried don't match observation. To match even these basic properties with observation one has to look for rather peculiar solutions. (Some solutions that do match these basic properties exist, though; most famously, perhaps, the one by Braun et al.)
I do agree with you all, though, that no useful prediction has come out of string phenomenology so far.
If you look at the history of this, I think you'll find that what has been going on is not at all what I was talking about: computing the predictions of a theory, comparing to experiment, and admitting failure if they don't match. It has been something rather different: computing gauge groups + representations in simple examples, finding they don't match the observed SM, choosing more complicated examples, computing again, finding they still don't match, etc., reaching the present point of working with complicated examples that are still very unphysical. And there isn't even any reason to believe that the gauge groups + representations one has laboriously matched by ad hoc methods are the ones one wants (what if the LHC discovers a new force or new particles?).
in your example,
how many integers would you check before you “admit failure”?
I should say that I agree very much that checking lots of convoluted examples, one by one, with no guarantee that there is at least one that works is not particularly uplifting.
At times, much more enjoyable are bottom-up approaches, where one tries to identify some hidden underlying structure of our standard model, which may suggest where to look for a UV completion.
I am particularly thinking of Connes’ way of encoding the entire standard model in the data of a spectral triple – and now rather elegantly so.
Sometimes I imagine that if I were a model builder, I would try to use that information to look for string backgrounds that have a chance of reducing to Connes’s spectral triple.
Notice that string backgrounds do tend to ordinary spectral triples on target space in the point particle limit.