This seems to be the month for string theory debates, with two a couple of weeks ago in the UK involving Lee Smolin, and another featuring Lawrence Krauss and Brian Greene scheduled for next week in Washington, D.C. The Washington Times has an article about this.
Smolin’s book has just appeared in the UK, and there have been lots of (very positive) reviews. See here, here, here, here, and here.
Besides talks (for a report on the one at Cambridge by a skeptical American physics student in England, see here), there were two debates. One featured Smolin, Philip Candelas, Simon Saunders and Frank Close and was held at Oxford; for a report, see here. It appears to have been a respectful and reasonable public airing of a few of the issues where string theorists and some of their critics disagree.
A couple of days earlier, though, a debate in London between Smolin and Mike Duff (also involving philosopher Nancy Cartwright) had a very different nature. According to the report from one attendee, after Smolin started things out by arguing his case:
Smolin sat down. Duff stood up. It got nasty.
The trouble with physics, Duff began, is with people like Smolin…
Duff is described as “string theorist and man for whom, one imagines, the words ‘self’ and ‘doubt’ do not often rub shoulders”, and seemed to think it was a good idea to answer criticisms of string theory with vociferous ad hominem attacks. Lubos Motl and Clifford Johnson both found Duff’s behavior an excellent example for all string theorists, inspiring Clifford to write part VII of his extended attack on me, Smolin and our two books. He admitted somewhere around part V or VI that he actually hadn’t looked at the books and had no intention of doing so, and he’s pretty steadfast in that attitude. It never ceases to surprise me that people like Clifford don’t realize that, much as they may enjoy engaging in or listening to personal attacks on me and Smolin, this just doesn’t do a lot for the credibility of their field. String theorists often complain that Smolin portrays them as arrogantly dismissing any criticism, but they should realize that behavior like Duff’s doesn’t help them at all on this issue, quite the opposite.
Duff pretty obviously has a double standard for popular books about string theory. He’s quite capable of being polite, writing a very respectful review of Susskind’s The Cosmic Landscape for Physics World. His review of Smolin’s book in Nature Physics is something very different, much more like his performance at the debate. The review begins by misquoting Smolin, based on something that was in the proof copy of the book Duff had (which I haven’t had a chance to look at), but was different in the published version. He was informed about this after the review appeared, but still seemed to think it was a good idea to use it as ammunition in his personal attack on Smolin during the debate.
One of his main points was that it is ridiculous to claim that string theory has not made any progress since the 80s. Obviously there are some areas in which there has been progress in better understanding the theory, but as far as the central issue goes, that of getting any predictions out of the idea of using strings to unify physics, it’s interesting to follow the link that someone with a waggish sense of humor at Nature put at the bottom of the page of Duff’s review. It’s a story from 1986 entitled Where Now With Superstrings?, and it reports on the views of string theorists at the time, roughly one year after the early developments that caused so much enthusiasm for string theory as a unified theory. The problem of too many vacua was something people were starting to worry about, but the feeling was that:
… another problem of non-uniqueness in superstring theory, the variety (thousands) of possible four-dimensional worlds it allows, is showing some signs of resolution.
The “progress” on this more than twenty years later is that instead of “thousands”, the number has moved up to the exponent, and we’ve now got the “Landscape” of 10^1000 or so possible four-dimensional worlds. Any “signs of resolution” of this are long vanished. Just as physicists are now waiting for the LHC next year, those of 1986 were waiting for the Tevatron to start up the next year, with Weinberg claiming that the mass range to be explored by the Tevatron was “a very plausible mass for them [superpartners] to have”. The reporter wrote that:
If the Tevatron sees no superparticles, supersymmetry will lose its value in the hierarchy problem, and hence half its motivation.
So, I guess Duff is right that it’s inaccurate to say that things haven’t changed with the prospects for string theory since 1986, since the situation now is a lot worse than it was then.
If you want to listen to the debate, audio is available on-line here, with a transcript to appear shortly. For another kind of audio showing what this is all about, see this posting from Sabine Hossenfelder.
Thanks for the link. You know, what I really don’t understand is why some string theorists apparently deliberately try to make the situation worse for themselves. In my opinion, the smart thing to do is definitely not to argue with their interpretation of the author’s intentions or psychological problems. For the sake of public opinion – which many pretend to be oh-so-concerned about – they could just have turned it into a reasonable discussion about funding in theoretical physics, instead of getting upset about a word here or there.
I agree. One of the weirder aspects of being involved in these arguments is sometimes watching people on the other side engage in self-destructive behavior. I think both Lee and I have often had the experience of feeling that we could do a better job of defending string theory research than many string theorists seem capable of.
Especially in private, most string theorists I know have no trouble admitting that the situation is pretty discouraging for string theory right now. Those who feel this way aren’t likely to agree to review the books or get into a debate on the subject in public, since it’s kind of a no-win situation. String theorists who do take on these assignments are often the most fanatical true believers around, and this is all too obvious. It would be very much in the interest of more sensible string theorists to speak up for themselves, and not let their least convincing colleagues be the ones to publicly represent their field.
Great post. Thanks for the heads up about the debate.
Sometimes folks will just hang themselves with their own rope, with very little to no encouragement from others. This is more common in politics and religion than in science.
I’m an undergraduate—I’ve only been reading this blog for about 6 months or so. It is generally quite informative and enjoyable, and I appreciate your updates (although the overly technical discussions obviously go over my head for the most part, which I expect). I occasionally read most of the blogs that you link to, so I’m familiar with many of the harsher posts, but this latest round following Dr. Smolin’s and Dr. Duff’s debate seems a bit worse than usual—particularly nasty and biting. I guess I’m just wondering to what extent this type of behavior is… “normal”? Is this most common in this particular branch of physics? I guess the claims being made are pretty weighty and the stakes are certainly pretty high in this branch. And finally, for all the little flurries of posts and responses that these things set off in the “blogosphere,” do you think the theoretical physics community at large experiences something similar?
I think you’ll find that academics in general are no better or worse behaved than the population at large. String theory is a bit of a special case: a huge amount of effort has gone into it and a lot of people have a lot invested in it, while it hasn’t worked out as they had hoped. Under the circumstances, lots of people in the field are unhappy with the current situation, but mostly quietly, and behaving reasonably in a difficult situation. Unfortunately, some are reacting with less than reasonable behavior. The blogosphere kind of encourages some of this, but it’s not just a blogosphere phenomenon.
It’s interesting, resurrecting that 1986 story. It shows how some (by no means all) string theory practitioners have transformed a once promising speculative path into a quasi-religious quest. I think Einstein, with his fruitless 30-year quest for a unified field theory, ignoring the wonderful empirical successes of quantum theory along the way, is at least partly responsible for the emergence of this attitude. I think it is no coincidence that Princeton, where Einstein conducted this quest, is today the locus of string theory research. Today’s practitioners of this doomed tradition could do with some of his humour and humility.
I can’t just blame Einstein though, and I often wonder why it is that other great physicists – Newton, Weinberg, Schrödinger – have shown a willingness to become side-tracked into the realms of metaphysics in their later years, albeit to very different degrees in each case. An age-related onset of rigidity of thinking, allied to a diminution of creative powers perhaps?
The problems with string theory unfortunately can’t be blamed on people getting old. That’s a well-known problem in the sciences, and biology has an effective way of dealing with it. Many of those most fanatically devoted to the string theory ideology are quite young, with new converts coming up through the ranks all the time.
Peter Woit said: Especially in private, most string theorists I know have no trouble admitting that the situation is pretty discouraging for string theory right now. Those who feel this way aren’t likely to agree to review the books or get into a debate on the subject in public, since it’s kind of a no-win situation. String theorists who do take on these assignments are often the most fanatical true believers around, and this is all too obvious. It would be very much in the interest of more sensible string theorists to speak up for themselves, and not let their least convincing colleagues be the ones to publicly represent their field.
This is also my perception, and I hope some of the ‘more sensible string theorists’ take your advice.
(Though I suspect that in some cases the stage for a debate is set such that controversy is expected or even hoped for. In a certain sense, it must be boring for the public to hear that in most cases theoretical physicists get along with each other pretty well.)
The problem, I think, is that there are at least three different debates going on simultaneously. The first is a discussion of the merits of string theory. At least speaking for myself, I think this is a perfectly legitimate debate to have. There’s a lot of misinformation out there, but these are generally scientific questions and can be discussed scientifically. I don’t believe I’ve ever personally attacked anyone for raising a criticism of string theory.
Secondly, there is the debate about how to best encourage risk-taking and new ideas in the field of theoretical physics. This is a much more difficult debate to have because I don’t think there are any easy answers.
The real problem, however, is the third “debate”, which is really a series of attacks on string theorists and the string theory community. This is a personal debate and has often led to personal responses. I can’t imagine anyone finds it surprising that people often respond in kind when their reputations as scientists are attacked.
I don’t see any real hope of disentangling these debates.
BTW — let me take this opportunity to give a short version of the response I’d like to give on the issue of the cc over at Asymptotia — I have someone visiting so I don’t have the time to do an extended version. The short story is that you can look at contributions to the cc in SUGRA and nonSUGRA situations:
In the SUGRA case:

1) Bare cc — related to the superpotential
2) Vacuum corrections — related to the SUSY breaking scale
3) Symmetry breaking

In the non-SUGRA case:

1) Bare cc — free parameter
2) Vacuum corrections — divergent, presumably cut off by the Planck scale
3) Symmetry breaking

(and probably stuff I’m forgetting)
Now, given these charts which I hope you agree with, I don’t see how the statement that the cc-problem is worse in susy situations is supportable.
There’s also the question whether string theory is physics or philosophy based on mathematical reasoning.
My observation is that even suggesting the latter seems to be taken as an insult by at least some string theorists.
I don’t really disagree with your charts. The point I keep trying to get across is that, as you note in your chart, in the SUGRA case the scale that appears is the SUSY breaking scale, and that is something well-defined enough for us to actually measure it. We have an experimental bound of around 100 GeV, and if low-energy SUSY exists, maybe the LHC will give us not just a bound, but a number. Problem then is that this number is at least 10^60 times too big, and this is what pretty much everyone describes as the “CC problem”.
In nonSUGRA the situation is just different, there is no such thing as a SUSY breaking scale, so you don’t have this problem. You do have the problem that vacuum corrections are divergent and so the CC is ill-defined, but that’s a different problem than in the SUGRA case. It’s somewhat a matter of taste which is the worse problem, but I think it’s a sustainable point of view that having a theory in which a question is well-posed, but the answer is completely wrong, is worse than having a theory in which the question is ill-posed. Being wrong is a worse thing to happen to a theory than being not even wrong, since at least you can hope that further understanding will move you from not even wrong to maybe right, but wrong is just wrong.
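For what it’s worth, here is the rough dimensional arithmetic behind the 10^60 figure quoted above; the SUSY breaking scale of order 1 TeV is an illustrative assumption (the experimental bound mentioned is around 100 GeV):

```latex
% Naive SUGRA estimate: vacuum energy density set by the SUSY breaking scale
\rho_{\rm vac} \sim M_{\rm SUSY}^4 \sim (1~\mathrm{TeV})^4 = (10^{12}~\mathrm{eV})^4

% Observed dark-energy density
\rho_\Lambda^{\rm obs} \sim (10^{-3}~\mathrm{eV})^4

% Mismatch
\frac{\rho_{\rm vac}}{\rho_\Lambda^{\rm obs}}
  \sim \left(\frac{10^{12}~\mathrm{eV}}{10^{-3}~\mathrm{eV}}\right)^4 = 10^{60}
```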
I don’t see how you can distinguish the divergent computation in nonsusy situations from susy situations; it’s the exact same calculation. You can even send the susy breaking scale to the Planck scale and interpolate between the models.
About the “third debate”. I think most string theorists are careful not to engage in personal attacks, and in particular I think that you’re not someone who has done this, other than at times expressing exasperation with Lee, which isn’t especially unreasonable. But I think it’s highly unfair of you to characterize either Lee or me as being responsible for this situation because we started personally attacking string theorists. I don’t think there’s anything at all in either of our books that could be characterized that way. In his other public statements (as well as in all private discussions I’ve ever had with him), I know of no case where Lee has personally attacked anyone. I’ve by now written thousands of pages about string theory and string theorists on my blog and others, and undoubtedly have, in a small number of cases, out of exasperation, made comments about people that I regret. Recently, I have made very specific accusations about the personal behavior of some people, and those I don’t regret.
Sure, both Lee and I think there is a widespread problem with how string theory research has all too often been conducted. He gives examples of what he sees as problematic in his book, I do so in my blog and in my book. These aren’t “personal attacks” on people, they’re specific complaints about specific behavior and decisions, which we see as coming out of, not personal failings of specific people, but an organization of research that is not working.
This is very different than the wholesale personal abuse both Lee and I have been subjected to. You know very well that I can give you large numbers of examples of this (Lubos Motl and his supporters at Harvard, Susskind, Duff, Peet, Srednicki and the other jeering bozos at the George Johnson talk at the KITP, Jacques, Clifford, “Hmm”, “Michael”, and others). The problem with this “third debate” is real, but it’s not of our making.
There’s also the question whether string theory is physics or philosophy based on mathematical reasoning.
My observation is that even suggesting the latter seems to be taken as an insult by at least some string theorists.
That’s because they have such a dismissive (and not particularly well-informed) attitude towards philosophy. Not surprisingly, string theory is not particularly coherent or well-considered as philosophy.
In contrast, Einstein was a thoughtful, careful, and original philosophical thinker, largely because he saw philosophical thinking as playing an essential role in physics. In particular he saw it as an antidote to ill-considered and uncritical reliance on mathematical formalism in physics. He paid a certain price for this; in 1921 the Nobel Committee looked askance at the philosophical cast of Einstein’s writing on special and general relativity.
I’m a little confused about the big problem with the landscape. The Standard Model appears to have a huge landscape of physically distinct compactified 2+1-dimensional vacua, many with a radius around 10 microns. A tiny intelligent creature living in one of those vacua would be forced to deal with vacuum selection. We would laugh if such a creature tried to explain the mass of his electrons based on some fundamental necessary principle. And since the existence of these 2+1-dimensional vacua depend only on low-energy physics, any theory that contains the Standard Model at low energies would also have to contain this landscape of 2+1 dimensional vacua.
So why is it a surprise that a theory with extra dimensions should have a landscape of vacua? If it must contain a landscape of 2+1-dimensional vacua, why is it a surprise that it should have a landscape of 3+1-dimensional vacua? And if the Standard Model has a landscape, why are we so upset to see other theories with a landscape, especially since any theory that contains the Standard Model will already have to contain its landscape?
I guess I just don’t see the big deal here, but perhaps I’m mistaken!
1) Please be a bit more specific about the Standard Model’s physically distinct compactified 2+1-dimensional vacua.
2) If we were the tiny intelligent creatures in question then we would eventually conclude that the theory is worthless, because our observers and theorists could play this game (until ‘O’ gets fed up):
I understand that you don’t like the landscape. But my point was that the Standard Model has one, whether you like it or not. (Check out a recent paper on the arXiv for this. Distler also discussed it in his blog recently.) This is the vanilla Standard Model, with its tiny cosmological constant and small (but nonzero) neutrino masses. No SUSY, nothing funny. And this landscape contains a near-continuum of 20 micron-sized 2+1-dimensional vacua. And a tiny intelligent creature living in one of these vacua would never be able to find a theory that predicted the values of the couplings and masses he saw, because we know full well that he’s living in just one of many vacua, all of which have different laws of physics.
But since the existence of these vacua is independent of any UV completion of the Standard Model, any other theory that extends the Standard Model at high energies will have to contain this landscape of 2+1-dimensional vacua, and thus, in particular, contain a landscape, whether we like it or not. The only question is whether a UV completion of the Standard Model must contain a landscape of 3+1 dimensional vacua, or if there is a unique 3+1 dimensional vacuum. But that’s a pretty strong assertion! (And would be of little consolation to our tiny 1 micron sized friends.)
The big deal about the landscape is that it’s a framework in which you can’t make any experimentally testable predictions, so it’s not science. Not being science is a big deal. If you want to do science, you have to come up with predictions that can be tested to see if your theory is right. In the landscape framework, you inherently can’t do that.
The Standard Model is a 4d flat space QFT that makes an infinite number of predictions, many of which have been tested to very high accuracy. This is completely different than the landscape, and claiming that it’s all the same is just sophistry. We’ve already had this discussion here, I think a couple of times. I haven’t looked closely at the Arkani-Hamed et al. paper about this that recently came out. Maybe there’s something interesting there. But if the claim is that it shows that there’s no difference between the Standard Model, which is highly predictive and testable, and the landscape, which is completely non-predictive, that is obviously nonsense.
So sorry to keep bothering you about all this, but it came up in the comments, and I thought it would be an interesting discussion!
You should check out their paper. It doesn’t say that the 3+1 dimensional vacuum of the Standard Model isn’t unique. It is, as far as the Standard Model is concerned. The Standard Model predicts a unique 3+1 dimensional vacuum. But the paper does show that there exists a landscape of physically distinct compactified 2+1 dimensional vacua in the Standard Model, many with physics almost the same as in our 3+1 vacuum for objects that are small enough, and intelligent creatures could conceivably live in them, if they were small enough (~ 1 micron).
So what are these creatures supposed to do? They live in a landscape! Are they supposed to say that landscapes are bad and hence spend eternity looking for a reason why their electrons have the precise mass they do? We large creatures would laugh at them!
But the other is that any theory that contains the Standard Model would have to contain this landscape of 2+1 dimensional vacua, since the existence of this landscape depends only on low-energy physics. So there’s no question about theories having a landscape. The only question is whether they have a landscape of 3+1 dimensional vacua. Let’s hope there are no 4+1 dimensional creatures laughing at us!
About SUSY/non-SUSY. Sure, you can put in broken supersymmetry as a regulator of the vacuum energy in a non-SUSY theory, then take it to infinity. But in one case it’s a regulator, in the other it’s a physical, measurable scale. Having a theory with a divergence that you don’t know how to regularize without introducing trouble is bad. Having a theory that makes a robust prediction that is off by a factor of 10^60 is also bad, in a different way. Again, personally I think making a flat-out wrong prediction is worse than not being able to make a prediction. If you feel the other way, fine. Hard to argue about it though, they’re two completely different things. It just seems a bit ridiculous to me when I hear people saying it is an advantage of supersymmetry that you can calculate the vacuum energy in it, when the result comes out absurdly wrong.
It’s really not true when people say that you can’t calculate the vacuum energy in non-susy theories. It’s a free parameter. The cosmological constant is superrenormalizable, and the divergence can be dealt with in the usual manner. Thus, the cc problem is a fine tuning problem, not a problem of an incorrect prediction. In SUSY theories, the only difference is that the renormalization is finite and the bare term is constrained to be related to the superpotential. From the point of view of effective field theory (which is the only way I understand QFT), it all seems the same to me.
Putting things another way, would you agree that SUSY at any scale below the Planck scale helps the fine tuning problem for the Higgs mass?
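A sketch of the effective-field-theory bookkeeping behind this comment, in standard notation (nothing here is specific to this exchange):

```latex
% Observed cosmological constant = bare term + quantum corrections
\rho_\Lambda^{\rm obs} = \rho_{\rm bare} + \delta\rho

% Non-SUSY: \delta\rho \sim \Lambda_{\rm cutoff}^4 (divergent, cut off by hand)
% Broken SUSY: \delta\rho \sim M_{\rm SUSY}^4 (finite)

% Either way, \rho_{\rm bare} must cancel \delta\rho to one part in
% \delta\rho / \rho_\Lambda^{\rm obs}, which is what makes it a
% fine tuning problem rather than an unambiguous prediction.
```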
The paper Mike mentioned is evidently the following:
Quantum Horizons of the Standard Model Landscape
(hep-th/0703067 — Authors: Nima Arkani-Hamed, Sergei Dubovsky, Alberto Nicolis, Giovanni Villadoro)
Sorry, but I still don’t see why I’m supposed to be interested in these 2+1 d compactifications. They have nothing to do with the real world, and I don’t buy the analogy with the string theory landscape. Sure, maybe we’re some random point in some landscape of some 4+1d QFT, or some 10d string theory or whatever. Could be. But if you want to claim that going on about this is doing science, you have to come up with a way of testing what you are doing. If what you are doing inherently can’t lead to a testable prediction, I don’t know what you want to call what you are doing, but it isn’t science. Does the Arkani-Hamed et al. paper suggest any way to test landscape ideas? If it does, let’s hear it. If it doesn’t, any claims it makes about the landscape are not science.
I don’t see the relevance of the philosophical argument about fine-tuning. I’ve repeated endlessly what I see as the difference here. One case involves a physically measurable number (that is wrong), the other doesn’t. If that’s not an important difference to you, fine, just say so. But then we’re operating with different value systems, and aren’t ever going to agree on what is “better” or “worse”.
These 2+1 dimensional vacua appear to exist, in our real world. And any theory that contains the Standard Model should contain this landscape of 2+1 dimensional vacua. That’s a prediction. The Standard Model predicts their existence. So I’m a bit confused about why you’re not interested in them. The upshot is that any theory that contains the Standard Model contains a landscape. Do you disagree?
In both cases, there is not an incorrect prediction. One can tune the cosmological constant by balancing the quantum corrections against the bare term in both the supersymmetric and non-supersymmetric theory. (Or, more properly, in the sugra and nonsugra theory.) The cosmological constant problem is always a fine tuning problem. It’s just whether you are fine tuning a formally divergent quantity (such as the Higgs mass) or a finite quantity (such as the Higgs mass in supersymmetric theories).
Mike, the fundamental issue remains:
In the presence of the string theory landscape what is excluded, and more to the point, what is excluded that isn’t already excluded by the Standard Model? If the answer to the latter question is “nothing” then we have an untestable pseudo-answer to the questions left open by the Standard Model. Some people may find this pseudo-answer appealing, but if it had been known 20+ years ago that this is where we were going to end up, few people would have bothered to continue down this path.
(And for those who take general relativity as a theory of spacetime structure seriously, and understand how deeply this viewpoint on the theory challenges the foundations of quantum theory, string theory and its offshoots have very little to offer—unless some major new insights are achieved into string theory’s bearing on this question. If so I doubt we’ll still be calling it string theory.)
No, these 2+1 dimensional vacua do not exist in our real world. Our real world has 4 large dimensions. You’re talking about some other worlds which have nothing to do with ours. Again, tell me how I am going to learn anything at all about the real world by thinking about these things. If there’s a proposal for this, it might be interesting. If there’s not, I just don’t see any reason to pay attention to this.
These other vacua can be connected to ours by interpolating geometries. They have to be very far away, though, because their opening angles are small. But if the universe is large enough, then they really must be out there. That’s a prediction. Though, obviously hard to test.
These 2+1 dimensional vacua appear to exist, in our real world.
Really? And how would you verify their existence? Let’s assume the “Standard Model predicts their existence,” as documented (allegedly) in hep-th/0703067. How would this prediction be checked? Bear in mind that most predictions of existence are inherently problematic; a failure of attempted verification can always be dismissed on the basis that one didn’t look hard enough. (Question begged: How hard is hard enough?)
Talking about it as a fine-tuning problem, as you say, in one case you are “fine tuning a formally divergent quantity” which isn’t a well-defined thing to do, and you can’t characterize the size of the fine-tuning. In the other you are fine-tuning something you can experimentally measure, and the size of the fine-tuning required is known and huge. If you think this is an acceptable thing to do, fine, do it and there’s no CC problem.
If you think there is a CC problem, then in the second case it has a well-defined size (huge), in the first it doesn’t. I just don’t buy the argument that it’s better to have a huge problem than an ill-defined one.
You’re not telling me how to experimentally see these other vacua. Do Arkani-Hamed et al. claim that they have new, in principle observable, predictions based on the standard model? If so, that would be much more interesting than empty analogies about landscapes. Where in their paper do they make these predictions?
Talking about it as a fine-tuning problem, as you say, in one case you are “fine tuning a formally divergent quantity” which isn’t a well-defined thing to do, and you can’t characterize the size of the fine-tuning.
Of course it is well-defined. How does this situation differ from the Higgs mass? The Higgs mass is quadratically divergent. The hierarchy problem is that the measured Higgs mass is much less than the cutoff scale. The cosmological constant problem is that the measured cosmological constant is much less than the cutoff scale. I don’t see how you’re distinguishing the two cases.
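The parallel being drawn here can be written out side by side; these are the textbook one-loop estimates, with Λ the cutoff scale:

```latex
% Hierarchy problem: quadratically divergent correction to the Higgs mass
\delta m_H^2 \sim \frac{\Lambda^2}{16\pi^2}\,, \qquad m_H^{\rm obs} \ll \Lambda

% CC problem: quartically divergent correction to the vacuum energy
\delta\rho \sim \frac{\Lambda^4}{16\pi^2}\,, \qquad \rho_\Lambda^{\rm obs} \ll \Lambda^4
```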
The difference between the two cases is, as I keep repeating again and again, that in one case you are talking about an experimentally measurable number, in the other case there is no such number (you don’t know what the “cutoff scale” is).
In the case of the hierarchy problem, trying to use supersymmetry to resolve it makes sense since the electroweak symmetry breaking scale may not be too different from the SUSY breaking scale (although experiment is on the way to ruling this out). In the case of the CC, it doesn’t work at all, in a spectacular way.
This has become a complete waste of time. In one case the problem is characterized by an experimentally measured number that we know to be completely of the wrong magnitude, and in the other, no such number exists. Sorry, but this is a difference. Again, If you don’t think it’s relevant, fine. I think it is, but there is no way to “prove” that it is or isn’t.
This all goes back to your complaints that what I wrote in my book about supersymmetry was not accurate. I don’t agree at all. The book gives an accurate characterization of the reasons why supersymmetry breaking is a problem. I’m not going to repeat these here, except to say that getting the completely wrong scale for the CC is one of those problems. This is not some weird idea I came up with, but conventional wisdom in the field.
I’m a little confused. If you have a well-tested theory (and no theory known to humankind is better tested than the Standard Model), and the theory, without modifying it in any way, predicts something that is difficult to observe (in this case, a landscape of 2+1 dimensional vacua, which would appear to us observers as cosmic black strings and other black objects if we found one), are we supposed to reject such phenomena? If the universe is big enough, these things will exist, at least if the Standard Model—with the presently observed c.c. and our best estimates for the neutrino masses—is in fact the correct low-energy description of the universe. Are we supposed to insist otherwise? Unless you modify the Standard Model in some way, then it predicts these phenomena. The Standard Model, as it now stands, predicts the existence of a landscape of vacua.
So, there are two questions here:
1) If the Standard Model predicts the existence of certain phenomena, though they may never be experimentally observed, why insist that they don’t exist and that we shouldn’t think about them? There may be intelligent life out there in the universe that we will never ever observe, but does that mean that they can’t exist? The universe is probably much bigger than the part we will ever be able to see, but does that mean that if a given well-tested theory predicts that it’s bigger, do we just reject that prediction?
2) Why do we live in this noncompact 3+1-dimensional vacuum, when the Standard Model predicts a huge number of other vacua with the same physics all the way up to 20 times the Planck scale, at least for objects smaller than 10 microns? Is there any answer other than the anthropic principle?
3) If the Standard Model predicts the existence of a landscape of other vacua, which can be interpolated smoothly into our own over large spatial distances, and the existence of these vacua depends only on low-energy physics (as is the case), then doesn’t any high-energy extension of the Standard Model have to contain a landscape as well, at least a landscape of 2+1 dimensional vacua? So why criticize a theory for having a landscape?
4) If any such extension must have a landscape at least of 2+1 dimensional vacua, why is it so hard to imagine that there is a landscape of 3+1 dimensional vacua as well? Doesn’t the onus now fall on people to prove that it doesn’t happen? Just because it hurts predictivity somewhat, does that mean nature doesn’t work that way? I wish I could predict all the species of animals on the earth today, but I can’t. It’s a bunch of historical accidents. Nonetheless, zoology and paleontology are still sciences, because we can make predictions about a limited class of phenomena.
By “two questions” I meant four. I can’t count—so sue me!
I think you’re confusing theory and experiment. The thing is, the standard model is a hugely successful phenomenological theory of particle-particle interactions, but, at the end of the day, it is still just a model. To believe that anything you can derive from it is absolute truth implies an implicit faith that the model is completely and absolutely correct. That, I believe, is a mistake, because to ascribe reality to a theoretical artifact is not really the way physics should be done; physics should be based on real experimental observations.
You make a lot of demands. Quantum mechanics predicts a lot of stuff that has been experimentally tested and verified. So does GR. But they both also predict a lot of strange stuff—well within the physical regimes (energy regimes, length scales, etc.) in which they are known to be valid—that we may never be able to see, and that at least we are not guaranteed to be able to rule out. Are we simply to insist that any such predictions simply don’t exist? Do things only exist if they can definitely be observed, even if an astoundingly accurate model predicts that they should exist and might, but are not guaranteed to, be observable some day? It’s like a fortune teller who predicts everything perfectly, but also starts telling you about things that you can’t be guaranteed to check with your own eyes.
I’m reminded of what Feynman used to say about all this. He asked why nature should care what we human beings liked. The behavior of nature simply isn’t up to any of us human beings. All we can do is try to figure it out and not be too prejudiced.
Dear Mike C.,
I tried to have a look at the first pages of the paper you suggest, and it seems that what they find is just that the SM compactified to 2d can have one or a few vacua plus a quasi-flat direction, i.e. some light scalar. That vacuum arises because neutrino masses are comparable to the cosmological constant, and likely this is an accident that has nothing to do with the string landscape. I don’t understand why they choose the name “landscape” for this simple situation. Probably their little 2d animals would have some fun in understanding what goes on, but I fail to see why we 3d animals should get interested in it.
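[Editor’s aside: the numerical coincidence this comment refers to can be made explicit with standard ballpark values for the observed dark-energy scale and the neutrino mass splittings; these figures are not taken from the paper under discussion.]

```latex
% Observed dark-energy density, expressed as an energy scale:
\rho_\Lambda^{\rm obs} \approx \left(2.3\times 10^{-3}\ \mathrm{eV}\right)^4 .
% Neutrino mass scales inferred from oscillation data:
\sqrt{\Delta m^2_{21}} \approx 9\times 10^{-3}\ \mathrm{eV}, \qquad
\sqrt{\lvert\Delta m^2_{31}\rvert} \approx 5\times 10^{-2}\ \mathrm{eV}.
```

Because the lightest neutrino masses sit within an order of magnitude or two of $\rho_\Lambda^{1/4}$, the Casimir energies of light fields on a small compactification circle can compete with the four-dimensional cosmological term in the radion potential, and that is the accident that allows lower-dimensional vacua to appear at all.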
So is this whole debate really about the demand that no prediction of a theory be believed—no matter how well-tested the theory—unless that prediction is guaranteed to be experimentally checkable? I suppose that’s a demand you can put on science, but it seems rather restrictive. Who are we to demand that nature behave that way?
I guess the reason I disagree with that argument is that any theory of nature that addresses really, really high-energy questions is going to make lots of predictions that we cannot be assured of being able to experimentally verify, hopefully along with many predictions that we can be guaranteed to be able to verify so that we can decide if the model is good. But the ratio of the first kind of prediction to the second kind is inevitably going to get bigger and bigger as we approach greater and greater physical extremes. That seems unavoidable. And so should we always pretend that predictions of the first kind—those that may be testable but are not guaranteed to be—just don’t exist?
An example. Suppose just for the sake of argument that string theory is correct. Suppose that it suddenly makes a huge number of extremely nontrivial predictions that we know we can test, and we test them and they turn out correct. But suppose that there’s no idea how to do an experiment that would literally let us see these strings floating around. (It might be possible, but it’s not assured that such an experiment is possible, say.) Do we then insist in this case that there are really no strings?
I don’t see how we have an experimentally determined quantity anywhere here except for the actual value of the cc (or, analogously, the Higgs mass). The SUSY breaking scale gives us a cutoff with respect to which we can express the fine tuning of the cc. We don’t know what the SUSY breaking scale is. It could be at the TeV scale (which would be nice), but it could be elsewhere. It could be at the Planck scale. And, if there were no SUSY at all, we’d still have a cutoff at the Planck scale. Are you claiming that these two cutoffs should be thought of differently? They’re both experimentally measurable, either through the detection of supersymmetric partners, or quantum gravitational effects.
Are you saying that because we don’t know the physics beyond the QG cutoff, there could be some mechanism that naturally accomplishes the needed fine tuning? If so, I don’t see why you couldn’t make a similar argument for the SUSY-cutoff cc. If we don’t know, we don’t know, SUSY or not. But from the point of view of effective field theory, I don’t think it matters.
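[Editor’s aside: the fine-tuning being argued over can be quantified with standard textbook estimates, not figures supplied by either commenter. In effective field theory the natural expectation is a vacuum energy of order the fourth power of the cutoff:]

```latex
% Naive EFT estimate of the vacuum energy density for a cutoff M:
\rho_\Lambda^{\rm theory} \sim M^4 .
% Observed value versus the two candidate cutoffs in the discussion:
\rho_\Lambda^{\rm obs} \sim \left(10^{-3}\ \mathrm{eV}\right)^4, \qquad
\frac{M_{\rm SUSY}^4}{\rho_\Lambda^{\rm obs}} \sim 10^{60}
\ \ \left(M_{\rm SUSY}\sim 1\ \mathrm{TeV}\right), \qquad
\frac{M_{\rm Pl}^4}{\rho_\Lambda^{\rm obs}} \sim 10^{120}.
```

Either way the mismatch is enormous; the disagreement above is over whether the two cutoffs are on the same conceptual footing, not over these numbers.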
You are claiming that Arkani-Hamed et al. have discovered that the Standard Model implies the existence of “cosmic black strings and other black objects”. What exactly are their properties, and how could we produce them? Where in their paper do they make this claim? I have trouble believing this. The SM is not a quantum gravitational theory, and it appears to me that you would need to calculate topology-changing transition amplitudes, but lack a theory where such a calculation makes sense.
If string theory makes a lot of testable predictions, and they’re correct, the theory is verified, and this has nothing to do with whether you can “see strings”. The problem with string theory has nothing to do with conceptual difficulties about whether or not you can “see strings”. It’s fine if the strings are not themselves observable. But a theory has to make predictions of some kind, and string theory just doesn’t.
SM + GR tells you nothing at all about how this “Planck scale” cutoff is supposed to work and you have no hope of doing experiments to see what happens. SUGRA gives a very precise understanding of what happens at the SUSY breaking scale, and people are spending billions of dollars to go out and measure this. These are just different situations. Again, for the N’th time, having a theory in which the question is well-posed is different than having one where it isn’t.
The best analogy I can think of is that in the Standard Model the choice of 3+1 large flat dimensions is an experimental input/theoretical assumption/whatever – asking “why 3+1 dimensions?” in the context of the SM is ill-posed (and maybe even in SM + GR).
It is in string theory that the question of the number of experimentally observable dimensions comes up, and in a nasty way, because the theoretical answer is different.
Mike C., the physicists in the 2+1 dimensions in the (SM+GR) world don’t have these interesting theological debates, because in their world, (2+1) quantum gravity is tractable, and so the biggest single motivation for string theory (that it retrodicts gravity) is simply not there.
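[Editor’s aside: the claim that (2+1) quantum gravity is tractable follows from a standard counting argument, added here for context. In three spacetime dimensions the Weyl tensor vanishes identically, so the Riemann tensor is fully determined by the Ricci tensor and the theory has no local propagating degrees of freedom:]

```latex
% Independent components of the Riemann tensor in d dimensions:
N_{\rm Riem}(d) = \frac{d^2(d^2-1)}{12}, \qquad N_{\rm Riem}(3) = 6 .
% Independent components of the symmetric Ricci tensor:
N_{\rm Ric}(d) = \frac{d(d+1)}{2}, \qquad N_{\rm Ric}(3) = 6 .
% Equal counts in d=3: Riemann is fixed by Ricci, the Weyl tensor
% vanishes, and Einstein's equations leave no local degrees of
% freedom -- no gravitons, so (2+1)-gravity is essentially topological.
```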
Assume tomorrow someone comes up with a Rube Goldberg compactification of string theory, with a lifetime of, say, about 10^10 Gyr, which fits everything we know about particle physics and cosmology: how’s that different from Newton’s “hypotheses non fingo” about the 1/r^2 dependence of the gravitational force? It works, and this is all that matters… I’m not saying it’s likely (as a matter of fact, I don’t think it is), I’m just saying I see no reason to get worked up over the landscape issue.
On the other hand, it is very likely that string theory compactifications cannot yield all conceivable quantum field theories at low energies, but only a “small” subset of them (e.g. if something like the “gravity as the weakest force” conjecture holds). Should one prove a rigorous theorem about this and find this subset experimentally ruled out, string theory would be falsified.
To sum up: string theory is undoubtedly way overhyped and has probably slowed down the progress of more down-to-earth theoretical physics by attracting many smart guys, but I don’t find compelling reasons for saying it is not still worth a shot, especially given the fact that the nonzero CC makes the landscape picture a lot less unreasonable.
The problem with the Rube Goldberg compactifications is that they are insufficiently rigid to be predictive. It’s not just a matter of finding one that fits what we already know, you have to find one that does that and is rigid enough to make falsifiable predictions. If the failure of every “prediction” can be fixed by going to a slightly different compactification, all you’re doing is coming up with a really ugly way of parametrizing experimental results.
As for the “Swampland”, the problem is two-fold. All the things that people are looking at as supposedly not able to come out of string theory already don’t look like the real world, so you can’t use this to falsify string theory. In addition, swampland proponents have this problem that sometimes when they announce that something can’t come from string theory, experts write into their blog explaining to them how to do it. “You can’t get X from string theory compactifications” often just means that no one has tried hard enough to do so. Yet another addition to the Rube Goldberg chain may do the trick. As long as you don’t know what non-perturbative string theory is, your arguments that “you can’t get that from string theory” are going to be dubious.