You may have seen by now claims from various sources about evidence for SUSY coming from CMS, for instance Hints of New Physics Crop Up at LHC, A Lifeline for Supersymmetry?, and CMS sees SUSY-like trilepton excesses. This nonsense is all due to Matt Strassler, who for some reason thought it was a good idea to post a blog entry Something Curious at the Large Hadron Collider that starts off:
Finally, something at the Large Hadron Collider (LHC) that does not seem to agree that well with the predictions of the equations of the Standard Model of particle physics.
followed by various caveats, which nevertheless include the advice:
But this is clearly something to watch closely over the coming months.
As one could easily have predicted, this got picked up by the media and various blogs, mostly dropping the caveats. In a later more detailed posting, Matt carefully explains, three times in italicized red, that “The excess will probably disappear”. He does continue to claim that “particle physicists are paying close attention” to this statistically insignificant discrepancy between data and theory, something I suspect was true before his blog posting for an equally statistically insignificant number of particle physicists.
During this past week or so, there has been a lot of news about SUSY at the LHC, all of it bad. For some background, one should look at Mike Peskin’s write-up of his summary talk at LP2011, which he posted last week to the arXiv. See pages 37-41 for his discussion of the state of SUSY. He explains why one would expect all SUSY mass terms to be of the order of a few hundred GeV, with the Tevatron bounds on gluino and squark masses (around 300 GeV) already making one suspicious. Similar LHC bounds are already around 1000 GeV, getting close to the limit (around 1200 GeV) of what can be produced at current beam energy. When the LHC comes back on-line with higher beam energy in 2014, these bounds should go up to 2000 GeV or more. Much has been and is being made of the fact that one can find SUSY models that evade these bounds, with LHC results then giving lower limits in the range 500 GeV and above.
As the LHC experiments become sensitive to hypothetical new particles with TeV masses, we are reminded of the phrase from the Latin Requiem Mass:
Confutatis maledictis, flammis acribus addictis, voca me cum benedictus.
A loose translation is: Thousands of theory papers are being tossed into the furnace. Please, Lord, not mine!
Before the startup of the LHC, I expected early discovery of events with the jets + missing transverse energy signature of supersymmetry. It did not happen. A particularly striking comparison is shown in Fig. 33. On the left I show the expectation given in 2008 by De Roeck, Ellis, and their collaborators for the preferred region of the parameter space of the constrained Minimal Supersymmetric Standard Model (the cMSSM, also known as mSUGRA). The red region is the 95% confidence expectation. On the right, I show the 95% confidence excluded region from one of the many supersymmetry search analyses presented by CMS at LP11. No reasonable person could view these figures together without concluding that we need to change our perspective.
Peskin goes on to argue though that the thing to do is not to abandon SUSY since it hasn’t shown up where it was supposed to, but to “acknowledge that, to test SUSY, we must search over the full parameter space of the model”. The obvious problem with this is that the “full parameter space of the model” is huge, containing all sorts of corners that will never be accessible to the LHC, or that can be made arbitrarily difficult to rule out, requiring intensive effort from LHC experimenters for decades to come.
For details on what has been going on, various recent sources to consult include Anyes Taffard’s FNAL talk on ATLAS SUSY searches (“SUSY was NOT ‘just around the corner’ … must be hiding well … Or may be … need to go back to the drawing board”) and the many talks at the Berkeley Workshop on Searches for Supersymmetry at the LHC, which featured a huge array of negative SUSY results, including the one that for some reason got Matt so excited. Besides the kinds of models that Peskin expected to see at the LHC, lots of other more obscure ones are being ruled out by new LHC analyses. These include some that had gotten a lot of popular attention, such as split supersymmetry and F-theory models. These predicted things like long-lived gluinos or staus, which have now been searched for and ruled out in regions where they were supposed to show up. For example see here for more about F-theory and the stable staus, which CMS now says are not where they were supposed to be (below 300 GeV).
For some other recent news, see the talks at the BNL conference running the past couple days, A First Glimpse of the Tera Scale.
Finally, for the best in recent HEP news, see this from Warren Siegel.
I would very much appreciate your thoughts in response to my comment (#60) over at Sean’s post. My comment was in response to your response (#59) to an earlier comment I had made (#19). Here is the link to Sean’s post:
You’ll also find a comment (#62) from “somebody” (not me, I assure you) and I am also very interested in your response to him/her. Thank you!
I don’t think it is absurd to ask for SUSY to be tested over a larger parameter space, since, who knows, it could be there, and Nature does not necessarily care about our experimental difficulties. What I would like to see, however, is how likely some regions of the SUSY parameter space are with respect to its motivations, most notably the “solution” of the hierarchy problem. Some interesting plots would show different regions of the SUSY space with a calculation of the percentage of extra fine-tuning one needs to “solve” the hierarchy problem. I guess one could go on searching for SUSY forever, but the community would slowly start to understand it is probably not there if the remaining parameter space to be searched were an already theoretically disfavored one.
Phil, of course Peter knows better but I can tell you my opinion about your comment i.e. how QFT is different from String theory in that respect.
QFT is not a theory of everything and thus it doesn’t have to explain everything e.g. the values of the various constants it contains. It waits for a more fundamental theory to explain these things i.e. to explain QFT, GR and the Standard model itself.
ST on the other hand claims that it is a TOE and consequently must explain everything. Specifically it must certainly explain why we live in this particular low energy effective world with this particle/force content and with these constants.
Now if the String vacua picture is true then ST obviously can’t do that. In this picture there is no reason whatsoever why, from this enormous number of vacua, only our vacuum, with these particular properties, is realized. So far nothing indicates that there are probability distributions highly peaked over our kind of vacuum.
Faced with this problem, string theory must admit its failure, or else claim that indeed all these vacua can be realized, using eternal inflation as the mechanism to populate them. This is how the multiverse picture emerges.
So Peter’s objection (as I understand it at least) is that ST, instead of admitting its inability to predict anything regarding our world (and thus admitting its failure as a TOE), justifies this inability by adopting the multiverse paradigm. In this perception, the situation resembles a drowning man clutching at a straw.
I had not seen Warren’s parodies – still rolling around on the floor trying to catch my breath – he should team up with Weird Al Yankovic and make some videos.
Your comments in regards to string theory basically reflect the exact same attitude that many physicists had towards QFT in the 1950’s and 60’s. It is a myopic way of looking at the real situation as it is today. Just because our current understanding of string theory is limited doesn’t mean that the theory itself is limited.
Bernhard: Any supersymmetry parameter space with superpartners less than around 2 TeV can naturally solve the hierarchy problem. The present experimental limits are not even close to this.
I would like to see this for the future. At 14 TeV we will eventually get there, and at some point the amount of fine-tuning will start to get troubling (do you agree?). I’m not sure when this will start to happen for you; my understanding is that what comes out is already worrying. So in any case, I would like to see a more detailed prediction on this, so that when we get there I know exactly where we stand in terms of the likelihood of SUSY being correct.
Isn’t it true that theorists haven’t been able to use QCD to calculate the proton’s mass? From my understanding, this is due to the failure of perturbative QCD and only non-perturbative QCD (which we don’t know very well) can allow us to calculate the proton’s mass. Isn’t this similar to our lack of knowledge of string theory non-perturbatively?
Maybe if we understand string theory non-perturbatively, we will be able to find our low energy universe in the theory and rule out the rest of the landscape? But we don’t know if such a thing is possible, hence the continuing research in string theory.
Also, one can argue that string theory is NOT a TOE because it is only a perturbative theory of strings that postulates what the degrees of freedom are beyond the standard model and towards the Planck scale. Perhaps M-theory, or whatever the nonperturbative formulation of string theory is, is the real TOE.
If QCD is the theory of the strong interactions, why haven’t we been able to use the theory to calculate the proton mass? Because we don’t understand the theory very well non-perturbatively. Perturbative QCD, like perturbative string theory, cannot explain everything (i.e. the proton mass), but a more fundamental formulation (i.e. non-perturbative QCD) can. The same may be true with string theory.
Phil, I thought that lattice QCD could calculate the proton mass from the masses of the up and down quarks, and they have values for those masses.
If superpartners are not observed below ~ 2 TeV, then supersymmetry cannot solve the hierarchy problem. Thus, a major motivation for the expectation of observing supersymmetry at the LHC would go away. At the moment, we are not there. It will be a few years into the second LHC run before this scenario might be realized. However, I think it is probable that signals of the superpartners will show up by the end of next year, if they haven’t already, e.g. in the observed trilepton excesses. Also, I think it is likely that in the next few months, the first evidence for the Higgs will be announced, and it will in fact be in the region favored by the MSSM.
QCD does have a precise non-perturbative definition, there is a lot you can say based on it, and this all agrees precisely with experiment.
Trying to say that the QFT framework which is the most predictive and successful theoretical framework ever developed by human beings is the same as the completely unpredictive string theory framework is really absurd. What you’re doing is saying that black is just like white since they’re both shades of gray.
If you look at the Peskin paper I linked to, you’ll see that his argument is that for SUSY to solve the hierarchy problem it should have superpartners at the mass scale of 100s of GeV, not 2 TeV.
Siegel’s frog parody is funny, but it’s impossible because the total energy of the protons that ever collided in the Tevatron is far less than the rest mass energy of a frog.
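A rough back-of-the-envelope check of this claim; all the inputs are assumed round figures (~10 fb⁻¹ of integrated luminosity, a ~60 mb inelastic cross-section, 1.96 TeV per collision, a ~20 g frog), not precise experimental values:

```python
# Compare the total collision energy ever delivered at the Tevatron
# with the rest-mass energy of a frog. All inputs are assumed round numbers.

C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per eV

lumi_fb = 10.0       # assumed integrated luminosity, fb^-1
sigma_mb = 60.0      # assumed inelastic cross-section, mb

# 1 mb = 1e12 fb, so N = L * sigma gives the number of collisions
n_collisions = lumi_fb * sigma_mb * 1e12        # ~6e14 collisions

e_per_collision_j = 1.96e12 * EV                # 1.96 TeV center-of-mass energy
total_collision_j = n_collisions * e_per_collision_j   # ~2e8 J

frog_rest_j = 0.02 * C**2                       # ~20 g frog, E = m c^2, ~2e15 J

print(total_collision_j, frog_rest_j)
```

With these assumed inputs the total collision energy comes out around seven orders of magnitude below the frog's rest energy, consistent with the comment above.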
If you read Peskin’s paper carefully, you will see that what he actually says is that the simplest possibility is for the superpartner masses to be in the few hundred GeV range, not that this is where they have to be in order to solve the hierarchy problem. The superpartners can be heavier than this, and masses from ~100 GeV to 2 TeV can do this.
There are many possible quantum field theories, corresponding to the many possible choices one can make for the underlying gauge groups. Similarly, there are many possible perturbative string theories, corresponding to the many possible ways of compactifying on Calabi-Yau manifolds, etc. So what is the difference between these two?
The only difference I see is that we are able to perform experiments that show us the way towards the gauge groups of the standard model, but we do not yet have the technology to conduct experiments that show us what physics looks like far beyond the standard model or near the Planck scale. When string theorists get better at building stringy models that match the standard model they will have many models that not only contain the standard model, but also have different possibilities for what lies beyond the standard model and near the Planck scale. When or if our technology improves to the point which allows us to conduct much higher energy experiments, nature will guide us to the correct model, just like how nature guided us to the correct SU(3)XSU(2)XU(1) quantum field theory after we performed lots of particle physics experiments.
Do you agree with the above?
If it is shown that no stringy model contains the standard model (which hasn’t been done yet) or the results of beyond-standard-model experiments, then string theory will be shown to be wrong. If such a thing is not shown, then string theorists will keep on looking.
OK then, bets are on :-).
As for the Higgs, I think we would already be hearing rumors by now if it were true and things are really quiet, but who knows…
I was so fascinated by your latin quote that I used Google’s translation tool.
I think you did better. But then I assume Latin is merely like another Italian dialect to you.
Shame about SUSY. I have always been a great fan.
PS I did find this official translation, although I still prefer yours
Bernard said, “As for the Higgs, I think we would already be hearing rumors by now if it were true and things are really quiet, but who knows…”
Referring to the Higgs, Tommaso commented on his blog a couple of days ago that “… CMS and ATLAS experiments are now seeing an excess exactly of the right size at the most probable mass”.
So let’s wait and see if this turns out to be a hint of a true signal…
Really?? OK, I missed that. But as you said, let’s wait and see.
You’re trying to argue that black is just like white by starting with “first assume that black somehow turns into light grey..”. This is a waste of time.
Why 2 TeV? Why not 4, 10, 100?
The translation is not mine, but Peskin’s.
You can find this calculation in any QFT book which discusses supersymmetry. It is a simple calculation of the loop corrections to the Higgs mass. For exact supersymmetry, the correction is zero. As the mass difference between the SM particles and their superpartners gets bigger, the corrections to the Higgs mass get bigger. At some point (~1-2 TeV), the Higgs mass becomes unstable. I would suggest actually performing the calculation yourself so that you actually know what you are talking about for once.
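For readers following along, the dominant piece of the calculation being referred to (the top/stop loop) is usually quoted in roughly this form; the precise coefficient and logarithm depend on conventions and on the soft-breaking spectrum, so treat this as a schematic sketch rather than the exact textbook expression:

```latex
% Schematic one-loop top/stop correction to the Higgs mass-squared.
% y_t: top Yukawa coupling, m_{\tilde t}: stop mass, \Lambda: cutoff scale.
\delta m_H^2 \;\sim\; \frac{3 y_t^2}{8\pi^2}\, m_{\tilde t}^2 \,
  \ln\!\left(\frac{\Lambda^2}{m_{\tilde t}^2}\right)
```

The quadratically divergent pieces cancel between the top and stop loops; what survives grows with the stop mass, which is why demanding that this correction stay near the electroweak scale without large cancellations points to superpartners below roughly 1-2 TeV, as stated above.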
I recently started reading your blog, thanks, it’s very interesting.
Do you think it is possible that theorists will not recognize the failure of SUSY and will simply close their eyes to the LHC results?
False theories have arisen before, but too much has been invested in SUSY: academic degrees, thousands of papers, grants, etc. Will the scientific community be able to recognize the crash of SUSY, strings and other “great theories”?
Sorry for my English, I am from Russia.
“You’re trying to argue that black is just like white by starting with ‘first assume that black somehow turns into light grey..’. This is a waste of time.”
Peter, I really don’t understand your point here. Can you please clarify?
You say QFT is predictive. Does QFT predict the SU(3) X SU(2) X U(1) group structure as the one that describes our world? No. Experiments were needed to elucidate this fact about our low energy world. Likewise, experiments are needed to elucidate the correct compactification for the extra dimensions, assuming extra dimensions and string theory describe our world. Again, experiments are needed to determine whether or not string theory and compactified extra dimensions describe our world, just as experiments were necessary to show us that group theory and the formalism of quantum field theory describe our world.
So, in that sense, QFT is just as unpredictive as perturbative string theory.
Do you agree with this? Why or why not, and please, for the purposes of enlightening and persuading me to adopt your point of view, refrain from using the “black”, “white”, and “grey” “argument”, since I have absolutely no idea what any of that means. Thanks!!
The point is that simple groups like SU(3), SU(2) and U(1) are the very first thing one would think of. They are quite the simplest thing to try. Calabi-Yau compactifications, however, are mind-numbingly complicated, and there appear to be so many of them that if one were ever found that reduces to the standard model in the low energy limit, it would be hard to claim that that is a “prediction” of string theory. This is why string theorists are reduced to nonsense like the landscape, and spurious statistical arguments based on it.
QFT as an effective theory cannot predict a bunch of things, like masses. But once you use it there is a clear predictive power to it: once you apply, e.g., the rules for renormalizable gauge theories, you are able to compare the model you built with experiment. The models based on QFTs are each very predictive separately, which is why we can start discussing which models will be excluded at the LHC. That is in this sense a big difference between SUSY and string theory (ST): even if SUSY was born inspired by ST, we can really discover or rule out SUSY. That’s very exciting and interesting. ST has nothing even remotely close to that. There is no way you could use ST to make different models as predictive as QFT, and there have been no advances on how to solve this in recent years. Even if you had a gigantic accelerator there would be no way you could say “right, that’s the correct ST model”, because there are no precise quantitative predictions to compare to. QFT is a powerful framework; ST, on the other hand, is… well, I’m not sure what it is.
ha! and Bernard,
Thank you very much for your helpful responses! I’m not a string theory expert by any means (my field is in a different area of physics), and I’m pretty neutral about all this, but I just want the straight dope about all this. Here are my responses.
ha!, True the simplest groups turned out to be the right ones, but isn’t it true that Nature doesn’t care what is simple or not? Her laws are what they are. If Nature cared about simplicity, then why don’t we simply live in a “billiard ball” universe controlled only by Newton’s laws of motion?
So are you saying that, as of now, if a string theorist produced a particular compactified calabi-yau manifold, it wouldn’t lead to definite predictions for what our low energy universe should look like according to the model (and, therefore, to see if it matches the standard model), or what physics beyond our low energy universe should look like? How come? I always assumed that once you choose the parameters for the calabi-yau, you will get a definite theory of particle interactions by using the rules of string interactions. Are there no definite rules for string interactions? I always thought strings join together and break apart and that there were clear expressions for amplitudes for these processes. Why is building a string theory model, based on a particular calabi-yau manifold, capable of telling us what the particle interactions at different scales should look like, such a hard problem?
In my advanced old age, I have trouble performing loop calculations in the MSSM, especially keeping track of the extra 105 undetermined parameters. And then, I never know whether I should really be using the NMSSM or some other extension of the MSSM, and sometimes I’m not sure if I need to go to higher loops or not. So, maybe you can help me out by pointing to where someone has done this calculation. And, by the way, is it 1 TeV or 2 TeV? If it’s 1, the subject is just about done.
Theorists aren’t closing their eyes to LHC results, they’re paying close attention. Right now, the question is, what do you do if these results rule out the generic picture of how SUSY appears that you had been advertising for a long time as the one to be expected? One attitude (mine) would be that there is already a long list of reasons not to expect SUSY, so this should finish the subject off as a popular one to work on. Lacking a better idea, experimentalists might want to keep doing searches for SUSY variants, but should be under no illusion that they’re likely to see anything.
If you have been spending the last 20-30 years of your professional career investing your time in developing expertise in the ins and outs of the intricate details of SUSY models, or you have loudly advertised SUSY as an implication of another subject you are deeply invested in (e.g. string theory), you might not be willing to give up so easily. In this case you would start going on about all the special cases of SUSY you could think of that might be such as to evade the current LHC limits. There may be some limit though to how long you can keep doing this, and still have people pay any attention to you.
I am always surprised when people say that Supersymmetry was born inspired by String Theory (as Bernhard says). Miyazawa, in papers from 1966 and 1968, introduced Supersymmetry a few years before the superstring hype. Of course no strings are mentioned in Miyazawa papers. The superstring was introduced in 1971.
I see that part of the problem here is that you don’t actually understand what is involved in constructing a realistic string theory model. To start with: you can’t just “pick a Calabi-Yau and calculate”. Calabi-Yau’s come in a very large number (possibly infinite) of families, each family locally parametrized by a space of high dimension. If you pick a point in such a space, that does pick out a Calabi-Yau, but you basically know almost nothing about its metric, and that’s the first thing you would need to start doing calculations. Oh, and this actually is only approximately true: you really don’t want Ricci-flatness, but a more complicated condition with higher order terms.
Let’s say you actually could do this. Then you’d have a theory that was clearly wrong, it would have the “moduli” problem. The parameters defining your Calabi-Yau would behave like massless fields, giving you lots of long-range forces of gravitational strength, violating the equivalence principle. So, you need to add a bunch of non-perturbative structures in by hand to “stabilize moduli”. Take a look at the KKLT construction which is supposed to do this, which even its advocates refer to as a “Rube Goldberg construction”.
Let’s say you’ve done all this in some specific case. Then you have the problem that vacuum energy is completely wrong. Etc, etc, etc. What’s going on here is nothing at all like QFT, where you can just sit down and calculate. Here anything you can calculate comes out wrong, so you are forced into more and more complicated constructions, just to avoid being ruled out by experiment. This is black where doing the calculation and getting an answer that agrees with the real world to 10 decimals is white. String theorists who go on about how they really can calculate things in principle, there are just some minor technical problems are being intentionally misleading.
may c j,
Miyazawa proposed a proto-SUSY (meson-baryon) symmetry, but as far as I know it was not exactly the same SUSY (fermion-boson) that we have today. But in any case, I’m fine with giving credit to Miyazawa; it makes no difference to me.
Once again, you are throwing in spurious arguments that have nothing to do with the actual point. I presume you are doing so with the intent of being deliberately misleading. Whatever extra parameters (SUSY soft terms) there are in the MSSM, they have no relevance in regards to the loop corrections to the Higgs mass. The loop corrections are only sensitive to the scale of the mass splittings between the SM particles and their superpartners, not on the detailed superpartner spectra which are determined by the soft terms.
Just to make what Peter is talking about more explicit, here is a reference http://www.springerlink.com/content/2845m53jmpw5754h/fulltext.pdf , where the authors first considered a general case with an arbitrary Kahler metric to obtain some general expressions, but then picked a particular Calabi-Yau and looked at stabilizing a subclass of the moduli (so-called Kahler moduli): the scalar fields that describe the deformations of the internal metric that correspond to the volumes of two- and four-cycles and also control the overall volume of the compactification.

You don’t have to look through all the details, but I would like to draw your attention to the expressions in eq. 3.24. Here \tau_i describe the volumes of four-cycles, t_i describe the volumes of two-cycles and V_X is the volume of the Calabi-Yau manifold used in this example. If you examine eq. 3.24 you will notice immediately that all these volumes are controlled by a single parameter, denoted by \tau_D, while all the numerical coefficients are completely fixed by the topology of the Calabi-Yau. In Type IIB compactifications, Kahler moduli control the values of gauge couplings; e.g. the value of the unified gauge coupling \alpha_GUT=1/25 is literally equal to the inverse volume of the four-cycle wrapped by the visible sector D7-brane stack, so if you assume standard gauge coupling unification and disregard the small threshold corrections, the corresponding volume \tau_GUT=25.

Now, if you have a realistic compactification, you could use this as input to determine the value of \tau_D and therefore automatically determine the numerical values of all the t_i, \tau_i as well as the overall volume V_X. So, once you pick a Calabi-Yau, your model would be extremely predictive, because you would be able to immediately constrain all the physical quantities whose values are controlled by Kahler moduli.
People are quick to forget that LEP took a BIG bite out of supersymmetry pushing it to an odd corner of parameter space for survival. The LHC is only mop-up.
Phil/Eric, if you are so sanguine about supersymmetry/string theory, can you tell me what it predicts for theta_13? This is a number we shall measure very accurately in the next few years.
“Phil/Eric, if you are so sanguine about supersymmetry/string theory can you tell me what it predicts for theta_13?”
Non-zero. Because SUSY etc. doesn’t have any symmetry principle that requires it to be zero.
A frog genome weighs roughly 5pg. LHC collisions can accumulate this rest mass in less than an hour, I guess.
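A quick sanity check of this estimate; the collision rate is an assumed round figure of order 10⁷ inelastic collisions per second, and the per-collision energy follows the 7 TeV running discussed above:

```python
# How long would the LHC need to accumulate the rest-mass energy of a
# frog genome (~5 pg) in collision energy? Assumed round numbers throughout.

C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per eV

genome_kg = 5e-15                 # ~5 pg frog genome
target_j = genome_kg * C**2       # E = m c^2, roughly 450 J

e_per_collision_j = 7e12 * EV     # 7 TeV per collision (2011 running)
rate_hz = 2e7                     # assumed collision rate, ~2e7 per second

n_needed = target_j / e_per_collision_j   # roughly 4e8 collisions
time_s = n_needed / rate_hz               # well under an hour

print(time_s)
```

With these assumptions the answer comes out on the order of tens of seconds, so "less than an hour" holds with plenty of margin even if the assumed rate is off by an order of magnitude.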
If you read my comment carefully you will have noticed that I am not sanguine about string theory. Like I said, I’m no expert in string theory (my field, though in physics, is something completely different). I am merely after the truth about string theory and its prospects and how the situation with string theory’s predictive power (or lack thereof) is different from quantum field theory’s predictive power. No bias, just the “straight dope”. So, in answer to your question, I have absolutely no idea. I’m not even so sure of what a theta angle is. I think it’s something to do with neutrino masses.
I want a prediction for its value with error bars. We sort of already know it’s non-zero (I think 3 sigma from T2K results). If string theory is a theory of everything it should be able to predict its value.
I think string theory doesn’t require it to be 42 either.
That’s my prediction: it’s not 42.
If SUSY goes, does that imply that inflation has to go too ?
SUSY and inflation have pretty much nothing at all to do with each other.
Peter, what do you think of this: Natural SUSY Endures, arXiv:1110.6926? One of the points it makes is that one must distinguish between sparticles which must be low-mass in order for the theory to be ‘natural’, and others which can be as heavy as they like.
Was thinking of writing a post about this, not sure if I’ll have time. It’s interesting to see that people are focusing on this one sort of scenario, kind of a last hope of avoiding fine-tuning. The interesting question is whether this (and next year’s) data will be enough to test it, or if one will have to wait until 2014-5. And when it gets shot down, will SUSY advocates give up?
Besides the paper you mention, there’s lots about this at
Especially pithy is Arkani-Hamed’s presentation, see