Run 2 of the LHC is about to start, with first stable beams scheduled for Wednesday morning (Geneva time). If you’re up (I’ll be asleep) you can watch a live webcast, or follow what is going on here. The current plan is 3 bunches/beam on Wednesday, 13 bunches/beam on Friday, and 48 bunches/beam over the weekend.

Tomorrow there will also be an LHCC meeting, which you can also watch live. It will include reports from the experiments and a status report on the machine, which should give the latest details of the planned schedule for ramping up the intensity over the next couple of months.

For the best advice about what to look for in coming months, see Jester’s summary here. First new results may well be about gluinos.

This week there’s a workshop going on at Nordita. On Thursday Gordon Kane will explain how string theory predicts that the LHC will see superpartners soon. I gather his claim is that gluinos are at 1.5 TeV, just above the Run I limits of around 1.4 TeV, so a sure thing for Run II. Of course, back in 1997 he was claiming they were at around 250 GeV, just above Run I limits, a sure thing for Run II (but that was the Tevatron…).

**Update**: Kane has very specific string theory predictions for Run 2: gluinos at 1.5 TeV, winos at 620 GeV (+/- 10%). So, I guess string theory is going to finally be tested by the LHC over the next year or so…


Why does Gordon Kane keep making irresponsible ‘predictions’ that aren’t firm?

SRV,

Because he can? A question more to the point might be why people keep inviting him to do this. For example, long after publishing the failed Tevatron predictions, Physics Today invited him to do it again, see here:

http://www.math.columbia.edu/~woit/wordpress/?p=3236

What are the implications for HEP if LHC Run 2 finds no gluinos?

gluinos,

That’s a very interesting question that I think we’re going to find out the answer to. Gluinos are both easy to see (since they are strongly interacting), and standard arguments for SUSY (e.g. “naturalness”) don’t allow you to push their mass up too high. So, they’re the most likely ones to be seen, given standard expectations about SUSY. If they’re not seen, likely there will be no superpartners seen.

Kane is an extreme case, it’s quite clear what his reaction is going to be (“gluino mass is just above 2 TeV, will be seen at the HL-LHC”). Much more interesting will be how others react to this situation. Will they just say they believe in SUSY no matter what (John Ellis seems to be planning to go that route), or will the negative experimental results change their attitude about SUSY? Some of them will have to pay off bets they have made about this, perhaps that will have some effect.

“First new results may well be about gluinos”.

Dr. Woit,

I think I remember you wrote in your blog not too long ago that, no matter what energy they revved the LHC up to, no superpartners would ever appear. Am I missing something?

Peter

I thought you might like to read this:

https://www.quantamagazine.org/20150527-a-new-theory-to-explain-the-higgs-mass/

Eduardo Lira,

What I meant is just what Jester was discussing: first new results are likely to be stronger limits on gluinos than the limits from Run 1.

Mimoune,

I did see that. I think I agree with Arkani-Hamed: “far-fetched”…

Because of the hierarchy problem, until we find them or reach the highest energy scales, the most probable place for sparticles is always just around the corner. That might be frustrating, but it’s how it is.

LHC results should barely impact your preference for SUSY over the SM alone – even with all the LHC results, the simplest SUSY models are much more probable explanations for the smallness of the weak scale than the SM.

Kane’s remarks are easy to ridicule, but they’re not actually that foolish or silly.

And stable beams are up, with collisions going ahead!

YNWA,

So, SUSY is a theory for which there is no evidence, which always predicts that it will be vindicated “right around the corner”, and when it isn’t, that has no negative impact on the idea at all. Somehow Kane has always neglected to explain this.

If you want to make particle theory an object of public ridicule, that argument seems to me an excellent way to do it.

There is enormous evidence for SUSY versus the SM. The SM’s generic prediction for the weak scale is totally wrong, $M_Z \sim M_P$ (though of course it can be tuned to get it right). The LHC results should decrease your belief in SUSY versus the SM, but only by a jot. To make SUSY as bad at predicting the weak scale as the SM, you’d have to exclude sparticles up to near the Planck scale. That’s simply a reflection of how bad the SM is at predicting the weak scale.

The SM doesn’t predict the weak scale at all, and, by your argument, neither does SUSY.

This reminds me of the argument that, since the SM doesn’t predict the CC, but SUSY does, SUSY is much better, even though the prediction is off by some exponentially large factor. To my mind that doesn’t count as “enormous evidence” for SUSY vs. the SM, but the opposite.

The SM (interpreted as an effective theory valid up to around the Planck scale) does predict the weak scale – its generic prediction is totally wrong $M_Z \sim M_P$. That’s the essence of the hierarchy/fine-tuning/naturalness problem in the SM.

A SUSY model also predicts the weak scale, and its generic prediction is much better (its prediction is that $M_Z \sim M_{\rm SUSY}$, where $M_{\rm SUSY}$ could be anything up to the Planck scale).
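The two “generic predictions” being contrasted here can be written out schematically. This is a standard effective-field-theory sketch of the hierarchy problem; the loop factor and the $\mathcal{O}(1)$ coefficient $c$ are illustrative assumptions, not something taken from the comment:

```latex
% SM as an effective theory with cutoff \Lambda: the Higgs mass-squared
% parameter is quadratically sensitive to the cutoff,
m_H^2 \;\sim\; m_{H,0}^2 \;+\; \frac{c}{16\pi^2}\,\Lambda^2 ,
\qquad c = \mathcal{O}(1) ,
% so with \Lambda \sim M_P a weak-scale Higgs requires a cancellation of
% one part in \sim (M_P/M_Z)^2 \sim 10^{34}; generically M_Z \sim M_P.
%
% With superpartners at M_{\rm SUSY}, boson and fermion loops cancel the
% quadratic piece above that scale, leaving only
\delta m_H^2 \;\sim\; \frac{c}{16\pi^2}\, M_{\rm SUSY}^2
\,\ln\!\left(\Lambda / M_{\rm SUSY}\right) ,
% so generically M_Z \sim M_{\rm SUSY}, wherever M_{\rm SUSY} happens to lie.
```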

This comes from a place of respect – you must see that there are good reasons for believing that sparticles are around the corner, even if you don’t find them compelling.

YNWA,

I don’t think the SM has anything to do with the Planck scale. You can get a bad prediction by making various assumptions about what is happening at the Planck scale and interpreting the SM in terms of those, but that’s a problem for your assumptions about the Planck scale, not for the SM.

I also don’t think the arguments for SUSY were good pre-LHC (and wrote a chapter about this in my book many years ago). For fans of these bad arguments, the LHC gave some hope of vindication anyway. That’s now pretty much gone, and best that be admitted, not replaced by new “right around the corner” claims.

If I remember correctly, around 2000 Gordy was saying that he would have abandoned SUSY if it was not discovered at Tevatron run II.

All right, let’s leave it at that. I’ll try to read your book sometime.

“A SUSY model also predicts the weak scale, and its generic prediction is much better (its prediction is that $M_Z \sim M_{\rm SUSY}$, where $M_{\rm SUSY}$ could be anything up to the Planck scale).”

Is there actually a hard argument that it cannot be above the Planck scale (other than that all bets are off at the Planck scale)? If not, your “prediction” is simply that the weak scale is a scale. I find it hilarious that you are trumpeting this as some kind of success.

The hierarchy problem is a case of one free parameter, when extrapolated over 16 orders of magnitude in energy, taking on a value we do not understand. The Standard Model also has 25 other free parameters with values we do not understand the reasons for, some of which also look suspicious. Increasing the number of free parameters by 100+ in order to sort of account for the value of one parameter does not look like progress.

The SUSY may be always just over the horizon to those who love her truly, but increasingly the rest of us simply won’t care. This certainly looks like it will be the case if it doesn’t show up in the next couple of years.

Peter,

What is the upper limit on gluino masses allowed by the naturalness/weak-scale hierarchy argument? What is the upper limit on gluino masses that LHC Run 2 at 13 TeV can produce and detect? How soon will these results be announced, given that LHC Run 2 starts today and its projected luminosity? Thanks, regards

gluinos,

The talk by Mike Lamont today said 5-10 inverse fb this year, 1 inverse fb during the initial 50 ns part of the run (early July). So, they won’t match the amount of 8 TeV data until sometime next year. 1 inverse fb should be about enough to match the previous bounds (1.4 TeV), so that may happen by the end of the summer. I’d guess by sometime next year you’ll see 2 TeV bounds.

The problem is that there is no upper limit. If you believed the naturalness argument, we were supposed to have seen these things long ago, at the Tevatron. The people who didn’t give up after the Tevatron/LEP and insisted that LHC energies were needed are now all gearing up to argue that nothing at the LHC is still no reason to give up, that what is really needed is a 100 TeV machine.

Here are my predictions:

September 2015 – Z’ boson not discovered at LHC

March 2016 – no sign of gluinos at LHC

Summer 2016 – still no axions seen at LHC

2017-2018 – strong evidence that dark matter particles, whatever they are, are not produced at LHC

2019 – “Theory of Everything” fever finally collapses, Nobel Prize awarded for “Theory of Nothing”, which explains why there is probably nothing to see between 1-100 TeV.

To my anonymous critic. You want to know what happens if we permit the SUSY scale, and presumably also the cut-off in the SM, to be greater than the Planck scale (let me call this high scale $\Lambda$). In the comparison of SUSY versus the SM, not much changes – the SM predicts that $M_Z \sim \Lambda$, whereas SUSY predicts that $M_Z \sim M_{\rm SUSY}$, where $M_{\rm SUSY}$ is anything less than $\Lambda$.

The point isn’t that SUSY predicts the weak scale spot-on – it doesn’t. The point is that a SUSY model’s generic prediction is more compatible with what we observe than the SM prediction. The smallness of the weak scale is much more probable in a SUSY theory.
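The “much more probable” claim is at bottom a statement about priors. Purely as a toy illustration (the cutoff, the flat measures, and the decade-counting below are all my assumptions, not anything stated in the thread), one can put rough numbers on the two pictures:

```python
# Toy priors, for illustration only (every choice here is an assumption):
#  - "SM" picture: m_Z^2 = c * Lambda^2, with c drawn uniformly from [-1, 1],
#    so the prior probability of |m_Z| below the observed value is (m_obs/Lambda)^2.
#  - "SUSY" picture: m_Z ~ M_SUSY, with log10(M_SUSY / GeV) uniform between
#    3 (a TeV) and 19 (roughly the Planck scale), so landing in any one
#    decade, e.g. the weak-scale decade, has probability 1/16.
M_PLANCK = 1.2e19   # rough Planck scale in GeV
M_OBS = 91.0        # observed Z mass in GeV

p_sm = (M_OBS / M_PLANCK) ** 2   # ~ 6e-35
p_susy = 1.0 / (19 - 3)          # ~ 0.06

print(f"toy SM prior:   {p_sm:.1e}")
print(f"toy SUSY prior: {p_susy:.2f}")
```

Both numbers are artifacts of the chosen measures; whether any such prior is meaningful is exactly what the rest of the thread disputes.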

You compare the dim-2 Higgs coupling with the SM’s many other unexplained parameters, such as the Yukawas. If we can’t explain the Yukawas, who cares if we can’t explain the dim-2 coupling? But we don’t know any theories that make concrete, correct predictions for the measured Yukawas (without parameter fitting). If we knew of such theories, I’d definitely favor them over the SM (especially if they were supersymmetric).

Wouldn’t you say that supersymmetry is a ‘prediction’ of string theory?

SRV,

Personally I have been consistent in claiming that string theory makes no predictions. In the past string theorists have often responded to the “no predictions” claim by saying “string theory predicts supersymmetry”. See for example

http://www.math.columbia.edu/~woit/wordpress/?p=3904

Ever since results from the LHC started to come in, you hear this claim a lot less often…

YNWA,

What experimentally verified predictions does SUSY give that the SM doesn’t?

I have realized that I am probably the most successful string phenomenologist on this planet. Not famous or prominent, but successful in the sense that used to count as success in pre-post-modern physics: agreement with experiment. After all, who else has managed to use string theory to make a falsifiable but confirmed prediction about the LHC?

Given the role supersymmetry plays in modern mathematics, would the nonexistence of supersymmetry modify our view on the relation between physics and mathematics?

Thanks

“The point isn’t that SUSY predicts the weak scale spot-on – it doesn’t. The point is that a SUSY model’s generic prediction is more compatible with what we observe than the SM prediction. The smallness of the weak scale is much more probable in a SUSY theory.”

My issue initially was that you described the postdiction that a number is between zero and infinity as being a successful prediction. The fact is that this model is set up entirely to solve this “problem” – the fact that it can accommodate a solution is not then evidence that this model is in any way correct.

I agree with you that there aren’t any known models that successfully reduce the number of free parameters from the SM – but this does not mean that such models are not realized in nature. The fact that models designed to solve the hierarchy problem alone continue to not show up in nature should maybe be taken as a sign that what we are doing now is not the right approach. If we continue to make the “just over the horizon” argument, and learn absolutely nothing from experimental results, it will ultimately kill the field.

Mesmar,

No. The specific supersymmetry algebra that has been conjectured to be relevant for particle physics, to some extent being tested by the LHC, is only one very special example of the general phenomenon of supersymmetry and supersymmetric QFTs. That this specific example isn’t relevant to the real world doesn’t affect the mathematically interesting aspects of supersymmetry (this example was never especially interesting mathematically). It might even have a positive effect, encouraging people to pay more attention to aspects of supersymmetry that are mathematically interesting, rather than this particular one which just had a failed idea about physics associated with it.
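For reference, the “very special example” in question is the $d=4$, $\mathcal{N}=1$ super-Poincaré algebra underlying the MSSM, whose defining relations (in standard two-component Weyl-spinor conventions) are:

```latex
\{ Q_\alpha , \bar{Q}_{\dot\beta} \} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu ,
\qquad
\{ Q_\alpha , Q_\beta \} = \{ \bar{Q}_{\dot\alpha} , \bar{Q}_{\dot\beta} \} = 0 ,
\qquad
[\, P_\mu , Q_\alpha \,] = 0 .
```

Mathematically this is one particular super-Lie algebra extending the Poincaré algebra; the general theory of supersymmetric QFTs is far larger than this single case, which is the distinction being drawn above.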

“Update: Kane has very specific string theory predictions for Run 2: gluinos at 1.5 TeV, winos at 620 GeV (+/- 10%). So, I guess string theory is going to finally be tested by the LHC over the next year or so…”

^

Do you agree with Kane? What happens to Kane’s prediction if there are no gluinos or winos?

gluinos,

I can predict with 100% certainty what will happen once Kane’s prediction doesn’t work out: he’ll come up with another prediction. A couple years from now he’ll be predicting gluinos just above 2 TeV, too heavy for Run 2, but a sure thing for the HL-LHC.

If no gluinos or winos, Kane revises his model and makes another prediction. In his paper on G_2-MSSM, he says that the entire sparticle spectrum is determined once you take into account the electroweak symmetry-breaking scale, the Higgs mass, and the gravitino mass (which he calculates approximately), but the details depend on the discrete choice of the 7-dimensional compactification manifold, which changes the hidden sector gauge group and the parameters in the formula for the gravitino mass:

http://arxiv.org/abs/1408.1961

At best, the prediction of a 1.5 TeV gluino mass could be considered a test of the very specific string-inspired model that Kane is promoting, which is essentially a particular instantiation of SUSY SU(5), but even within his G_2-MSSM model there is room for adjustment.

I’ve become increasingly suspicious of “naturalness” or “no fine-tuning” arguments. They often seem to amount to saying that dimensionless constants are more likely to be near 1 than enormously large or enormously small.

This might be a fine heuristic when we’re completely clueless and desperately need any heuristic we can grab ahold of – but is there any reason to believe it’s true? Can someone point to a reference where someone enthusiastically and intelligently defends this sort of argument? Or alternatively, attacks it?

The thing I find sad is when people say “look! No free parameters!”, when they have chosen a geometry on which to dimensionally reduce (or similar: a choice of non-compact space, a choice of structure group, …). Do they really believe that a moduli space of Calabi-Yau manifolds is different to a parameter space of coupling constants? That a discrete choice of homotopy type of manifold, or of isomorphism class of a Lie group, is different from a choice of real number?

As a geometry guy don’t you realize that geometry always seems more natural than real analysis? Visualization!

@John, regarding naturalness: I thought the discussion by Strassler here wasn’t too bad: http://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-hierarchy-problem/naturalness/

@David, regarding free parameters: of course moduli are also parameters, but they are not free parameters; instead they are dynamical fields. While you may start out selecting moduli in string theory just as you select free parameters in field theory, afterwards you are required to check that the values you chose constitute a solution to the theory (background equations of motion + quantum corrections + anomaly cancellation). Notably, if you want to assume that the moduli won’t evolve away from the values you’d like to consider, then you need to check that they sit in their potential well, hence that they are stabilized. Discussion of moduli stabilization is a huge topic (landscape and all). Beware that the full constraints for consistent choices of moduli in string theory are strong, hence solving them is hard; accordingly, it is only in practical approximation, not in any fundamental sense, that, glancing through the literature, you might get the impression that some people choose their moduli as if they were free parameters.

@John, sorry, forgot to add: a pronounced criticism of the traditional naturalness argument was voiced by Wilson, see p. 10 here http://arxiv.org/abs/hep-lat/0412043

Urs,

You’re neglecting to mention that there’s a minor problem with the “moduli are dynamically determined” argument: you don’t know what the dynamics are. The “moduli stabilization” story is a very complicated one, and not because you know what the underlying theory is and are having a hard time solving it…

@Urs

Mentioning ‘moduli’ was a little risky, but I mean the moduli space of all CY manifolds. I had hoped mentioning homotopy types made it clear. Unless changing homotopy type is part of the dynamics (which it may be, who knows) I don’t see how picking a homotopy type of manifold (and then of course a complex structure etc., which can evolve over time) is not a choice.

@David, yes, change of homotopy type (aka topology change) of spacetime is part of the dynamics of string backgrounds, see e.g. the references here: http://ncatlab.org/nlab/show/flop+transition .

But this is tangential to your quarrel about the common usage of the term “free parameters”. Consider plain GR. Its free parameters are the gravitational coupling constant, the cosmological constant, and the prefactors in front of all higher curvature corrections. What is not called a free parameter of GR is the homotopy type of spacetime (what physicists call “the topology”). Instead, this is part of what it means to have a solution to the theory.

In string theory all those prefactors in the effective Lagrangian are fixed (in M-theory also the global coupling is fixed); you may not choose these parameters freely, their values are fixed by a more fundamental principle. Instead, there is now a richer space of solutions of the theory, which in suitable limits look like some effective field theory with free parameters. But now all these parameters are actually parameters of the solution space of the UV-completing string theory (its moduli) and hence are dynamical fields.

See also the string theory FAQ http://ncatlab.org/nlab/show/string+theory+FAQ#IsStringTheoryTestable

@Urs

thanks for the clarification. I guess GR is qualitatively different at this point in time in that we have (local) solutions that correspond to measured reality, and one can calculate a solution then go measure how accurate it is.

We aren’t yet at the point with string theory that Eddington was at when he measured the effect of gravity on light (if we even get there!) in the sense that it was a new prediction that could make or break GR. I guess it’s more like when people predicted the Omega^- based on what we now know as representation theory of the structure group of the gauge bundle, but at the time would have been (for mathematicians) very shaky justification. At that time, Gell-Mann and Ne’eman got lucky, in that they had sniffed out what was going on, and the details filled in later. Now, on the other hand, people have no single idea (rather a giant space of field theories-worth) what specific geometric structure should give what we see at the LHC (the analogue of SU(3), or the Schwarzschild metric), if there even is one, and so are clutching at whatever they can calculate.

Time will tell, perhaps.

@David, your issue about the technical meaning of the technical term “free parameter” has nothing to do with phenomenology; it applies to, say, the Ising model as well as to any other realistic or nonrealistic model, and as such it is not qualitatively different in gravity. On the contrary, it is precisely the free parameters of Einstein-Yang-Mills-Dirac-Higgs Lagrangians that are meant when people say that string theory has none. This is a mathematical statement about certain functionals which is entirely independent of any phenomenology. While I doubt this is the right place for the discussion we are having, it’s good that you voiced your confusion about this common terminology, because without such basics sorted out there is no educated assessment of the theories under discussion.

@Urs thanks again for clarifications. I’m not sure my issue is only with ‘free parameters’, but as you say, perhaps this is better discussed elsewhere.

Here’s two predictions that Gordy Kane made in August 2011 to me privately (I then blogged about it):

———

I thought it was worthwhile to comment a little on recent LHC searches, since they have led to a number of surprising statements. First consider gluino searches. The results of a search for gluinos are very sensitive to squark masses. Theoretically the only well motivated values for squark masses are very large, tens of TeV, because they are generically predicted in compactified string/M-theories when the associated moduli satisfy cosmological restrictions. Then (a) the gluino production rates are considerably reduced, and (b) the decays of gluinos to 3rd family final states dominate. Existing gluino searches cover this region poorly. The current limit on gluino masses is not above 500 GeV. Whether the squarks are indeed so heavy is not the issue; the point is that if they are, the limits on gluino masses are smaller than is often stated. I and others expect this decay to tops and bottoms to be the signature by which gluinos will be found, with masses well below a TeV.

Second, when squarks are heavy the two doublet Higgs sector is an effective single doublet since the heavy partners decouple. There is a single light Higgs boson observable. If the gauge group of the theory is the MSSM one then the Higgs mass is between about 115 and 128 GeV (essentially a function of the parameter tan(beta)). It will not be above that range. It has the SM production rate. The LHC searches are not yet sensitive to this region, and should not yet have seen a signal, so not seeing a signal does not allow any meaningful conclusions about Standard Model or MSSM Higgs bosons.

———

As you see he got one wrong and one right 😉

Cheers,

T.