I’ve been thinking about what to write about this essay by Ben Allanach, which gives his take on the current state of HEP theory. Allanach is a specialist on the phenomenology of SUSY models, but here he announces that he’s basically giving up on these models:

The trouble is that it’s not clear when to give up on supersymmetry. True, as more data arrives from the LHC with no sign of superpartners, the heavier they would have to be if they existed, and the less they solve the problem. But there’s no obvious point at which one says ‘ah well, that’s it – now supersymmetry is dead’. Everyone has their own biased point in time at which they stop believing, at least enough to stop working on it. The LHC is still going and there’s still plenty of effort going into the search for superpartners, but many of my colleagues have moved on to new research topics. For the first 20 years of my scientific career, I cut my teeth on figuring out ways to detect the presence of superpartners in LHC data. Now I’ve all but dropped it as a research topic.

While most HEP physicists still try to end their talks with some optimistic expression of hope that things will change soon, I was struck by a recent talk by John Iliopoulos, which was more somber and realistic:

No coherent picture emerges

We were expecting new physics to be around the corner…..

But we see no corner

The easy answer: We need more data

Two problems: (i) We do not know what kind of data

(ii) They will not come for quite a long time

A rather frustrating problem!

and he ends with

The Future of Particle Physics will undoubtedly be bright, but

I will not learn the answer

While thinking about this I happened to look at an old posting of mine, a review of Lisa Randall’s Knocking on Heaven’s Door written back in 2011. There I wrote

One odd thing about the book is the title, which for Randall carries a positive meaning that she acknowledges doesn’t correspond to the very dark one of the Bob Dylan song from the soundtrack of the Sam Peckinpah film. It’s a beautiful song, but one not about finding truth, but about getting shot in the gut and facing death, hopefully not relevant to particle physics in the LHC era:

Mama, put my guns in the ground

I can’t shoot them anymore.

That long black cloud is comin’ down

I feel like I’m knockin’ on heaven’s door.

It does seem like much of the last 40 years of HEP theory is now “knockin’ on heaven’s door”, deeply wounded by negative results from the LHC. What this means for the future is still up in the air: what story about what has happened will become the conventional wisdom?

In Whiskey Veritas …

https://www.heavensdoor.com/

Peter,

it occurs to me that all this “woe be upon us for LHC keeps confirming the Standard Model” is mostly prevalent among theorists of a certain age.

We have occasional LHC-lunches where experimentalists and theorists get together (consider yourself invited if in the Garden State!). The last one was a few days ago and I was struck by the excitement among the young theory grad students and postdocs about the upcoming High-Luminosity LHC run. They were going on about clever new techniques to find hidden new physics, proposing new detectors for long-lived dark photons deep in the LHC tunnels, etc etc.

The attitude was “nature is being coy, now it gets interesting”. What a contrast from the essay by Allanach.

Sterile neutrinos, muon g-2 (4 sigma) and B-meson anomalies, no MOND (i.e. “absence of a fundamental acceleration”, 10 sigma)…

Surely You’re Joking, Mr. Woit 😉

Moyses

Regarding the anomalies associated with sterile neutrinos, muon g-2 and B-mesons, the community maintains a healthy scepticism. The number of sigma of a deviation is irrelevant if there is an underestimated or overlooked uncertainty. At any given time over the past few decades one could have done as you’ve done and suggested new physics on the horizon due to the anomalies of the day – leptoquarks at HERA in 1997 were my favourite. It’s difficult to see that the situation today is qualitatively different.

Amit,

I’m glad to hear that the part of HEP theory directly interacting with the experiments is healthy. The part that I think has been mortally wounded is the part that has operated on the assumption that the path to further unification will be based on a SUSY extension of the SM. This isn’t a bad thing in and of itself; it depends on what emerges from the wreckage (Dijkgraaf’s recent “There are no laws of physics” piece unfortunately shows that something worse is possible).

Peter, the part of HEP close to experiment is the actual heart of HEP. These people are never as loud or visible as those who become media darlings peddling mysticism be it “landscape/multiverse” or in a previous era “The Tao of Physics”.

Ben Allanach and many others have been using the ideas of the MSSM, with some very sophisticated computer programs, to see if there is evidence for SUSY at the LHC. But I claim that the MSSM is quite flawed. The MSSM is explained on the Particle Data Group website, for example. The idea is that there is an invisible undetectable sector that breaks susy. Then that communicates to the visible sector somehow, also in an unknown way. The practical consequence is that we break susy explicitly with many parameters. I would argue that this is not a theory at all. It is certainly not susy.

It is more like the emperor’s new clothes. If the MSSM turned out to be right, that would be horrible. What does susy really predict, if anything? I would say that nobody knows. Spontaneous breaking of SUSY does not work in a simple way, or even in a complicated way. Does it really predict superpartners, if it is done correctly? To know that, we need a new way to interpret SUSY. What the experiments, and Ben Allanach’s work, and the work of many others, prove is that the MSSM is not right. But it is not at all clear that the MSSM is the right way to do SUSY. Let’s hope it is not, because it is very ugly and non-predictive, whereas SUSY is a simple consequence of quantum mechanics and relativity.

John Dixon,

I should make it clear that when I’m referring to problems with SUSY here, it is to the standard SUSY extension of the SM (the MSSM) and more complicated variants. As you note, such theories have always been problematic, since you need to break SUSY and this introduces a great deal of ugliness, extra parameters and lack of predictivity. This was well-known (I wrote about it in my book) and has always been a good reason to not believe in the MSSM. The interesting news here is that a theorist like Allanach, who has devoted much of his career to these things despite these problems, has finally given up.

There are lots of other possible “supersymmetries” (I don’t believe though that “SUSY is a simple consequence of quantum mechanics and relativity”), but that’s another topic, not relevant to what Allanach is writing about.

Hi Peter. Thanks for your prompt reply. I see that we agree about the mssm. What I meant was that susy is the square root of a translation in quantum field theory. I am not talking about other supersymmetries, whatever they are. The general impression is that spontaneous breaking of susy is the only game for Susy. That is not so. We can do susy without expecting superpartners by transforming the superpartners to sources in the supergravity master equation. My point is that those ideas might conceivably bring Ben Allanach back to Susy because the experimental consequences could be very different.

There is a noteworthy subtlety with this statement, if it is meant as a mathematical motivation for supersymmetry: Not all super-Lie algebra extensions of the Poincaré Lie algebra are standard susy super Lie algebras — a counter-example is described here.

Hence, to mathematically motivate standard susy super Lie algebras among all “square roots of translations”, one needs to invoke another principle.

Here is one: Start with the superpoint, regarded as an abelian super Lie algebra. Then iteratively a) double the fermionic part and b) pass to the maximal invariant central super Lie algebra extension; and repeat.

This process turns out to discover the correct supersymmetry super Lie algebras, avoiding the spurious ones above. Interestingly, it singles out more: It discovers the susy super Lie algebras precisely in dimensions 2+1, 3+1, 5+1, 9+1 and 10+1. See the theorem here.

Nothing substantial to add other than

(A) waiting for LMMI’s questions/comments and PW’s answer far better than my typical “huh?”

(B) Urs Schreiber’s comments are self-serving. Or maybe I just do not understand him?

paddy,

Urs’s comments are all right, they just, as often, ignore the main point. There’s a huge amount one can say about what happens when you decide to extend the Poincaré algebra to some bigger superalgebra. In many of these extensions you have odd generators that, when squared, give the usual translation generators of the Poincaré algebra.

The fundamental problem occurs when these new generators commute with the energy-momentum generators. If the new generators act trivially on the vacuum, then particle states get taken to particle states with the same energy-momentum relation, i.e. the same mass. These “superpartners” will have opposite statistics, but the problem is that we know of no same-mass fermion-boson pairs. So, to avoid this, you have to assume the new generators act non-trivially on the vacuum, i.e. that the supersymmetry they generate is spontaneously broken. This is where you get into trouble: while there are lots of ways of devising spontaneously broken models, they all tend to either not look like the real world, or require introducing lots of new parameters, then choosing them so as to make the superpartners invisible.
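The degeneracy argument above can be written out in two lines. Here is a textbook sketch using the standard N=1 algebra in four dimensions (not specific to any particular model):

```latex
% Odd generators square to translations, and commute with momentum:
\{\, Q_\alpha , \bar Q_{\dot\beta} \,\} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu ,
\qquad
[\, Q_\alpha , P_\mu \,] = 0 .

% Since Q_\alpha commutes with P_\mu, it commutes with the mass operator P^2.
% For a bosonic one-particle state |b> with P^2 |b> = m^2 |b>:
P^2 \bigl( Q_\alpha \lvert b \rangle \bigr)
  = Q_\alpha \, P^2 \lvert b \rangle
  = m^2 \bigl( Q_\alpha \lvert b \rangle \bigr) .
```

So the fermionic state $Q_\alpha\lvert b\rangle$ has exactly the same mass as $\lvert b\rangle$, and the only way out is $Q_\alpha\lvert 0\rangle \neq 0$, i.e. spontaneous breaking.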

While proponents like to go on about how beautiful this new symmetry is, and how wonderful it is that it relates bosons and fermions, the basic problem is simple: the supersymmetry doesn’t relate any bosons and fermions we know about. Put differently, restricting to the state space of known particles, the supersymmetry acts trivially. This may be a beautiful new algebra, but as far as known physics is concerned, it is the trivial algebra.

Okay so SUSY didn’t pan out. As people have pointed out, it is only minimal SUSY, made even more minimal by experimentalists who scan in just two or three parameters (yeah, guilty). But fine, maybe nature doesn’t do SUSY.

This is no reason to run away from the LHC. Not only is it the only game in town (modulo neutrinos) for the next 20 years, but it has only collected about 2% of its total data haul.

To do a systematic search for deviations from the Standard Model we still need guidance from theorists about what new physics signatures could look like. We also need to understand the SM much better than we do now.

Theorists, we need you! Now more than ever.

Hi, Amitabh,

If I understand correctly, the question of whether or not nature does SUSY is one that shall remain open forever, since it cannot be disproven. What has been all-but-disproven is the notion that SUSY makes the mass of the Higgs “natural”. So while the LHC has only begun to collect data, that first 2% has had an oversized impact. Furthermore, there doesn’t appear to be a (known) compelling reason to expect new physics is lurking in the remaining 98% to be collected. The one principle widely touted as a good reason to build the LHC (beyond finding the Higgs) has been demolished. At what scale should experimentalists expect new physics? What guiding principle will take the place of (and hopefully prove more reliable than) “naturalness”?

Experimentalists, I rather think theorists are in most dire need of your contributions at this time. Given the past 40 years, what more reliable guidance would you expect for where to look than “wherever you can”?

LMMI,

There are different sorts of HEP theorists, many of whom are not of much use to LHC experimentalists, only able to offer rather obvious advice (e.g. “measure everything you can about the Higgs, would be great if you could measure Higgs self-interactions…”). Those expert on certain kinds of specific BSM extensions (e.g. SUSY models) that haven’t been found and now look implausible are going to have to retool to try and find some more useful role to play. I think that’s what Allanach is trying to do.

I take Allanach’s essay as saying that he wants to concentrate on looking at possible experimental anomalies, then using theory to characterize the possible things that could be causing such an anomaly, consistent with all known negative results. One can then play a useful role for experimentalists by telling them things like: “a particle X could explain that B-meson anomaly, and if that’s the cause you should also see something unexpected in this other channel”. This is definitely useful, but how well it’s going to work depends on what anomalies one has to work with. A much harder thing to do is to start with no anomalies and ask “what are the possible consistent SM extensions that would produce something visible at the LHC”, indicating where to look for anomalies. Many people have been trying to do this for a long time, coming up with something new here is very hard.

Then again, there are also lots of theorists still needed to understand exactly what the SM predicts in various cases, so that experimental results can be accurately compared with the SM to look for anomalies.

LMMI, as Peter points out there are many flavors of HEP theorists, some of whom work with experimentalists. We need more.

Yes, we are in an era where experiment is going to drive theory, but theory drives experiment. UA1 and UA2 at the SPS were built for W and Z discovery. Electroweak theory played a key role in detector design, and electron and muon identification progressed because theory said that was how to bag these vector bosons. CDF and D0 at the Tevatron were optimized for the top search, and a whole lot of R&D went into making the radiation-hard silicon vertex detectors because theory said tagging the b-quark in top decay was critical. The LHC detectors have superb photon resolution because Higgs theory said the diphoton decay, although rare, was among the cleanest.

Right now the mix of technologies that will go into the HL-LHC detectors is being decided. Necessary compromises are being made about hardware, bandwidth, physics reach.

And for the first time in many decades we do not have one single driving vision for new physics search. A theorist with a compelling new model (even if half baked) could make all the difference at this moment.

A little surprised to read by Allanach:

“Instead, many of us have switched from the old top-down style of working to a more humble, bottom-up approach.”

“Old, top-down style”? Just about every significant particle physics discovery has arisen from the bottom-up approach… from Ellis and Wooster’s early evidence for the neutrino at least through to the bottom quark discovery by Lederman et al. at FNAL. That a handful of “top-down” extrapolations predicted the W, Z, Higgs, and top quark is a small crust on top of a deep core of bottom-up work.

Actually SUSY was all the rage after UA1 “discovered it” in the 1980s. There is a whole book about that discovery and its disproof… “Nobel Dreams” by Gary Taubes. Looking for SUSY and not finding it has been around a lot longer than Allanach.

When to give up? Everyone knew in the 1990s that SUSY’s lowest WIMP-nucleon cross section was about 10^(-55) cm^2, yet people get worried now if nothing is seen at 10^(-48) cm^2.