Back in 2017, after it had already become clear that negative LHC results about SUSY and WIMPs had falsified theorists’ most popular scenarios for how to extend the Standard Model, Nima Arkani-Hamed gave a summer school talk to students with the title Where in the World are SUSY & WIMPS?, which I discussed here. At the time I was encouraged that while he was still promoting SUSY and the landscape (in the split SUSY variant), at least he seemed to be arguing that the lesson to be drawn might be that the whole SUSY-GUT business was a mistake:
The disadvantage to the trajectory of going with what works and then changing a little and changing a little is that you might just be in the basin of attraction of the wrong idea from the start and then you’ll just stay there for ever.
A few weeks ago in Princeton, at a PCTS workshop on Dark Matter, he gave an updated version of the same talk. Much of it was the same material about how split SUSY is the best idea still standing. Unfortunately, at the end (1:09) he seems to now have changed his mind and be arguing that the best thing for theorists to do is to keep tweaking the models that failed at the LHC:
You could very justifiably say “look, you’re just continuing to make excuses for a paradigm that failed”, OK, and I would say that’s true, and even the paradigm most of your advisors love [e.g. usual SUSY] was already an excuse for the failure of non-supersymmetric GUTs before that.
That is a perfectly decent attitude to take, but I would like to at least tell you that you should study some of the history of physics. This very, very, very rarely happens, that some idea that seems basically right is just crap and wrong. It’s probably mostly right with a tweak or some reinterpretation. You’d have to go back over…, I don’t know how far you’d have to go back, even Ptolemy wasn’t so far from wrong…
These are two different attitudes towards connecting theory and experiment. If you like, more the theory egocentered attitude, or the just more explore from the bottom up attitude, they’re both perfectly good attitudes, we’ll see which is more fruitful in the end. If you take the more top-down attitude, just keep fixing things a little bit.
If you had to pick the single most influential theorist out there on these issues, it would probably be Arkani-Hamed. This kind of refusal to face reality is, I think, a significant factor in what has caused Sabine Hossenfelder to go on her anti-new-collider campaign. While I disagree with her and would like to see a new collider project, the prospect of having to spend the decades of my golden years listening to the argument “we were always right about SUSY, it just needs a tweak, and we’ll see it at the FCC” is almost enough to make me change my mind…
Update: Today Ethan Siegel at Forbes has Why Supersymmetry May Be The Greatest Failed Prediction in Particle Physics History.
“at lease he seemed to be arguing” ->
“at least he seemed to be arguing”
I have followed your blog since its beginning (along with reading your book). I am a biophysicist with only an amateur’s interest in high energy physics, but I found it fascinating that some relative nobody was claiming that the leading theoretical physicists were going down dead ends, lured by some sort of mathematical siren song. In your many posts you discussed in detail what was wrong with both string theory and supersymmetry. Although I could not judge the correctness of your explanations in detail, what I found most convincing was that you encouraged critical comments in your forum and I never saw a serious or convincing rebuttal of your arguments. It should be emphasized that you stated this opinion long before the experimental results were available that now seem to support you. I do not think the HEP community gives you the appropriate credit. Not only were you brave enough to stick your neck out, but, apparently, you were right. This account suggests to me that there are two distinct types of theoretical physicists’ skills or abilities. One is a great facility with the mathematical formalism, etc. that has led these “brilliant” physicists down dead ends. The other is a physical intuition that gave you the confidence to shout out that the emperor had no clothes. The fact that these brilliant physicists are not willing to give up these ideas even in the face of experimental evidence is just more confirmation of how strong a lure the mathematical ideas must be, along with their lack of the appropriate physical intuition.
Thanks, but I can’t take so much credit, and the conclusions you draw are backwards.
First of all, skepticism about SUSY and string theory has always been fairly widespread among theorists. A sizable fraction of theorists have never worked on SUSY or string theory, and for many such physicists one reason was skepticism that these highly speculative models were all that promising. All I can take credit for is being more vocal and obnoxious with my skepticism than most.
Secondly, one of the main reasons for the solidity of my skepticism has always been that I’m not much interested in “physical intuition”, but am quite devoted to the idea that, at a deep level, great mathematical ideas and great ideas about physics go together. The problem with SUSY and string theory has been that while you can motivate them starting with some deep mathematical ideas, you find that these don’t give you the right physics. You need to add ugly mathematical structure to them (SUSY breaking in the case of SUSY, hidden extra dimensions in the case of string theory) to get models that aren’t obviously ruled out by experiment. So, my skepticism is not based on any “physical intuition” about problems with superpartners or strings. It is based on looking in detail at the actual SUSY/string models that are supposed to unify physics and judging that they are, mathematically, hideously ugly. As such, without strong experimental evidence in their favor, I’m going to be skeptical, and the complete lack of any experimental evidence seemed conclusive, even pre-LHC.
“It is based on looking in detail at the actual SUSY/string models that are supposed to unify physics and judging that they are, mathematically, hideously ugly.”
Isn’t this exactly the opposite of what Sabine advocates in her book?
I don’t know about “exact opposite”, but she and I have quite different takes on the role of mathematics here. I don’t want to discuss her views here, what’s on topic is Arkani-Hamed, and I don’t think his problem is being “Lost in Math”. In his talk, he does a good job of locating the start of all this trouble in the GUT hypothesis. I don’t understand why he doesn’t just draw the conclusion that it should be abandoned.
Yes, this kind of refusal is the reason why theorists in these fields have not come up with useful predictions for decades. It illustrates that the self-correction in this community is not working. (Especially disturbing that they still do it after they’ve been called out on it so many times.)
The absence of reliable predictions for new discoveries at the next larger collider means that such a machine is presently not a promising investment. There are better avenues to pursue. Add to this that we have not seen much progress in collider technology since the 1990s, yet maybe in 20 years we’ll have plasma wakefield accelerators or high-T superconductors. Banking money on the FCC right now does not make any sense.
You call it “my anti-new-collider campaign” but I think I am just sharing information that particle physicists would like to keep for themselves.
“This very, very, very rarely happens, that some idea that seems basically right is just crap and wrong. It’s probably mostly right with a tweak or some reinterpretation. You’d have to go back over…, I don’t know how far you’d have to go back, even Ptolemy wasn’t so far from wrong…”
You don’t have to go back that far, surely? Kelvin’s theory of vortex atoms, for instance.
“That is a perfectly decent attitude to take, but I would like to at least tell you that you should study some of the history of physics. This very, very, very rarely happens, that some idea that seems basically right is just crap and wrong. It’s probably mostly right with a tweak or some reinterpretation.”
Say what? The luminiferous aether is mostly right? Quantum mechanics is just a reinterpretation of Newtonian physics? Steady state cosmology just needed some tweaks? Phlogiston was abandoned too soon? What history of physics is he talking about? How could you say something like this in an open session and not have a line of people challenging you at Q&A?
I’d like to have references from historians of science to back up the claim that it “very, very rarely happens”. I think it isn’t true. Now if he thinks Ptolemy wasn’t “far from wrong” once suitably “reinterpreted”, then OK: *by these standards* physicists were rarely wrong. But are those the standards we want? Keep tweaking the Ptolemaic model until it works? That’s the question. I really doubt extensions of the Ptolemaic model would have been fruitful and yielded new predictions for very long, if at all.
That’s a side issue, but the idea of top-down vs bottom-up seems wrong-headed as well. To my knowledge, big paradigm changes (such as relativity theory) were not “bottom-up” at all, quite the contrary: they were attempts to resolve theoretical tensions. And conservative extensions of existing paradigms are very often “bottom-up”, that is, they take new data/phenomena and make them fit in a model of the theory.
“This very, very, very rarely happens, that some idea that seems basically right is just crap and wrong”
Sure, we are all totally aware that modern physics deals with the aether, we all know that there cannot be a five-fold symmetry in a crystal, transition temperatures in superconductors are limited to below 40 K, and of course there is no such thing as graphene or dark energy or dark matter…
Tweak is generally accepted to mean a minor adjustment. On this basis the jump from for example Newtonian Mechanics to General Relativity could hardly be considered a tweak.
History they say – and this includes the history of science – is written by the victors. Theories and ideas which, however good they seem at an instant, don’t stand the test of time are either lost to history or reported as ‘failed’ attempts on the path to our current understanding. Some ideas, however ‘correct’ they appear, will, regardless of how much they are tweaked (in the sense we normally use the term), never produce the results we hope for. The difficulty is when and how, having started with a ‘good idea’, one decides to give up tweaking and accept the initial premise as a lost cause.
It will always be easier to drop an unpromising line of enquiry if there are a number of alternative starting points. If a theorist is making little/no progress they have three alternatives – construct a new starting point (often requires a conceptual leap), keep tweaking and hope to make a breakthrough, give up (difficult). If a researcher is convinced that their starting point is the only ‘good idea’ available it’s understandable that they will keep tweaking in the hope of making a breakthrough. On this basis A-H’s stance, if not the reasons for it, makes some sort of sense. In the absence of an alternative plug on and hope for the best. The arguments against this approach have been well exercised in this blog and elsewhere by Dr Hossenfelder and others. Will there be a consensus any time soon?
There seems to be a consensus that Arkani-Hamed’s argument from history doesn’t hold up…
In case you haven’t watched the whole thing, or seen other of his talks like this, I think the most striking thing is the stream of consciousness, access to the id aspect of it. Here and in the previous version, he frames his support for split SUSY as “this is what I would say if woken up in the middle of the night with a gun to my head”. This is highly peculiar, asking for psychoanalysis rather than rational evaluation. It’s a fascinating insight into where people like him who have spent their careers on failed BSM ideas are right now.
If you just look at his rational argument, he locates the fundamental problem at the GUT scenario, and the way, after proton decay experiments shot down the first GUTs, people decided to “tweak” the GUT idea by making it supersymmetric. I remember the first time (1983? from Jon Bagger?) I heard the argument that the negative proton decay results coming in meant that SUSY-GUTs were the way to go. This made no sense to me: non-SUSY GUTs already had problems with too many parameters (they need a whole new Higgs sector to break the GUT symmetry), why was it a good idea to move to theories with even more unobserved fields and undetermined parameters? If Arkani-Hamed’s argument was following a rational path, I think it would be an argument for why the GUT scenario is a mistake. It’s disturbing that since the 2017 version he seems to be retreating from following the logic of his own argument.
Stop picking on Nima. You all are doing the internet thing of taking one statement in an hour talk and ganging up.
I saw the video and what I see is a smart guy who is grappling with the LHC and direct detection null results. He was asked to defend SUSY to a room of bright young physicists. He goes into the history of how and why SUSY models came about, and his opinion is that the new theories will be arrived at adiabatically rather than from a paradigm shift. Yes he mentions 100 TeV colliders but also spends a lot of time on the electron EDM.
Since the unspoken baseline for all these discussions seems to be that the LHC has demolished all BSM forevermore, here are the actual limits (from CMS, ATLAS is probably similar):
Note that in certain sectors (EWK gauginos, some scalar DM) the limits are less than 1 TeV. Sometimes much less.
“He was asked to defend SUSY to a room of bright young physicists.”
My question is why would anyone ask him to do this, and why would he agree? His talk took the point of view that the naturalness argument for SUSY is now basically dead, so what remains of the case for SUSY is
1. Better coupling constant unification in GUTs
2. WIMP candidate
Is it even conceivable that anyone in that room hadn’t heard these arguments before? Maybe there was an interesting talk to be given about exactly what the LHC WIMP limits are, but that would have been “bottom-up”, which he even explains is not what he does.
Again, the truly weird thing about the talk is that it gives an accurate account of the SUSY history, then draws the opposite of the obvious conclusion.
1. 1974: GUT proposal, testable by proton decay
2. 1983: Epic Fail of GUT proposal, no proton decay. Deal with failure not by abandoning GUTs, but by going to more complicated SUSY GUTs, with SUSY broken at the electroweak scale, testable at Fermilab, LEP and the LHC
3. 1990s-Now: Epic Fail of SUSY GUTs with SUSY broken at the electroweak scale.
An idea with no evidence for it fails miserably, you “tweak” it with new aspects for which you have no evidence, and the “tweaked” version fails miserably also. What lesson to draw?
1. The original idea was just wrong, beyond fixability by tweaking.
2. Keep on tweaking.
2. seems to me the wrong lesson, but even if you want to go that way, invoking the history of science as showing this is always the way to deal with consecutive multiple failures of an idea is just bizarre.
The part about looking at the history of physics is truly unbelievable. The history of physics (as well as other sciences) is so packed with failed theoretical concepts which everyone deemed super beautiful at the time and which held back science for decades. Read Bachelard, there are two examples a page of such trends. But on top of that, Arkani-Hamed takes the worst possible example by referring to the Ptolemaic worldview. The Ptolemaic model did not need a little bit of tweaking to get true; in fact, for centuries people did nothing but tweak it with epicycles to make it fit adverse data. What was needed, and what eventually happened, was to just get rid of it altogether, wipe the slate clean and start all over again within the Copernican worldview. That’s one of the most obvious examples of a paradigm shift. Now I guess it is obvious to pretty much everyone what the lessons from this part of history tell us about theories which tweak themselves out of experimental data. That Arkani-Hamed would choose this specific example as a defense of SUSY is surprising, to say the least.
–My question is why would anyone ask him to do this, and why would he agree?
Why would anyone ask? Because SUSY is an important part of theoretical development of the last 40 years and the null results from LHC must be addressed. And who better than Nima for this reckoning?
Why would Nima agree? Why not? What, do you want him to hide out in humiliation at the lack of SUSY or DM signatures?
I took away an entirely different message from his talk. He starts with the historical basis of SUSY, why did so many people feel it was right? What’s left of the arguments after the latest LHC run? What is the way forward? Is it even possible? That bit about Howard Georgi deciding that short-distance physics is a dead end was there for a reason.
Even the pre-Copernican Ptolemaic stuff made sense to me. Basically, there are concepts in a failed theory that you might want to keep (things moving in circles around other things) and others you might want to jettison. Granted, he is not very good at history of science.
Look, if he had gone on about multiverse or beauty or naturalness then I would agree with you. But that is not what this talk was. It was him coming to grips, in public, that nature does not work the way he thought it did.
I watched the entire presentation and I agree with you that we shouldn’t get mad at him because of a loose comment made during the Q&A session. It looks more like a personal opinion than a fully worked out argument. But I disagree on Ptolemy, for the reasons given by Aurélien Bellanger: the question is not to what extent Ptolemaic astronomy got something right, but what research programs are fruitful.
Coming up with more and more epicycles before Copernicus would have been very inefficient without a deeper understanding. Another example is Vulcan, and later the asteroid belt around the Sun, hypothesised to explain Mercury’s precession. In both cases, scientists were patching their current models again and again to fit anomalous data, without making new successful predictions or learning anything new, and I would say, looking back at the history of science, that it “very very rarely” works…
Perhaps pursuing these programs would have been harmless before, but the question whether we’re in the same kind of situation is more pressing today, given the cost of a new collider.
I fully agree with your last comment.
I think Nima has had the integrity to come out and acknowledge, in public, that certain pictures of SUSY might very well be wrong.
And I applaud him for doing so.
Honesty and integrity are fast disappearing in today’s world, so I think that Nima should be given two thumbs up for what he’s done.
A couple of months ago Susskind also said, in a public interview, that ST would have to be “modified in a bigger context, generalized”, because they (meaning the top STs) know that it can’t describe the world as it truly is.
Another example of honest intellectual integrity.
Does that mean that ST should be fully abandoned, or that we shouldn’t build a larger, more powerful collider?
No it doesn’t. It just means that certain problems need to be attacked from a different angle.
So Nima, thanks a lot.
DB, of course one does “acknowledge that certain pictures of SUSY might very well be wrong”, but what someone like Nima brings is that excitement: Nature is more complex and interesting than we had thought! Let’s get going! That above all is what I get from his talk. Listen to the description of the electron EDM: he brilliantly summarizes the current reach and future prospects and what it means for BSM searches.
(made me slightly sad that I didn’t choose to join the high precision group in grad school…)
The whole tenor of the talk is like that. This is a first-rate mind surveying the situation and figuring out where we go next, not one crouched in despair at the lack of high-MET events. Either humans are meant to get to the next step in the fundamental structure of matter, or we are not. Nima (and most of us experimentalists) are in the former, more optimistic camp.
It’s amazing how quickly amnesia hits.
For a more recent example from the history of physics, where a theory wasn’t tweaked but forgotten, what about all the bootstrap models that were one of the big things before the Standard Model was formulated? I only recall the PR articles about them, which I vaguely remember as being reminiscent of the current PR articles about String Theory. These theories seem to have vanished utterly once the Standard Model was accepted, which happened some time when I was in college.
Yes, Arkani-Hamed is a very impressive thinker and a very compelling speaker, with a talent for inspiring others with a positive vision at a time when most are trying to process what seems like a bad situation. He’s also sometimes spouting nonsense. Not mentioned here at all was the bulk of his talk, about models assuming hundreds of new scalar fields, or thousands of copies of every field already known. No one else could stand in front of an audience and go on about how this explains why we see nothing BSM now, but will soon, without getting laughed out of the room.
As far as the more serious material goes, a lot of his talk was of the order of “don’t blame me, I never said the LHC would see something, in my favorite model (split SUSY), you don’t expect to have seen anything, but you will at the FCC!”. For another very similar discussion, see this by James Wells
The problem here is that most theorists don’t take split SUSY seriously. It actually was the butt of a well-known joke, the April 1 publication of a paper on “Super Split Supersymmetry”, which now has a Wikipedia entry. In this context, the joke is that
1. GUTs don’t work, so you “tweak” them to get SUSY-GUTs, broken below 1 TeV
2. SUSY-GUTs broken below 1 TeV don’t work, so you “tweak” them to get SUSY-GUTs broken at 10-100 TeV, safe (for now) from disconfirmation.
The Super Split Supersymmetry joke is to make fun of the implausible second tweak by suggesting a further tweak of moving the breaking scale even higher, ensuring that your theory is completely indistinguishable from the SM. You’re back where you started, pre-GUT hypothesis.
To put things more concretely, once you abandon the link of SUSY-breaking to the electroweak scale, SUSY doesn’t explain the “WIMP miracle”. So, all well and good to think maybe a TeV scale WIMP is behind dark matter, and even advertise that one thing the LHC and its successor will do is look for such a thing, but there’s little to no reason to believe SUSY models have anything to do with it.
Some would claim modern string theory as a successful “tweak” of the old bootstrap. The old bootstrap involved the idea that QFT couldn’t describe the strong interactions, but you could do it using some unknown framework, specified not by a usual theory, but by consistency conditions (often actually analyticity conditions). This has now been “tweaked” so as to not just describe the strong interactions, but all interactions, with the conjecture of an “M-theory”, specified not by a usual theory, but by a list of consistency conditions on what happens in various limits, related by dualities.
Woit always criticizes SUSY/string models. Fine, but not once on this blog have I seen any alternative proposed. What exactly would you like a potential FCC to look for?
I’d like a potential FCC to look for anything and everything that it can see that hasn’t already been ruled out by LEP/LHC searches. This includes getting the best possible measurements of production and decays of the Higgs, especially anything sensitive to Higgs self-interactions. Also, of course, searches for any new states not accessible at LHC or LEP energies. Searches could specifically look for gluinos, squarks, or other states characteristic of SUSY models, but people should just realize that at this point these are not well-motivated searches. They’re not likely to see anything (which doesn’t mean you shouldn’t do them), and if they do see something it is more likely to be something unexpected than something described by the MSSM.
No, I don’t have a well-motivated BSM model that would provide an attractive target for an FCC search. Neither does anyone else….
Peter, I do not understand your comments about “tweaking”. Are you ridiculing theorists for proposing models and then revisiting them in light of null experimental results? Isn’t that kinda how science is supposed to work?
What I see happening is theorists proposing a myriad of models, as they should when there is little constraint from experiment. As experiments come online and models start falling, hopefully one model survives and the experimental goal switches to figuring out its parameter space (or there is a “surprise”, i.e. November 1974).
We are at that point where some SUSY models have fallen, and some have not. I would not call it a “bad situation” because nature is doing something clever and not plain and obvious, and we have to be just as clever.
And as for crazy ideas “getting laughed out of the room” well they did that with fractionally charged particles that one couldn’t see because they were too strongly bound. Frankly from my point of view there aren’t enough crazy ideas yet, more are needed.
You are not making a point. If someone says it is unreasonable to believe dark matter is made of gray porcupines, you can’t fault them for not presenting an alternative.
As a particle theorist who has never really bought into BSM models, I can summarize why a (mostly silent) minority of my colleagues never felt tweaking these models was going to work:
The models don’t solve problems, by which I mean inconsistencies. They do address esthetic questions, e.g. naturalness, which made them worth examining, but did not make them convincing.
The way science is supposed to work is that you are supposed to tweak good models that are doing something right to make them better. The problem here is that we’re talking about tweaking bad models that didn’t explain anything, not to make them better, but just to get them out of inconsistency with experiment (often by making them even worse, i.e. more complicated and explaining less). I see that Peter Orland has made a similar point.
There’s a reasonable point of view that these models are just providing a randomized set of regions in the accessible search area for the LHC, giving experimentalists specific targets and specific ways of measuring progress towards covering this search area. That’s fine, just don’t fool yourself that these are well-motivated models, giving you something better than a random set of choices from throwing dice.
From a theory point of view, fractional charges were extremely well-motivated. We were seeing several sets of states with quantum numbers organized into the rigid patterns provided by irreducible representations of SU(3). The fact that all such representations could be generated by taking products of fundamental 3d representations provided an excellent reason to look for such 3d reps as elementary constituents, and these had to have fractional charges.
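To spell out the representation theory behind that last point (this is standard SU(3) flavor arithmetic, not something from the talk itself):

```latex
% Mesons as quark--antiquark pairs:
\mathbf{3} \otimes \bar{\mathbf{3}} \;=\; \mathbf{1} \oplus \mathbf{8}
% Baryons as three quarks:
\mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3}
  \;=\; \mathbf{1} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{10}
% Gell-Mann--Nishijima, Q = I_3 + Y/2, with Y = B + S and baryon
% number B = 1/3 per quark, then forces fractional charges:
Q(u) \;=\; \tfrac{1}{2} + \tfrac{1}{6} \;=\; \tfrac{2}{3},
\qquad
Q(d) \;=\; -\tfrac{1}{2} + \tfrac{1}{6} \;=\; -\tfrac{1}{3}
```

Every observed multiplet (the meson octets, the baryon octet and decuplet) appears in these products, which is why the fractionally charged constituents were well-motivated despite never having been seen in isolation.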
>>”Not mentioned here at all was the bulk of his talk, about models assuming hundreds of new scalar fields, or thousands of copies of every field already known. No one else could stand in front of an audience and go on about how this explains why we see nothing BSM now, but will soon, without getting laughed out of the room.”
While I also found it strange that no one there seemed to raise any questions about this, I also have to ask: given a rather arbitrary assemblage of 24 fields or so already, give or take a graviton or inflaton or two, what’s so fundamentally different about having 1000 or 10,000 other such (hypothetical) fields also filling space?
Isn’t this a kind of Noetherian-related “prejudice” for Nature obeying some (in Sabine Hossenfelder’s terminology) mathematical “beauty” requirements — in this case just the minimalist “gruppenpest” symmetries of the Standard Model? While I recognize that Arkani-Hamed used these ideas to justify and explain more of the same stuff he has been proposing for years, for a 100TeV collider to test, if physicists are supposed to be thinking outside the box they’ve been in for the last two generations, why should even considering such things be verboten and ridiculed?
Both the conformal and the S-matrix bootstrap are alive and well. See for example https://arxiv.org/abs/1805.04405 and https://arxiv.org/abs/1810.12849
Peter W, your use of the word “tweak” needs a better definition. One person’s tweak could be another’s exploring new areas in theory-space in light of experimental results.
The most tweaked thing in my experience was the Higgs mass. In grad school (early ’90s) we were taught it could not be heavier than the Z, probably around 70 GeV or so. Then after some null results from the Tevatron and LEP1 there were “tweaks” that raised it up to slightly above 100 GeV. I remember being told that 110 GeV was the absolute limit of what the theory would bear (granted, it was by someone trying to recruit me for the search). When LEP2 found some fluctuation around 115 GeV, one of the arguments from the theory side for delaying the LHC was that it had to be the Higgs, because it couldn’t really be much heavier; even this was a stretch. It didn’t really end until 2012.
Natural SUSY has certainly taken a hit after the null results at the LHC, but the measure of naturalness was arbitrary to begin with anyway. Change your naturalness measure slightly, or the limit on what you call natural, and your sparticles are guaranteed to be out of the LHC’s reach. Let’s not forget one of the most important hints for SUSY, which is the Higgs mass at 125 GeV. In the Standard Model, you really don’t expect it to be that light, and 125 GeV is on the heavy side for SUSY, almost guaranteeing that SUSY won’t reveal her beautiful face at the LHC unless we are lucky. There is no inconsistency here. We knew back on July 4th, 2012, that this wouldn’t be easy. So I am not sure why you and Sabine Hossenfelder think that the null results in the past few years are an indication that SUSY is ruled out. No, we really knew the results would most probably be null given the Higgs mass. Phenomenologists tweak their models so that they are discoverable at the LHC. It doesn’t mean that’s all that’s out there; it just means that those models require immediate attention, because the heavier ones won’t be discovered at the LHC for sure.
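For concreteness, the fine-tuning measure usually invoked in these discussions is the Barbieri–Giudice sensitivity (assuming that is the measure the comment has in mind; both the choice of parameters and the acceptable cutoff are conventions, which is exactly the arbitrariness at issue):

```latex
% Sensitivity of the Z mass to each fundamental parameter p_i:
\Delta_{\mathrm{BG}} \;=\; \max_i \,
\left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|
% A model is called ``natural'' if \Delta_{\mathrm{BG}} stays below
% some chosen threshold (10? 100?); moving that threshold, or changing
% the set \{p_i\}, shifts the ``allowed'' sparticle masses in or out
% of the LHC's reach.
```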
A good example of a tweak in the history of astronomy is the addition of Neptune to the solar system. Back in the early 1800s, it was known that the present map of the solar system and the theory of Newtonian gravitation provided an inaccurate description of the movement of bodies in the outer solar system.
I don’t know if any people proposed “alternative gravity” theories. But the solution to the problem was to tweak the framework that was then available, by adding Neptune to the map of the solar system, and eventually Neptune was indeed discovered. This would have followed an extensive period (decades?) of people not discovering Neptune, where they probably had “upper limits” on Neptune.
Though nowadays the solar system is largely relegated to being in children’s science books, it was equivalent to the standard model of particle physics for most of the past few thousand years. As in, it was the mathematical/physical structure that people studied if they were curious about nature, and yes, many of the people studying it were partly inspired by the beauty of the underlying mathematics.
The equivalent to the BSM skeptics, circa 1820, would have been people saying “let’s stop looking for Neptune. If we have not found it then it’s not there. Let’s just get better measurements of the motion of Saturn.” Yeah, whatever.
Circling back to BSM physics, it’s not a curiosity, it’s a requirement. The standard model is a false description of the natural world. It doesn’t include gravity, dark energy, dark matter, and it is not consistent with the baryon asymmetry in the universe. Ergo, it’s false, and thus it is sensible that some fraction of theoretical physics be devoted to BSM research.
I agree with you that the benchmarks for what we consider natural have been somewhat arbitrarily set. I don’t think that naturalness is ruled out; theorists’ early best guesses have been ruled out. The LHC/naturalness debate seems to be more sociological than scientific in origin.
I’m not sure what you mean by the SM preferring the Higgs to be heavier than it is.
The precision EW fits from SM parameters imply the Higgs should be pretty much where it is: https://arxiv.org/pdf/1803.01853.pdf .
A two-minute look at the relevant Wikipedia page is enough to learn that Neptune was theoretically predicted in 1845, then subsequently looked for and immediately found at the predicted location, in 1846. There were no “upper limits” on Neptune, and no two-decades-long search for its position: it was predicted to be somewhere, people looked, and there it was. I fail to see how this piece of history provides a relevant defense of SUSY.
A more apt analogy would have been if Le Verrier had predicted a position, had been told that there was nothing there, had then tweaked his model by saying something like “it’s certainly not a planet but a system of three planets orbiting each other in an intricate way, that’s why it’s actually further away, just look at this other position”, had been told he was wrong again, and had then repeated the same process several times over. But that’s not even close to what happened.
In an earlier comment, RGT laid out the challenge: “… the jump from for example Newtonian Mechanics to General Relativity could hardly be considered a tweak.”
I would maintain that the said jump is actually a series of tweaks, each one well-motivated and successfully tested in its time. The analogy is of course with evolution, which is nothing but a long succession of small individual mutations.
* Galileo, Newton: particle mechanics, forces [Newtonian mechanics]
* Newton, Coulomb: forces and potentials generated by massive and charged particles [Newtonian and Coulomb potentials]
* Gauss, Poisson, Ampere, Faraday: forces and potentials generated by multitudes of sources combined into fields [gravitational, electric and magnetic force/potential fields]
* Maxwell: (some) fields have their own dynamics independent of sources, finite speed of field propagation [Maxwell’s equations]
* Einstein, Lorentz, Poincaré: (universal) finite speed of propagation extended to particle mechanics [special relativity]
* Einstein: the gravitational field is a metric [equivalence principle]
* Einstein: universal finite speed of propagation extended to all fields known at the time (meaning to the gravitational field), the gravitational field has its own dynamics [general relativity]
I did not aim to be very precise. The names listed with each tweak are mainly representative of the most famous people involved and of the epoch. And I also do not claim that the evolution of physical theories followed this one singular path. There were many branch points that underwent their own independent series of tweaks, but in the end did not survive. I also do not want to imply that there were no significant jumps in thought in theory development. But those jumps were more along the lines of abandoning a (possibly dominant but) unsuccessful line of thought and returning to tweak the last best step that was known to work.
Igor Khavkine writes: “I would maintain that the said jump is actually a series of tweaks, each one well-motivated and successfully tested in its time.”
Dear Igor, it all depends on how you define “tweak.” According to your definition, a TESLA automobile is just a few “tweaks” of the wheel.
A rocketship is a just a few “tweaks” of fire and tents.
Most folks have a different definition of the word “tweak.”
Be careful relying on 2-minute searches of wikipedia to come up with broad conclusions of scientific history. You may end up mistaken, as was the case here. The wikipedia article contains accurate information, but it does not contain all of the information.
Yes, there were improved measurements and calculations of Uranus’ orbit in the 1840s, which led to the discovery of Neptune a few years later. However, the conclusion that Uranus’ orbital motion was peculiar dates back to at least the 1820s, see page 23 at this link:
Moreover, you were very quick to dismiss an otherwise valid point on a technicality. You should know that the same general issue applies very much to the history of solar system studies. The orbital motion of Mercury, for example, took a very long time to understand, and it had to be reconciled by modified gravity. Today, astronomers are looking for a “planet nine,” proposed three years ago to explain the unexpected phase-space distribution of Kuiper-belt objects; observations and analysis are ongoing. Many problems in solar system science remain unresolved, such as the faint young Sun paradox. People have not given up studying these issues simply because the questions have been around for 10+ or 20+ years. Some questions are genuinely difficult to solve, and so people research them for decades.
As was the case for the discovery of Neptune, which was a roughly 25-year research problem. Longer, actually, since once Neptune was found they had to confirm that it had the right parameters to resolve the issue.
Enough of the historical analogies, this really is a waste of time. The bottom line is that there is zero evidence for SUSY extensions of the Standard Model, and they explain essentially nothing (e.g. just coupling constant unification in GUTs, which don’t seem to work anyway) at the cost of a huge increase in complexity and number of undetermined parameters.
That this kind of SUSY is now a dead idea, finally killed off by the negative LHC results, is not just some peculiar opinion from me and Sabine Hossenfelder. Most of the particle theory community has long been skeptical of the idea. The famous Copenhagen bet on SUSY back in 2000 had more than twice as many theorists taking the no-SUSY side, see
It would be interesting to see a poll of the HEP theory community now on the subject, I think one would find belief that SUSY extensions of the SM are viable post-LHC to be a small minority opinion. Even those sticking to this opinion I suspect have their doubts. Arkani-Hamed is known for the enthusiasm with which he expresses his opinions, in this case he’s unusually defensive and tentative (“if you woke me up in the middle of the night and put a gun to my head…”).
I agree with your latest comment.
It would be great to know what the top theorists believe.
Does anybody know?
Witten? Susskind? Maldacena? Nima? Seiberg?
Even the younger generation, like Douglas Stanford, Xi Yin or Daniel Harlow…
Arkani-Hamed’s talk is exactly answering this question, and taking the “I still believe, despite it all…” stand. For all the others you mention, I think they have long ago voted with their feet and decided that SUSY extensions of the SM were not something they wanted to work on. In a recent Facebook post, Harlow writes
“when I started graduate school in 2006 I was intending to work on LHC physics, but I decided that there wasn’t much room for me to make important contributions so I chose to work on other problems which seemed to me more promising. In the years since the LHC turned on, many others have made the same choice. In particular I want to emphasize that Nima Arkani-Hamed, one of the best-known LHC model builders, has for the last ten years mostly worked on other topics where he finds progress more plausible. ”
Some of these people may have been hopeful that SUSY would show up at the LHC, but I don’t see any evidence that any of them besides Arkani-Hamed are impressed by split SUSY or any other attempts to “tweak” SUSY to explain why it hasn’t shown up at the LHC.
Peter, split SUSY is not a tweak in response to LHC null results, because I recall mentions of split SUSY during Tevatron Run 2. It didn’t get a lot of publicity, probably because the MSSM was ascendant (not to mention doable).
More accurate to say minimal SUSY is dead. The larger question “is nature supersymmetric?” has not been answered. It may well be beyond the capabilities of LHC or even future colliders to answer but we won’t know until we get there.
your statement “The larger question “is nature supersymmetric?” has not been answered. It may well be beyond the capabilities of LHC or even future colliders to answer but we won’t know until we get there.” needs to be amended, I think.
There are various scenarios where we can know *before* we get “there”; one scenario is that we find another description of the standard model that solves the issues that people have with it, most notably the understanding of its parameters.
Sometimes I like to be a bit of a populist. The argument that “we cannot know before we get there” can be made for any theory – also for one in which tiny green mice live inside particles above 100 PeV.
If one honestly thinks that nature is supersymmetric and that it is described by 130 parameters instead of the 25 of the Standard Model, one must provide some evidence – as we would require from the people who talk about little green mice. Nobody has seen any evidence for those additional parameters. Some weeks ago, I talked to a researcher who has spent all his life on supersymmetry. I asked him why he continued on that topic. He was quiet for a minute, then said slowly: “because it is the only game in town”. I think he was very honest.
My personal opinion is this: the interest in supersymmetry will decay rapidly as soon as another game in town arises.
There are other “games in town.” There are variants of strong coupling models of condensates such as technicolour. There are models of substructure of quarks and leptons, called preon models. There are people attempting to understand the deep structure behind the choices of gauge groups and representations of the standard model, such as Cohl Furey, arXiv:1806.00612 and previous papers.
As with all ambitious new ideas, people can and have pointed out potential shortcomings of each of these. But that is always the case for new theories. I find it astounding that just a handful of people are working on the idea Furey develops, that the octonions have something to do with the structure of the standard model.
The main job of particle theorists remains, after all is said and done, to understand why nature chose the structure of the standard model, with its particular gauge groups and representations, and the particular values of the dimensionless parameters. There is exactly one right answer to this question, which means that the sooner we encourage people to set aside hypotheses that have not succeeded in favour of inventing and exploring a diverse range of newer ideas, the sooner we will know it. Even if one of the older ideas like supersymmetry turns out to play a role, it is likely at this point that the route to the right answer will involve new ideas such as the structure of the division algebras (and indeed there are links between the division algebras and SUSY).
“I am not sure why you and Sabine Hossenfelder think that the null results in the past few years are an indication that SUSY is ruled out.”
You can’t rule out SUSY and I certainly didn’t say so. You can only rule out specific SUSY models. To the extent that predictions have been made for the LHC with specific models those have been ruled out. The reason is that all those models used arguments from naturalness for motivation why new physics should be discoverable in the LHC range.
“No, we really knew the results would most probably be null given the Higgs mass.”
Funny how particle physicists know things in advance after the data forced them to admit their predictions were wrong.
“Phenomenologists tweak their models so that they are discoverable at the LHC.”
We all know that. This is why those “predictions” are worthless. Stop doing it. It’s not good science.
If I were a graduate student being told by a leading practitioner that his paradigm was no more wrong than Ptolemy’s, I would not be reassured. Isn’t Ptolemy’s the proverbial example of an utterly wrong theory being patched up to “save the appearances”? It’s not an accident that “epicycle” is a standard insult in physics.
So I’m not sure Arkani-Hamed has actually changed his mind, he is just leaving the conclusion to be drawn as an exercise for the audience.