I'm heading to Oxford today, and this evening I'll give a talk there on Unified Theories of Physics. Despite a national rail strike, on Saturday I'll try to find some way to get to the HTLGI Festival in London, where I'll give a talk that day and be on two panel discussions on Sunday.
I'll post slides after tonight's talk; one theme will be the failure of a series of attempts to extend the Standard Model, all begun in the mid-1970s (GUTs, SUSY, string theory). An opinion piece by Sabine Hossenfelder appeared yesterday in the Guardian that takes a similar point of view on the current fate of extensions of the SM, but I strongly disagree with much of what she has to say.
The bad theory activity she points to has been going on for decades, but in recent years it seems to me to be a lot less popular. Most influential theorists have (quietly) agreed with her that particle physics is dead. In attacking bad model building in particle physics, I think she’s going after a small group of stragglers, not the center of theoretical activity (which has problems much more worth discussing).
What I most disagree with her about, though, is her treatment of HEP experiment and experimentalists. Yes, one can find people who have used bad theory to make bad arguments for building a new machine, but I don't think those have been of much significance. For more on the current debate about this, see here. At present, no one is spending money to build a new energy frontier machine. Money is being spent on running the LHC at high luminosity (CERN) and on studying neutrinos (US), as well as on studying the possibilities for going to higher energy. All of these activities are valuable and well justified.
The LHC has been a huge success so far; the old claims that it would see extra dimensions are an embarrassment, but one that doesn't change the science that has happened. The discovery of the Higgs was a huge advance for the field, and the ongoing effort to study its properties in detail is important. Another huge advance has been the careful investigation of the new energy range opened up by the LHC, shooting down a lot of bad theory. Pre-LHC, the most influential theorists in the world heavily promoted dubious SUSY extensions of the SM, making these arguably the dominant paradigm in the field. LHC experimentalists have blown huge holes in that bandwagon, in some sense by doing exactly what Hossenfelder complains about (looking for evidence of badly motivated theories of new particles). In this story they're not the problem, they're the solution.
I’ll be busy this week with the talks mentioned and with attending math talks in Oxford, so little time to discuss more here or do a good job moderating a discussion. So, behave.
Update: The slides from the Oxford talk are here.
Update: Sabine has a blog entry more carefully explaining her point of view here.
Update: Some coverage of this at Physics World.
Update: More discussion of this from Ethan Siegel here, response from Sabine here.
“Most influential theorists have (quietly) agreed with her that particle physics is dead.”
I am not concerned with “most influential theorists” (in particle physics) but with “most theorists” (in particle physics). That is, I’m concerned with the 99.9% that make up the big bulk. You are referring to the 0.1% at the top, people who are swimming in grant money and can do whatever they want. I can only guess, but quite possibly the sample of people you personally get to meet and talk to is strongly biased towards the top.
The problem with those 0.1% is that, as you say correctly, they have been very quiet when instead they should have been pushing for the systemic change that is badly needed.
A second issue with your criticism is that you somehow ended up claiming I’m commenting on HEP in particular. I actually don’t know why that is.
If you think that LHC experimentalists have blown “a hole into the bandwagon”, you seem to not have noticed that most of them have just jumped off this wagon and onto the astroparticle wagon while they’re waiting for the plans for the next bigger collider that will make the next big bandwagon.
Experimentalists make themselves complicit by simply not caring how badly motivated many of those theories are to begin with. They excuse themselves by claiming “I’m just an experimentalist, I just test this stuff” and pocketing the money. It’s not an excuse I’m willing to accept.
This field is in dire need of a methodological change. The situation is very similar to that in psychology 15 years ago, when they realized that it had become very common to publish results based on sloppy statistics that ended up being irreproducible. Psychologists managed to mobilize the community and more or less agree on guidelines for better methods. At least in my mind it has been a remarkable success that proves it is possible for academic fields to undergo community-driven changes in methodology. Particle physics needs to do the same. If psychologists can do it, so can they. They should start by analyzing what went wrong in the first place, but that still hasn’t happened.
The LHC is running, and as far as I am aware, by far the biggest cost is just making the particles collide. And it certainly seems worth it, just from the amount we’re learning about the Higgs boson, pentaquarks, and other things that really exist. Experimentalists running data analyses on the results to eliminate new particles that theorists have proposed is a relatively minor additional expense.
What would you suggest they do with the data — just leave it there unanalyzed?
“Experimentalists running data analyses on the results to eliminate new particles that theorists have proposed is a relatively minor additional expense.”
As a member of one of these large collaborations, my experience is that not many (relatively) people do this nowadays. It’s become rather a niche activity. In the early days, when we were told by theorists to expect squarks/gluinos etc. on day one, such analyses were very popular activities.
I have never heard Dr. Hossenfelder say that the LHC should be turned off or end its current run. I have heard her say that a new, bigger, more expensive collider is not justified, at least using the justifications put out by the promoters. The promoters still use SUSY or wishful thinking to promote it, or even “we have to keep all these trained HEP folks employed.” The next machine as proposed is too expensive for those kinds of arguments.
Small edit to your slides: Dirac’s tomb is not in Westminster Abbey (it’s just a marker there). He is in fact buried in Florida 🙂
In a 2006 comment on your blog post at https://www.math.columbia.edu/~woit/wordpress/?p=406 about LHC predictions, Sabine Hossenfelder wrote “I favour the scenario: they find nothing at all. No higgs. No susy. No monopoles. No nothing.” When Lubos Motl claimed in a follow-up comment that this was impossible, she questioned “the validity of the axioms you use to draw the conclusions.” Her scenario seemed even less well motivated than the models she now criticizes, and of course the LHC didn’t just find “nothing.” Unfortunately her current article linked above has no transparency or reflection about this track record. She now writes “The Higgs boson, on the other hand, was required to solve a problem.” Should Dr. Hossenfelder be the one adjudicating particle theory predictions?
“The promoters still use SUSY or wishful thinking to promote it, or even “we have to keep all these trained HEP folks employed.””
Most of the literature I have read on the topic emphasises improved precision on the measurements of the properties of the Higgs particle.
““we have to keep all these trained HEP folks employed.””
that’s what hedge funds, google, amazon etc are for! Very few experimental particle physicists actually work as academic scientists, so no-one is arguing for a new collider to create jobs for particle physicists (at least I have never come across this).
I still don’t understand why building a Higgs factory for no other reason than to better understand the only known (and probably fundamental) scalar can’t be made into a compelling argument on its own. Yeah, it’s expensive, but so what? It’s not stupid, and nations blow money on stupid things constantly, in a seemingly compulsive manner.
Still no confidence in muon colliders, I see.
Peter, I am surprised that you are supportive of neutrino experiments. What exactly will the next-generation experiments measure that will help connect it to TeV-scale physics?
It’s been 24 years since we “supposedly” have had evidence for physics beyond the Standard Model through non-zero neutrino masses, and yet influential theorists have privately admitted to you that particle physics is dead. So help me understand the paradox.
So my question to you is what measurement are you eagerly waiting for from next generation neutrino experiments which make particle physics rise from the ashes?
It seems worthwhile to emphasize that the current proposals are conceptual designs that are likely at least a decade (probably more) from any kind of major funding decision. The R&D activities to advance the conceptual designs to something solid are relatively cheap and usually generally valuable (especially work on more compact superconducting magnets and RF cavities, plus alternative strategies for reaching higher energy), even if that work doesn’t directly lead to a funded accelerator complex. So I’d encourage more of a “wait and see” attitude to the current conceptual designs.
p.s. Working for an LHC experiment, I expect to be retired long before major funding for any kind of next-generation facility becomes a real issue
I agree with Dan Riley: we must keep investing at small scale to test improved devices and new acceleration techniques. Wakefield acceleration is one example of such novelty. This small-scale research will also have an impact on various industries; think of electricity storage and transmission. I think that building an accelerator with current technology just to reach one order of magnitude higher in energy is, as Sabine probably thought, a waste of money: there are no theoretical predictions that necessitate any rush.
The problem is that the LHC has actually already done an impressive job studying the Higgs, and the results after the HL-LHC should be even better. To justify a new machine, you have to show the improvement over the HL-LHC is going to be worth the high cost.
It may ultimately be that people decide to wait for muon collider technology instead of building a new energy frontier pp or e+e- collider. But that technology, I think, is many years out.
The neutrino sector is the least well-understood part of the Standard Model, and you can study it without having to deal with the fundamental technological limitations that make new energy frontier experiments exorbitantly expensive. I don’t have anything specific to point to, but (here I somewhat disagree with Sabine) I think it’s up to experimentalists to measure what they can. Maybe something unexpected will turn up about how neutrinos behave; maybe we’ll just get better measurements of SM parameters.
The point of theory-building, I suppose, is that the theory should be, as Einstein said, as simple as possible, but not more so. Or, in other words, as complicated as necessary, but not more so. More or less all theories since the Standard Model have been “more complicated than necessary”, either in the sense that they predict phenomena that do not actually occur in the real world, or that they cannot predict anything at all. That is why I find your twistor model of gravity so appealing – it is quite clearly just as complicated as necessary, and not more so.
“Should Dr. Hossenfelder be the one adjudicating particle theory predictions.”
Well, first of all, she isn’t. And second of all, there is a real difference, I think, between a comment in a single thread on a science blog, and a model/prediction published in the literature. Everyone is wrong sometimes. What’s notable, even in that blog comment, is how much Hossenfelder got right (e.g., no monopoles, no SUSY, etc.)
Null experiments can be just as important as the confirmation of expected theoretical findings. Just saying…
Thanks for the post. I think the point is that the HEP energy frontier experimental community is mostly working on the LHC, which is currently the only place where the Higgs boson can be produced, and on analyzing that data. There are some measurements which can already be classified as precision (i.e. percent-level measurements), but there are many things that are just not measured at all or very poorly measured (i.e. couplings to the second-generation fermions, the Higgs self-coupling, rare decays). Would anyone say that, if we had experimentally confirmed that the photon coupled to half the charged particles and measured its couplings to some of those only to 30-40% precision, we should just call it a day? That seems, in a word or two, absurdly anti-scientific.
HEP high energy experimentalists are also working on novel ways to get to higher energy in more compact and cheaper ways (plasma wakefield accelerators, muon colliders, etc) but nobody is proposing to build those anytime soon because we can’t. CERN has plans for a larger collider but that is still decades away and even the biggest proponents of those machines understand that we are a long way both financially and scientifically from actually constructing those proposals.
Honestly, I don’t know what these straw-man arguments accomplish except to confuse people who aren’t actually engaged in or following HEP.
Pingback: In defense of particle physics experiments – Flippiefanus
It seems to me that there’s a lot of argument, when there’s probably not all that much to argue about.
1. Barring global catastrophes, we’re not going to shut down the LHC until we’ve got all the data out of it that we can.
2. If we get to the end of the LHC run, and there is no hint that higher energies will find anything interesting, it will be really hard to make a case for building a Super-LHC, and I expect that one will not be built.
So that leaves the increasingly unlikely scenario that at the end of the LHC run, there are hints that interesting phenomena will happen at higher energies. In that case, there will be difficult decisions to make. But it seems much too premature to worry about this now.
The presentation of arXiv at Physics World is, well… problematic:
“Described as “an open research sharing platform reinventing scientific communications,” arXiv is home to preprints of many of the papers that Hossenfelder derides. What is more, you can read them free of charge and come to your own conclusions.”
First, does arXiv define itself as “reinventing scientific communications”? It has been going since the mid-’90s, and at least in hep-th and maths it is the main way of communicating science, I would say.
Second, presenting arXiv as the home for useless speculations is rather unfair. Yes, there are some on arXiv, but there is also the proof of the Poincaré conjecture by Perelman, the papers announcing the Higgs discovery, etc. It would be nice if the journalist left arXiv out of the debate.
Not really off-topic: this is going to really piss Peter off.
The Nobel Prize in Physics 2022 was awarded to Alain Aspect, John F. Clauser and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”
Hidden variables are irrelevant to QM, of course, and so this experiment is trivial (“non-spooky”).
Posting only to be deleted, but the previous line *is* your expressed view, eh?
Actually, I’m perfectly happy with this Nobel prize and the citation. This kind of experimental study of the unusual behavior of QM is great. Even happier that they didn’t give it to some theorist or philosopher for going on incoherently about what this means for “locality”…
I’m looking forward to your blog post on the Nobel and also to Peter Shor’s perspective.
As someone with a fifty-year obsession with the foundations of quantum mechanics, I think this Nobel was richly deserved: I am only sad that John Bell died so young and that Abner Shimony died seven years ago.
Quantum cryptography and quantum computation are perhaps a bit oversold nowadays — but at least all of this is real physics that involves actual experiments! And it is hard to see how any of it could have happened without the seminal work on Bell’s theorem.
Dave Miller in Sacramento
I don’t really have anything interesting to say about the Nobel, so I won’t blog about it. The choice was a good one, and I had actually thought they had already made this award long ago. A reaction much like mine is Philip Ball’s, see here.
Maybe no one remembers this but me: When John Clauser was building his first experiment, he expected the opposite result from what he got.
Since I was asked, let me say I’m very happy about the Nobel prize going to quantum information theory. And these are an excellent set of recipients, possibly being the three people who did the most for experimental tests of Bell inequalities and similar demonstrations of non-classicality.
“When John Clauser was building his first experiment, he expected the opposite result from what he got.”
This has been reported in at least one article on the prize. E.g. here’s Quanta on it:
“Unsure what he would find, Clauser had placed a $2 bet that his experiment would prove Einstein right. To his surprise, his results vindicated Bell’s prediction over Einstein’s. ”
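For readers wondering what “Bell’s prediction” amounts to numerically, here is a minimal sketch of the arithmetic behind the standard CHSH form of the test (the angle choices below are the textbook ones that maximize the quantum violation, not anything specific to Clauser’s actual apparatus): quantum mechanics predicts spin correlations E(a, b) = −cos(a − b) for a singlet pair, and the CHSH combination then reaches 2√2 ≈ 2.83, beyond the bound of 2 that any local hidden-variable theory must obey.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for measurements on a singlet
    # pair with analyzers at angles a and b: E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Textbook CHSH angle choices that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable theory gives |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # |S| = 2*sqrt(2) ≈ 2.83, violating the bound of 2
```

Einstein’s side of Clauser’s $2 bet corresponds to the |S| ≤ 2 bound holding in the lab; the experiments instead found the larger quantum value.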
As a non-physicist, I’ve been surprised by how hard this is. The Bell inequality thing seems pretty clear (at least when read in various popular descriptions) but people keep finding possible loopholes. Thus it’s interesting that work from 2017 was included in this Nobel Prize. Does this make it one of the shortest periods between work done and Nobel recognition?
FWIW: there’s a collection of reprints of Bell’s papers (John S. Bell on The Foundations of Quantum Mechanics) that’s affordable and surprisingly interesting even to non-physicists (even if we don’t get most of its points).
According to David Kaiser in How the Hippies Saved Physics, when Aspect started work on this he was warned that he would ruin his chances for a permanent academic position by working in such an unfashionable area…
Thanks for the link to the Siegel essay, it had some nice general ideas and was a fun read. However, I take small issue with the scope of the closing statement:
While in principle this is true, the structure of academia is such that people will actually choose to work on what will keep them employed. And one person working on an ambitious idea (not ruled out by the usual quick theory-killers) may not attract the critical mass of attention needed to see it through to the end of its lifecycle (either finding why it cannot work, due to some perhaps subtle technical problem, physical argument, or experiment, or else showing it to be actually useful for something).