Physics Today seems to have decided to deal with Sabine Hossenfelder’s criticism of a future collider by publishing the least credible possible response: a column by Gordon Kane arguing that string theory predicts new particles of just the right mass to be likely beyond the LHC reach, but accessible to a higher-energy proton-proton machine.
In the column, we learn that:
In recent years there has been progress in understanding those [string theory] models. They predict or describe the Higgs boson mass. We can now study the masses that new particles have in such models to get guidance for what colliders to build. The models generically have some observable superpartners with masses between about 1500 GeV and 5000 GeV. The lower third or so of this range will be observable at the upgraded LHC. The full range and beyond can be covered at proposed colliders. The full range might be covered at a proton–proton collider with only two to three times the energy of the LHC. One important lesson from studying such models is that we should not have expected to find superpartners at the LHC with masses below about 1500 GeV.
Kane has a long history with this kind of thing at Physics Today, publishing there back in 1997 much the same sort of argument, in an article entitled String Theory is Testable, Even Supertestable. According to the Kane of 1997, a generic “prediction of string models” was a gluino at around 250 GeV, just beyond the Tevatron limits of the time. Thirteen years later, Physics Today had him back, publishing an article entitled String theory and the real world. I don’t have the time to do a full search, but by 2011, after the first LHC results came in, Kane had a string theory prediction of a gluino mass at 600 GeV, or “well below a TeV”.
As better LHC results have come in, each time Kane has issued a new “string theory prediction” that the mass is a bit higher, just about to appear at the next round of LHC results. The last version of this I had seen (see here) was from 2017 and predicted “that gluinos will have masses of about 1.5 TeV”. This is already disconfirmed and out of date, with Kane now telling us “between about 1500 GeV and 5000 GeV.”
For some other evidence of how Kane deals with the problem of having predictions falsified, one can compare the 2000 and 2013 versions of his popular book on SUSY, an exercise I went through here.
At this point, the argument that we need a new collider because “string theory predictions” say that it will see gluinos has zero credibility. I don’t know of any other theorist besides Kane who believes such a thing. That Physics Today is publishing this is just mystifying. Perhaps a collider skeptic there has come up with this as a clever way to back the Hossenfelder side of the argument.
There are some other odd things in the piece; one that stuck out for me was this bizarre claim about recent history:
We now know that if Fermilab and the US Department of Energy had taken the Higgs physics more seriously, the Tevatron would have discovered the Higgs boson years before the Large Hadron Collider did.
I see Will Kinney has more about this on Twitter.
Update: More commentary on this from Jon Butterworth and Sabine Hossenfelder.
I urge everyone to write a response to this Physics Today article. I am planning to write one. Peter, is it okay if we borrow mostly from this article?
If people want to contact Physics Today they should. For the editors there to see the problem they have, you don’t need to point here, just to their own pages. A comparison of what Kane wrote there in 1997 and is writing now should be all that is necessary.
I’m quite serious that the publication of this kind of thing is going to do damage to the credibility of HEP among other physicists, and be very unhelpful for the argument for a new collider. All those who went public to counter Hossenfelder should be doing the same here. That this kind of thing has been going on unchallenged for decades is part of the problem.
Will Kinney has some nice images of Gordon Kane’s boldly stated wrong predictions from 1994, 2001, 2015 and 2019:
I don’t know Kane. What about him makes him so eager to do this? Of course, if you can keep convincing people to build particle accelerators based on wild guesses that keep turning out to be wrong, and they keep forgetting you’re always wrong, you have a motive to keep doing it. But that applies to anyone. Why does Gordon Kane, among all particle physicists, have such a spectacular track record of doing this?
Kane has always been an outlier, with very few others taking seriously the argument that there were “string theory predictions” relevant to LEP, the Tevatron, the LHC, or any other collider (and such arguments I think had nothing to do with building and operating those machines). It’s not so surprising to me that a theorist would have delusional views about their own favorite theoretical models. What I don’t understand is why Physics Today (or anywhere else) continues to provide him a prominent venue to repeatedly make such obviously dubious claims.
I’m guessing that you meant to be facetious in your speculation about PT’s motives for publishing. You might nonetheless be spot-on.
Fermilab had a snowball’s chance in hell of finding the Higgs given how far LEP pushed the mass limits…
Take it from me, I was there and was responsible for showing that Fermilab could get lucky if the mass was ~160 or so…
That statement from Physics Today is gaslighting at its finest…
The Higgs hunt at the Tevatron was covered extensively on this blog, so I’m well aware that there was an intense effort there, with, it turns out, no chance of success (i.e. no 5 sigma detection at 125 GeV). Thus my surprise at the Kane claim.
For an early posting about this, see this blog entry from 2005
I think it was the first time I interacted with Tommaso Dorigo, who sent in a helpful comment clarifying the situation.
“… least credible possible response: a column by Gordon Kane…” I would have loved it if PT published the article but written by an accomplished experimentalist who is also a good writer like, say, T. Dorigo instead.
Too bad Gordon Kane can’t bet his own $20B on building the FCC and finding his predicted superparticles. That would likely lead to the optimal case of the FCC being built and him losing money for being wrong. Instead, we’ll have the FCC not built and him continuing to be profitably wrong.
He has bet about $100, but the bet is not likely to pay off for several years; see
The Tevatron folks gave it the proverbial college try, but the machine and the detectors were not quite up to it. When all the smoke cleared they could discern WZ → ℓν bb̄ production, the standard candle, at about the 2–3 sigma level.
FWIW, from a conversation I had many years ago, the current CERN DG had no doubts about the WW channel around 160 but thought that the VH modes were wishful thinking. The CERN people were freaking out that FNAL could scoop them for a Higgs in the 150-170 GeV region…
HEP superstring predictions for what colliders might find are degenerating into the weekly astrology column that so many magazines carry.
I don’t think it’s a good analogy. Horoscope predictions are sometimes correct.
Physics Today used to be a serious publication.
Gordon Kane seems to be on a PR offensive in support of his past claims that string theory makes verifiable predictions:
Given the mess that theoretical particle physics has become, I am somewhat surprised that this paper has not been more closely examined:
Mh = 125 GeV and Mt = 175 GeV, a full two years before the top was found….
Not being a theorist, I can’t comment on whether there is any real meat to this or whether it was a lucky parameterization of our ignorance…
I wish I’d gotten in on some of those bets. My habitual pessimism could have paid off, for once.
You are not the only one; it has nothing to do with pessimism, and people have been making these bets with G Kane since 1984.
I was present at a party in Ann Arbor in 1984 when I heard T Veltman say “Gordy, if they discover supersymmetry, I’ll eat my hat.”
The “what if Fermilab” question: since we know that Fermilab actually made and saw Higgses, but nowhere near well enough to hope for 5 sigma, what could have been done to get enough?
Presumably that does not include increasing the energy. But it could have included higher rep rate for high luminosity. And what about adding a better, more modern, (LHC-like) detector or detectors? These are technical questions that someone probably thought about. Was the political situation post SSC-demise so bad that there was no hope?
you can whip a mule all you want but it ain’t gonna win the Kentucky Derby.
The Tevatron Run II involved ambitious major upgrades to both the existing detectors and the accelerator. Even with them, a SM Higgs was never really in reach. For example, the silicon vertex detectors critical for b-jet tagging would have fried at the luminosity required to see a signal; the rad-hard electronics simply did not exist yet. The calorimetry was also not up to snuff in eta coverage, segmentation, and sampling resolution, implying that jet–jet mass resolution was always going to be a limiting factor.
A heavy 4th generation quark might have bumped up the cross section to almost observable levels, but that was already ruled out by LEP. Only an oddball Higgs would have been observable (e.g. fermiophobic, very high tan-beta, triplet, etc.).
The only realistic window was in the mass region around 2M_W and even that was a stretch. Personally I think it was remarkable the Tevatron program closed the mass region that at one time was seen as the hardest one for the LHC to cover…
Unrelated, but are you going to cover prof. Karen Uhlenbeck’s contributions to an actual understanding of physics?
This is OT, but I think the award of the Abel Prize to Karen Uhlenbeck is quite noteworthy, and relevant to the math/physics intersection covered here. Any thoughts, Peter?
Sorry, but I’m on vacation this week, and no time for more than saying congratulations to Karen. The usual suspects (e.g. Nature and Quanta) seem to have quite good articles about this that I can’t compete with.
It would perhaps be interesting to have a list of papers by serious physicists which really gave a reasonably correct PREDICTION (i.e., dating from before 2012) of the Higgs mass. Besides the Kahana/Kahana paper quoted earlier in this thread, the one I know of is by Shaposhnikov/Wetterich arXiv:0912.0208 : “Detecting the Higgs scalar with mass around 126 GeV at the LHC could give a strong hint for the absence of new physics influencing the running of the SM couplings between the Fermi and Planck/unification scales.”
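As I understand it (a rough sketch, not their full argument), the Shaposhnikov/Wetterich prediction amounts to imposing a boundary condition on the Higgs quartic coupling at the Planck scale and running it down with the standard model renormalization group:

```latex
\lambda(M_{\mathrm{Pl}}) = 0, \quad \beta_\lambda(M_{\mathrm{Pl}}) = 0
\;\;\xrightarrow{\ \text{SM RG running}\ }\;\;
m_H = \sqrt{2\,\lambda(m_t)}\; v \approx 126\ \text{GeV},
\qquad v \simeq 246\ \text{GeV}.
```

The numerical value depends on the measured top mass and strong coupling, so the quoted 126 GeV carries a few-GeV theoretical uncertainty.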
And here you go…
The compilation is up to version 8, and the Kahanas’ paper is still the best one, especially given when it appeared and what the direct limits on the top quark and Higgs mass were at the time. Making Higgs boson mass predictions after LEP put a 112 GeV lower bound is not nearly as impressive…
Thank you. I suspect that many papers on the list where it says “many supersymmetric particles” under “other predictions” have been ruled out by LHC experiments. I also suspect that Shaposhnikov/Wetterich is consistent with current experimental data. If Kahana/Kahana get the “Higgs as a deeply bound state of two top quarks” that might be testable by now. I wonder whether it is consistent with LHC data.
No idea if there are other testable predictions that can be gleaned from the Kahanas. At first glance, I am pretty sure it is consistent with current LHC results. It does however suggest to me that an e+e− machine capable of studying the t–tbar threshold could yield a surprise or two that the LHC might not be capable of discerning.
It is pretty clear to me that a very heavy top quark is somehow related to the mechanism of EWSB and creating the vev of the Higgs potential. It just “smells” that way. That being said, I am not a theorist, so I can’t really say if those 2 papers that predicted the correct mass are showing us something of fundamental significance or were “luck” based on choices of boundary conditions or something else…
AFPHBH, I notice that the Kahanas’ paper is dated Dec 1993. That was when SLD had collected around 50k Z events made with polarized electrons and the production asymmetry was turning out larger than expected, which (along with the latest Tevatron top mass limits) restricted the top mass window to about 5 GeV. (Full disclosure, that left-right asymmetry was my thesis).
I don’t think we put anything out in 1993, but if they had their ear to the ground (like good theorists do) the Kahanas may have picked up on some rumblings.
Note that the top mass enters the weak asymmetries quadratically, while the Higgs mass enters only logarithmically. The asymmetries narrowed the Higgs mass to less than 200-ish, so for the Kahanas to nail that is pretty impressive.
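For anyone wanting the standard one-loop expressions behind that remark (schematic, with scheme-dependent constants dropped), the top and Higgs contributions to the ρ parameter look like:

```latex
\Delta\rho_{\,t} \;\simeq\; \frac{3\, G_F\, m_t^2}{8\sqrt{2}\,\pi^2},
\qquad
\Delta\rho_{\,H} \;\simeq\; -\,\frac{3\, G_F\, m_W^2}{8\sqrt{2}\,\pi^2}\,
\frac{\sin^2\theta_W}{\cos^2\theta_W}\,
\ln\frac{m_H^2}{m_W^2}.
```

The quadratic m_t dependence versus the logarithmic m_H dependence (Veltman screening) is why precision asymmetries pinned the top mass tightly while only loosely bounding the Higgs.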
I was part of SLD when it stood for “Slow Lingering Death”. Never saw data.
Having those CCDs so close to the IP made SLD a very powerful detector for some key measurements especially when coupled with the polarization. It was a great complement to LEP…
David and Sid Kahana’s predictions of the Higgs boson mass and the top quark mass (1993!) in a “parameter free fashion” are very precise. Source: https://arxiv.org/pdf/hep-ph/9312316.pdf
According to the standard model (SM), such predictions should not even be possible, since these masses are free parameters. How do you explain the obvious discrepancy?
Peter Higgs knew about their work … he said, “You’re from Brookhaven, right. Make sure to tell Sid Kahana that he was right about the top quark 175 GeV and the Higgs boson 125 GeV [Kahana and Kahana 1993].” Source: https://arxiv.org/pdf/1608.06934.pdf
One would assume that highly accurate calculations of the top quark mass and the Higgs mass are remarkable. Why didn’t the Kahanas get the proper attention? Why is there no adequate mention of these theoretical achievements?
I strongly believe that Sidney and David Kahana’s predictions need to be published and discussed again. With reference to Nambu Y and Jona-Lasinio G 1961 Phys. Rev. 122 (1) 345-358 (https://journals.aps.org/pr/pdf/10.1103/PhysRev.122.345) it seems that the entire SM-project had already been completed (…“methodical circular conclusions”) at the beginning of the 1960s.
For further reading see https://arxiv.org/pdf/1112.2794.pdf … “the prediction by the authors D. E. Kahana and S. H. Kahana, mH = 125 GeV/c², uses dynamical symmetry breaking with the Higgs being a deeply bound state of two top quarks. At the same time (1993) this model predicted, two years prior to the discovery of the top, its mass to be mt = 175 GeV/c²…”
Incidentally the list of 96 Higgs-mass predictions by Thomas Schücker (https://arxiv.org/pdf/0708.3344v8.pdf) is a “good” reference of how and when predictions were made.
Dirk Freyling – I agree that it’s strange just how little attention that paper has received (all the more so since Peter Higgs himself evidently knew about it!). I cannot find a single serious discussion of that model, formal or informal, that has taken place. By contrast, Shaposhnikov and Wetterich 2009, another paper which managed to predict the Higgs mass through heterodox means, is now approaching 300 citations.
It’s not as if the Kahanas did something completely alien to the known paradigms of physics. There are hundreds of papers on NJL-type models. You would think that someone who knows the topic would have ventured a commentary by now.