This week the Aspen Center for Physics is hosting one of the first of this year’s “Winter Conferences”, where results from last year’s LHC run are being reported. Appropriately, the title of the conference is New Data from the Energy Frontier. The most dramatic result has to do with what is not being seen: any evidence of supersymmetry, with new limits reported today by ATLAS. The new ATLAS results rule out gluino masses up to 700–800 GeV, improving on the first limits of this kind from CMS, which were about 600 GeV.
For more detailed discussion courtesy of the blogosphere, see Resonaances and Cosmic Variance. For some indication of what this means for string theory, Michael Dine’s lecture notes for his talks on “What LHC might tell us about String Theory” at last summer’s TASI summer school are now out, with the title Supersymmetry from the top down. These lecture notes start off with a section very unusual for this kind of thing, entitled “Reasons for Skepticism”, where he notes:
Our enthusiasm for supersymmetry, however, should be tempered by the realization that from existing
data – including early LHC data – there are, as we will discuss, reasons for skepticism.
For some historical perspective about what pre-LHC expectations were, I happened to run across today a copy of Witten’s lecture notes from a string theory conference at Hangzhou in 2002, where he gives the muon magnetic moment discrepancy as one piece of evidence for supersymmetry, and says:
Assuming this discrepancy holds up, we would expect to interpret it in terms of new particles, but these are highly constrained; one explanation that does work is supersymmetry, with masses of new particles of order 200 – 300 GeV.
Of course, even the minimal supersymmetric extension of the Standard Model is ferociously complicated, with over a hundred unknown parameters, so all quoted limits make various simplifying assumptions. Relating LHC data to limits on supersymmetry will be a subject keeping many physicists busy for the next few years; for more about this, see this talk at Aspen by Jay Wacker. He doesn’t expect this year’s run to increase limits on gluinos as dramatically as last year’s run did, describing early results as “full coverage up to 300 GeV, reach up to 600 GeV”, increasing to “full coverage up to 375 GeV, reach up to 800 GeV” after an inverse femtobarn of data is analyzed (that’s the official LHC goal for 2011, although it’s hoped they can double or triple that).
The last sentence of his last slide refers to something that I’ve always worried about, but am not expert enough to know whether such a worry is serious. He describes the web-site http://LHCNewPhysics.org where simplified models based on supersymmetry and other BSM ideas are given, and notes:
ATLAS studying 10 Simplified Models from 0 in August. Changing their triggers.
The worry I’m not so sure about is to what extent the LHC detector triggers are being optimized to look for supersymmetry, potentially missing unexpected non-Standard Model physics. Since there were always reasons to be skeptical of LHC-scale supersymmetry, and these have now become so compelling that even Michael Dine is writing about them, one hopes that the trigger designers will keep that in mind.
Meanwhile, back at the LHC, powering tests are finished, the ring is closed, and it will be put through full tests of its operational cycle over the next couple of days. The official start of beams for this year is planned for Monday.
Update: More details about the latest on this at Resonaances.
Update: More from Tommaso Dorigo (LHC Excludes SUSY Theories, Theorists Clinch Hands), and a Physics World article by Kate McAlpine here. Tommaso links to a 2008 posting by Ben Allanach that discusses predictions for SUSY masses made (using various assumptions one can read about there) around that time. One of these, by a large group including John Ellis, predicted that 50 inverse picobarns at 10 TeV would be enough to explore most of the region they expected SUSY masses to be in, at 68% confidence level. The latest data, which is about that luminosity but at 7 TeV, does rule out much of that region, with the most likely SUSY mass right around the boundary of the region ruled out by ATLAS (although the tan(beta) values are different). According to the Physics World article:
John Ellis of CERN and King’s College London disagrees that the LHC results cause any new problems for supersymmetry. Because the LHC collides strongly interacting quarks and gluons inside the protons, it can most easily produce their strongly interacting counterparts, the squarks and gluinos. However, in many models the supersymmetric partners of the electrons, muons and photons are lighter, and their masses could still be near the electroweak scale, he says.
If even Michael Dine admits that there are reasons for skepticism, the reasons must be very strong indeed. Now we are only waiting for Gordon Kane…
When the LHC fails to find supersymmetry, then Split Supersymmetry will be invoked to explain this away. Just as the Multiverse is now used to explain away the Landscape Problem of String Theory.
Fundamental theoretical physics has never been in such a parlous state as it is today. It looks increasingly less like science, and more like a branch of metaphysical speculation.
However, if SUSY particles start to be detected at the LHC, Supersymmetry will be considered the holy grail of Theoretical Physics and another jewel in the crown of theoretical speculation and ingenuity, on a par with General Relativity. And then you should feel ashamed, but that’s petty and unimportant.
In split supersymmetry, the gluinos are still supposed to be at a mass accessible to the LHC. To explain away a failure to see them, you have to invoke super split supersymmetry.
There are two sides of the story.
If Low SUSY is ruled out then someone could see it as a failure of the SUSY idea (and thus String theory) but others as another triumph of anthropic reasoning and the landscape. Indeed such failure will leave EWSB unexplained (like CC) and thus susceptible to anthropic interpretation (like CC).
The rule is that the more phenomena are left unexplained the more the landscape idea gains validity.
Peter, the “trigger bias” is a huge issue.
Experiments were always theory-laden, but this used to be compensated by an ensemble of experiments to match the ensemble of theories.
In an era where each experiment costs billions of dollars, and you can count the number of experiments on the fingers of one hand, biases built into the experiment can literally set phenomenology back for decades.
A “toy scenario” example relevant for SUSY would be if unitarity at the EW breaking scale were restored by a slight decrease in multi-particle processes, rather than by a resonance (this is exactly how unitarity is restored in high-energy QCD, via Froissart’s bound. We do not know exactly how to derive it in terms of partons, but we know it’s associated with low Bjorken-x strongly coupled physics).
If this were the case, the only way to get a grip on the new physics would be to count precisely how many multi-W, multi-Z and multi-top events there are, and compare with expectation. Can ATLAS and CMS do this, given the trigger bias? Perhaps, but it’s a good question.
The experimental timescale to adapt to changes in phenomenology has already had big consequences in heavy ion collisions: when the LHC experiments were being designed, the consensus was that most of the interesting physics indicating deconfinement would be hidden in correlations of very low momentum particles (the technical name is HBT interferometry), and that jet physics, for example, would have little or no relevance.
Nowadays the consensus is entirely reversed: few people care about HBT, and jets are where the physics is. Because of this, the “particle-optimized detectors” (CMS and ATLAS) actually might have a better shot at interesting physics than the detector optimized for heavy ion collisions (ALICE), at least if this consensus lasts and is justified. So it’s not just a “worry”. It has already happened.
Thank you for the info on a subject I have wondered about for some time.
To what extent is the raw data from finished experiments archived and available for re-analysis, in case someone later comes up with a new idea?
The data collected is archived, but the issue with “trigger bias” is much more serious. Data that doesn’t pass the trigger is not collected at all. In these experiments, the detector is producing data far more quickly than it can be collected, so an elaborate trigger is necessary to decide which data to collect.
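To make the point concrete, here is a toy sketch of why a trigger is unavoidable, and why its bias is permanent. This is purely illustrative: real LHC triggers are multi-level hardware/software systems, and the “missing energy” variable and threshold here are invented for the example.

```python
# Toy illustration of trigger bias: the detector produces events far faster
# than they can be recorded, so a trigger predicate decides which small
# fraction to keep. Everything it rejects is gone forever.
# The event model and threshold are invented for illustration.
import random

random.seed(0)

def toy_event():
    """A fake event with a random 'missing energy' (GeV), mean 20."""
    return {"missing_et": random.expovariate(1 / 20.0)}

def trigger(event, threshold=100.0):
    """Keep only events above a missing-energy threshold.

    Events below threshold are never written out, so a signal that
    happens to live below it cannot be recovered by later re-analysis.
    """
    return event["missing_et"] > threshold

events = [toy_event() for _ in range(100_000)]
kept = [e for e in events if trigger(e)]
print(f"kept {len(kept)} of {len(events)} events")
```

The point of the sketch is that the choice of `trigger` encodes a theoretical prejudice about where new physics lives; an unanticipated signal concentrated below the threshold is simply never collected.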
Don’t they ever consider collecting some amount of randomly sampled events? I suspect it could help to debug and validate the trigger, in addition to enabling tests for alternative theories.
They do all sorts of tests and checks: control experiments to quantify backgrounds, tests to quantify cosmic ray events which decay in the detector, lots of things. Why not ask someone on a big HEP collaboration? It does not have to be LHC; the same is/was done at the Tevatron and LEP, SLC, B-factories, etc.
concerning Tristes_tigres’ comment: yes, we devote a small fraction of the trigger bandwidth to recording min-bias and even zero-bias data, precisely for its value in calibration and checks of detector performance. Such data are also extremely useful for tuning models of the underlying event (ie, extra tracks and energy not directly related to the main interaction) which can spoil the construction of an “interesting” high-pT event. Aside from the zero- and min-bias data, we collect events with relaxed trigger requirements, in order to verify the main triggers. These so-called “monitoring” triggers are pre-scaled, which means we save some fraction 1/N of them, so as to keep the trigger rate under control.
People sometimes propose to look for new physics in these auxiliary triggers. The problem is that they have been heavily prescaled, so that the effective luminosity will be really very low. The new physics signal would have to have a large cross section in order to be discovered this way, which is why we experimentalists work so hard to devise a wide variety of triggers to collect relevant, rather than irrelevant, events.
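The prescale arithmetic above can be put in numbers. With a prescale factor N, only 1/N of the events passing the relaxed selection are saved, so the effective luminosity is L/N and the expected event count is σ·L/N. The cross section, luminosity and prescale below are invented for illustration, not actual ATLAS/CMS values.

```python
# Sketch of how a prescale factor dilutes the effective luminosity of a
# monitoring trigger. All numbers are invented for illustration.

def expected_events(cross_section_pb, luminosity_pb_inv, prescale):
    """Expected recorded events for a trigger prescaled by a factor N.

    Saving only 1/N of passing events is equivalent to collecting
    with effective luminosity L/N.
    """
    effective_lumi = luminosity_pb_inv / prescale
    return cross_section_pb * effective_lumi

# A 10 pb signal in 50 pb^-1 of data, main (unprescaled) trigger:
print(expected_events(10, 50, 1))     # 500.0 events
# The same signal seen only through a monitoring trigger prescaled by 1000:
print(expected_events(10, 50, 1000))  # 0.5 events -- effectively invisible
```

This is why, as the commenter says, only a signal with a very large cross section could be discovered in the heavily prescaled auxiliary streams.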
I think it’s pretty soon to make claims about supersymmetry not being seen, as the luminosity is still very low. But in any case, the complicated plots shown by ATLAS and CMS, with some new parameter space region being excluded, must be handled with caution. Normally a constrained model like minimal supergravity is used to set those limits, and it’s not clear at all, e.g., if light gluinos are really being excluded… I think the best thing you can say is that light gluinos are being disfavored, but the evidence is circumstantial, in the sense that it’s not certain you would have detected them if they were there. On the other hand, SUSY models have so many parameters that I can already see some new paper coming up saving light gluinos again. Eventually they will have to give up, but although you can exclude more regions, the theory is poorly falsifiable with such low statistics.
In any case I believe also that one thing that should not be forgotten, ever, is that even if SUSY is found and string people are delighted, we should never miss the point that we are still talking about a point particle theory (even with Michio Kaku lying to children on TV, saying he works with string theory, THE theory that predicts s-particles).
Supersymmetry will never be disproved. Phlogiston and the luminiferous ether were never disproved. Instead something else came along which could do a better job of explaining the facts. But the point is “the facts”: HEP is really hurting for experimental data beyond the Standard Model. In the 1950s and 60s there was plenty of unexplained data, and new discoveries every year. This just isn’t happening now. A “particle desert” at the LHC will make the future of experimental HEP really bleak. People will not give up on supersymmetry or string theory just because no sparticles are found. Something needs to be FOUND, and then we shall see how well the various theories explain it.
You might find the book ‘Laboratory Life’ by Latour and Woolgar interesting.
I say this because I understand your ‘worry’ to mean that you are concerned that experiment is being used to construct facts rather than observe them. Something the authors discuss.
Perhaps in this case you feel that the process is overly constructive of facts by means of excessive selection of data.
Anyway, my goal was to point out the literature and show its relation to the present discussion. I cannot possibly make a qualified comment.
Perhaps this link may be helpful (Cormac O’Raifeartaigh) http://coraifeartaigh.wordpress.com/
Let’s not forget what has been pointed out many times already, namely, that a signal in CMS or ATLAS for physics beyond the standard model will be difficult or even impossible to ascribe to SUSY per se. The reconstruction of the final states is difficult and it will take a while before any pattern points to SUSY as opposed to extra dimensions or Little Higgs models, among others. Some versions of low-energy SUSY might be more-or-less ruled out by CMS and ATLAS, but they cannot be unambiguously discovered.
I have no doubt that HE experimental teams are thoughtful and know what they are doing. What I was concerned with is the possibility that the motivation to get results ASAP gets in the way of testing alternative physics ideas by some outsider later. If we indeed hit the “particle desert”, I hope they will store some minimally filtered data, just in case. Although, I imagine, someone not involved in the detector design won’t find it easy to make use of the raw data.
Thanks for the explanations. You make clear the problem: almost certainly not enough possible luminosity in “unbiased” data to find evidence for something new.
Thanks, but I should make clear that the “selection of data” effect here is not “excessive”, it’s forced upon the experimentalists by the overall limits on how much data they can collect. They’re well aware of the problem and devote a lot of effort to figuring out how to deal with this. I’m not expert in this subject, so I don’t understand all the choices they’re making or their implications. My “worry” is just that these choices may be being overly influenced by theoretical models (supersymmetry and large extra dimensions) that have gotten a lot more attention than they deserve.
There’s been some debate in the past about whether these kinds of experiments should make their data public for others to try and analyze. Given the complexities involved, I’m rather dubious that outsiders could do this in any reliable way. In any case, my guess is that the LHC is going to be the only experiment of this kind for a very long time, and also will run (with upgrades) for a very long time. The groups working there are extremely large and will be doing this for many, many years. After the first few years, unless they’ve got some very exciting new physics already to analyze, I’m pretty sure that many of the physicists there will be quite interested in pursuing any new ideas about how to find something unexpected in the data.
This just appeared on CNN
I suppose the nattering nabobs of negativism will complain about the statement near the end “finding … the Higgs boson … would explain gravity” or that the scientists said they would be happier if the Higgs is not found (?). But really, it is an effort to convey what is happening in modern science (expt HEP anyway). The stuff about 4 trillion degrees (Celsius or Fahrenheit doesn’t matter) and primordial soup/early Universe is not bad at all.
I’ve always wondered – to what extent does Witten’s proof of the positive mass theorem depend on supersymmetry being realized in the physical world?
The great thing about mathematical proofs is that they don’t depend at all on the physical world. Witten’s argument about the positive mass theorem has nothing to do with whether a supersymmetric theory describes nature.
More generally, if the MSSM doesn’t appear at the LHC, this doesn’t affect in the slightest any of the many interesting mathematical applications of supersymmetry.
Pingback: Implications of Initial LHC Searches for Supersymmetry « Not Even Wrong
Thank you for your response Peter.