The End of LHC Run 2 and the Road Ahead

Some experimental HEP news items:

  • Since 2015 the LHC experiments have been taking data from proton-proton collisions at 13 TeV. This is “Run 2” of the LHC; “Run 1” ran at the lower energies of 7 and 8 TeV. The proton-proton Run 2 ended this morning, with the LHC shifting to other tasks, first machine development, later heavy ions. It will shut down completely in December for the start of “Long Shutdown 2 (LS2)”, which will last for over two years, into early 2021. During LS2 there will be maintenance performed and improvements made, including bringing the collision energy of the machine up to the design energy of 14 TeV.

    ATLAS is reporting 158 inverse fb of collisions delivered by the machine during Run 2, of which 149 inverse fb were recorded; the CMS numbers should be similar. Most data analysis reported to date by ATLAS and CMS has used only the 2015 and 2016 data (about 36 inverse fb), although a few results have included data through 2017 (about 80 inverse fb). My impression is that for many searches they have been waiting for the full Run 2 dataset to be available. Perhaps results of searches with the full dataset will start becoming available by the time of the summer 2019 conferences.

    LHC Run 3 is planned for 2021-2023, producing perhaps 300 inverse fb of data, with results perhaps available in 2024. It will thus be quite a long time after Run 2 results start appearing before better ones, due simply to more data, become available.

  • The Europeans are now starting a process that will lead to an update of the European Strategy for Particle Physics. Tommaso Dorigo has a blog post here, and there’s a website here. A first stage of this process will ask for community input, with a deadline of December 18, via a portal that will open November 1. The next stage will be an Open Symposium to be held May 13-16 in Granada.
  • This week there’s a Workshop on Future Linear Colliders being held at the University of Texas at Arlington. The big question being discussed there is whether the Japanese will decide to go ahead with a plan to build the ILC, a 250 GeV linear electron-positron machine. The current situation is described in detail here, with the crucial next step being a decision from the Science Council of Japan expected by the end of November. If the ILC project does go forward, a tentative schedule has construction beginning in 2026 and commissioning in 2034.
  • For a theorist’s recent take on future colliders, see this from LianTao Wang. One thing Wang reports is a community study in particle theory being organized by Michael Peskin, described as an “excuse to have fun” (since it’s based on an unrealistic assumption), which would address the question “What would we learn from an electron accelerator of energy 10-50 TeV?”

34 Responses to The End of LHC Run 2 and the Road Ahead

  1. dsm says:

    “This week there’s a Workshop on Future Linear Colliders being held in Austin. ”

    Nope, University of Texas at Arlington

  2. Peter Woit says:

    dsm,

    Thanks! Corrected.

  3. Sabine Hossenfelder says:

    I just looked at the Wang slides. Well, it seems that now you need 100 TeV to probe naturalness, after the LHC ruled it out at 10 TeV. And are they seriously still talking about the WIMP miracle?

    You know, the most frequent question that journalists ask me about my book is: What’s the reaction of physicists in the fields that you have criticized? The answer is: none. They keep on doing exactly the same thing that hasn’t worked for 30 years. The Wang slides are a good demonstration of this utter lack of self-reflection.

    It really shouldn’t surprise me. I mean, the reason I wrote the book is that I have given up hope this community is able to correct its ways. Still I continue to be stunned by just how unscientific their procedures are. They have the data IN THEIR FACE. The data scream: “It’s not working. Naturalness doesn’t work. The WIMP miracle doesn’t work. There’s nothing to see here, move on!” But no one is listening.

    All this obsession with numerical coincidences is bad math. It’s wrong, and not even for particularly deep or interesting reasons. That a scientific community so large continues to use arguments that are not only wrong but clearly don’t work worries me considerably. Not so much because of the mass of gluinos or swhatever (to borrow Lee’s joke), because who really cares. It worries me because if this can happen in one scientific community, it can happen in others as well. Just that in the other cases I wouldn’t be able to tell what’s going on.

  4. Amitabh Lath says:

    Sabine, I am a little uncomfortable with statements like “LHC ruled it out at 10 TeV”.

    Maybe we’ve ruled out the really obvious signatures like dilepton resonances and large missing momentum signatures. But R-parity conservation is not necessary (as you and others have pointed out elsewhere), and R-parity violation would make new physics really hard to find at the LHC (full disclosure, that’s my area of interest).

    Also, BSM particles could be long-lived. Our tracking code is designed to find prompt tracks reasonably well, but displaced tracks? Not as good. It’s an area of concern.

  5. jls says:

    Sabine, saying people aren’t listening to the data is frankly ridiculous. Of course there are still people pushing SUSY + WIMP DM. But the main energy in the field right now is moving to other ideas, especially axion DM, other possibilities for DM, and qualitatively new ideas to address the hierarchy problem.

  6. Peter Woit says:

    Re Sabine’s comment and responses,

    I think she’s just properly reacting to Wang’s claim on slide 23 that “Naturalness is the most pressing question of EWSB”. The LHC results so far have disconfirmed the heavily promoted argument that “The naturalness problem means that BSM physics will show up at EWSB scale, so definitely by the TeV scale”. Reacting to this by changing your old argument to the new argument “The naturalness problem means that BSM physics will show up at EWSB scale, so definitely by the 10 TeV scale” is not a good idea.

    I do think, though, that the LHC results have had real impact, even if there are some theorists who don’t want to give up arguments they have been so comfortable with. Pre-LHC we saw a lot of claims about extra dimensions and black holes showing up at the TeV scale. Those quickly disappeared once data came in, and I haven’t seen any significant attempt to justify a new machine by invoking such things. The negative SUSY results have had a big impact: I see a lot less about SUSY these days, and it’s not a dominant topic used for justifying a next generation collider. The LHC searches now seem to be looking at a much wider range of possibilities than just SUSY signatures.

    Mostly Wang and others at these workshops discussing the future seem to me to be sensibly concentrating on the topic of investigating in detail the physics of the Higgs, which is something we know is there, and which we know new machines could in principle study better than the LHC. Unless something new comes out of the LHC data that provides a compelling target for a new machine, questions about how good such a machine would be at studying the Higgs will be the dominant ones.

  7. tulpoeid says:

    Just an addition / correction:

    “The LHC searches now seem to be looking at a much wider range of possibilities than just SUSY signatures.”

    This has been the case since the start (speaking of the two large experiments). Dozens of different signatures and theories have been consistently investigated since day one of work with simulated data, and are regularly updated. In fact these form the majority of LHC searches in sheer number of different analyses, although of course SUSY makes sure that its own number of sub-analyses proliferates.

    At the same time, sure, SUSY-ists have been too vocal and have had an occasional real impact on the resources available to LHC analyses. (Which search gets priority in computing time is a very, very real decision, and the experiments’ heads define clear priorities every few months.) But I don’t think that there is a shift in the actual choice of work with respect to previous years (which is both a good … and a bad thing).

  8. Amitabh Lath says:

    Thanks tulpoeid, I was about to say that non-SUSY searches have been there from Run zero, but major conferences are probably paying a little more attention now.

    Also, just because an experimental result is cast as SUSY does not mean it’s totally model-dependent. For instance, if you want to search for a strong resonance decaying to 3 partons, you need some model to calculate acceptances and such. In the past you could have simulated a techni-rho going to 3 quarks via an intermediate techni-pion. But nowadays the best available simulation is an RPV gluino decaying via an intermediate squark. The detector acceptance is probably not all that different, but a “model independent” search just became a “gluino search”.

  9. David Appell says:

    I really dislike the unit “inverse fb,” but it’s more palatable, I think, if written as 158/fb.

  10. Matt Grayson says:

    Indeed. Miles per gallon has the same units as inverse fb. We should use MPG instead. The conversion factor, if I didn’t make a silly mistake, is 1 inverse fb = 2.35215 × 10^37 MPG. Better? Inverse acres works as well. I’ll leave the conversion factor as an exercise.
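    A minimal sanity check of this conversion, as a sketch in Python (assuming the US gallon of 3.785 L; the imperial gallon gives a number about 20% larger):

      # Dimensional check: both fb^-1 and miles per gallon are inverse areas.
      FEMTOBARN_M2 = 1e-43            # 1 fb in m^2
      MILE_M = 1609.344               # 1 mile in m
      US_GALLON_M3 = 3.785411784e-3   # 1 US gallon in m^3

      inverse_fb = 1.0 / FEMTOBARN_M2        # m^-2
      one_mpg = MILE_M / US_GALLON_M3        # m^-2

      print(f"1 inverse fb = {inverse_fb / one_mpg:.5e} MPG")  # ~2.35215e+37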

  11. Amitabh Lath says:

    I would not be so hard on LianTao. What he is doing is ok*. He is plugging the next big collider by saying we haven’t found what we know has to be there, so let’s go to the next step. I never understood naturalness so can’t comment on its appropriateness, but the LHC cannot be our last word in exploring fundamental interactions. We can talk about hadron vs. electron vs. muon, but a civilization that has enough excess wealth to put a sports car in solar orbit for grins and PR shouldn’t be arguing about the affordability of machines like these.

    And before you say “but wouldn’t those funds be better used doing research X”, if the demise of the SSC taught us anything it’s that funding is not a zero-sum game. There was no $8 billion bump in {{insert your favorite research here}} because the SSC got canned.

    *except the use of Comic Sans, that is inexcusable.

  12. Peter Woit says:

    Amitabh Lath,
    Personally I do hope we’ll see a next generation collider, and besides the argument for better understanding the Higgs, I think it is worth doing simply to see what’s there at a higher energy range, even if it turns out there’s nothing new.

    I share, though, Sabine’s reaction to some of the arguments being made: they were not good arguments about what to expect at the LHC, they failed conclusively there, and they should now be allowed to rest peacefully underground. The case for LHC-scale SUSY was always a bad one (105 new parameters to explain nothing?); it’s an even worse argument for the next generation.

  13. Steven Patenaude says:

    One slide had this as a bullet point:
    “We are at a special historical juncture. About to make the next step beyond the Standard Model.”

    I know part of the reason for the presentation is to build excitement for the next project. Given that, for a layman, is there something particularly special about the next order-of-magnitude power increase? The lead up to the LHC was exciting because of the possibility of finding the Higgs and being able to test some important beyond-Standard Model theories.

    (I was slow to post. You have already answered for the most part.)

  14. Sabine Hossenfelder says:

    Amitabh,

    If you are uncomfortable with it, you don’t know what I am talking about. The story has been that some new physics (particles or extra dimensions or the like) has to show up close to the Higgs mass, because otherwise the standard model is not natural, and that shouldn’t be the case. This criterion has been proved useless: the Higgs mass is unnatural, period, according to the very measures of naturalness that folks have used in this area. It doesn’t matter if there is something else lurking in the data still to be analyzed; we already know that all the “predictions” based on this idea of naturalness were wrong.

    If you don’t know what I am talking about, please read my book. I’ve made a lot of effort collecting references and quotes from people who now mostly pretend they never said what they said or in any case would rather not be reminded of it. For a brief summary, you may want to look at this. Or, in case you have a problem because the statement comes from me in particular, read this.

    That they now try to move “tests of naturalness” to 100 TeV is patently ridiculous. The honest thing to say would be that naturalness turned out to be a useless criterion, in which case, let’s please stop talking about it. And don’t get me started on people who are now trying to come up with other measures of naturalness according to which the standard model would still somehow be natural.

    Now, look, I don’t care all that much about naturalness. I just pick on this because it’s such a clear illustration of how badly knowledge discovery in this community works. If anyone had cared to look at the literature carefully, they would have known it was a bad criterion 20 years ago. This would have prevented tens of thousands of useless papers, and one might hope that maybe theorists would have come up with something better. Not only did this not happen, they now refuse to learn from their failure. This demonstrates that the self-correction that science relies on so heavily is just broken. It’s not working. Someone, somewhere, should really do something about it.

  15. Sabine Hossenfelder says:

    jls,

    You write: “Sabine, saying people aren’t listening to the data is frankly ridiculous. Of course there are still people pushing SUSY + WIMP DM. But the main energy in the field right now is moving to other ideas, especially axion DM, other possibilities for DM, and qualitatively new ideas to address the hierarchy problem.”

    They are “listening” to the data to the extent that they have to. Since experiments haven’t found anything, they can’t go around any more and pretend those experiments will soon find it.

    But your comment just illustrates the very problem I am talking about: they keep doing the same thing! It’s still SUSY, it’s still WIMPs and axions and trying to solve other problems that don’t exist. SUSY was supposed to be at a TeV because of naturalness (i.e., a numerological argument). WIMPs were supposed to be there because of the WIMP miracle (also a numerological argument). The original axion was supposed to solve another fine-tuning problem (the strong CP problem – also a numerological argument). Since the original axion was ruled out in the 70s, the present ones are already a fix accounting for an earlier failure. The hierarchy problem itself is yet another numerological problem.

    The data demonstrate those arguments are not working. Yet you think it’s totally okay to keep using them. You and some thousands of other people who have learned nothing.

  16. Anon says:

    I think it is good for the physics community if not everyone has the same views. It is good for one set of people to still try to look for naturalness in exotic parts of parameter space (probing R-parity violating SUSY, etc.), while another set accepts that the evidence is already compelling that the Higgs mass has failed the naturalness test, and does its thinking and research assuming there is 1% or worse fine-tuning.

    Also, I think the LHC has implications only for the naturalness of quadratic divergences (the Higgs mass). The naturalness of the dimensionless strong CP phase that Sabine Hossenfelder refers to is a totally different issue (and the axion is just one approach to it; parity or left-right symmetry is another).

    In general there is enough evidence that naturalness-based arguments actually work in physics, science, detective work, etc. LHC results cannot lead us to abandon, for example, naturalness arguments based on electron and neutron EDM experiments; those arguments are important for testing models/BSM ideas at even higher scales than the LHC and future colliders can probe.

  17. Peter Woit says:

    Steven Patenaude,
    Unfortunately, whatever indirect evidence we might have for something beyond the SM doesn’t point to any particular energy scale, and in particular not the energy scale just above what the LHC can probe.

    It’s understandable that both those trying to get funding for a new machine and those who will work on such a project should take an optimistic view. At the same time the LHC story has shown the danger of people getting too involved with dubious models that don’t really work or explain anything, just because these give hope for something observable at the required energy scale. Propaganda to the outside has an unfortunate way of blowing back.

  18. Amitabh Lath says:

    Sabine, I do know what you are talking about. I am familiar with theorists hanging on to pet theories well after the sell-by date. My dissertation was a precise measurement of the weak mixing angle that basically killed Technicolor, but they kept putting makeup on that corpse for several years after.

    The same thing happened with my advisor Henry Kendall and colleagues in the late 1960s. They found evidence of substructure inside the proton which severely contradicted Vector Meson Dominance, but VMD kept happily chugging along for a long time.

    Look, I don’t have a dog in this intra-theorist fight. What I am afraid of is the collateral damage to experimental physics. You may not be saying this exactly, but someone reading could conjecture that the LHC has ruled out any new physics up to the Planck scale, so let’s pack up and go home. You have a following among young people. They like your give-no-effs style. We need them to work on the clever new hardware and analysis to pull a BSM signal out of the muck. This won’t happen if they come to believe that fundamental physics is dead.

    PS: I’ve asked the university library to get copies of Lost in Math.

  19. Anon says:

    Amitabh,

    You wrote “….ruled out any new physics up to the Planck scale…”

    I would basically just say that null results from LHC/electron EDM experiments leave us with no strong reason to believe that there is new physics at the multi-TeV scale. There is no good theoretical reason either for new physics to be at the multi-TeV scale.

    The neutrino mass data provide strong hints for new physics below 10^15 GeV. Further, if we assume the unknown dimensionless parameters are ~1, this physics will kick in closer to 10^14 GeV than to a few TeV.

    Don’t know if this would demotivate experimentalists, but this seems to be the situation.

  20. Peter Woit says:

    Anon,
    Pre-LHC, null results from LEP and EDM experiments indicated there were no good reasons to expect new physics other than the Higgs (or something like a Higgs that played the same role). Theorists made various bad arguments promoting unpromising SUSY, extra-dimensional models, etc., claiming they were likely to turn up at the LHC (many of them even bet money on this). I don’t see any reason to go from the mistake of announcing that our understanding of what lies beyond the SM implied new physics at the LHC to the opposite mistake of saying that our understanding of what lies beyond the SM implies no new physics at a higher-energy collider. We simply don’t know what lies beyond the SM, whether it has to do with neutrino masses, dark matter, or perhaps unexpected physics of the Higgs field. Acting as if we do know about this and discouraging anyone from looking is not a good idea.

  21. Anon says:

    Hi Peter,

    I think at the LHC there was a strong theoretical reason to expect new physics — naturalness of the Higgs mass/the hierarchy problem meant that SUSY, or some new physics which addresses the hierarchy problem, would be at the TeV scale that the LHC would explore.

    Post-LHC many people believe that this naturalness argument is ruled out, or doesn’t work for the Higgs mass, and we are talking about colliders at the 30 or 100 TeV scale (multi-TeV scale). For many people the LHC may have been the last hope for natural SUSY. Some believed that natural SUSY should have been found at LEP itself. Post-LHC the Higgs mass fine-tuning is at the 1% level. Pre-LHC it was probably at the 10% level.

    There may still be some pushing for a higher-energy collider on naturalness grounds (I think Nima would like to verify fine-tuning of the Higgs mass to the 0.01% level via a 100 TeV collider), but for me 1% is enough to give up the idea of naturalness of the Higgs mass.

    Agreed, many things could be there at the multi-TeV scale or higher scales: SUSY, heavier fermions, etc. All I said was that there is no strong theoretical or experimental reason for anything to be there at the multi-TeV scale either. I personally think this need not discourage experimentalists. There is no theoretical assurance of, or strong bias towards, any discovery at the multi-TeV scale, but things can be there — they are not theoretically ruled out either.

    Had the neutrino masses pointed to TeV or multi TeV scale physics, that would have been something.

    Anyway maybe we differ in our beliefs.

  22. Amitabh Lath says:

    Anon, the era of precise roadmaps to new physics is fairly recent, starting with the W/Z discovery at the SppS, the top quark at the Tevatron, and most recently the Higgs at the LHC. Before that, our ancestors were sailing without charts, not knowing what they would find. We are back to that previous era of wide-open searches, as opposed to a known signal where theory gives you production, decay, and everything else but the mass.

    Demotivating? Maybe for some. I find it exhilarating, frankly.

    PS: the electron EDM limits only rule out CP violating new physics.

  23. Niclas Granqvist says:

    There’s a considerable chance that the HL-LHC will point us in new directions. No reason to give up on particle physics right now. I am optimistic that in some 30 years we will have a more fundamental understanding of the world. It will take more experiments to work things out.

  24. If the analysis of the Run 2 data confirms the flavour anomaly in B mesons that Run 1 has consistently been seeing at around 3 sigma, this seems likely to push the statistics of B meson flavour anomalies beyond 5 sigma and thus reveal new physics not through direct scattering processes, but through precision measurement of loop corrections. (here)

  25. Peter Woit says:

    Anon,
    I’ve often written about what’s wrong with the “hierarchy problem” (in short, it’s based upon making assumptions about what happens at higher energies that there is no evidence for, motivated by trying to get unsuccessful speculative ideas like SUSY to work). As for Arkani-Hamed and his vigorous promotion of the importance of naturalness, you should keep in mind this quote from him:
    “It’s important for me while I’m working on something to be very ideological about it. And then, of course, it’s also important after you are done to forget the ideology and move on to another one.”
    See
    https://www.math.columbia.edu/~woit/wordpress/?p=8002
    That “naturalness” has anything to do with the Higgs was a dubious ideology to begin with, now it’s a failed dubious ideology. Presumably Arkani-Hamed is now moving to a different ideology (the multiverse did it?), but whatever it is, one should keep in mind his quite self-aware quote.

    Actually, if you believe in “the multiverse did it”, I suppose you could argue that now that we’re moving past the electroweak scale, we’re entering energy ranges of no anthropic relevance at all. So, we expect things to just be completely random: all sorts of new particles and forces galore!

  26. Anon says:

    Peter, if there are two mass scales in the theory then naively there is a hierarchy problem; sometimes one can see this at tree level itself, sometimes at loop level.

    Maybe it is a non-problem, but it is not clear to me why it would be a non-problem, i.e., what is wrong with the naive analysis.

  27. It seems striking, and unlikely to be a coincidence, that the Higgs mass and potential come out sitting right there on the metastability curve (here). (As opposed to the naturalness argument, this is a numerical coincidence that does not depend on an arbitrary choice of renormalization scheme.)
    If anything, it’s that fact which would suggest some deeper principle worthy of investigation.

    The Higgs metastability has been advertised as a hint of “asymptotic safety”, but I am not sure if closer inspection of the data justifies this (here).

    Investigation of the implications/meaning of the near-criticality of the Higgs exists, but is rare (e.g. He-who-must-not-be-named et al., arXiv:1307.3536).

    Brave G. Kane dares to suggest that it’s actually this metastability which is a clue (“clue 4” in arXiv:1802.05199) for low-energy SUSY. He may be wrong, but it is better than “naturalness”.

  28. Peter Woit says:

    Anon,
    But, besides the electroweak scale, what is the other mass scale causing this supposed problem? The GUT scale? But there is zero evidence for a GUT. The Planck scale? But we know nothing at all about this. What people have been doing is postulating unsuccessful purely speculative models with these scales, then announcing that there is a “problem” due to the smallness of the electroweak scale.

  29. Anon says:

    Hi Peter,

    In one of your replies on this thread I think you supported the idea of exploring the multi-TeV scale through a higher energy collider as new physics could be at that scale — say something that would need a 30 TeV or 100 TeV machine. So that scale — say 30 TeV — is then the second scale right? If we are to plan experiments to probe that scale, or start working out consequences of things that could be at that scale so that the experimentalists could look for those signals, we will see that there is a hierarchy problem in our calculations.

    If you have the point of view that there is no second scale, and so there is no hierarchy problem, essentially you are saying it is just the standard model with Dirac neutrinos all the way up to the Planck scale — then why continue with experiments that probe higher energy scales beyond the LHC? (see ** below)

    If the neutrinos are Majorana particles (as most BSM theories assume), then there is a second scale, the seesaw scale < 10^15 GeV that the neutrino masses point to.

    ** You had written: " I don’t see any reason to go from the mistake of announcing that our understanding of what lies beyond the SM implied new physics at the LHC to the opposite mistake of saying that our understanding of what lies beyond the SM implies no new physics at a higher-energy collider. We simply don’t know what lies beyond the SM, whether it has to do with neutrino masses, dark matter, or perhaps unexpected physics of the Higgs field. Acting as if we do know about this and discouraging anyone from looking is not a good idea."

  30. Peter Woit says:

    Anon,
    The Higgs expectation value is 250 GeV, and the LHC is giving us limits on new particles that are roughly 2 TeV if they’re strongly interacting, less than 1 TeV if they have no strong interactions. So it’s not even getting to an order of magnitude above the Higgs expectation value. I still don’t think you could describe a next generation proton-proton machine at 2-7 times the energy as probing a new energy scale, one with a hierarchy problem with respect to 250 GeV. The same goes for a next generation lepton collider.

    I have absolutely no idea if there’s another relevant energy scale above the electroweak one, and I don’t think there’s a good argument either for or against one. Theorists just don’t have anything solid to say about this (seesaw models are not a solid argument), which is why experimentalists should try and go look.

  31. Anon says:

    Hi Peter,

    An important thing you may have missed is that it is the square of the Higgs mass that enters the Lagrangian — so the ratio of (2.5 TeV)^2 to (250 GeV)^2 is already a factor of 100. Increase this by factors of 4 to 49 (i.e. (2)^2 to (7)^2 rather than 2 to 7) and you begin to see that the ratio of the new (collider scale)^2 to (weak scale)^2 is a factor of 400 to 4900.

    There is an undeniable hierarchy of scales.

    In fact this is the exact reason why SUSY (as a solution to the hierarchy problem) is in trouble: post-LHC we are already so far above the weak scale that it cannot be saved.

    If you are saying the next scale to be probed by the higher-energy collider is pretty much the same as the weak scale, then you are essentially providing a hierarchy-problem argument for the next collider, and succumbing to the “physics is around the corner” kind of ideology. Of course the argument is incorrect, as you need to look at the square of the scales.
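    A quick numerical check of the ratios quoted above, as a sketch in Python (assuming the rough figures used in this thread: 250 GeV for the electroweak scale and 2.5 TeV for the current LHC reach):

      # Ratios of (scale)^2, using the rough numbers from this thread.
      vev = 250.0          # electroweak scale in GeV (Higgs expectation value)
      lhc_reach = 2500.0   # rough current LHC limit in GeV

      print((lhc_reach / vev) ** 2)                  # 100.0
      for factor in (2, 7):                          # a next machine at 2x to 7x the reach
          print(((factor * lhc_reach) / vev) ** 2)   # 400.0 and 4900.0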

  32. Peter Woit says:

    Anon,
    Yes, I’m just saying that the LHC + LEP will not have fully explored the weak scale, which is one argument for building something higher energy that will more fully do so.

    The “hierarchy problem” is basically just the quadratic sensitivity of the Higgs mass to the cut-off scale. Maybe this is a non-problem, maybe it’s telling us something, but we don’t know what (that it was telling us “you need a horrifically complicated extension of your theory with a hundred or more extra parameters” was always implausible). Either way, it’s irrelevant to the question of whether to explore higher energies or not. I don’t think it can be used as an argument for why or why not to build a new machine.

  33. Chris Oakley says:

    Just to note that expressing the luminosity recorded by ATLAS in miles per gallon would be a good way of showing the great value the public is getting.

    158 inverse femtobarns = 158 × 10^43 m^-2 × (4.546 × 10^-3 m^3) / (1609 m) = 4.464 × 10^39 MPG (using the imperial gallon). Even my super-eco hybrid does not come close.

  34. Anonyrat says:

    CERN writing guidelines:
    https://writing-guidelines.web.cern.ch/entries/inverse-femtobarn

    ….One inverse femtobarn corresponds to approximately 100 trillion (10^14) proton-proton collisions.
    ….
    Note: Do not use inverse femtobarn in the public section of the website where it can be avoided – it is unnecessarily technical. Convert to approximate numbers of collisions instead.
