GAMBIT

The LHCP 2017 conference was held this past week in Shanghai, and among the results announced there were new negative results on SUSY searches, with both ATLAS and CMS now reporting, for instance, limits on gluino masses of around 2 TeV. The LHC has now ruled out the existence of SUSY particles in the bulk of the mass range that will be accessible to it (recall, for instance, that pre-LHC gluino mass limits were about 300 GeV or so).

Over the years there has been an ongoing effort to produce “predictions” of SUSY particle masses, based on various sorts of assumptions and various experimental data that might be sensitive to the existence of SUSY particles. One of the main efforts of this kind has been the MasterCode collaboration. Back in 2008 before the LHC started up, they were finding that the “best fit” for SUSY models implied a gluino at something like 600-750 GeV. As data has come in from the LHC (and from other experiments, such as dark matter searches), they have periodically released new “best fits”, with the gluino mass moving up to stay above the increasing LHC limits.

I’ve been wondering how efforts like this would evolve as stronger and stronger negative results came in. The news this evening is that they seem to be evolving into something I can’t comprehend. I haven’t kept track of the latest MasterCode claims, but back when I was following them I had some idea what they were up to. Tonight a large collaboration called GAMBIT released a series of papers on the arXiv, which appear to be in the same tradition as the old MasterCode fits, but with a new level of complexity. The overall paper is 67 pages long and has 30 authors, and there are eight other papers totaling over 300 pages. The collaboration has a website with lots of other material available on it. I’ve tried poking around there, and for instance reading a Physics World article about GAMBIT, but I have to confess I remain baffled.

So, the SUSY phenomenology story seems to have evolved into something very large that I can’t quite grasp anymore; perhaps a kind reader expert in this area can explain what is going on.


20 Responses to GAMBIT

  1. Mitchell Porter says:

    It just seems to be a software library that allows you to do parameter fitting and likelihood estimation, for a variety of BSM field theories, in a standardized way.
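
    Schematically, a fit of this kind just combines a bunch of experimental likelihood terms over a model’s parameter space and looks for the point that maximizes the total. A toy sketch in Python (invented observables and numbers, nothing to do with GAMBIT’s actual interface):

        import numpy as np

        # Toy "global fit": scan a two-parameter model and combine independent
        # Gaussian likelihood terms from several hypothetical observables.

        def predictions(m1, m2):
            # Made-up mapping from model parameters to observables.
            return {"obs_a": m1 + 0.1 * m2, "obs_b": np.sqrt(m1 * m2)}

        # Measured central values and uncertainties (invented for illustration).
        data = {"obs_a": (1.2, 0.3), "obs_b": (0.9, 0.2)}

        def log_likelihood(m1, m2):
            pred = predictions(m1, m2)
            return sum(-0.5 * ((pred[k] - mu) / sigma) ** 2
                       for k, (mu, sigma) in data.items())

        # Brute-force grid scan; real codes use MCMC or nested sampling instead.
        grid = np.linspace(0.1, 3.0, 200)
        lnL = np.array([[log_likelihood(a, b) for b in grid] for a in grid])
        i, j = np.unravel_index(np.argmax(lnL), lnL.shape)
        print("best-fit point:", grid[i], grid[j])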

  2. Peter Woit says:

    Mitchell Porter,
    I can see that much. What I don’t understand is what is new about this, and how it differs from previous efforts like MasterCode. What will this do that MasterCode didn’t, and why do the assumptions built into it seem to be of a higher level of complexity than MasterCode’s?
    I guess part of what I don’t understand is this: I would have expected that, as stronger and stronger LHC bounds rule out more and more of these kinds of models, people would lose interest in this kind of thing, whereas instead we seem to be seeing a larger and larger group of people working on it.

  3. Ryan says:

    Yeah and apparently these efforts are well received by the pheno community. Here are some random tweets that popped up in my timeline:

    https://twitter.com/HEPAdelaide/status/867258318770683904
    https://twitter.com/Tristan_duPree/status/867259714496757760
    https://twitter.com/suchi_kulkarni/status/867273017474375680
    https://twitter.com/SaschaCaron/status/867301364904456193

    Maybe the last tweet sums up the mindset behind this kind of work: “Yes, nice to see people moving to more complex models.”

    No idea what’s nice about “more complex models”. However, from a naive perspective it seems to make sense that the new bounds require more effort on the “model builder” side, and thus more complex fitting codes…

  4. Peter Woit says:

    Ryan,
    Thanks. That captures part of what is confusing me here. What is the reason for “more complex models”?

  5. vmarko says:

    Peter,

    I think you are right to expect people to lose interest in this stuff, but it feels like you are a bit ahead of your time. Losing interest will happen eventually. It’s just that some people find it hard to give up, and SUSY models will take some more time to drop out of fashion, IMO.

    Best, 🙂
    Marko

  6. Peter Woit says:

    vmarko,
    Yes, but that’s not what I find hard to understand about this project. Looking at their papers, I see as output various computed “likelihood” profiles for quantities of interest, but I find it hard to figure out what assumptions go into these and what their significance is. If the LHC tells you the gluino is above 2 TeV, what is the significance of a likelihood profile for where the mass is supposed to be above 2 TeV? More precisely, take a look at page 8 of this presentation
    https://indico.cern.ch/event/571190/contributions/2377454/attachments/1387436/2112013/Kvellestad_GAMBIT_LHC_recast.pdf
    about GAMBIT, explaining their take on how to perform a “global fit”. There seem to be so many assumptions and so much complexity built into this fit that I have no idea what its significance would be.

    I’d rather not have a discussion of the usual issues of SUSY sociology; I’m curious about this new development.

  7. Ben says:

    GAMBIT and MasterCode have overlapping capabilities – they’re two different collaborations working on similar things. One difference is that GAMBIT is open source and seems to have everything up on GitHub. To some people this is a big deal. It maybe also incorporates a few extra codes that MasterCode doesn’t, but that’s less clear to me.

    I would usually take “more complex models” to mean more complexity in the phenomenological modeling and likelihood computations, not necessarily more complexity in the fundamental theory. Usually when a theorist comes up with something new, the first thing they do is a quick first-order calculation of what parameters are allowed. But this usually doesn’t include a deep understanding of experimental systematics, cross-experiment effects, etc. That’s where these sorts of codes come in.

    Disclaimer: I’m not involved with either project.

  8. Jake says:

    Peter,

    do you know the essay “Effective quantum field theories” by Georgi in “The New Physics” edited by P. Davies, where he writes about “how theoretical particle physics works as a sociological and historical phenomenon”?

    It’s from 1989, but I only recently stumbled upon it. I think it describes nicely the situation nowadays and it helps to understand what people are currently doing in particle physics. I quoted it, and wrote about it here: http://jakobschwichtenberg.com/making_sense/ (at the end of the post)

    The most relevant part is probably: “During such periods, without experiment to excite them, theorists tend to relax back into their ground states, each doing whatever comes most naturally. As a result, since different theorists have different skills, the field tends to fragment into little subfields.”

  9. Vognet says:

    Peter, as an analogy, Gfitter and other groups were doing similar fits for the standard model long before the Higgs was discovered, and such fits gave best fit points for a standard model Higgs mass (and other best fit points could be obtained for a non-standard Higgs) that kept moving up and up as they were excluded. Now imagine a world where the limitations of technology only allowed you to go up 5 GeV at a time after passing 90 GeV, and the construction of such a machine took a decade each time. Each decade you would get discouraged that the Higgs didn’t show up around the corner yet again, and eventually blogs like yours would start saying we had it all wrong, the arguments for the Higgs being light were nonsense, updating these fits is pointless, and we should just accept what experiment is telling us. But discouragement is a human prior, placed on a human timescale and dependent on human technology; in reality the Higgs mass will be what it is independently of this. In our actual world, that it “only” took us almost half a century to discover the Higgs is fortunate, but it didn’t have to be so, and the arguments for why the Higgs had to be there would have remained the same as they were from the start, regardless of the accumulation of null searches.

    Now before you say the situation was different with the no-lose theorem guaranteeing some Higgs discovery, I am not talking about the motivation for building a higher energy collider, I am talking about the motivation for a fit to a model beyond what is currently known, to get a statistical inference on what the indirect constraints on such a model are. This is how science proceeds in the “confronting theory with data” phase, and in the hypothetical world where we go up 5 GeV every decade it would still be motivated to update fits for not just the standard model Higgs, but all kinds of non-standard Higgs models, and even Higgsless models, because we still wouldn’t know what lies at those energies up there, and what lies there doesn’t care that naysayers are getting impatient and tired of seeing updated fits with best fit points that keep moving up.

    As for GAMBIT, it’s just a tool for doing the kind of statistical inference that is routine in all areas of science, from biology to astrophysics and particle physics; I’m not sure what your bafflement at this is about. It’s as if, years ago, Stephen Wolfram had released a newfangled piece of software called Mathematica with ten example notebooks, one of which was a SUSY calculation, and you were similarly perplexed at why someone bothered writing a general algebraic package to do a calculation that a SUSY group had already done for their SUSY case, and, since you don’t care much for SUSY, your reaction was to ask what the point is.

  10. Peter Woit says:

    Vognet,
    I don’t really see the analogy with Gfitter, something I do more or less understand. There you essentially had a model with one undetermined parameter and lots more measurements than parameters, some of which were weakly sensitive to the undetermined parameter, so you’d expect to have some information that could be used to get a sensible “best fit” for that parameter. Gfitter was giving best fit numbers for the Higgs mass even without using the direct search exclusions (96 +31 −24 GeV in 2011), and if you dug around a bit you could see which measurements had some sensitivity to the Higgs mass and were giving non-trivial information, and at what point the SM fit would break down as direct search exclusions eliminated that range from the bottom (in 2011 the best fit using direct search exclusions was 120 +12 −5 GeV).

    What’s the analog of a GAMBIT “best fit” for the gluino mass? I understand that they are using direct search exclusions, but all those can do is tell me a lower bound on the gluino mass, not give a non-trivial “best fit” somewhere. What are the experimental measurements (other than direct searches) indirectly sensitive to the gluino mass that are giving non-trivial best fit information? I understand there must be some such information, but what I’m mainly saying here is that I can’t get this from their description of what they are doing, since it is so complicated and the number of undetermined parameters is high. In the case of MasterCode, with a quick look at their papers I could see that they were getting information like this from certain specific deviations from the SM, e.g. the muon g−2, and see its relation to a non-trivial best fit. A similar quick look at GAMBIT just leaves me perplexed about what exactly they’re doing, and wondering whether there’s anything going on here besides the direct search exclusion regions and artifacts of their assumptions and various choices.
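
    To spell out the difference concretely, the Gfitter case was schematically a one-parameter chi-squared fit of the following sort (toy numbers, not the real Gfitter inputs), and it’s not clear to me what the analog of this is in the GAMBIT setting:

        from scipy.optimize import minimize_scalar

        # Toy version of a Gfitter-style fit: one undetermined parameter
        # (here, log10 of the Higgs mass) enters several precision observables
        # weakly; minimizing the total chi^2 gives a non-trivial "best fit".
        # All numbers below are invented for illustration.

        def chi2(log_mh):
            # Each entry: (measured value, uncertainty, prediction as a function of log_mh)
            observables = [
                (0.23153, 0.00016, lambda x: 0.23149 + 0.0002 * (x - 2.0)),
                (80.385,  0.015,   lambda x: 80.395 - 0.06 * (x - 2.0)),
            ]
            return sum(((mu - f(log_mh)) / sig) ** 2 for mu, sig, f in observables)

        res = minimize_scalar(chi2, bounds=(1.0, 3.0), method="bounded")
        print("toy best-fit Higgs mass:", 10 ** res.x, "GeV")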

  11. M says:

    Can somebody highlight one interesting result in these many papers?

  12. Tim says:

    If I understand Peter correctly, his objection is basically this: among models in a particular class that are not excluded by experiment, what is the justification for claiming that certain parameter values compatible with the experimental constraints are more probable than other parameter values that are also compatible with those constraints?

    Absent some a priori assumptions like naturalness, there is no justifiable probability metric for non-excluded theories, and the LHC results seem to be strongly hinting that naturalness, as understood prior to the LHC results, is not particularly useful in predicting future experimental results.

  13. Peter Woit says:

    Tim,
    I need to keep emphasizing that my problem here really is the complexity of the choices being made, more than any particular choice itself. If you do a simple analysis with one or two simple choices, a human being may be able to understand how the result depends on the choices, and have an interesting discussion about them. In this case, when I try to read these papers, the sheer complexity of the choices being made causes me to get lost very quickly, with no feel for which choices might be important. For example, looking at the analysis done for the MSSM here,
    https://arxiv.org/abs/1705.07917
    right at the beginning in table 1 the authors list 7 different parameters they intend to scan over, with no particular motivation for the ranges chosen or for the priors chosen, which are listed as “flat”, “hybrid” and “split hybrid”. That’s just the beginning; going on from there things get really complicated, and no human being, I think, will be capable of figuring out how the complexities being invoked are reflected in the final results.

    If you have a really complicated problem, with really complicated data, it’s understandable that you might get involved in very complex analyses. The question in my mind is whether going down that path here sheds light on the questions supposedly being addressed. M’s question about whether this leads anywhere interesting is to the point.
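
    To illustrate the sort of thing that worries me about prior and range choices, here’s a toy calculation (nothing to do with their actual code): when the data only supply a lower bound, the “preferred” value above that bound comes entirely from the prior and the chosen scan range, not from any measurement.

        import numpy as np

        # Toy prior-dependence check: impose only an exclusion bound ("mass > 2"
        # in arbitrary units) on samples drawn from two different priors over
        # an arbitrary scan range, and compare the resulting "preferred" values.
        rng = np.random.default_rng(0)
        n = 1_000_000
        lo, hi = 0.1, 10.0  # arbitrary scan range

        flat_prior = rng.uniform(lo, hi, n)
        log_prior = np.exp(rng.uniform(np.log(lo), np.log(hi), n))

        for name, sample in [("flat", flat_prior), ("log", log_prior)]:
            allowed = sample[sample > 2.0]  # only the exclusion bound is imposed
            print(name, "prior: median above the bound =", round(np.median(allowed), 2))

        # The two medians differ substantially even though the "data" are identical.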

  14. G. S. says:

    I’m guessing that the real point of this work is to provide a project for a group of high energy physicists that teaches them the skills they will need when they quit high energy physics and look for jobs in data science/machine learning/software engineering.

    That seems to be the path a lot of physicists are taking these days.

  15. Mitchell Porter says:

    Stacy McGaugh describes the situation in astrophysics as one in which elaborate models of galactic dark-matter dynamics come with numerous opportunities for post-hoc fine-tuning, while the many successes of a simpler, more principled theory (MOND) are relatively neglected.

    I can certainly see the potential for this software package to sustain an analogous situation in high-energy physics, though perhaps it’s not as clear what simpler opportunities are being missed (Koide relation, Higgs criticality?). But I wonder how flexible it is. Could this package be useful for people working on quite different models, like discrete flavor symmetries in neutrino physics?

  16. Peter Woit says:

    All,
    MOND is off-topic here. The last thing the world needs is another source of dubious claims from non-experts about that.

  17. photongrapher says:

    Peter, you seem to have put little time into trying to understand what they’ve done here. This is not just about SUSY; GAMBIT can be used for any BSM physics.

    I understand the desire for a simple statement like “gluinos are excluded below 2 TeV”, but they are taking a more nuanced approach. While the former contains a whole bunch of assumptions, their approach is trying to allow a simple way to explore what is possible in BSM theories. In the absence of BSM observations, it is necessary to be more ‘complex’, so that instead of making statements like:
    “there is no SUSY < 2 TeV*”
    (*where SUSY = MSSM and the LSP is all the DM… etc.),
    they might be able to more concretely say:
    “there is no SUSY < 1 TeV, PERIOD”

  18. Peter Woit says:

    photongrapher,

    Yes, I’ve put some but not a lot of time into trying to understand what they’ve done here; the question in my mind is why it might be worth putting in more time. I do understand that this is meant as a general tool, not just for SUSY theories. The gluino was just taken as an illustrative example.

    I also understand the use of this tool to scan for regions of SUSY parameter space not yet ruled out by experiment; I assume that’s always been an active area of phenomenology research with various tools available, and maybe this one is better, I don’t know.

    What I don’t understand is, for instance, most of the plots in
    https://arxiv.org/abs/1705.07917
    which are giving relative likelihoods and “best fits” for SUSY masses not in some corner of parameter space that people are trying to rule out, but at generic values way above the experimentally observable range. What is the point of these? Do they make any sense?

  19. Mitchell Porter says:
  20. Henry McFly says:

    Peter,

    Table 3 of that paper shows the contribution of each term used in the likelihood to the total. The results are also shown separately for each mechanism that gives the correct dark matter relic density. LHC sparticle searches don’t contribute since nothing has been observed yet, but the Higgs discovery and b-physics contribute a lot, as do the Fermi-LAT gamma-ray observations.
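
    Schematically, the table just decomposes the total log-likelihood into per-sector contributions, something like this (invented numbers, purely to show the structure):

        # Toy breakdown of a composite log-likelihood into its contributions,
        # in the spirit of such a table (all numbers invented).
        contributions = {
            "Higgs mass and signal strengths": -4.2,
            "b-physics": -2.7,
            "dark matter relic density": -1.1,
            "gamma-ray (indirect detection) limits": -0.9,
            "LHC sparticle searches": 0.0,  # no excess observed: flat contribution
        }
        total = sum(contributions.values())
        for name, lnL in sorted(contributions.items(), key=lambda kv: kv[1]):
            print(f"{name:40s} {lnL:6.2f}")
        print(f"{'total log-likelihood':40s} {total:6.2f}")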

Comments are closed.