A New 30 GeV Particle?

Last night a preprint appeared on the arXiv, with a re-analysis of old 1992-5 LEP data, looking at the dimuon spectrum for b-tagged (identified as involving a b-quark) events. An excess around 30 GeV was found, which would indicate a possible new particle around that energy. The author quotes various significance numbers for the bump, with look-elsewhere effect included, of 2.4 to 2.9 sigma.

Thinking a bit about the look-elsewhere effect here, something very funny is going on. To properly compute the look-elsewhere effect, one really should know how many other channels the author looked at and found nothing, but there’s no mention of looking at other channels. Why did this particular physicist decide to go and reanalyze LEP data, looking only at the b-tagged dimuon spectrum (and it seems he’s doing this by himself)? It’s hard to understand why anyone would do this, unless perhaps they had heard that one of the LHC experiments might be seeing something in the b-tagged dimuon spectrum, say, around 30 GeV.
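To see how much the unreported search space matters, here is a minimal Python sketch (the channel counts are purely hypothetical) of how a fixed ~3-sigma local excess washes out into an unimpressive global significance as the number of places effectively searched grows:

```python
from math import erf, sqrt

# Local p-value for a ~3-sigma excess (two-sided tail of a standard normal)
p_local = 1 - erf(3 / sqrt(2))   # about 0.0027

# Global p-value if n independent channels/bins were actually searched:
# the chance of at least one such excess appearing by luck alone.
for n in (1, 10, 100):
    p_global = 1 - (1 - p_local) ** n
    print(f"{n:3d} channels searched -> global p = {p_global:.3f}")
```

With 100 effective search locations, a 3-sigma local bump is roughly a one-in-four fluctuation, which is why the number of channels looked at (and not reported) matters so much.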

We’ll likely find out more about this story soon. If the LHC experiments haven’t been looking closely at this particular channel, they will do so now. 30 GeV is low enough that I don’t see why you would need the Run 2 13 TeV data, this should be in the older Run 1 data.

I should make the obvious remark though: this is an extraordinary claim, and the evidence for a new particle is very far from the extraordinary level. So, at a high confidence level, the probability is that there’s nothing there.

For much more about this, Tommaso Dorigo and Matt Strassler have just put out blog postings.

Update: Tommaso has an update with more about this: the author was not a member of ALEPH, and that collaboration does not support the result, considering it bogus. It appears that the signal is spurious, with the muons coming from semileptonic b decays, not a new particle. Still a mystery: why was this physicist looking at this old data for one very specific signal?

Update: The talk today by Nate Odell of CMS at the LPC Physics Forum at Fermilab is not public, but the title is: “Dimuon 29 GeV analysis”. Any guess whether that has something to do with this story about 30 GeV dimuons?

This entry was posted in Experimental HEP News. Bookmark the permalink.

25 Responses to A New 30 GeV Particle?

  1. Hello Peter,

    nice post. I am thinking that even CDF and DZERO (let alone DELPHI and OPAL) could look for something similar in their own oldie-but-goldie data. In fact, this whole business of looking for signals of anomalous particles in LEP data reminds me of the sbottom quark search of CDF, which returned an excess of multi-lepton dijet events, and the summer 2000 crisis. The signature of dijet events with leptons was sought by ALEPH, who initially found a 3-sigma signal, and then by the other LEP experiments, which found nothing. Will history repeat itself?

    On second thought, I should not deride the wannabe signal, and should rather put out a bet on this one too – it looks like a nice way to earn my daily bread.


  2. Balazs Vagvolgyi says:

    I don’t get it. Why did scientists come up with 5-sigma if they ignore it all the time?
    Why even bother publishing anything under 5-sigma, when it’s not up to the standards of modern physics?

    Some say 3-sigma still means that there is a 99.7% chance that it’s not a fluke, but I disagree. The look-elsewhere effect should be applied not only within individual experiments but across the set of all experiments being performed:
    Let’s say physicists in the world perform 1000 experiments a day. If all their data is just random noise, then there will still be about 3 experiments every day that show a 3-sigma signal.
    To me, that just means that 3-sigma results are worthless.
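    That back-of-the-envelope estimate checks out; a quick Python sketch of the arithmetic (1000 experiments a day is of course an invented number):

```python
from math import erf, sqrt

# Two-sided tail probability beyond 3 sigma for pure Gaussian noise
p_3sigma = 1 - erf(3 / sqrt(2))   # about 0.0027, i.e. the "99.7%" rule

# If 1000 independent noise-only experiments run per day, the expected
# number that cross 3 sigma purely by chance:
expected_flukes_per_day = 1000 * p_3sigma
print(f"expected 3-sigma flukes per day: {expected_flukes_per_day:.1f}")
```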

  3. Peter Woit says:

    Balazs Vagvolgyi,

    As I wrote in the posting, something funny is going on here and I think it’s very likely there’s more to this than meets the eye. I doubt the author of this preprint randomly picked this analysis of old data to do, finding and publishing a 3 sigma effect. A possible explanation is that ATLAS or CMS has an unreleased analysis of b-tagged dimuons with a bump around 30 GeV, with a large enough significance to get people excited (e.g. 4 sigma or more). Put such an ATLAS or CMS result together with this new one and by some measure you have 5 sigma (and, a lot of ATLAS or CMS people pretty annoyed at this guy…).

  4. anon says:

    The author seems to be, or at least has been, a member of the LHCb collaboration (at least his name is one of the 700 or so on their author lists). But going rogue would seem like a sure way to leave the collaboration…

  5. Peter Woit says:

    Yes, I think we can be sure that inspiration didn’t come from unreleased LHCb results. It looks like this guy used to be in CMS, for what that’s worth.

  6. hronir says:

    What’s the timescale (knowing how things usually go in collaborations) for any (official) statement from ATLAS/CMS about it? I mean, roughly in… days? weeks? more?

  7. Peter Woit says:

    If ATLAS/CMS haven’t already done this analysis, I’d guess it would take them a couple months to get it done and internally vet the results. If they already have something they’re working on, depends how far along that is. These people really don’t like to release results until they’re very sure about them.

    This year’s pp run is ending right around now. Results from the full dataset for this year I’d assume will start to appear early next year, at the usual winter HEP conferences.

  8. anonymous says:

    Perhaps you want to take it in a slightly different way. Now that this guy has released something like this, ATLAS or CMS might feel some “pressure” to move ahead. This might be good for the guys working on the unreleased analysis. It will be interesting. As a side remark, why can’t he study the “bump” around 24-25 GeV too? Following a similar analysis to extract significance, he might find some sigmas in that region as well as around 30 GeV. Instead of one, he could have two or more.

  9. Dale C. says:

    So I guess shortly we are going to see a lot of preprints describing a theoretical model containing a 30 GeV weakly interacting “dark sector” particle, as in the 750 GeV diphoton excess case.

  10. katzeee says:

    I would not leave out the possibility of some kind of hoax: after the 750-GeV experience, some experimentalists decide to get people excited by reexamining old data and finding something around 3 sigma. Most of the known HEP bloggers get excited too (or at least spread the news) and speculate whether there is something to be seen in the LHC data. Things go on and theorists try to explain this mysterious bump with dozens of models in various preprints… Then the experimentalists share a good laugh.

  11. An (over-)ambitious overachiever, trying to break out? As “katzeee” noted just above, “after the 750 GeV experience”, hope springs eternal in some. ‘Twas ever thus. (The term used in criminal investigations comes to mind: a copycat crime. Or that well-known phenomenon of strange occurrences happening in bunches.)

  12. Dale C. says:

    According to Tommaso, the signal is clearly spurious, as the muons are collinear with the b-jets emitted in the Z decay. Waiting for the next fluke…

  13. Low Math, Meekly Interacting says:

    I’ve tended to favor “Hanlon’s Razor” in most circumstances, but too often lately I’m finding out instead I’ve been terribly naive. Could katzeee be right?

  14. Verissimo says:

    Poor Jester got so fried by the 750 GeV hype that he has no take on this.

  15. Peter Woit says:

    He may just be showing his good judgment by ignoring it…

  16. Nick M. says:

    Ignoring it indeed! I submitted a comment on Jester’s blog, asking if anyone there had any thoughts on this possible 30 GeV particle. He didn’t even bother to post it…. 😉 I guess he’s done getting excited over these 3-sigma, Gaussian-looking bumps that have the word “fluke” written all over them!

  17. Pingback: Sobre el exceso a 30 GeV observado por el detector ALEPH de LEP | Ciencia | La Ciencia de la Mula Francis

  18. NotPhycsist says:

    >”Some say 3-sigma still means that there is a 99.7% chance that it’s not a fluke”

    I saw a lot of this surrounding the LIGO results. No, this is wrong in the most insidious way. The error you are making is called the “fallacy of the transposed conditional”: http://rationalwiki.org/wiki/Confusion_of_the_inverse

    In short, the probability of observing something assuming the model is correct does not equal the probability the model is correct given the observations. The sigma value refers to the former; you want it to refer to the latter, but it does not. In this case, “the model” refers to whatever background-only, no-signal, “chance” model is being tested.
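    A toy Bayes calculation makes the gap between the two probabilities concrete (every number below is invented for illustration):

```python
# P(data | no signal) vs. P(signal | data): these differ, and the
# gap is set by the prior. All inputs below are illustrative guesses.
p_data_given_null = 0.0027    # ~3-sigma two-sided tail probability
p_data_given_signal = 0.5     # chance a real particle shows this bump (assumed)
prior_signal = 0.001          # prior probability of a genuine new particle (assumed)

# Bayes' theorem: posterior = likelihood * prior / evidence
evidence = (p_data_given_signal * prior_signal
            + p_data_given_null * (1 - prior_signal))
posterior_signal = p_data_given_signal * prior_signal / evidence
print(f"P(signal | data) ~ {posterior_signal:.2f}")   # about 0.16, not 0.997
```

    So a 3-sigma tail probability is nowhere near a 99.7% chance the signal is real once a skeptical prior is folded in.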

  19. Balazs Vagvolgyi says:

    @ NotPhycsist

    > The error you are making is called the “fallacy of the transposed conditional”

    I don’t agree with it; that’s why I wrote “Some say”.
    Thanks to you, now I know the name of this fallacy.

  20. NotPhycsist says:

    @ Balazs Vagvolgyi

    Thanks for clarifying. It is just an additional problem on top of what you and Peter identify as the “look elsewhere effect”. Btw, I think that problem can be better understood as using an incorrect model for the background. The correct model would automatically account for the actual researcher behavior.

    I know it is common practice across many fields to do post-hoc “adjustments” for multiple comparisons, but to me that is a misleading way to conceptualize the problem. The “adjustments” should be baked into the null model to begin with, so that it is as correct as possible.

    Another way of putting it is that the “bump” is real. The model being tested is known to be wrong though, it apparently assumes the researcher only looked at 10 sets of data, or whatever, that is less than reality. Rejecting this null model would be the right thing for the statistical machinery to do, but is rather pointless scientifically.

  21. RandomPaddy says:

    I don’t think all these swirling rumors do much for the credibility of particle physics.

    A lot of this could be completely avoided by just putting the data up on a publicly accessible server. It’s 2016 and these experiments involve more people than the average aircraft carrier. Just release the data and stop all these bump-in-the-data murder mysteries.

  22. JD says:

    Go to Tommaso Dorigo’s blog to clarify your thinking. I agree with him that the data should be out there for general use, and apparently it is. But that will not solve the problem, obviously. The problem lies in the culture of today’s science.

  23. Peter Woit says:

    You’re right that this has nothing to do with public availability of data (except to show problems that can occur when doing this). I don’t think though it has to do with the culture of today’s science, rather, this is a quite unusual situation, where a single physicist, for reasons that it seems he’s not revealing and that would be interesting to know, has decided to go back and look for something very specific in old data from another experiment.

  24. Shantanu says:

    Peter and others,
    I slightly disagree with this. I think HEP experiments should make their data public, and individual folks should be encouraged to analyze this data, not just the collaborations. In fact, in astronomy it is mandatory to make all data public, and many people outside the collaborations have analyzed the data and written good papers. But for some reason this is not the norm in HEP. So it’s heartening to see individual folks analyzing experimental HEP data.

  25. Peter Woit says:

    I’ve added an update noting the title of a talk today (Nov. 3) by someone from CMS…

Comments are closed.