Why Colliders Have Two Detectors

Last year the D0 collaboration at the Tevatron published a claim of first observation of an Ωb particle (a baryon containing one bottom and two strange quarks), with a significance of 5.4 sigma and a mass of 6165 +/- 16.4 MeV. This mass was somewhat higher than expected from lattice gauge theory calculations.

Yesterday the CDF collaboration published a claim of observation of the same particle, with a significance of 5.5 sigma and a mass of 6054.4 +/- 6.9 MeV.

So, both agree that the particle is there at better than 5 sigma significance, but D0 says (at better than 6 sigma) that CDF has the mass wrong, and CDF says (at lots and lots of sigma..) that D0 has the mass wrong. They can’t both be right…

For a detailed discussion, see here, here and here.


12 Responses to Why Colliders Have Two Detectors

  1. ObsessiveMathsFreak says:

    5.4 sigma is pretty accurate (~0.999999 probability of data being within this many standard deviations of the mean if my statistics is still correct).
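
    For reference, converting a sigma figure into the corresponding two-sided Gaussian probability is a one-liner (nothing here is specific to either analysis):

        from scipy import stats

        for n_sigma in (5.0, 5.4, 5.5):
            p_tail = 2 * stats.norm.sf(n_sigma)   # chance of a fluctuation at least this far out
            print(f"{n_sigma} sigma: P(within) = {1 - p_tail:.9f}, tail probability = {p_tail:.1e}")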

    But how many of these experiments do they have to run in order to find a particle again?

  2. Dmitry says:

    Hi,

    I am from CDF, so I am biased.
    But the D0 result cannot be right, for two reasons:
    1) They observe a relative production rate Omega_b/Xi_b of almost 1 (0.8). Normally you expect a penalty of about 1/10 in production rate for each additional s-quark, a picture which is quite consistent across many experiments for different species (Xi_b/Lambda_b, Xi_c/Lambda_c, Omega_c/Xi_c).
    2) The mass they measure for Omega_b is way off from the Xi_b (you expect a difference of ~0.2 GeV), a splitting which again is observed in other systems (B_s vs B, Xi_b vs Lambda_b, Omega_c vs Xi_c, Xi_c vs Lambda_c).

    There is a third reason: theoretically these states are very well studied, and there is really no wiggle room for the Omega_b mass. It is pretty firmly 6.05 +/- 0.01 GeV. If the observed mass is significantly different, it means a great deal for heavy flavor physics: it would mean that HQET does not work for baryons, a very bold statement, since HQET has been a very precise tool for describing the properties of heavy flavors so far!

    (1) and (2) make the D0 observation a very extraordinary claim, and therefore it requires extraordinary scrutiny.

    CDF has just provided this scrutiny by performing a simple cut-based analysis on a sample which is almost 4 times bigger than the one used by D0. Nothing in the claimed region, and a nice peak in the anticipated region, with good precision, matching theory expectations.

  3. Dmitry says:

    Also: people should not be fooled too much by >5 sigma claims. There have been way too many cases so far of >5 sigma claims that dissolved or failed to be reproduced. Independent confirmation is the only way to really establish an effect.

  4. zanzibar says:

    Dmitry says:

    “(1) and (2) make the D0 observation a very extraordinary claim, and therefore it requires extraordinary scrutiny.”

    What is the effect of *not* subjecting all measurements to the same “extraordinary scrutiny”? A bias towards orthodoxy?

  5. Hi all,

    first of all many thanks to Peter, who is always very generous with links to my site. At least I can say I bought him a beer already… I hope I will have a chance to buy him a dinner another time, although he’ll probably try to fight for the check.

    Second, Dmitry is right, but there are more reasons, experimental ones, to say that the DZERO result is unfortunately wrong this time.

    1) First of all, the DZERO significance is computed from the probability of the change in -2 log L between a fit with signal plus background and a fit with background only. Their signal has a variable amplitude AND mass in the fit, so the s+b fit has TWO degrees of freedom more than the background-only one, but they compute the significance as if the delta log L were distributed as a chi-squared with ONE degree of freedom. Their true significance is 5.05 sigma, not 5.4 as quoted in the paper (see the numeric sketch after point 4 below).

    2) Second, the systematic part of the mass uncertainty in the CDF measurement is below one MeV, while that of the DZERO measurement is more than ten times larger. This means that if one of the two experiments got the mass wrong, it must be DZERO, since the statistical uncertainty is well-measured in both cases, and the two measurements are totally inconsistent. To be clear: if you had to inflate the systematics of one of the two experiments with a k-factor, as the PDG does when computing its averages, you would have to inflate DZERO’s with K=5, or CDF’s with K=60. Your pick.

    3) Third, CDF analyzed three times more data. If the rate measured by DZERO were right, CDF would have seen more than thirty signal events in its dataset, in a sample which counts 35 events (where CDF measures 12 of signal). This is utterly unlikely. If, instead, the CDF rate is right, then DZERO would have seen only five or six events due to Omega_b production in their data. They saw more, but that is not too unlikely a fluctuation.

    4) Then theory agrees with CDF and disagrees wildly with DZERO, both in rate and in mass. This, of course, can only be taken into consideration after all the rest.
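
    To put rough numbers on points 1) to 3), here is a little Python sketch. The likelihood-ratio value, the stat/syst splits and the background count are approximate reconstructions from the figures quoted in this thread, not the experiments’ actual fit outputs, so treat the printed numbers as illustrative.

        import numpy as np
        from scipy import stats

        # Point 1: the same Delta(-2 log L) read with one vs. two floating signal
        # parameters.  q is illustrative: chosen so that the 1-dof convention
        # Z = sqrt(q) gives 5.4 sigma.  With the actual DZERO fit numbers the
        # comment above quotes 5.05 sigma.
        q = 5.4 ** 2
        z_1dof = np.sqrt(q)                            # amplitude only floating
        p_2dof = stats.chi2.sf(q, df=2)                # amplitude AND mass floating (asymptotic)
        z_2dof = stats.norm.isf(p_2dof)                # one-sided Gaussian equivalent
        print(f"1 dof: {z_1dof:.2f} sigma   2 dof: {z_2dof:.2f} sigma")

        # Point 2: how inconsistent are the two mass measurements?
        m_d0, e_d0 = 6165.0, 16.4                      # MeV, D0 total uncertainty
        m_cdf, e_cdf = 6054.4, 6.9                     # MeV, CDF total uncertainty
        delta = m_d0 - m_cdf
        print(f"mass difference: {delta:.1f} MeV = {delta / np.hypot(e_d0, e_cdf):.1f} sigma")

        # How much must one experiment's systematic grow before the two masses
        # agree within ~2 sigma?  The stat/syst splits (D0: 10/13 MeV, CDF:
        # 6.8/0.9 MeV) and the 2-sigma criterion are assumptions on my part.
        def inflation_needed(stat, syst, other_total, target=2.0):
            for k in np.arange(1.0, 200.0, 0.5):
                if delta / np.sqrt(stat ** 2 + (k * syst) ** 2 + other_total ** 2) <= target:
                    return k
            return float("inf")

        print("inflate DZERO systematics by k ~", inflation_needed(10.0, 13.0, e_cdf))
        print("inflate CDF systematics by  k ~", inflation_needed(6.8, 0.9, e_d0))

        # Point 3: if the DZERO rate were right, CDF should expect more than
        # thirty signal events on top of ~23 background events (the 35 observed
        # minus the 12 fitted as signal); the chance of then seeing only 35 in total:
        expected = 30 + (35 - 12)
        print(f"P(N <= 35 | expect {expected}) = {stats.poisson.cdf(35, expected):.1e}")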

    Best,
    T.

  6. Dmitry says:

    Hi Tommaso,

    I am not arguing with you, just my observations:

    1) 5.05 sigma vs 5.4 sigma is really nitpicking.

    2) You assume that both experiments see the same particle and ask yourself who could be wrong about its mass, CDF or D0. But given the >6.5 sigma discrepancy between CDF and D0, we can be sure they see *different* particles. So the question of who got the Omega_b mass wrong is kind of irrelevant.

    3) Rate is the killer argument. The huge rate and the extraordinarily high mass together put a big question mark over the D0 result.

    CDF of course looked for the Omega_b all along, even back in fall of 2007 (with 2.2 fb^-1). There was empty space in the D0 mass range and something promising at 6.05 GeV, which we decided to leave alone until we had more data and therefore a significant signal.

    The real interpretation of the CDF result:
    – discovery of Omega_b
    – ruling out of structure @ 6.110 GeV previously reported by D0

    Given the discrepancy with the expected mass, D0 should have called their work not a “First Direct Discovery for Omega_b” but “Evidence for a new particle in the J/psi Omega mass spectrum”.

    They justified that what they see is the Omega_b by taking a broad sample of theoretical works going back 20 years, where you may indeed find someone who thought its mass could be 6.120 GeV. But since then the predictions have narrowed down significantly, to 6.05 +/- 0.01 GeV (based on input from measured states like Lambda_b, Sigma_b, Xi_b). Having assumed that this is the expected particle, they automatically overestimated the significance of the signal.

  7. Dmitry says:

    – ruling out of structure @ 6.165 GeV previously reported by D0

    I put the wrong mass there…

  8. Dmitry says:

    And a last post: the cool thing about the CDF analysis is that the Omega hyperon (for the Omega_b search) was actually tracked in the silicon, allowing for precise Omega_b vertex determination and cutting down the combinatorial background. This relatively sophisticated technique, which provides a high-purity sample of Omegas, combined with a pretty simple cut-based analysis later on, gave CDF enormous confidence in this result.

  9. Dmitry, 5.05 differs from 5.4 by a factor of seven in probability! Nit-picking? Maybe, but enough to fire the PRL reviewers, if you ask me!
    cheers,
    T.

  10. Dmitry says:

    It is nitpicking because it does not do anything to prove that the result is wrong. At most an erratum could be issued: “oops, this is 5.05, not 5.4, sorry, but still >5, so we are cool”.

    My point is that the D0 result is Not Even Wrong (what a proper board we are discussing this stuff on!), so poking small holes in it is a waste of time.

    The production rate and mass value measured by D0 are so outrageous that, should they be true, we ought to be throwing the Standard Model away!!! This is what is implied by D0! It is like someone suddenly telling you that 2+2 is not 4 anymore but 5 (not even 4.000000001); that is how badly it is violated. I am surprised that no one pointed this out. As soon as I learned that they got almost the same strength of signal as the Xi_b in the same size sample, I told our B-group conveners: be cool, this is rubbish, and I stopped paying attention.

    One cannot call Omega_b something that does not look like an Omega_b.
    As soon as you remove that condition and estimate the probability of getting 17 events above background in any random place in a wide mass range, you go below 5 sigma immediately.
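
    A crude version of that estimate, with a purely made-up number of independent mass windows standing in for the “any random place in a wide mass range” part:

        from scipy import stats

        p_local = stats.norm.sf(5.0)          # local probability of a >= 5 sigma background fluctuation
        n_windows = 20                        # hypothetical trial factor, not taken from either analysis
        p_global = min(1.0, n_windows * p_local)
        z_global = stats.norm.isf(p_global)   # global significance after the look-elsewhere penalty
        print(f"local 5.0 sigma -> global {z_global:.1f} sigma with {n_windows} windows")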

    In this light, I recommend you peruse the D0 Xi_b observation, which I still don’t quite believe (I was so shell-shocked when I learned that we got scooped in 2007; Pat and I were the authors of the CDF analysis).
    Note that their Xi_b/Lambda_b production rate is 2 times larger than ours (granted, with errors big enough not to cause a stir). Fluctuation?
    Does D0 thrive on fluctuations? The infamous “first ever double sided limit” on B_s mixing? The first 3 sigma evidence for single top? Xi_b was a lucky fluctuation (at the “right mass”) and Omega_b was an unlucky one. Payback time 🙂

  11. Hi Dmitry,

    I take it that you did not read my last post on the matter, which shows conclusively what you are arguing, from a statistical standpoint.

    I insist, however: the error in the D0 PRL should have been spotted by the PRL reviewers. They get poor grades for that.

    Cheers,
    T.

  12. Dmitry says:

    Tommaso,

    Let’s keep it simple. D0 has a signal S = 17.8 on top of a background B = 1.75*6 = 10.5 (judging by eye from their plot).

    Converting the probability that background alone produces the observed signal into a Gaussian significance is roughly the same as calculating:

    significance = S / sqrt(B) = 17.8 / sqrt(10.5) = 5.5 sigma

    So I, personally, do not really doubt their figure for the significance. Or rather, I do not put *too much emphasis* on it. You probably know the statistics better. IMO a ratio of likelihoods is not the probability of getting the signal from a background fluctuation, even if you recast this ratio to look like a chi2 difference.

    But *I do not want to argue this*. Read my post. You’re right, they should have put 5.05 sigma in the paper. Whatever.

    I am shooting in a different direction: the mass is off from theory by at least 4.4 sigma (even inflating the theory error by a factor of at least 1.5). So the probability that what D0 was seeing is the Omega_b is about
    1 - Erf(4.4/sqrt(2)) = 1.1e-5. This is low. Given that, and the very high relative production cross-section (w.r.t. the Xi_b), I would (if I were on the D0 editorial board) have required more data to be analyzed, just to be sure. A valid demand, given that by 2008 D0 had twice as much data as the sample they used to publish their Omega_b observation. Maybe they would have found it in the same place where CDF found it, then!
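
    Both back-of-the-envelope numbers above are easy to check:

        import numpy as np
        from math import erf, sqrt

        # Naive significance estimate: S / sqrt(B).
        S, B = 17.8, 1.75 * 6.0
        print(f"S/sqrt(B) = {S / np.sqrt(B):.1f} sigma")

        # Two-sided Gaussian tail probability of a 4.4 sigma deviation from theory.
        print(f"1 - erf(4.4/sqrt(2)) = {1 - erf(4.4 / sqrt(2)):.1e}")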

    Cheers,
    Dmitry
