Collider Smackdowns

If you’re interested in particle physics and not regularly reading Tommaso Dorigo’s blog, you should be. His latest posting reports on incendiary claims by Michael Dittmar of the CMS collaboration that recent Tevatron Higgs mass limits are wrong and not to be believed. According to Dittmar, the Tevatron is basically useless for looking for a SM Higgs, with only the future LHC experiments ever having a chance to see anything or produce real limits. You can look at the slides and the blog posting and make up your own mind. From what I can tell, Dittmar doesn’t make a strong enough case to show that the Tevatron results are wrong. It remains true of course that the statistical significance of the limits being set (“95% confidence level”) is right at the edge of what is normally taken as capable of seriously ruling something out.

In the latest New York Review of Books, Freeman Dyson, in the context of a review of Frank Wilczek’s The Lightness of Being, engages in his own smackdown of particle physics at colliders. Here’s what Dyson has to say about the LHC, and colliders in general:

Wilczek’s expectation, that the advent of the LHC will bring a Golden Age of particle physics, is widely shared among physicists and widely propagated in the press and television. The public is led to believe that the LHC is the only road to glory. This belief is dangerous because it promises too much. If it should happen that the LHC fails, the public may decide that particle physics is no longer worth supporting. The public needs to hear some bad news and some good news. The bad news is that the LHC may fail. The good news is that if the LHC fails, there are other ways to explore the world of particles and arrive at a Golden Age. The failure of the LHC would be a serious setback, but it would not be the end of particle physics.

There are two reasons to be skeptical about the importance of the LHC, one technical and one historical. The technical weakness of the LHC arises from the nature of the collisions that it studies. These are collisions of protons with protons, and they have the unfortunate habit of being messy. Two protons colliding at the energy of the LHC behave rather like two sandbags, splitting open and strewing sand in all directions. A typical proton–proton collision in the LHC will produce a large spray of secondary particles, and the collisions are occurring at a rate of millions per second. The machine must automatically discard the vast majority of the collisions, so that the small minority that might be scientifically important can be precisely recorded and analyzed. The criteria for discarding events must be written into the software program that controls the handling of information. The software program tells the detectors which collisions to ignore. There is a serious danger that the LHC can discover only things that the programmers of the software expected. The most important discoveries may be things that nobody expected. The most important discoveries may be missed.

He goes on to somehow count Nobel prizes for experimental results in particle physics, with the conclusion:

The results of my survey are then as follows: four discoveries on the energy frontier, four on the rarity frontier, eight on the accuracy frontier. Only a quarter of the discoveries were made on the energy frontier, while half of them were made on the accuracy frontier. For making important discoveries, high accuracy was more useful than high energy. The historical record contradicts the prevailing view that the LHC is the indispensable tool for new discoveries because it has the highest energy.

His argument that proton collider physics is problematic because of the huge backgrounds and difficulty of designing triggers just states the reasons why these are complicated and difficult experiments. Despite the difficulties, they have produced a huge number of new physics results. He doesn’t give the details of how he is counting and categorizing Nobel Prize winning results, so that part of his argument is hard to evaluate.

In opposition to colliders, Dyson wants to make the case for passive detectors, with his main example Raymond Davis’s discovery that the neutrino flux from the sun is 1/3 of what it should be. I don’t really see though why he sets up such experiments in opposition to high energy accelerator experiments. Right now many of them actually are accelerator experiments (for example MiniBooNE), with an accelerator being used to produce a beam of neutrinos sent to the passive detector. Dyson’s point that if one is very smart and lucky one may get indirect evidence about physics at high energy scales from passive detectors looking at cosmic rays is valid enough, but there is no shortage of people trying to do this, and it is every bit as problematic as working with colliders. There are inherent reasons that such experiments can’t directly investigate the highest energies or shortest distance scales the way a collider experiment can. It’s extremely hard to come up with a plausible scenario in which cosmic ray experiments will give you any information about the big remaining mystery of particle physics, electroweak symmetry breaking.

While I agree with Dyson that the huge sales job to the public about a new golden age of physics coming out of the LHC is a mistake, I don’t see any reason to believe that if it fails, cosmic ray experiments are going to get us to a golden age. If and when particle physics reaches a final energy frontier, with higher energies forever inaccessible to direct experiment, hopes for a golden age are going to rest on theory, not experiment, and recent experience with such hopes isn’t very promising.

Update: This Sunday the New York Times will have a profile of Dyson, see here.


26 Responses to Collider Smackdowns

  1. TSM says:

    Very naive newbie question, but why did the LHC designers decide to go with a proton/proton system versus proton/anti-proton? Wasn’t the SSC supposed to be proton/anti-proton?

  2. TSM says:

    Forgot my last part: are anti-proton/proton experiments less noisy with regard to background?

  3. Shantanu says:

    Peter, proton decay is one example of a non-accelerator particle physics experiment which probes physics at very high energies. Or are you no longer sanguine about proton decay being discovered?
    Also any comments on Raphael Bousso’s colloquium at PI?

  4. Peter Woit says:

    TSM,

    I believe the SSC was also a proton-proton machine. The problem with antiprotons is that it’s hard to accumulate them and get an intense enough beam, limiting the luminosity one can achieve. As one goes to higher energies, the cross-sections one is interested in fall off, and you need higher luminosity (see the sketch at the end of this comment). Maybe someone more expert than I can address the issue of relative backgrounds.

    Shantanu,

    I think the most likely end result of proton decay experiments will just be that protons don’t decay at any rate one can ever hope to observe. If so, these experiments won’t tell you anything about high energies, other than that baryon number stays conserved. If proton decay is observed, that would be extremely interesting, and would give the sort of thing Dyson is hoping for.

    As far as I can tell, Bousso’s talk is just more of the same. I don’t see any hope for getting real science out of that.
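
    (Returning to TSM’s question: a minimal sketch of the rate arithmetic behind the luminosity point. The numbers here are my own illustrative choices, not from any experiment.)

    ```python
    # Expected event count: N = sigma * integrated luminosity.
    # A point-like cross-section falls roughly like 1/E^2, so pushing
    # to higher energies cuts the rate unless luminosity rises too.

    def expected_events(sigma_pb, int_lumi_fb):
        """N = sigma * L_int, with 1 fb^-1 = 1000 pb^-1."""
        return sigma_pb * int_lumi_fb * 1000.0

    # Illustrative: a 1 pb process observed with 10 fb^-1 of data
    print(expected_events(1.0, 10.0))            # -> 10000 events

    # Quadruple the energy: a 1/E^2 cross-section drops by 16x, so
    # the same 10000 events now require ~160 fb^-1 of data
    print(expected_events(1.0 / 16.0, 160.0))    # -> 10000 events
    ```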

  5. Bee says:

    Sounds like what he is actually saying is: hey folks, even if the LHC fails, be prepared to spend more billions on a lepton collider.

  6. anon says:

    Hi Peter

    You said:
    “It remains true of course that the statistical significance of the limits being set (“95% confidence level”) is right at the edge of what is normally taken as capable of seriously ruling something out.”

    As far as I was aware it is only in the social or medical sciences that 95% or two sigma is considered even worth mentioning; in astronomy it’s usually 3 sigma, and in particle physics five sigma. Many in astronomy think it should be five sigma as well. I think two sigma should never be taken as more than the slightest hint that something could be ruled out.
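
    (For concreteness, a minimal sketch of how those thresholds translate into probabilities, assuming the usual Gaussian approximation; scipy is used here just for the normal CDF.)

    ```python
    from scipy.stats import norm

    # Probability content within n standard deviations of a Gaussian
    # is 2*Phi(n) - 1, where Phi is the standard normal CDF.
    for n_sigma in (2, 3, 5):
        coverage = 2 * norm.cdf(n_sigma) - 1
        print(f"{n_sigma} sigma: coverage {coverage:.7f}, "
              f"two-sided p-value {1 - coverage:.2e}")

    # 2 sigma: ~95.45% (roughly the "95% confidence level" above)
    # 3 sigma: ~99.73%
    # 5 sigma: two-sided p ~ 5.7e-7 (discovery claims usually quote
    #          the one-sided tail, 1 - Phi(5) ~ 2.9e-7)
    ```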

  7. AOJ says:

    In comparing the potential of cosmic ray experiments to collider experiments, you should remember the cost scale. Auger, the biggest and most ambitious cosmic ray experiment, had a price tag between $100 and $200 million. If you threw LHC-sized funding at building a single, amazing cosmic ray detector, you would get immediate breakthroughs in multiple areas of astrophysics and quite likely particle physics as well. The center-of-momentum energy for collisions of the highest energy cosmic rays is ~100 TeV. We certainly won’t be probing that scale in collider experiments any time in the foreseeable future.
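
    (A quick back-of-envelope check on that ~100 TeV figure, treating the cosmic ray as a proton hitting a stationary nucleon in the atmosphere, so that sqrt(s) ≈ sqrt(2 E m_p). The numbers are mine, for illustration.)

    ```python
    import math

    M_PROTON_GEV = 0.938  # proton rest energy, GeV

    def sqrt_s_fixed_target(e_beam_gev):
        """CM energy for a beam particle hitting a nucleon at rest:
        s ~ 2 * E_beam * m_p when E_beam dominates all masses."""
        return math.sqrt(2.0 * e_beam_gev * M_PROTON_GEV)

    # Highest-energy cosmic rays: ~1e19 to 1e20 eV = 1e10 to 1e11 GeV
    for e_gev in (1e10, 1e11):
        print(f"E = {e_gev:.0e} GeV -> sqrt(s) ~ "
              f"{sqrt_s_fixed_target(e_gev) / 1e3:.0f} TeV")
    # ~137 TeV and ~433 TeV, consistent with the ~100 TeV quoted above
    ```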

  8. Peter Woit says:

    AOJ,

    Yes, but the luminosity of cosmic ray experiments is way too small at TeV and above center of mass energy scales to be useful, and building a 50x Auger that cost as much as the LHC wouldn’t change that. You would learn more about astrophysics, but I don’t believe you’d learn anything about TeV-scale particle physics phenomena like electroweak symmetry breaking. For a 120 GeV Higgs, how big a cosmic ray experiment would you have to build to see evidence of it?

  9. David B. says:

    TSM

    Antiprotons are expensive to produce and maintain. At very high energies the collisions are dominated by gluons rather than quarks, so most of the events will be from glue collisions that do not distinguish protons from antiprotons.

    For some types of searches, it is better to have a proton anti-proton setup, if the physics depends heavily on having quark-antiquark collisions at high energies.

    If you are looking for colored particles or the Standard Model Higgs, proton-proton will do just as good a job as proton anti-proton.

  10. Chris Oakley says:

    I am not quite sure what Dyson is getting at vis-à-vis choice of particles, as at TeV energies one tends to produce a godawful mess regardless of whether the beams are p/p, e+/e-, or whatever.

  11. The Real Deal says:

    I don’t know if Dyson is smart or dumb.

    First the dumb part:
    Sitting in comfort in his high armchair, he criticizes the LHC in hindsight, even before it’s operational. The LHC cost some $4B to build, another $1B to run experiments, and took a dozen years to design and build, by thousands of physicists and engineers. The most sophisticated of intellectual efforts, contributed by people who actually sweat it out instead of sitting pretty in armchairs. Vast amounts of public money, reputation, and national dreams poured into it. It had better work. It had better produce. No further hyping necessary. The scale of the effort is ‘hype’ enough. Dyson should simply shut up and stop looking dumb.

    Now the smart part:
    If Dyson is so smart, educating us on all the reasons why the LHC should not be built, why doesn’t he propose a superior design to replace it? Try getting $10B to build it. And let others do the criticizing from their armchairs.

  12. Observer says:

    …It had better work. It had better produce…
    But what if it doesn’t? We have all seen that the main problems in the first run of the LHC were due basically to quenching and heating, both of these problems relatively “basic” despite all the intellectual effort behind them.

  13. Zathras says:

    The Real Deal,

    Dyson has been criticizing p-p colliders for about 20 years now. I myself went to a talk of his in 1991 where he criticized the SSC on exactly these grounds. He is not coming late to this, and he has proposed alternative designs (a lepton collider). He just hasn’t been able to convince enough scientists of what he perceives as the superiority of lepton colliders.

  14. milkshake says:

    Dyson is mildly non-conformist – he also likes to write against the global warming agenda orthodoxy. I bet he finds the neatly-designed passive detector experiments far more marvelous than accelerator physics.

  15. Tommaso Dorigo says:

    Hi Peter,

    I feel obliged!

    I would like to stress, to answer anon’s comment 6 above, that HEP physicists do not use 95% CL limits as a claim of anything. However, I agree that unfortunately the importance of such results gets unduly inflated. So much so that the occasional Dittmar can feel compelled to refute them and waste his reputation on meaningless talks.

    Cheers,
    T.

  16. John K says:

    TSM
    re: Pbar (antiproton)-P vs. PP

    Advantages of PP (vs. PbarP):
    – much easier to get an intense P beam than a Pbar beam, so one can reach much higher luminosity (interaction rate), perhaps by a factor of hundreds or even thousands
    – does not need the Pbar production system (which, in the case of the Fermilab PbarP collider, involves a very complicated chain of production target, collector (lithium lens), cooling ring, accumulator, etc.)
    – the LHC at 7+7 TeV will have a much higher production cross-section for new physics, especially above 200 GeV, and can reach masses not available at the 1+1 TeV Tevatron

    Disadvantages of PP (vs. PbarP):
    – PbarP needs only one ring of magnets (the Pbar and P go in opposite directions in the same ring)
    – at low masses (such as 100-200 GeV, where the Higgs is expected to be), the signal-to-background ratio for PbarP is somewhat better than for PP

  17. Pingback: LHC e nova física « Ars Physica

  18. Chris Oakley says:

    John K,

    Another advantage of the p-p_bar system is surely that the resonances are likely to be more interesting, having zero charge and baryon number. Did they not use that (in an e+e- system) to investigate the J/Ψ? Would this still be relevant at much higher energies?

  19. Coin says:

    The machine must automatically discard the vast majority of the collisions, so that the small minority that might be scientifically important can be precisely recorded and analyzed. The criteria for discarding events must be written into the software program that controls the handling of information. The software program tells the detectors which collisions to ignore. There is a serious danger that the LHC can discover only things that the programmers of the software expected… The most important discoveries may be missed.

    I don’t think this is a very compelling argument. As Peter points out this is certainly a challenge for the LHC operators, but it’s hardly a reason to cast aspersions on the LHC’s results.

    First off, the dangers of selectively discarding data sound entirely possible to mitigate. Even knowing very little about particle accelerators but knowing something about software I can think of basic ways to test for such a problem, like running tests with different triggers. Meanwhile I actually got a chance to ask one of the USLHC bloggers about this exact problem once and part of her response was:

    We can study these efficiency and biases using a very detailed Monte Carlo simulation of the trigger and detector. And we validate this simulation with the actual data using well-measured physics channels such as W, Z decays and QCD processes. As most SUSY models predict very energetic and multiple jets of particles, the trigger is expected to be very efficient and non-biased. But these are statements that we have to confirm before we can hope to publish.

    I take this last sentence to mean that part of the published data for the LHC will be their evidence that their trigger scheme is not biasing or eliminating signal from the data. If Freeman Dyson thinks that evidence is inadequate, he can write a paper presenting his argument and it will be a big deal and everyone will read it because he is Freeman Dyson.

    Second off, it kind of seems like this would be one of the easiest things about the LHC to upgrade. As far as I know the only reason for the automatic discarding of data is that storing and processing all that data is impractical. But what is practical in data storage changes all the time. If this is shown to be legitimately a problem it seems like it would be much easier to just slap in some fancy new hard drives than it would be to, say, increase the LHC’s luminosity.
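
    (To put numbers on “impractical”: a back-of-envelope sketch using Dyson’s own “millions per second” figure and an assumed ~1 MB per raw event. Both the event size and the arithmetic are illustrative.)

    ```python
    events_per_sec = 1e6      # Dyson's "millions per second" figure
    event_size_mb = 1.0       # assumed raw event size, MB
    seconds_per_day = 86400

    rate_tb_per_s = events_per_sec * event_size_mb / 1e6   # TB/s
    daily_pb = rate_tb_per_s * seconds_per_day / 1e3       # PB/day
    print(f"{rate_tb_per_s:.0f} TB/s, ~{daily_pb:.0f} PB/day untriggered")
    # ~1 TB/s and ~86 PB/day: why keeping everything is about much
    # more than slapping in some fancy new hard drives
    ```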

  20. Coin says:

    Sounds like what he is actually saying is: hey folks, even if the LHC fails, be prepared to spend more billions on a lepton collider.

    Are there currently any plans to start building any lepton colliders at any point?

  21. Peter Woit says:

    Coin,

    I don’t disagree with Dyson that the necessity of using triggers is a serious problem for colliders, but there really isn’t any choice, and it’s up to the experimenters to do the best they can within the constraints the triggers impose.

    The nature of the problem is not that you won’t be able to trust the results coming out of these experiments. If they look for a sparticle with certain properties, the trigger will be designed with this in mind, and the limits they quote or evidence they find should be trustworthy. The problem is with new physics that is not expected, which might get missed because no one is looking for it. Dyson’s point is that in something like a proton-decay experiment, you look at every event and make sure you understand it. New physics will be quickly identified, no matter what it is. In a collider experiment, one can imagine that there might be new physics that no one has thought of, with an experimental signature that gets thrown out by the triggers. I don’t have much of a feel myself for what the likelihood of this might be, but I imagine it’s a worry that keeps some of the experimentalists up nights when they design triggers (see the toy sketch at the end of this comment).

    There are two major on-going projects to design a lepton collider (ILC and CLIC). The current situation though is that no decision to build such a thing is likely until several years from now, after results from the LHC are in. Knowing what if any new physics is in the accessible energy range is crucial for the design of such a machine.
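
    (A toy illustration of the worry in the first paragraph. The thresholds and event fields here are hypothetical, not any real experiment’s trigger menu.)

    ```python
    # A trigger keeps only events matching signatures someone
    # anticipated; everything else is discarded forever.
    def trigger_pass(event):
        return (event["max_jet_pt"] > 100.0    # hard jet (GeV)
                or event["missing_et"] > 80.0  # missing E_T, e.g. SUSY
                or event["lepton_pt"] > 25.0)  # energetic lepton

    # Hypothetical "unexpected" new physics: many soft particles and
    # no hard objects. It fails every condition and is thrown away.
    soft_event = {"max_jet_pt": 30.0, "missing_et": 10.0,
                  "lepton_pt": 5.0}
    print(trigger_pass(soft_event))   # False: the event is lost
    ```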

  22. Coin says:

    Ah, I hadn’t realized the ILC was a lepton collider.

  23. srp says:

    Dyson has written eloquently in the past about what he sees as the “ecological” error of putting too high a percentage of a field’s resources into a single project. That’s one reason why he opposed the SSC, criticized the USSR’s giant optical telescope, etc. Read his brilliant essay on the subject in From Eros to Gaia (I can’t remember the exact essay title; it had something to do with first, second, and third worlds).

  24. Thomas D says:

    Perhaps Dyson hasn’t noticed that many theorists have been busy trying to think of classes of theories that ‘standard’ triggers and cuts miss, and have been working with the experimentalists to ensure that as little as possible gets away.

    Also that experimentalists have been developing ‘model-independent’ search techniques that look for excesses over background regardless of whether they fit a favourite model or not.

    I wouldn’t expect anything different from someone who has been retired from research in particle physics for a long time. Dyson is extremely old and clever and respected but(/and) has no reason to keep up with recent developments.

    The idea of trying to use cosmic rays to do electroweak scale particle physics is utterly impractical, because you would need literally square miles, if not cubic miles, of tracking detectors high in the atmosphere to see any interesting interaction. Auger is great by any cosmic ray detector standards but it just measures the overall parameters of a huge cloud of hadrons and photons.

    I understand that p-pbar collisions were disfavoured for the LHC as they would produce significantly more hard q-qbar events, which from the point of view of new physics are almost entirely background.

  25. Pingback: Dorigo contra Dittmar, un combate de boxeo dialéctico contra el Tevatrón en el CERN « Francis (th)E mule Science’s News

  26. Andrei says:

    Don’t you get the feeling that the Dittmars at CERN are getting scared? Until now they had a sort of monopoly on the Higgs boson, as they thought that the Tevatron would never say anything meaningful on the matter. Now that the Tevatron has results, CERN is getting ants in its pants. Especially since the LHC is as far from running as it was 2 or 3 years ago.

    I think running deadlines should be set by the funding governments, not some scientists and contractors interested in prolonging their jobs. In other words: operate by x date or lose it.
