Tevatron vs. LHC

The news from Fermilab is that the Tevatron has set a new luminosity record, with a store last Friday that had an initial luminosity of 4.04 x 10^32 cm^-2 s^-1, or, equivalently, 404 inverse microbarns/sec. For more about this, see a new posting from Tommaso Dorigo, where someone from Fermilab writes in to comment that they’re not quite sure why this store went unusually well.

Over in Geneva, commissioning of the LHC continues. There, the highest initial luminosity reported by ATLAS is 2.3 x 10^27 cm^-2 s^-1, roughly 175,000 times lower than the Tevatron number. I haven’t seen a recent number for the total luminosity delivered by the LHC to the experiments so far, but I believe it’s a few hundred inverse microbarns. The Tevatron is producing more collisions in one second than the LHC has managed in the three weeks since first collisions.
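
As a quick back-of-the-envelope check of the figures above (nothing here beyond the numbers already quoted, plus the definition 1 barn = 10^-24 cm^2), here is the arithmetic as a short Python snippet:

# Quick check of the instantaneous-luminosity figures quoted above.
# Unit conversion: 1 barn = 1e-24 cm^2, so 1 inverse microbarn corresponds to 1e30 cm^-2.
INV_MICROBARN_IN_CM2 = 1e30

tevatron_lumi = 4.04e32   # cm^-2 s^-1, the record Tevatron store
lhc_lumi = 2.3e27         # cm^-2 s^-1, best initial luminosity reported by ATLAS so far

# The Tevatron number expressed in inverse microbarns per second:
print(tevatron_lumi / INV_MICROBARN_IN_CM2)   # 404.0

# Ratio of the two instantaneous luminosities:
print(tevatron_lumi / lhc_lumi)               # roughly 1.76e5, i.e. ~175,000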

The current plan for the LHC is to devote most of the rest of 2010 to increasing the luminosity of the machine, with a goal of reaching something somewhat lower than the Tevatron luminosity (1-2 x 10^32 cm^-2 s^-1) by the end of the year. Then the plan is to run flat out at this luminosity throughout 2011, accumulating data at a rate of about 100 pb^-1/month and ending up with a total of 1 fb^-1. The hope is that this will allow them to be competitive with Fermilab for some measurements, with the lower luminosity compensated by the factor of 3.5 advantage in beam energy that they now enjoy.
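
For a rough sense of how those numbers hang together, here is the same arithmetic in Python; the ten months of running and the 0.98 TeV Tevatron beam energy are my own illustrative assumptions, not figures from the post:

# Rough arithmetic behind the 2011 running plan sketched above.
monthly_rate_pb = 100.0   # pb^-1 per month, the accumulation rate quoted above
months_of_running = 10    # assumed number of productive months in 2011 (illustrative)

total_fb = monthly_rate_pb * months_of_running / 1000.0   # 1 fb^-1 = 1000 pb^-1
print(total_fb)           # 1.0, i.e. the 1 fb^-1 total mentioned above

# Beam-energy ratio: 3.5 TeV per LHC beam vs. 0.98 TeV per Tevatron beam (assumed figures)
print(3.5 / 0.98)         # about 3.57, the "factor of 3.5" advantage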

The Tevatron has already produced over 8 fb^-1 of data, and the current plan is to run the machine through the end of FY 2011, reaching at least 10 fb^-1, and then shut it down for good. The LHC is supposed to go into a long shutdown throughout 2012, not coming back into operation until 2013. Even if all goes well, it likely will not have accumulated enough data to decisively compete with the Tevatron until late 2013 or 2014. Under the circumstances, it’s hard to believe that there aren’t plans being proposed at Fermilab to keep the Tevatron running for several more years, until 2014. The machine should then be able to end up with a total of 15-20 fb^-1 worth of data, which could be enough to allow them to see evidence of the Higgs at the 3 sigma level over the entire possible mass range.

Shutting down the Tevatron as planned would free up funds for other experiments, and free parts of the accelerator complex for use by neutrino experiments. It will be interesting to see whether, instead, the decision gets made to try to outrace the LHC to the Higgs and other high-energy frontier physics over the next few years.

Update: The integrated luminosity seen by the ATLAS detector for the current run so far is about 400 inverse microbarns; CMS has seen about 350 delivered, of which roughly 300 worth has been recorded.

Update: This morning the LHC has started delivering collisions with stable squeezed beams to the experiments, initial luminosity 1.1-1.2 x 10^28 cm^-2 s^-1.

Update: Integrated luminosity delivered to each experiment at the LHC is now up to around 1000 inverse microbarns.


27 Responses to Tevatron vs. LHC

  1. Verified Armonyous says:

    Do you think the comparisons of instantaneous luminosity you make are in any way significant?
    I think you can make your point about the need to run the Tevatron longer (with which I fully agree) without trashing the LHC.

  2. Peter Woit says:

    I’m not in any way trashing the LHC, which is undeniably the long-term future of the field and is moving forward towards fulfilling that role. What I’m doing is trying to put out the best numbers available about the current situation and what can be expected from the two machines over the next few years, as well as noting the obvious questions these numbers raise.

  3. milkshake says:

    It is worth pointing out that the Tevatron is not obsolete yet, and it is not unfair to make a direct comparison, because the LHC’s promise of delivering breakthrough results soon has been over-publicized while Fermilab has had a hard time with funding…

  4. luminosity says:

    “The Tevatron is producing more collisions in one second than the LHC has managed in the three weeks since first collisions.”

    LEP ran from 1989-2000. One good weekend in 1994 produced as much integrated luminosity as the whole of the 1989 run. See the chart at the bottom of the page for the integrated luminosity at DELPHI for LEP-I (the “Z0 factory”):
    http://hepunx.rl.ac.uk/~adye/thesis/html/node9.html

    See also here for LEP-II integrated luminosity 1994-2000
    http://accelconf.web.cern.ch/Accelconf/e00/PAPERS/TUOBF101.pdf

    It is a matter of history that 70% of the integrated luminosity of LEP was delivered during its last three years of operation. All of this merely underscores the *enormous* technical challenge of commissioning and operating a (large) new accelerator. The Tevatron itself ran for years with instantaneous luminosities of approximately 1E30. So the LHC delivers 2.3E27 now? Perhaps the right viewpoint is that 2.3E27 is better than zero.

  5. Paul Wells says:

    From a global perspective I think it would be more efficient to shut down Tevatron now.

    LHC will find the Higgs eventually if it exists.

    This reminds me of SLC and LEP competing to find the number of neutrinos from the width of the Z.

    Why not shift resources now to neutrino work?

    Is this just a matter of ego?

  6. eff says:

    Who is to define “efficiency”?
    SLC and LEP — again a too-narrow focus on the HEP experiment (# of neutrinos and the Z width) and neglect of accelerator technology. The SLC also pioneered a new accelerator principle/technology, that of a linear collider. These are not “turnkey” operations that one can buy off the shelf, nor can they be designed “on paper” with computer simulations.

  7. Peter Woit says:

    Paul,

    By the same argument, any piece of scientific apparatus should be shut down as soon as there’s a viable plan for something better. That’s not the way science is done, for good reason. It’s not because of ego, but because actually getting results is important, so you don’t just stop before you have them when you think someone else will have a better shot in the future.

    In this case, no one knows how long it will actually take the LHC to reach the point of producing data that makes the Tevatron irrelevant. The history so far is that things have taken significantly longer than planned.

    In addition, for some sorts of measurements, higher center of mass energy is not so crucial. For example, for a low mass (115 GeV) Higgs, my understanding is that the higher energy of the LHC doesn’t really help.

    It’s also true that having multiple confirming observations of something is quite valuable. If a Higgs is seen, it’s going to be a difficult signal, and seeing it at four different experiments will help confirm it is really there.

    The argument for shutting down the Tevatron is not the LHC, but that once it has collected a certain amount of data, it’s not worth spending a lot more money to collect only marginally more data.

  8. Peter,
    do you think that Americans feel a little bit “frustrated” to see that the LHC has already outclassed the Tevatron in mass-media publicity?
    Dr. Kathrine M.

  9. Peter Woit says:

    Katherine,

    I don’t think this is a nationalistic issue. Among Americans, most non-physicists don’t know or care about the question, and most physicists are quite excited by the fact that the LHC is finally getting into operation and very much looking forward to what it will find.

    Tevatron vs. LHC is not exactly a hostile competition, since the experimentalists involved on both sides are often the same people (many work on both the Tevatron and LHC detectors). It’s also not much of a nationalistic competition, since many European physicists work on the Tevatron experiments, many Americans on the LHC experiments.

  10. Pingback: Hoy por hoy, lo que el Tevatrón del Fermilab logra en un segundo requiere tres semanas en el LHC del CERN « Francis (th)E mule Science's News

  11. Verified Armonyous says:

    There you have it. Now people have started quoting your unfair comparison, which is akin to comparing your reading skills with those of a three-month-old baby.

  12. nessuno says:

    Armonyous,
    I don’t think the post is an “unfair comparison”; rather, it is a representation of the real current situation. By the way, I found a similar post some days ago (http://blog.vixra.org/2010/04/10/lhc-needs-more-luminosity/). But if you want a comparison: the Tevatron is like an old 8-ton truck running at its best and delivering goods to its customers. The LHC is like a brand-new 40-ton truck, but so far tested only with a 4-gram load, and the plan for increasing the load is, for the time being, unpredictable. So, if the customers think it is still useful to get some goods within a predictable delay, they should keep the old truck running. Only when the new one delivers what it has promised can the old truck be retired. After all, the SLC was not stopped before LEP produced decent luminosity.

  13. Verified Armonyous says:

    Sure, I’m fully convinced and in favor of keeping the Tevatron running for much longer. Still, I believe the wording of the comparison was not very fortunate and was certainly unnecessary to make the point. Only a theoretician could speak like that.

  14. th says:

    A true-blue theoretician wouldn’t mention the LHC or the Tevatron at all. Everything could be deduced from pure Platonic intellectualism (like strings?). I am much impressed by PW’s diligent efforts to present up-to-date reports on the progress and status of the LHC and the Tevatron (also RHIC, as with the private donation of funds by Simons to keep that machine running in 2006).
    http://www.math.columbia.edu/~woit/wordpress/?p=328

    Unfair comparison? Not at all!
    Kudos to PW!

  15. Coin says:

    Could it eventually be possible to do a single statistical analysis / bump hunt using merged data from the LHC and the Tevatron?

  16. Peter Woit says:

    Coin,

    They run at different energies, and one is a proton-proton machine, the other a proton-antiproton machine. So it doesn’t really make sense to combine the data.

    If they do both end up with marginally significant Higgs signals, there probably is some sort of statistical analysis that would quantify the improved significance obtained by taking the two results into account together.
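
    One simple illustration of the idea (this is just Stouffer’s rule for combining two independent, equally weighted Gaussian significances, chosen here as an example; the experiments’ actual combinations are considerably more involved):

    \[
      Z_{\mathrm{comb}} \;=\; \frac{Z_1 + Z_2}{\sqrt{2}}\,,
      \qquad \text{e.g.}\quad
      \frac{2\sigma + 2\sigma}{\sqrt{2}} \;\approx\; 2.8\,\sigma .
    \]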

  17. Coin says:

    Interesting, thanks.

  18. Bill K says:

    “one is a proton-proton, the other a proton-antiproton machine.”

    This has always left me scratching my head, for the following reason.

    1) Suppose that back in 1975 or whenever, someone had said to you, “Hey, let’s build a proton-antiproton collider at Fermilab.” Wouldn’t your reaction have been, “No, of course not, that’d be like shooting yourself in the foot. Just think how difficult it would be to generate the antiprotons and collimate them into a usable beam. You’d never be able to produce enough luminosity that way.” And yet the Tevatron does so, admirably.

    2) Given that success, suppose that back in 1985 or whenever, someone had said to you, “Hey, let’s build a collider at CERN and make it a proton-proton machine.” Wouldn’t your reaction have been, “No, of course not, that would just complicate the magnet design. Look what a great job the Tevatron does using antiprotons.”

    3) Now suppose in 2010 someone says to you, “Hey, let’s build an even larger machine.” Given that both the Tevatron and the LHC are now remarkably successful — of the two designs, p-p or p-pbar (never mind the other options!), do you see either of them as having a real advantage, and if so, which one and why?

  19. Remus says:

    In the case of the LHC, I guess it had to be proton-proton because the same accelerator was intended to accelerate, and collide, heavier nuclei as well.

  20. Ralph says:

    Well, the LHC just got to 1.2e28 🙂 Still a way to go to 1e34…

    Of course, if the Tevatron decides to make it a race, the LHC might decide to play along and put off the 2012 shutdown; even one year could well be enough to increase the luminosity by a factor of ten at this stage of the LHC…

    [The LHC needs to do its repairs before it gets too irradiated, though – I wonder if that is a constraint?]

  21. Peter Woit says:

    Bill K,

    My understanding is that the reason the LHC is a proton-proton machine is that, as you go to higher energies, you want higher luminosity since cross-sections of interesting processes are falling off with energy. So the LHC design luminosity is 10^34 cm^-2s^-1, and there’s no way to get to this kind of luminosity with anti-protons. The Tevatron luminosity is very much limited by the ability of the accelerator complex to accumulate and store anti-protons in a beam.
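
    For reference, the relation behind this is simply that the expected number of events for a given process is its cross-section times the integrated luminosity, so a smaller cross-section has to be made up for with more luminosity:

    \[
      N_{\mathrm{events}} \;=\; \sigma \times \int \mathcal{L}\,dt .
    \]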

    It looks like, as Ralph mentions, this evening they’ve had real success, producing stable colliding squeezed beams of higher intensity and thus a significant increase in luminosity.

  22. pbar says:

    Bill K ~ “1) Suppose that back in 1975 or whenever, someone had said to you, “Hey, let’s build a proton-antiproton collider at Fermilab.” Wouldn’t your reaction have been, “No, of course not, that’d be like shooting yourself in the foot. Just think how difficult it would be to generate the antiprotons and collimate them into a usable beam. You’d never be able to produce enough luminosity that way.” And yet the Tevatron does so, admirably.”

    We need a serious history lesson here! The idea of colliding proton-antiproton beams WAS FIRST PROPOSED AT FERMILAB IN 1976, using stochastic cooling to increase the phase-space density of the pbar beam to a useable level (i.e. sufficient luminosity in collisions), and the Fermilab management KICKED THEM OUT.

    The proponents of p-pbar colliding beams were Carlo Rubbia, David Cline and Peter McIntyre. See for example Peter McIntyre’s home page
    http://faculty.physics.tamu.edu/mcintyre/

    “In 1976 Prof. McIntyre was the first to propose the possibility of making colliding beams of protons and antiprotons using the large synchrotrons at Fermilab and at CERN. This work led to the discovery of the weak bosons at CERN in 1982.”

    McIntyre says “Fermilab and CERN” but it was Fermilab that they turned to first. The Fermilab people openly laughed and told them that stochastic cooling violated Liouville’s theorem, etc., much as Bill K says above.

    So Rubbia and Cline went to CERN, which was hungry for a Nobel Prize and was willing to try the stochastic cooling idea (an idea that had been invented by Simon van der Meer, a CERN engineer, in the first place). CERN converted the SPS into the SppbarS; it worked, produced the W and Z, and led to the 1984 Nobel Prize in Physics for Rubbia and van der Meer.
    http://nobelprize.org/nobel_prizes/physics/laureates/1984/

    All of this is well-documented. A book which describes the history of these events is “Nobel Dreams” by Gary Taubes
    http://www.amazon.com/Nobel-Dreams-Deceit-Ultimate-Experiment/dp/1556151128

    The Tevatron did not yet exist in 1976. It would have been necessary to convert the Fermilab Main Ring synchrotron into a p-pbar collider (as CERN did with the SPS). Pbar beams were first circulated at Fermilab in 1985, and the Tevatron became operational around then.

    Why were Rubbia et al. kicked out by the Fermilab management? At least part of the reason is that Robert Wilson, who was then the director of Fermilab, had devoted significant resources to Rubbia’s experiment at FNAL, leading to the high-y anomaly (go look THAT up on your own!), which was later debunked by CERN. This followed on from the “alternating neutral currents” fiasco, also perpetrated by Rubbia at his experiment at FNAL (go look THAT up on your own, too!). Rubbia had a history of bogus grand claims leading to embarrassment at FNAL. The high-y anomaly business broke in 1977. When, in 1976, Rubbia came up with yet another harebrained scheme — it was obviously a very speculative and difficult idea — and was obviously no longer focusing on high-y, to which Wilson had devoted significant resources, Wilson realized that Rubbia was off on yet another tangent. He lost his temper with Rubbia and kicked him out. It is a matter of history and irony that Robert Wilson, despite his many talents, gave priority to the high-y anomaly (which was a false phenomenon) and rejected the pbar stochastic cooling idea (which led to a Nobel Prize).

    The SSC was proposed as a proton-proton collider for precisely the same reason that the LHC is a proton-proton collider, exactly as PW says — it is difficult to produce antiprotons in sufficient quantity (and phase-space density) to attain adequate luminosity. The disadvantage of a proton-proton collider (RHIC, LHC) is that one needs two rings instead of one. This doubles (approximately) the cost of the final ring in the accelerator complex — LEP was a single ring — but it simplifies the beam production, storage and acceleration process. So there are trade-offs.

    But never doubt that the idea of colliding proton-antiproton beams was first proposed at Fermilab, which famously rejected the idea. Wilson resigned as director of FNAL around 1977, and Leon Lederman became the director. There was the famous “Armistice Day” meeting at Fermilab in Nov 1977, where everyone was invited to speak openly, and Lederman decided as a result that FNAL would not attempt to compete/race with CERN to make and collide pbar beams. It is a tribute to Robert Wilson’s foresight that he left enough space in the Fermilab Main Ring tunnel to build a second ring — he visualized the two rings as a p-p collider — but the Tevatron instead became a single-ring p-pbar collider. But Wilson also kicked out Rubbia for proposing stochastic cooling and p-pbar collisions instead of focusing on the high-y anomaly.

    Bill K #2 — “Given that success, suppose that back in 1985 or whenever, someone had said to you, “Hey, let’s build a collider at CERN and make it a proton-proton machine.” Wouldn’t your reaction have been, “No, of course not, that would just complicate the magnet design. Look what a great job the Tevatron does using antiprotons.””

    Understand first the history of the Tevatron. Understand the history of the idea of colliding p-pbar beams.

    Rubbia and the bogus grand claims – after discovering the W and Z, the UA1 collaboration went on to “discover” supersymmetry (monojets) and also the top quark (at 44 GeV). Remember those?

    CERN punished Rubbia for these transgressions by making him the DG. Bogosity pays!

  23. Hi all,

    although I do not share pbar’s rather negative view of Carlo Rubbia (who, after indeed several fiascos and some dangerous politicking to steer the labs into financing his endeavours, hit on the right idea, became over the course of a winter the world’s expert on antiprotons, and was the engine that brought the SppS into existence), his reconstruction is rather accurate.

    Regardless of neutral currents, y anomalies, monojets and top quarks, every ounce of Rubbia’s Nobel prize for showing the world that W and Z bosons were there is well-earned.

    Cheers,
    T.

  24. pbar says:

    My intent was really the history of p-pbar collisions and Fermilab, and Rubbia got more mention than I intended.

    BTW “surivivor” –> survivor (unless it’s deliberate?)

  25. P says:

    Thanks for a very interesting history lesson.

  26. Paul Wells says:

    Peter,

    Do you have a reference for what the LHC luminosity optimizations actually are and what future optimizations are planned?

    Naively, one would think a proton-proton collider should have a huge advantage over a proton-antiproton collider, since protons are easier to make than antiprotons…

    Thanks
    Paul

  27. Peter Woit says:

    Paul,

    There are lots of sources of up-to-date info online at CERN about plans for what needs to be done to increase the LHC luminosity towards the ultimately planned design luminosity. I’m not linking to them since there’s an unfortunate history of CERN shutting off access to such information sources after seeing links from blogs.

    The main issue for luminosity increases this year at the LHC is the machine protection system. They’re starting to enter a regime of “unsafe beams”, where a loss of control of the beam could lead to very serious damage to the machine. Because of this they plan to be very careful and go about it slowly, making sure they completely understand and control the behavior of the machine at one luminosity level before moving on to the next.
