Just to show that New Scientist doesn’t always get it wrong, there’s an unusually good article in this week’s issue by Davide Castelvecchi about prospects for new discoveries at the LHC. Besides the usual story, he concentrates on the question of how the data will be analyzed.
For one perspective on the problem, he gives an amusing quote from Ian Hinchliffe of Atlas who says:
“People always ask me, ‘If you discover a new particle, how will you distinguish supersymmetry from extra dimensions?’ I’ll discover it first, I’ll think about it on the way to Stockholm, and I’ll tell you on the way back.”
Nima Arkani-Hamed optimistically claims that “The most likely scenario is that we’re going to have a ton of weird stuff to explain,” and Castelvecchi quotes him and others as promoting a new sort of “bottom-up” data analysis. Here the idea (various implementations exist under names like VISTA and SLEUTH) is that instead of looking “top-down” for some specific signature predicted by a model (e.g. the Higgs, superpartners, etc.), one should instead broadly look at the data for statistically significant deviations from the standard model. Castelvecchi mentions various people working on this, including Bruce Knuteson of MIT. Knuteson and a collaborator have recently promoted an even more ambitious concept called BARD, designed to automate things and cut some theorists out of a job. The idea of BARD is to take discrepancies from the standard model and match them with possible new terms in the effective Lagrangian. Arkani-Hamed is dubious: “Going from the data to a beautiful theory is something a computer will never do.”
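To make the “bottom-up” idea a bit more concrete, here’s a rough sketch of the kind of scan that VISTA or SLEUTH performs, written as toy Python rather than anything resembling the actual analysis code: sort the events into many exclusive final states, compare each observed count to the standard model expectation, and flag channels where the deviation is improbably large. The channel names and numbers below are invented purely for illustration.

```python
# Toy sketch of a "bottom-up" scan in the spirit of VISTA/SLEUTH (not their
# actual code): compare observed event counts in many exclusive final states
# against Standard Model expectations and flag large deviations.
# All channel names and numbers below are invented for illustration.

from scipy.stats import poisson

# hypothetical final states: (SM expectation, observed count)
channels = {
    "2 jets + missing ET":    (1520.0, 1498),
    "1 lepton + 2 jets":      ( 830.0,  865),
    "2 leptons + missing ET": (  41.0,   68),
    "3 leptons":              (   4.2,    5),
}

def local_p_value(expected, observed):
    """Poisson probability of a fluctuation at least as extreme as observed,
    in whichever direction the data actually fluctuated."""
    if observed >= expected:
        return poisson.sf(observed - 1, expected)   # P(N >= observed)
    return poisson.cdf(observed, expected)          # P(N <= observed)

if __name__ == "__main__":
    for name, (exp, obs) in channels.items():
        p = local_p_value(exp, obs)
        flag = "  <-- deviation worth a closer look" if p < 1e-3 else ""
        print(f"{name:24s} expected {exp:7.1f}  observed {obs:5d}  p = {p:.2e}{flag}")
```

In this toy run only the dilepton channel gets flagged; the point is just that the scan is model-independent, looking everywhere at once rather than in the few places a particular theory predicts something.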
While the article focuses on the LHC, the role of such “bottom-up analyses” may soon be explored at the Tevatron, where they have a huge amount of data coming in, and have already put a lot of effort into the “top-down” approach (for the latest example, see Tommaso Dorigo’s new posting on CDF results about limits on Higgs decays to two Ws.) For the next few years all eyes will be on the LHC, but it will be the Tevatron experiments that will have a lot of data and be far along in analyzing it. Maybe lurking in this data will be the new physics everyone is hoping for, and it will be of an unexpected kind that only a broad-based “bottom-up” analysis might find. Doing so may raise tricky questions about what is statistically significant and what isn’t, and require a lot of manpower, at a time when these experiments will be losing lots of people to the LHC and elsewhere.
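To see why the statistical-significance question is genuinely tricky, here’s a toy calculation (again just an illustration, not any experiment’s actual procedure): a fluctuation that looks like a 4-sigma effect in one channel becomes much less impressive once you account for the number of channels scanned, since the chance of at least one such fluctuation somewhere grows with the number of places you look.

```python
# Minimal illustration of the "trials factor" problem: converting a local
# p-value into a global one, assuming (as a simplification) that the scanned
# channels are independent. Numbers are invented for illustration.

from scipy.stats import norm

def global_p_value(local_p, n_channels):
    """Probability that at least one of n_channels independent channels
    shows a fluctuation at least as extreme as local_p."""
    return 1.0 - (1.0 - local_p) ** n_channels

def p_to_sigma(p):
    """Convert a one-sided p-value to the equivalent Gaussian significance."""
    return norm.isf(p)

if __name__ == "__main__":
    local_p = 3e-5          # roughly a 4-sigma local fluctuation
    for n in (1, 100, 1000):
        g = global_p_value(local_p, n)
        print(f"{n:5d} channels scanned: global p = {g:.3g} (~{p_to_sigma(g):.1f} sigma)")
```

With a thousand channels in the scan, that nominal 4-sigma local excess corresponds to only about a 2-sigma global effect, which is the kind of bookkeeping a broad “bottom-up” analysis has to get right before claiming a discovery.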