LHC Prospects

Just to show that New Scientist doesn’t always get it wrong, there’s an unusually good article in this week’s issue by Davide Castelvecchi about prospects for new discoveries at the LHC. Besides the usual story, he concentrates on the question of how the data will be analyzed.

For one perspective on the problem, he gives an amusing quote from Ian Hinchliffe of ATLAS, who says:

“People always ask me, ‘If you discover a new particle, how will you distinguish supersymmetry from extra dimensions?’ I’ll discover it first, I’ll think about it on the way to Stockholm, and I’ll tell you on the way back.”

Nima Arkani-Hamed optimistically claims that “the most likely scenario is that we’re going to have a ton of weird stuff to explain,” and Castelvecchi quotes him and others as promoting a new sort of “bottom-up” data analysis. Here the idea (various implementations exist under names like VISTA and SLEUTH) is that instead of looking “top-down” for some specific signature predicted by a model (e.g. the Higgs, superpartners, etc.), one should instead look broadly at the data for statistically significant deviations from the standard model. Castelvecchi mentions various people working on this, including Bruce Knuteson of MIT. Knuteson and a collaborator have recently promoted an even more ambitious concept called BARD, designed to automate things and cut some theorists out of a job. The idea of BARD is to take discrepancies from the standard model and match them with possible new terms in the effective Lagrangian. Arkani-Hamed is dubious: “Going from the data to a beautiful theory is something a computer will never do.”
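
To make the idea concrete, here is a minimal sketch of what such a scan might look like, in the spirit of VISTA/SLEUTH but not their actual algorithms: enumerate many exclusive final states, compare observed event counts with standard model expectations, and rank channels by how unlikely the observed fluctuation is under a simple Poisson model. The channel names and numbers below are invented purely for illustration.

# Hedged sketch of a "bottom-up" scan: rank final states by Poisson tail probability.
from scipy.stats import poisson

# Hypothetical channels: final state -> (expected SM count, observed count).
channels = {
    "e mu + missing ET":        (120.0, 131),
    "2 jets, 2 b-tags":         (4500.0, 4620),
    "3 leptons + missing ET":   (8.5, 19),
    "photon + missing ET":      (60.0, 55),
}

def tail_p(expected, observed):
    # One-sided Poisson p-value in the direction of the observed fluctuation.
    if observed >= expected:
        return poisson.sf(observed - 1, expected)   # P(N >= observed)
    return poisson.cdf(observed, expected)          # P(N <= observed)

# Flag the channels with the most significant deviations first.
for name, (exp, obs) in sorted(channels.items(), key=lambda kv: tail_p(*kv[1])):
    print(f"{name:24s} expected {exp:8.1f}  observed {obs:5d}  p = {tail_p(exp, obs):.2e}")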

While the article focuses on the LHC, the role of such “bottom-up” analyses may soon be explored at the Tevatron, where a huge amount of data is coming in and a lot of effort has already gone into the “top-down” approach (for the latest example, see Tommaso Dorigo’s new posting on CDF results about limits on Higgs decays to two Ws). For the next few years all eyes will be on the LHC, but it is the Tevatron experiments that will have a lot of data and be far along in analyzing it. Maybe lurking in this data will be the new physics everyone is hoping for, and it will be of an unexpected kind that only a broad-based “bottom-up” analysis might find. Such an analysis may raise tricky questions about what is statistically significant and what isn’t, and require a lot of manpower, at a time when these experiments will be losing lots of people to the LHC and elsewhere.
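
One reason the significance question is tricky is the trials factor: when hundreds of final states are scanned, the single most discrepant one is guaranteed to look impressive on its own. A rough, purely illustrative calculation (the numbers are invented and the channels are assumed independent):

# Toy trials-factor ("look-elsewhere") estimate; numbers are invented.
p_local = 1e-4       # best local p-value found anywhere in the scan
n_channels = 500     # number of (assumed independent) final states examined

# Probability that at least one background-only channel fluctuates this far.
p_global = 1 - (1 - p_local) ** n_channels
print(f"local p = {p_local:.0e}, global p ~ {p_global:.2f}")   # about 0.05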


10 Responses to LHC Prospects

  1. AGeek says:

    “Beating a grand master in chess is something a computer will never do.”

    Sorry, couldn’t resist.

  2. luzo says:

    «“Beating a grand master in chess is something a computer will never do.”

    Sorry, couldn’t resist.»
    ———–x———–

    Well, you should have!

    Chess has a very deterministic set of rules that the computer has to follow to reach an optimal, well-defined state (winning the game).

    Extracting physics from raw data involves induction, not deduction from well-defined rules, and defining the optimal outcome may actually be part of the problem.

    This is why I think this analogy is flawed.

  3. Von Neumann and Morgenstern invented Game Theory. John Forbes Nash, Jr., took it deeper, using polytope theory in his famous (but rarely read) PhD dissertation.

    John H. Conway et al then produced a Theory of Everything for games, including the transfinite and non-standard, of which Chess is merely a single point on the landscape of all possible games.

    John Baez has used category theory to explain the structure of Chess^Go and Go^Chess (“^” means exponentiation).

    In a tournament, you don’t care what your opponent’s metatheory of Chess is, or whether they consider Checkers to be in the same Calabi-Yau space as Chess. You simply want to falsify their theory, by winning!

    The analogy is thus: in the World Tournament of Physics, who wins the interminably protracted String Theory versus Everyone Else International Grandmaster Championship?

  4. Well, Luzo… “the computer”, one day (quite soon, some say) will have induction, too.
    The singularity is near…
    Cheers,
    T.

  5. luzo says:

    “Well, Luzo… “the computer”, one day (quite soon, some say) will have induction, too.
    The singularity is near…”

    ————x————-

    It has been near for some 20 years now, with almost no concrete evidence that it’s getting closer.

    Kind of reminds me of a TOE that has also been very promising for a little over 20 years and has delivered very little when one demands concrete substance, or even something to back one’s “faith” in it.

    Sorry, now I was the one that couldn’t resist 😉

    Induction is not the problem; the biggest problem (in my opinion) is that it is not even evident when you have an optimal outcome.

    Bottom-up approaches work to indicate a possible theory, which one then verifies working top-down.

  6. Seth says:

    Going from data to a beautiful theory is something a computer will never do, sure. Going from data to an effective Lagrangian, though, seems more possible; at that point the theorists will still have to step in.

    I don’t expect BARD to be useful so much as a tool for discovering new physics as for pointing people in the right direction on what to look for via more traditional methods with the next round of data.

  7. AGeek says:

    Beauty is in the eye of the beholder. Writing down all terms of a QFT action consistent with Lorentz invariance and renormalizability can certainly be reduced to a perfectly mechanical task; so can fitting experimental data to them. As noted by Seth, the result may not be “beautiful” to some – but then, so what?

    luzo: the term “Singularity” does not refer to old-style AI (though your claim that the latter has “delivered nothing” is wide of the mark); it refers to the extrapolation of long-term historical trends in a wide range of technological and economic areas, all pointing to the year 2050 or thereabouts as the time when they go essentially vertical on the time scale of (contemporary) humans.

    If you are an optimist like Kurzweil, you take this to mean that we will then break through to a whole new regime (human minds transferred to an immensely faster electronic substrate, or whatever technology Intel will be peddling by then). If you are a pessimist, you take it to mean that 2050 or so is the latest possible time when our civilization becomes unsustainable (runs out of gas and other resources maybe; Peak Oil and all that). Me, I prefer to be an optimist, since I fully plan to be around well after 2050. 😉

  8. luzo says:

    AGeek, thanks for clarifying the Singularity point. I was indeed referring to old-style AI (since that’s what is related to the post), but didn’t actually say it delivered nothing. I meant that the initial prospects were FAR too optimistic. Go and ask someone working in AI in, say, the early 1980s whether they ever thought that by 2007 human-computer interaction would be taking place using such contrived tools as the ones we are using right now. Most, if honest, will say they probably thought that by 2007 things would be much more advanced. Some would even have thought that my doubts regarding things like BARD would be as irrelevant by 2007 as doubting that the earth is round.

    From my life experience I have to say things look to be slowing down, and I still haven’t seen the often-mentioned (and desired) “paradigm shift”. This holds in almost every field: physics, social structure, biology, etc. Nothing really interesting and corroborated for decades now. Physics at least had this in the early 20th century; other fields are still lacking.

    Regarding physics, I’m not a pessimist, and I do hope the time we are living in now is analogous to, say, the time when Kelvin was very old. I don’t think the things that currently need explanation will remain without one for much longer, nor do I expect that explanation to be achievable with the tools we currently have. I’m not a big fan of singularities in human behaviour.

    Regarding the LHC, I tremendously doubt automatic data analysis alone will give us any theory that predicts yet unobserved phenomena. I could be wrong though.

    What I am really hoping for is some really unexpected event observed at the LHC. Without a good wake-up call from Mother Nature, this stagnation will most likely persist.

    One criterion for beauty is efficiency and conciseness (for me at least) :)

  9. Strange point. I thought that all the people who do “low”-energy phenomenology (including myself): B or D physics, Peskin-Takeuchi parameters, flavor-changing neutral currents in the quark and lepton sectors, do just that — advocate a bottom-up approach by providing new low-energy observables which should be used in a combined analysis with direct studies at the LHC… after all, that’s how any particular model’s parameter space is constrained…

  10. Speaking of NS getting things wrong: a friend (David Orban) pointed me to a short article on the Higgs boson in the latest edition of New Scientist, also online, which quotes me…

    In it, my name is misspelled! 🙁

    T.
