This Week’s Hype

The announcement at CERN tomorrow of a likely-looking signal for a Standard Model Higgs with a mass around 125 GeV will probably unleash a flood of hype from theorists claiming this as evidence for their favorite Beyond the Standard Model scenario. One obvious problem with any such claim is that the CERN results so far correspond well to the Standard Model with no additions whatsoever, so spinning them as providing support for things like supersymmetry and string theory will require some work.

For the last decade we have known that the Higgs mass is above 114 GeV (from LEP) and unlikely to be very much higher than that (from precision electroweak results). This summer’s LHC results disfavored masses above about 130 GeV, so for the last few months we’ve known that if the Standard Model Higgs is there, it should be between 114 and about 130 GeV. For a couple of weeks, news has been circulating widely that ATLAS and CMS are both seeing something around 125 GeV.

First out of the gate in the hype derby is Gordy Kane, who is quoted by Davide Castelvecchi at Scientific American claiming that string theory predicts the Higgs mass to be between 122 and 129 GeV:

“If it’s in that range it’s an incredible success for connecting string theory to the real world,” Kane says. He says he is confident that the upcoming LHC announcements, if they pan out as predicted, will constitute evidence for string theory. “I don’t think my wife will let us bet our house, but I’ll come close,” he says.

It’s unclear exactly what he’s willing to bet the house on. If it’s just that the Higgs is in that range, this might have something to do with the plots from the experiments that have been circulating widely but privately over the last few days. In a remarkable coincidence, after more than 25 years of unsuccessfully trying to extract a definite experimental prediction from string theory, Kane and collaborators were able to achieve the holy grail of the subject (a prediction of the one unknown parameter in the SM, the Higgs mass) just a week before the CERN announcement. They submitted their paper to the arXiv on the evening of Monday, December 5, a few days after rumors of a 125 GeV Higgs were posted on blogs on Friday, December 2.

The paper deals with a “prediction” you get based on a host of assumptions about which particular class of string theory compactifications to look at. The main result is that in this particular class of models, you can relate the Higgs mass to the parameter tan(β) that occurs in SUSY extensions of the SM. As you increase tan(β) from around 2, the Higgs mass lies in a band, increasing from 105 GeV to a maximum of about 129 GeV:

We will demonstrate that, with some broad and mild assumptions motivated by cosmological constraints, generic compactified string/M-theories with stabilized moduli and low-scale supersymmetry imply a Standard Model-like single Higgs boson with a mass 105 GeV < M_h < 129 GeV if the matter and gauge spectrum surviving below the compactification scale is that of the MSSM, as seen from Figure 1. For an extended gauge and/or matter spectrum, there can be additional contributions to M_h.

This conclusion and Figure 1 correspond closely to what is in the slides of Kane’s talk at String Phenomenology 2011 this past August. The plot of Higgs masses as a function of tan(β) is there, giving a range of 108 GeV to 127 GeV. There is an intriguing comment on the conclusion slide:

Single light Higgs boson, mass about 127 GeV unless gauge group extended.

I can’t tell where the number 127 came from. Since 127 GeV is the top limit in the figure, and the wording “unless gauge group extended” is used, one guess would be that Kane meant that 127 GeV was the upper bound on the Higgs mass in this class of models.
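For a rough sense of where numbers in this range come from at all, here is a minimal sketch (in Python) of the textbook leading-log estimate of the lightest MSSM Higgs mass: the tree-level piece m_Z|cos 2β| plus the dominant one-loop top/stop logarithm, with stop mixing and higher-order corrections neglected. This is emphatically not the calculation in the Kane et al. paper, which involves a full spectrum computation with scalar masses fixed by the compactification assumptions; the common scalar mass M_S below is a purely illustrative choice. It does show, though, why the mass rises with tan β and flattens out in the mid-120s.

```python
import math

MZ, MT, V = 91.19, 173.2, 246.0  # Z mass, top mass, Higgs vev (GeV)

def mh_leading_log(tan_beta, m_scalar):
    """Lightest MSSM Higgs mass in the decoupling limit: tree-level
    m_Z^2 cos^2(2*beta) plus the dominant one-loop top/stop logarithm,
    with stop mixing neglected (leading-log approximation only)."""
    cos2b = (tan_beta**2 - 1.0) / (tan_beta**2 + 1.0)
    tree = (MZ * cos2b) ** 2
    loop = 3.0 * MT**4 / (4.0 * math.pi**2 * V**2) * math.log(m_scalar**2 / MT**2)
    return math.sqrt(tree + loop)

M_S = 5000.0  # assumed common stop/scalar mass in GeV -- purely illustrative
for tb in (2, 3, 5, 10, 30):
    print(f"tan(beta) = {tb:2d}  ->  m_h ~ {mh_leading_log(tb, M_S):.0f} GeV")
```

With these made-up inputs the band runs from roughly 103 GeV at tan β = 2 to about 126 GeV at large tan β, so the shape of Kane’s plot is generic MSSM physics; the sharpness of any “prediction” rests entirely on how well the superpartner spectrum is pinned down, which is where all the model-dependence lives.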

There’s nothing in Kane’s August talk about a 122-129 GeV range for the Higgs mass, but in the December 5 paper it appears explicitly three times:

  • In the abstract there’s:

    When the matter and gauge content below the compactification scale is that of the MSSM, it is possible to make precise predictions. In this case, we predict that there will be a single Standard Model-like Higgs boson with a calculable mass 105 GeV < M_h < 129 GeV depending on tan β (the ratio of the Higgs vevs in the MSSM). For tan β > 7, the prediction is: 122 GeV < M_h < 129 GeV.

    I don’t see where the tan β > 7 comes from; presumably it’s in one of their other papers.

  • In the introductory paragraph there’s:

    Furthermore, in G2-MSSM models [1] we find that the range of possible Higgs masses is apparently much smaller, 122 GeV < M_h < 129 GeV.

  • G2-MSSM models are mentioned only briefly again in the paper, at the third occurrence of this particular mass range:

    For instance, in G2-MSSM models arising from M theory, Witten’s solution to the doublet-triplet splitting problem [38] results in μ being suppressed by about an order of magnitude. Hence, in these vacua, the Higgs mass sits in the range 122 GeV < M_h < 129 GeV.

    Kane has chosen Lubos Motl’s blog as the place to guest post and promote this string theory “prediction”, concluding there:

    If generic compactified string theories with stabilized moduli correctly predict there is effectively a single Higgs boson and correctly predict its mass, it will be a huge success for the main directions of particle physics beyond the Standard Model, for supersymmetry and for string theory, both of which are crucial for the prediction. It will be a huge success for LHC and the accelerator physicists and experimenters who made the collider and the detectors and the analysis work. It will put us firmly on the path to understanding our own string vacuum, and toward the ultimate underlying theory. The value of the Higgs boson mass not only confirms the approach that predicts it, remarkably depending on its numerical value it may allow an approximate measurement of tan β, the μ parameter, the squark and gravitino masses, that the gauge group and matter content of the theory below the string scale is that of the MSSM, and that light (TeV scale) gluinos and dark matter are likely.

    I don’t quite see how finding a SM Higgs at 125 GeV is going to give all these different numbers and pieces of information. Kane also claims that these string theory models predict gluinos visible at the LHC within the next few months:

    Then the gluino should be detected at LHC, and because of the heavy scalars the gluino decays are different from the ones usually discussed, being dominantly to third family quarks, top and bottom quarks. I won’t explain that here because of space and time; we can return to it as the gluinos are being detected in coming months. They have not yet been systematically searched for.

    In addition, there’s a dark matter prediction that should be tested in “1-2 years”.

    If I were Kane, I wouldn’t bet my house (or go to the press claiming the 125 GeV Higgs as a “huge success” for string theory) just yet. Assuming he’s right, within months the gluinos will be there, and his ticket to Stockholm will be assured.

    Update: My prediction in the first paragraph is coming true faster than I thought. In tonight’s hep-ph listings one finds:

    http://arxiv.org/abs/1112.2415 (m_H = 127 +/- 5 GeV)

    http://arxiv.org/abs/1112.2462 (m_H < 128 GeV)

    http://arxiv.org/abs/1112.2659 (m_H = 126 +/- 3.5 GeV)

    http://arxiv.org/abs/1112.2696 (m_H > 120 GeV)

    There are going to be a lot of these…

    Update: Over at a Nature live chat, Kane is giving the public some interesting explanations:

    “for a higgs to be meaningful it must be part of a supersymmetric theory, so the superpartners should be found. The form it takes implies that gluinos should be found with masses around a TeV, maybe less, by summer, and decaying mainly into topquarks and bottom quarks.”

    “Recently we have published string theory calculations that imply the higgs boson mass is 125 GeV so if it is there are strong implications for connecting string theory to the real world, and for what the higgs discovery implies. we did that before the data.”

    “string theories are now well enough understood to predict higgs physics”

    There will be another live chat involving Kane tomorrow, this one at Science magazine.

    Update: Just took a look at the Science chat. Kane seems to have completely lost touch with reality, somehow deciding that the two experiments have reached the 5 sigma level needed to claim a discovery. As far as I know, he’s the only one in the world to think this.

    Gordy Kane:
    YES. an experimenter from one experiment can’t say that, but theorists see that two different experiments both saw a signal at about the same mass, and also saw additional channels, so it’s a discovery!

    Gordy Kane:
    The 5 sigma is a criteria people have chosen. i think as soon as the data are combined from two detectors, which is entirely legitimate, then the signal will indeed be over 5 sigma.

    The following claims make about the same amount of sense as the discovery one:

    Comment From Tom
    Does the existence of a 125 Gev Higgs give any support to supersymmetry?

    Gordy Kane:
    Yes. first, for a long time it has been known that the lightest higgs boson of supersymmetry should be lighter than about 135 GeV (actually closer to 140 GeV but people make assumptions), so this is consistent. Then the supersymmetric string theories as i mentioned do predict the 125 number and it is a supersymmetric lightest higgs boson.

    Update: The hype goes on, with a column today at Nature.

    Posted in This Week's Hype | 37 Comments

    Higgs Predictions and Results

    Tomorrow at 14:00 CET, CERN will start to unveil the results of this year’s LHC Higgs search; see here. Jester has just posted details consistent with what I’ve been hearing for the past couple of weeks, although his numbers are slightly different. The big news is that both experiments are seeing something that looks exactly like a Standard Model Higgs in the Higgs -> gamma-gamma channel around 125 GeV. Some details about this signal:

  • I’ve heard that ATLAS sees this with significance a bit less than 3 sigma, rising to 3.5 sigma if you combine it with results from other channels. Taking into account the look-elsewhere effect, the significance of seeing such a fluctuation anywhere in the mass range studied is 2.2 sigma. Jester has

    The combined significance is around 3 sigma, the precise number depending on statistical methods used, in particular on how one includes the look-elsewhere-effect.

  • What makes this something to take very seriously is that CMS is also seeing much the same thing in the gamma-gamma channel around the same mass. I’ve heard their significance is 2.5 sigma. Jester has

    All in all, the significance at 125 GeV in CMS is only around 2 sigma.

  • Another channel that might start to have some sensitivity to a Higgs of this mass is the “golden channel”, where you look for 4 leptons reconstructed as coming from 2 Z’s. Here Jester says that ATLAS has three events near 125 GeV, contributing to the higher significance in the combined result. CMS also sees an excess involving three events, but in the range 117-121 GeV, which should be too low to come from a 125 GeV Higgs. They also see a 2.1 sigma bump in their combination at 119 GeV [due to the 4-lepton excess; they see nothing in gamma-gamma at 119 GeV].
  • In this business, a 2 sigma bump from a single experiment is not worth taking seriously; around 3 sigma is where things start to get serious (and 5 sigma is the conventional standard for claiming a discovery, which they’re definitely not at yet). But a 3 sigma bump from one experiment and a 2.5 sigma bump from the other, at the same place, is serious evidence indeed (see the rough sketch after this list for how such numbers combine). It is still quite conceivable that this kind of signal could disappear with more data (for that we’ll have to wait until mid-2012). I think Jester has it about right:

    There is a good chance we’re finally looking at the real thing, I’d say 50% based on the data alone and 80% adding our sincere convictions that Higgs must really be in that mass range.
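As a rough guide to how numbers like these behave (and emphatically not the experiments’ actual statistical procedure, which uses full likelihood fits), here is a back-of-envelope sketch: a naive equal-weight (Stouffer) combination of two independent local significances, and a crude look-elsewhere correction that simply multiplies the local p-value by a trials factor. The trials factor of 10 is an assumed, illustrative number, roughly the size of the search window divided by the mass resolution.

```python
from scipy.stats import norm

def combine_stouffer(*z_scores):
    """Naive equal-weight combination of independent z-scores
    (Stouffer's method) -- a crude stand-in for a real combined fit."""
    return sum(z_scores) / len(z_scores) ** 0.5

def global_significance(z_local, trials):
    """Crude look-elsewhere correction: inflate the local one-sided
    p-value by a trials factor, then convert back to a z-score."""
    p_local = norm.sf(z_local)
    p_global = min(1.0, trials * p_local)
    return norm.isf(p_global)

# Roughly the local significances quoted above: ~3 sigma (ATLAS), ~2.5 sigma (CMS).
print(f"naive ATLAS+CMS combination: {combine_stouffer(3.0, 2.5):.1f} sigma")

# A 3 sigma local excess with an assumed trials factor of ~10 deflates to
# roughly the 2.2 sigma global significance quoted above.
print(f"look-elsewhere corrected: {global_significance(3.0, 10):.1f} sigma")
```

Even this optimistic back-of-envelope combination comes out near 4 sigma, still well short of the 5 sigma discovery standard.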

    One thing that can be predicted with certainty is a flood of papers from theorists claiming that their favorite model predicts this particular Higgs mass. Something to keep in mind when evaluating such claims is that for more than ten years we have known from LEP that the Higgs mass is above 114 GeV, and from precision electroweak measurements that it can’t be too much above that value. Preliminary data early this summer showed some indications of a signal around 140 GeV (leading some to claim this as vindication of the multiverse, see here). By the end of the summer this was gone, with the combined CMS+ATLAS data excluding a Higgs down to 140 GeV or so at the 95% confidence level, and down to around 130 GeV at 90% (tomorrow’s data should have each experiment excluding down to 130 GeV at 95%, with a Philip Gibbs combination result sure to follow soon). Rumors of the gamma-gamma signal at 125 GeV have been circulating for at least the past two weeks.

    We’ll soon move from rumors to results, and I’ll add anything new or different we hear tomorrow to this posting. Surely there will be a press release and a lot of versions of the results coming out of CERN, covered by the media and many bloggers. For some good recent blog postings explaining what to look for tomorrow, see Matt Strassler and Tommaso Dorigo.

    Update: One correction. Besides the 124-125 bump in gamma-gamma, there’s another bump in the CMS combination at low mass, around 119 GeV, but this is due just to the “golden channel” excess there, they see nothing there in gamma-gamma.

    Posted in Experimental HEP News | 8 Comments

    String and M-theory: answering the critics

    Mike Duff has a new preprint out, a contribution to the forthcoming Foundations of Physics special issue on “Forty Years of String Theory” entitled String and M-theory: answering the critics. Much of it is the usual case string theorists are trying to make these days, but it also includes vigorous ad hominem attacks on Lee Smolin and me (I’m described as having an “unerring gift for inaccuracy”, and we’re compared to people who campaign against vaccination “in the face of mainstream scientific opinion”). One section consists of a rather strange 3-page rant about Garrett Lisi’s work and the attention it has gotten, a topic that has just about nothing to do with string theory.

    Duff explains that his motivation for answering the critics is that we have been successful on the public relations front, supposedly responsible for the British funding agency EPSRC “office rejecting” (i.e. rejecting without peer review) grant proposals on string theory. I know nothing of this, but I think it’s clear to everyone that the perception of string theory among physicists has changed, and not for the better, over the past decade. One dramatic way to see this is to notice that at this point, US physics departments have essentially stopped hiring string theorists for permanent appointments (i.e. at the tenure-track level).

    String theorists have a problem not just with the public, but with their colleagues. The main reason for this is not Smolin or me, but the failure of the string theory research program. Duff’s take on whether the landscape is pseudo-science is that string theory can’t even tell whether there is a landscape, and he is “doubtful whether the kind of issues we are considering here will be resolved any time soon.” On the question of the time scale for possible progress, he invokes the two millennia it took to get from Democritus in 400 BC to quantum theory early last century. His list of greatest achievements of string theory in recent years has just two items: applications to fluid mechanics and his own work on entanglement in quantum information theory. Given this, it’s hard to see why he’s surprised the EPSRC is cutting back on support for string theory.

    While Duff has detailed complaints about exactly what Smolin wrote in The Trouble With Physics, he mentions my book without saying anything about what is in it (one suspects his policy of how to deal with it is that of Clifford Johnson and some other string theorists: refuse to read it). He does have some specific complaints about material from my blog:

  • According to Duff:

    he [Woit] wrongly credits me with having told author Ian McEwan about the Bagger-Lambert-Gustavsson model in M-theory, which he then proceeds to criticise.

    This is based on a book review about Ian McEwan’s novel Solar, where I wrote about M-theory references in the book that “McEwan seems to have gotten this from Mike Duff, who is thanked in the acknowledgments”. Since Duff is an expert on these topics and the only particle theorist thanked, this was an obvious guess, worded as such. In this review I wasn’t criticising M-theory, just noting an interesting occurrence of it in popular culture. My only criticism was of McEwan, for the minor anachronism of a topic from 2007 showing up in a book set in 2000. In a segment from the novel that I quoted, one character is expressing opinions about M-theory research which you could call critical, but this material was written by the novelist, not by me (and I’m still wondering where McEwan would have gotten this from, other than from Duff).

  • In two cases, Duff claims that I misrepresented his words on the blog. Both are cases where I wrote the entry based on information from someone who had heard him speak, since I didn’t have access to his actual words at the time. In general I try to be very careful about what I quote, making sure it is accurate and in context. In these cases, what was reported here was clearly labeled as someone else’s impression of his talk, and Duff has some reason to be annoyed at not being quoted accurately, although it wasn’t me doing the quoting.

    The first case was a posting about the debate in 2007 between Duff and Smolin (see also Clifford Johnson’s blog, which includes comments from Smolin), where one attendee described the scene following Smolin’s talk as:

    Smolin sat down. Duff stood up. It got nasty.

    The trouble with physics, Duff began, is with people like Smolin.

    The transcript actually shows:

    Good evening everyone. The trouble with physics, Ladies and Gentlemen, is that there is not one Lee Smolin but two.

    followed by an extensive description of Smolin as deceptive and two-faced, saying completely different things at the debate and in his book. From the transcript, I’d describe the “Duff stood up. It got nasty” part as completely accurate, the “The trouble with physics, Duff began, is with people like Smolin” much less so.

  • The second case has to do with a posting about a recent BBC program on superluminal neutrinos, where Duff discussed string theory explanations for this. Based on e-mails from two people who had watched the program, I wrote that it “evidently featured trademark hype from string theorist Mike Duff about how string theory could explain this.” The first commenter, who had also seen the program, wrote in “I watched this tonight and can confirm that it did include stringy hype.” Duff complains that

    I said that, although superluminal travel is in principle possible in the “braneworld” picture of string theory, in my opinion this was NOT the explanation for the claims

    It still seems to me that going on a TV program to claim string theory as a possible explanation for this kind of experimental result can accurately be described as “hype”, even if, since no one believes the experimental result, you express the opinion that string theory isn’t the right explanation in this case.

  • Duff is much less interested in the virtues of accuracy when he describes my words. I guess I’ve joined Smolin on his list of targets because of what I’ve had to say on the blog (see here, here and here) concerning his publicity campaign claiming a “prediction of string theory” about qubits (recall that he thinks this is one of the two main advances in string theory of this decade). He claims that “falsifiability of string theory is the single issue of Peter Woit’s ‘single-issue protest group'”, and that my argument about the qubit business “may be summarised as (1)It’s wrong (2)It’s trivial (3)Mathematicians thought of it first.” One can read the postings and decide for oneself, but I’d summarise the argument quite differently: Duff has nothing that can possibly be described as a “prediction of string theory” and it’s misleading hype to issue press releases claiming otherwise. The experimentally testable “prediction” is that “four qubits can be entangled in 31 different ways”, but if experimentalists make measurements of four qubits that show something different, one can be sure that the headlines will not be “string theory shown to be wrong in a lab”.

    Duff’s article contains an appendix about this, in the form of a “FAQ”, where he explains that he approved the text of the press release headlined “Researchers discover how to conduct first test of ‘untestable’ string theory”, which is misleading hype by any standard. Initially, someone in the Imperial media team who was successfully misled added the subtitle “New study suggests researchers can now test the ‘theory of everything’”, which was later removed. Duff claims that Shelly Glashow, Edward Witten and Jim Gates told journalists that they didn’t agree with this because of the “theory of everything” subtitle, implying that otherwise they were fine with the “first test of ‘untestable’ string theory” business (except for Gates noting that in any case this is just supergravity, not string theory). It would be interesting to hear from the three of them whether they’re really on board with this “first test of ‘untestable’ string theory”.

    What Duff and some other string theorists don’t seem to understand is that this sort of “answering the critics” is exactly what has gone a long way toward creating the situation at the EPSRC that he is worried about. Unfortunately it has damaged not just the credibility of string theory, but that of mathematically sophisticated work on particle theory in general. According to Duff

    Just recently, in fact, EPSRC completely abolished its Mathematical Physics portfolio.

    Update: Matin Durrani at Physics World (also a target of Duff’s ire) has a blog entry about this here.

    Update: Lee Smolin sent me the following comments on the Duff article:

    Maybe it would help if I provide some context for the debate Mike Duff took part in with Nancy Cartwright and myself in London in 2007. The occasion for the debate was the publication of my book TTWP in the UK and the reason for the debate was that I had insisted that, as the point of the book was to explore the role of disagreement and competing research programs in science, the best way to illustrate it was to have a debate. String theory was discussed in the book as a case study illustrating the issues and so it seemed appropriate to have a debate with a string theorist. I also insisted that in each of these debates a philosopher of science would be included to highlight the fact that the main themes of the book were longstanding issues in philosophy of science, having to do with how consensus forms within a scientific community on issues on which there is initially wide disagreement.

    There were two such debates in the UK, the other was at Oxford with Philip Candelas and Simon Saunders. That went very well, as Philip gave a strong defence of string theory that stayed focused on the scientific issues.

    Duff’s construction of two me’s is, so far as I can tell, a debating tactic to avoid addressing the key issues my book raises. He starts with

    “Who can dispute that the ultimate goal of a scientific theory is to make experimentally testable predictions? Who will challenge the need to keep an open mind and listen to unorthodox views? Who can disagree with the assertion that our current understanding is only partial and that the ultimate truth has yet to be uncovered? What Lee Smolin said in the London debate [29] was so uncontroversial that, had I confined my response [11] to these remarks, the evening would have fizzled out in a bland exchange of truisms.”

    Indeed the constant theme of my book is the development of those “truisms.” What Duff does not explore is that in spite of the agreement there may be over these “truisms”, they have strong consequences for the evaluation of research programs in fundamental physics. Apparently we disagree about the implications for string theory. What Duff could have done is acknowledged these disagreements and explored the reasons for them. Instead he claims to attack my book, but it is striking that he does so not by criticizing the text I actually wrote, but by attacking first the publicity blurb on the cover and then responses from journalists. As I have stated many times, the material on the cover was neither my text nor my choice and is more strongly worded than anything in the actual book. I hope it is obvious also that you cannot attack a book by pointing out inaccuracies in reviews.

    When he finally does get around to quoting from the book, he makes a few good points mixed in with distortions gotten by quoting out of context. Had he stuck to the good points he had we could have had a useful debate that would have shown the audience the role of disagreement among scientists faced with difficult questions. Had he done that, there would have been no need to construct a fiction of two me’s. I am happy to leave it to readers of my book to judge whether its text is or isn’t completely consistent with the “truisms” he asserts we agree about.

    There is one aspect of Duff’s rant which deserves correction, which is his attack on me related to Garrett Lisi. What Duff says is, “So when Lee Smolin described him [Lisi] as the next Einstein, the publicity juggernaut moved into overdrive”. There are several untruths in this short sentence.

    First, this refers to a Discover article of March 2008 which says, “With Smolin’s aid, DISCOVER has scoured the landscape and found six top candidates who show intriguing signs of that Einsteinian spark” of whom Lisi is one. This was, so far as I recall, based on a phone call with an editor at Discover following a piece I had written for Physics Today on the challenges faced by those who do high-risk, high-payoff research. I think anyone who looks up the full list of six will see that the editors were aiming to illustrate a wide range of approaches to fundamental research, of which Lisi is at one pole. And as they make clear, the choice of the list was theirs and not mine.

    Furthermore, the media attention on Lisi had begun and peaked already in November of 2007, sparked by a New Scientist article, following immediately the posting of his article on arxiv.org. And while there was a very exaggerated media response, which I and others did our best to advise against, there was no “publicity juggernaut”, i.e. no attempts by Lisi or anyone to seek publicity for him, no press releases, no publicist, no calls to journalists except to strongly advise the story was premature. I told everyone who asked not to write a story on Lisi because the preprint had just been uploaded and there had not been time for experts to evaluate it. Indeed, New Scientist had quoted me very much out of context, ignoring emails I sent them advising them not to write a story on Lisi’s paper before the experts could evaluate it. So the reality was the opposite of the impression created by Duff’s sentence.

    None of this is new, none of it is said for the first time. It is depressing to revisit these debates from five years ago. Most of us have moved on. At least I have, as readers of my next books, as well as the article I was invited to write for the same special issue, will, I hope, see.

    Update: The following is Garrett Lisi’s response to the Duff article. I should note that I’m not complaining about Duff’s listing of my titles. If you want to make up your mind who is right based on titles, Duff’s your man in this argument.

    Michael Duff’s article is full of deceptive half-truths. To attack the commentary on Lee’s book, while avoiding Lee’s actual arguments, is just one example of this fundamentally dishonest tactic. A similar example is his reference to Peter as “Computer Administrator and Senior Lecturer in Discipline,” as if Peter was not also a very knowledgable mathematical physicist. Duff then launches an attack on my work, once again focusing on a large volume of commentary by others rather than on my actual arguments. Also, Duff refers only to my first paper, saying it’s never been peer-reviewed and published, avoiding the fact that I’ve since published papers on the theory, including “An Explicit Embedding of Gravity and the Standard Model in E8.”

    Scouring Duff’s rhetoric, baseless statements, and ad hominem attacks in search of some factual argument supporting his attack on my work, I can find only this:

    “Nature (and the standard model of particle physics) has three chiral families of quarks and leptons. ‘Chiral’ means they distinguish between left and right, as they must to account for such asymmetry in the weak nuclear force. But as rigourously proved by Jacques Distler and Skip Garibaldi, Lisi’s construction permits only one non-chiral family.”

    This is, once again, a misleading half-truth, avoiding the fact that a chiral family of quarks and leptons can be part of a non-chiral representation space, as is the case in E8. I cannot credit Duff alone for this deception, as its source is Jacques Distler — a master of the half-truth — but I can blame Duff for supporting it.

    For anyone who actually cares about the state of E8 theory, I would recommend my recent papers. Apparently Duff considers the work sufficiently threatening to the string program that he needs to attack it in this dishonest manner. If string theory models are as twisted and misleading as the statements in Michael Duff’s paper, it’s no wonder they’re dying.

    Posted in This Week's Hype | 67 Comments

    A 125-126 GeV Higgs?

    Some more detail on Higgs rumors I’ve been hearing recently. Evidently the latest ATLAS data shows an excess in the gamma-gamma channel around 126 GeV, of the size expected if the Higgs is there, and CMS is also seeing an excess (2 sigma?) around 125 GeV in the same channel. I haven’t heard anything about confirmation of this in other channels. Independently, someone has posted a similar rumor at viXra log, and Philip Gibbs is writing about it here. This looks to be still not a conclusive Higgs signal, but the closest thing yet. More details may or may not emerge before the public talks on December 13.

    Update: Latest rumor is that the significance of the ATLAS gamma gamma bump is almost 3 sigma.


    Update: This morning’s rumors are a 3.5 sigma 126 GeV excess at ATLAS in the ATLAS-only combination, and 2.5 sigma at 124 GeV for CMS. Heuer’s message to all CERN personnel says the December 13 announcements will be “significant progress in the search for the Higgs boson, but not enough to make any conclusive statement on the existence or non-existence of the Higgs.” Presumably they’re waiting for 5 sigma before claiming conclusive proof.

    Posted in Experimental HEP News, Favorite Old Posts | 77 Comments

    More Higgs Non-News

    The latest Higgs non-news is that there is news about when there will be news. The Scientific Policy Committee at CERN will meet on December 12 and 13, with the agenda for December 13 featuring a 15 min presentation by the CERN Director-General on “CERN plans for communications on the Higgs boson search at the LHC” in the morning. This will be followed in the afternoon by a public event including half-hour updates on the SM Higgs searches from each of the experiments, and a “joint public discussion” about what it all means.

    Before the LHC I tended to be 50/50 on the odds for a SM Higgs vs. no SM Higgs. As data has come out during the past year, and rumors have arrived in recent months, my take on these odds has gone back and forth. First it looked like maybe there was a 140 GeV Higgs, then not. Then I was hearing that nothing was being seen by either experiment, followed by rumors about something being seen by one, but not the other, making a SM Higgs start to look unlikely. Lately though, I’m starting to hear that maybe both experiments are seeing something in the Higgs to gamma-gamma channel. Is it at the same mass? What’s the statistical significance if you combine the results? Looks like we’ll hear about this on December 13 (unless someone leaks the news to a blogger first…).

    For now, I’m back to 50/50. According to the latest Higgs coverage in the New York Times, back in 2005 Frank Wilczek was willing to give 10/1 odds in favor of the Higgs (although he wants a SUSY version), at least if the stakes were in Nobel chocolate coins.

    Update: This month’s Physics World has a long article by Matthew Chalmers (not available free online, I think; see here) about the search for SUSY. It includes details about David Gross’s SUSY bet (with Ken Lane):

    SUSY is “alive and well” according to the Nobel-prize-winning physicist David Gross of the Kavli Institute for Theoretical Physics in Santa Barbara, who helped to create quantum chromodynamics – the theory of the strong force. “People shouldn’t pay too much attention to the bounds now because it’s signals that matter,” he told Physics World. “When will I give up on SUSY? I have a serious bet with Ken Lane that it will be found after 50 inverse femtobarns [of data],” he says.

    The LHC won’t accumulate that amount of data until quite a while after it comes back up at or near design energy in 2014. So, it looks like Gross won’t have to pay off his gambling debts until 2015 or so.
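A back-of-envelope version of that arithmetic, with per-year luminosity figures that are purely illustrative assumptions (not official projections):

```python
# Sketch of when the LHC might reach the 50 inverse femtobarns of the
# Gross/Lane bet, under made-up yearly luminosity assumptions: a few
# fb^-1 per year at current energy, more once at or near design energy.
assumed_per_year = {2011: 5, 2012: 15, 2013: 0,     # 2013: assumed shutdown year
                    2014: 25, 2015: 30, 2016: 30}   # at or near design energy

total, target = 0.0, 50.0
for year in sorted(assumed_per_year):
    total += assumed_per_year[year]
    print(f"end of {year}: ~{total:.0f} fb^-1 delivered")
    if total >= target:
        print(f"-> on these assumptions, the bet gets settled around {year}")
        break
```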

    Posted in Experimental HEP News | 30 Comments

    The Ultimate Guide to the Multiverse

    Yet another cover story about the Multiverse can be found this week at New Scientist, which calls it The Ultimate Guide to the Multiverse. As just one more in a long line of such stories over the last decade, a trend that shows no signs of slowing down, one can be pretty sure that this is not yet the “ultimate” one, nor even the penultimate one.

    The content is the usual: absolutely zero skepticism about the idea, and lots of outrageous hype from the usual suspects (Bousso, Tegmark, Susskind, etc.) We’re told that scientists are now performing tests of the idea, even at the LHC. The LHC test has been a great success: Laura Mersini-Houghton used the multiverse to predict that the LHC would not see supersymmetry, and that prediction has worked out very well so far. There’s a companion editorial Neutrinos and multiverses: a new cosmology beckons, which tells us that the multiverse is now orthodoxy, backed by “almost everything in modern physics”:

    The widest crack of all concerns a theory once considered outlandish but now reluctantly accepted as the orthodoxy. Almost everything in modern physics, from standard cosmology and quantum mechanics to string theory, points to the existence of multiple universes – maybe 10^500 of them, maybe an infinite number.

    If our universe is just one of many, that solves the “fine-tuning” problem at a stroke: we find ourselves in a universe whose laws are compatible with life because it couldn’t be any other way. And that would just be the start of a multiverse-fuelled knowledge revolution…

    These are exciting, possibly epoch-making, times.

    This past week also saw the premiere of the Multiverse episode of Brian Greene’s Fabric of the Cosmos series on PBS. It’s more or less an hour-long infomercial for the Multiverse, with the argument against it pretty much restricted to some short grumpy comments by David Gross about how he didn’t like it. Brian’s pro-multiverse argument was that many new advances in physics all point to a multiverse, and he portrayed support for the idea as resting on a three-legged structure. One of the legs was string theory, and I’ve described elsewhere recently how circular reasoning makes this one very shaky.

    The multiverse propaganda machine has now been going full-blast for more than eight years, since at least 2003 or so, and I’m beginning to wonder “what’s next?”. Once your ideas about theoretical physics reach the point of having a theory that says nothing at all, there’s no way to take this any farther. You can debate the “measure problem” endlessly in academic journals, but the cover stories about how you have revolutionized physics can only go on so long before they reach their natural end of shelf-life. This has gone on longer than I’d ever have guessed, but surely it has to end sooner or later, and I have no idea what rough beast will slouch onto future covers of New Scientist and episodes of Nova a few years down the road.

    Posted in Multiverse Mania | 52 Comments

    The Infinity Puzzle

    There’s a fascinating new book by Frank Close out this week about the history of the Standard Model, called The Infinity Puzzle. Until now I’ve always recommended The Second Creation, by Crease and Mann, as the best popular book for this history, but Close’s new book gives that one a run for its money. While Crease and Mann is a comprehensive overview, covering theory and experiment, as well as a longer time-frame, Close gives an insider’s look focused on the decade or so that led up to the Standard Model coming together around 1973.

    Knowing the history of a subject has always seemed to me an integral part of really understanding it, so I’d argue that anyone who wants to really understand modern particle physics should spend some time with a book like this. In addition, there’s an outside chance we may soon be seeing the collapse of one of the central pillars of the Standard Model, the Higgs field, and if this happens, an understanding of where the Higgs came from may very well be relevant to anyone who wants to think about how to live without it.

    About a year ago I spent some time looking into the history of what I think is best called the Anderson-Higgs mechanism, writing a long posting about it here. Particle physicists have long overlooked the fact that it was condensed matter theorist Philip Anderson who not only first understood the basic physics that was going on, but even wrote a paper aimed at explaining it to particle theorists (which they ignored). Anderson’s insights grew out of his work on the BCS theory of superconductivity, a subject in which the role of gauge invariance was not so easily understood. If the Higgs field needs to be replaced, the analogy with BCS theory might provide a clue about what could replace it. Another book I’ve been reading recently is a collection of Anderson’s essays, called More and Different: notes from a thoughtful curmudgeon. Many of the people and topics he discusses there are much less familiar to me, but I confess to enjoying the curmudgeonly tone, and wishing I knew more about the history and physics behind the superconductivity research that he describes. Included in his collection is a review of my book I was very pleased by. His prediction about what the LHC will see is one I’m very sympathetic to: no supersymmetry, and “we will probably discover unexpected complexity in the Higgs phenomenon.”

    Anderson is justifiably scornful that the APS awarded the Dannie Heinemann prize [Anderson’s mistake, it was the J. J. Sakurai prize] for work on the Higgs to no less than seven [actually it was six] people, managing to leave out Anderson. Close gives Anderson his due, but also gives by far the most detailed and well-researched account available of the work of Higgs, Brout, Englert, Guralnik, Hagen and Kibble in this area. He puts the most dramatic revelation from his research in a footnote (page 388):

    A bizarre coincidence is that on Monday, October 5, just a week before Guralnik, Hagen, and Kibble’s paper was received by the editor of Physical Review Letters in New York, and hence around the time that it would have been submitted to the journal, Peter Higgs gave a seminar about his mechanism at Imperial College. Neither Guralnik nor Kibble has any memory of this, and extensive correspondence between us has failed to shed light on this.

    If the Higgs particle does show up at the LHC and the Nobel committee starts debating who should get the prize, this may become relevant. Another thing that I learned from Close that argues for Higgs in this context is that he was the first (in 1966) to write down a model with Yukawas giving masses to the fermions [Oops, this is wrong, my misunderstanding of a footnote that I didn’t check. It was the gauge boson masses being referred to].

    Close knows especially well the British cast of characters in this story, and one issue he devotes attention to is the unusual story of J. C. Ward’s eventful career and the question of why Ward and Kibble [as well as Guralnik] weren’t the ones to come up with the Weinberg-Salam model. Ward and Salam had worked on unified electroweak theory, minus the Higgs, and Kibble was very much involved in the Higgs story. One of the factors at play according to Close’s account was Ward’s rather paranoid nature, which made him unwilling to share ideas.

    Another Nobel-related part of the book that will likely be controversial is the discussion of Salam’s case for sharing the Nobel with Weinberg and Glashow. This topic recently was raised by Norman Dombey in a preprint on the arXiv (discussed a bit here), which refers to Close’s book. Close gives a detailed description of Salam’s activities around the time he was supposedly doing the Nobel Prize winning work, raising the possibility that he may not have had the right idea independently of Weinberg. One thing that is clear about this particular story though is that no one involved, including Weinberg and Salam, understood the significance of the Weinberg-Salam model at the time.

    An argument might be made that the book has quite a lot of “inside baseball” about who exactly did what, and what people’s relative cases for recognition might be. If you really detest this sort of thing and want nothing but the physics, maybe you should stick to The Second Creation. But if, like me, you’re fascinated by this history and want to learn something new about it, go out and get a copy soon.

    Update: I should make it clear that what I wrote here about Salam is my own interpretation of the story, not that of Close. He explains that Salam learned about the Higgs mechanism from Kibble, and had a unified electroweak theory with Ward, so it makes perfect sense that he would come up with Weinberg-Salam, independently of Weinberg, and he was lecturing about something. Still, the lack of any written record of exactly what Salam had pre-Weinberg makes one wonder…

    Update: For the first-hand case that Salam did lecture on the Weinberg-Salam model pre-Weinberg (which is also described in Close’s book) here’s this from Robert Delbourgo:

    Dear Peter

    There have been murmurs on your blog-site, following Dombey’s article I think, which cast doubt on Salam’s worthiness for the prize. I wish to refute the innuendos and aspersions which are circulating.

    I was indeed present at the talks given by Salam on SSB for weak interactions where the famous model was described. Paul Matthews also attended, but being Oct that year, Tom Kibble was away on sabbatical at Rochester. I am prepared to take an oath on that.

    It was more than one lecture, but I cannot remember whether it was two or three talks which he gave, as it was quite long ago. Then I went to the library and spotted Weinberg’s paper, newly arrived, and pointed it out to Salam and urged him to write up his own independent discovery ASAP. Matthews also encouraged him to do so and the first opportunity was the Nobel Symposium. That is the long and short of it.

    I hope that ends the rumours and controversy!!!

    Bob Delbourgo

    Posted in Book Reviews | 22 Comments

    Still Waiting for Supersymmetry

    The headline story at the APS Physics site is Still Waiting for Supersymmetry, by Sven Heinemeyer, a commentary on a PRL article from CMS reporting no evidence for supersymmetry.

    According to Heinemeyer:

    It’s important to realize that CMS’s results do not exclude supersymmetric theories. Rather, they only conclusively say one of two things. One possibility is that the CMSSM (the specialized version of the MSSM) is realized in nature, but the supersymmetric partner particles, the gluinos and squarks, are relatively heavy—too heavy to be produced in large numbers at the LHC so far…

    The other interpretation is even simpler: while supersymmetry is realized in nature, it might not take the form described by the CMSSM, but possibly that of any one of the many (GUT scale) models. Different versions of supersymmetry make different predictions for the outcomes of high-energy proton-proton collisions. Many of these outcomes are more complicated than what is shown in Fig. 1, and to see them would require experiments to investigate many more collisions (and to study them for a longer time). Consequently, in these other models, it will only be possible to place much weaker bounds on the new particle masses (so far, however, no such dedicated analysis has been performed).

    I would have thought that there’s an even simpler third alternative: no supersymmetry in nature at all, but I’m not a SUSY phenomenologist…

    Posted in This Week's Hype | 41 Comments

    Higgs Non-News

    The combination of summer ATLAS and CMS Higgs results has finally appeared today (see here and here). This was originally supposed to be ready back in August, and has been circulating in various versions for quite a while. The bottom line (95% exclusion for 141-476 GeV) was mentioned here last week. They also quote limits using a much more stringent standard (99% exclusion for 146-443 GeV, excepting three small regions). Also worth mentioning is the 90% exclusion result, which reaches down to 132 GeV, leaving a SM Higgs possible only within the region 114-132 GeV.
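Since these posts quote limits at 90%, 95% and 99% confidence alongside excesses measured in sigmas, here is a minimal reference sketch for converting between one-sided confidence levels, Gaussian significances and p-values. The experiments’ actual exclusions use the CLs method rather than a simple Gaussian, so this is for orientation only.

```python
from scipy.stats import norm

# One-sided Gaussian dictionary relating confidence levels, sigmas and p-values.
for cl in (0.90, 0.95, 0.99):
    print(f"{cl:.0%} CL exclusion  <->  {norm.isf(1 - cl):.2f} sigma (one-sided)")

for z in (2, 3, 5):
    print(f"{z} sigma excess  <->  one-sided p-value {norm.sf(z):.2e}")
```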

    What everyone really wants to know is when the experiments will release results based on the much larger full 2011 data set. Today’s HCP 2011 talk just says:

    LHC experiments will analyze the x3 data already collected before 2012 Winter Conferences.

    Tevatron will provide the final results on 10 fb^{-1} by the 2012 Summer Conferences.

    On the same time scale, there will be a combination LHC + Tevatron.

    On this schedule, a possible 95% Higgs exclusion would not happen before next summer. However… I’ve seen comments from Fermilab that they should have results ready for Moriond in early March, and they expect to be able to rule out the Higgs at 95% (if it isn’t there) over the relevant mass region. More immediately, the LHC experiments have been tasked with providing updates of their Higgs results, including per-experiment combinations, for the CERN Council Week (December 12-16). Rumors from the two experiments indicate that one experiment is seeing no excesses that could be attributed to the Higgs, the other only a very small number of events in one channel (ZZ->4l). It seems not impossible that the results available (publicly or not…) in mid-December will come within striking distance of ruling out the Higgs (at the 90% or 95% level) over the relevant low mass range.

    One interesting aspect of today’s data release is that it agrees closely with what Philip Gibbs put together back in September. For more about this, see here, especially this plot. In the past, many have speculated that the first observation of the Higgs would be reported on a blog. Now, it’s looking not unlikely that a possible exclusion of the Higgs will be first reported at viXra log…

    Update: CMS has released a video including footage of their internal discussions back in August when they decided not to release the ATLAS/CMS combination. There’s no real explanation of what changed, but by November people’s concerns had been addressed and they decided to release the combination.

    Posted in Experimental HEP News | 31 Comments

    Knots and Quantum Theory

    A commenter on the last posting pointed to the new video available at the IAS site of Witten’s recent public talk there on Knots and Quantum Theory. The talk is aimed at a general audience, including supporters of the IAS, so it’s rather non-technical. For the technical details behind what Witten is talking about (his recent work on Khovanov homology and QFT), see this survey for mathematicians, a survey for physicists at Strings 2011, and this paper.

    For me an interesting part of Witten’s talk was how he described the evolution of his ideas about this topic, and the relationship to geometric Langlands. He also had interesting comments about number theory and the Langlands program, denying any real knowledge of the subject, but arguing that sooner or later (probably later, after his career is over) there would be some convergence of the number-theoretic Langlands program and physics. He finds the coincidence of geometric Langlands showing up in QFT so remarkable as to indicate that there are deep connections there still to be explored. I suspect that he sees the likely path of information going more from physics to math, with QFT ideas giving insight into number theory. While I agree with him about the existence of deep connections, I suspect the influence might go the other way, with the powerful ideas behind the Langlands program in number theory someday providing some clues about QFT useful to physicists.

    Also on the Langlands theme, this semester we’re having a wonderful series of lectures on the subject by Dick Gross. He’s a fantastically gifted lecturer, and this series is pitched at just the right level for me, explicating many of the parts of the subject I’ve been trying to learn in recent years but have found quite confusing. It’s a beautiful, very deep, but rather intricate subject, bringing together a range of remarkable ideas about mathematics. In the end though, the Langlands program is really mostly about new ideas in representation theory, and since I’m convinced that a deeper understanding of QFT will require new ideas about how to handle symmetries, which is the same thing as representation theory, perhaps finding connections between the subjects won’t have to await Witten’s retirement.

    Posted in Langlands | 26 Comments