Past the End of Science

I haven’t yet seen a copy of Marcelo Gleiser’s new book, but this weekend the Wall Street Journal had a review by John Gribbin, author of the 2009 multiverse-promotional effort In Search of the Multiverse. I don’t know how Gleiser treats this, but Gribbin emphasizes the multiverse as new progress in science (for some reason he’s now calling it the “metaverse”):

Within the metaverse, the story goes, there are regions that form inflating bubbles. Our universe is one such region or bubble. As Mr. Gleiser explains, the implication is that there are other universes, other bubbles far away floating across an inflating sea.

This seemingly speculative idea counts as a genuine scientific hypothesis, because it makes testable predictions. If other “bubble universes” exist in the metaverse, it is possible that, long ago, one or more of them may have collided with our universe, like two soap bubbles touching and moving apart. One effect of such a collision, Mr. Gleiser points out, would be to make ripples in the space of both bubble universes; they would leave a distinctive if faint ring-shaped pattern, known as a “cosmic wake,” in the background radiation that fills the universe. Data from the Planck satellite is being used to test this prediction right now. Is the metaverse real? We may well know in the next year or so.

This seems to be a reference to work by Matthew Kleban and collaborators, which I saw Kleban talk about recently (see here). My impression from that talk is that the actual state of affairs with Planck is that it has already looked for and ruled out most hoped-for signals of “bubble collisions”. I don’t know anyone besides Gribbin who believes that the next round of Planck data is going to answer the question “Is the metaverse real?”.

The really odd thing about the review is that Gribbin uses the multiverse to argue that John Horgan’s claims about physics in The End of Science are wrong. This is just bizarre. Gribbin’s multiverse mania for untestable theories provides strong ammunition for Horgan, since it’s exactly the sort of thing he was warning about. Actually, I don’t recall anything in Horgan’s book about the multiverse, and suspect the idea that physics would end up embracing such an obviously empty idea was something that even he didn’t see coming. As the multiverse mania gains strength, physicists are blowing past the “End of Science” to something that has left conventional science completely behind.

Update: I took a look again at a copy of The End of Science, and, as I remembered, the chapter on “The End of Physics” has no mention of the multiverse pseudo-explanation of why one can’t ever understand the parameters of the Standard Model. Horgan ends the chapter with a vision of physics descending into “ironic science”, endlessly studying untestable string theory models and interpretations of quantum mechanics. With the multiverse we may already have gone past that point.

In the next chapter though, “The End of Cosmology”, there’s a long section about Linde and his “self-reproducing universe theory”, so more than 20 years ago Horgan was already writing about the place we’re ending up. I was interested to see the comment he got at the time from Howard Georgi about this kind of model:

quite amusing. It’s like reading Genesis.

Georgi also is quoted as describing inflation as:

a wonderful sort of scientific myth, which is at least as good as any other creation myth I’ve ever heard.

Of course what is different now is that 20 years ago the theory establishment saw Linde’s multiverse as kind of a joke, not at all part of science. Things have changed…

Update: While my favorite local bookstore doesn’t have a copy of the Gleiser book The Island of Knowledge, you can see parts of it on Google Books. Searching on “multiverse” you can read chapters 15 and 16 of the book which deal with the issue of the testability of the string theory multiverse. Reading these shows that Gribbin seriously misrepresents what Gleiser has to say about the multiverse. The context of his discussion of “Cosmic Wakes” and the possibility of seeing them in the Planck data is to argue that even if this happened (which he describes as having an “extremely small” probability), all that would show is evidence for a neighboring universe, not a multiverse:

However, I stress again that even a positive detection of a neighboring universe would not prove the existence of a multiverse. Within the present formulation of physics the multiverse hypothesis is untestable, however compelling it may be. [Page 129]

Posted in Multiverse Mania, Uncategorized | 15 Comments

Big Bang Blunder Bursts the Multiverse Bubble

This week’s Nature has an article by Paul Steinhardt, with the title Big Bang blunder bursts the multiverse bubble. The subtitle of the piece describes the BICEP2 frenzy of last March as “premature hype”, and the description in the body of the article is:

The results were hailed as proof of the Big Bang inflationary theory and its progeny, the multiverse. Nobel prizes were predicted and scores of theoretical models spawned. The announcement also influenced decisions about academic appointments and the rejections of papers and grants. It even had a role in governmental planning of large-scale projects.

Given recent arguments that BICEP2 may be seeing dust, not primordial gravitational waves, the March media frenzy was quite possibly highly premature, if not completely misguided. Steinhardt goes on to argue that in the future

announcements should be made after submission to journals and vetting by expert referees. If there must be a press conference, hopefully the scientific community and the media will demand that it is accompanied by a complete set of documents, including details of the systematic analysis and sufficient data to enable objective verification.

He also takes the occasion to note the odd fact that while BICEP2 results have been claimed to be proof of inflation and the multiverse, if they turn out to be wrong, that’s fine too:

The BICEP2 incident has also revealed a truth about inflationary theory. The common view is that it is a highly predictive theory. If that was the case and the detection of gravitational waves was the ‘smoking gun’ proof of inflation, one would think that non-detection means that the theory fails. Such is the nature of normal science. Yet some proponents of inflation who celebrated the BICEP2 announcement already insist that the theory is equally valid whether or not gravitational waves are detected. How is this possible?

The answer given by proponents is alarming: the inflationary paradigm is so flexible that it is immune to experimental and observational tests. First, inflation is driven by a hypothetical scalar field, the inflaton, which has properties that can be adjusted to produce effectively any outcome. Second, inflation does not end with a universe with uniform properties, but almost inevitably leads to a multiverse with an infinite number of bubbles, in which the cosmic and physical properties vary from bubble to bubble. The part of the multiverse that we observe corresponds to a piece of just one such bubble. Scanning over all possible bubbles in the multi­verse, every­thing that can physically happen does happen an infinite number of times. No experiment can rule out a theory that allows for all possible outcomes. Hence, the paradigm of inflation is unfalsifiable…

Taking this into account, it is clear that the inflationary paradigm is fundamentally untestable, and hence scientifically meaningless.

Steinhardt was on a panel last Friday night here in New York at the World Science Festival, which can be watched here. The panel included Guth and Linde (who earlier in the week got $1 million for their work on inflation), as well as John Kovac of BICEP and Amber Miller, Dean of Science here at Columbia. The last part of the video includes an unsuccessful attempt by Steinhardt to pin down Kovac on the significance of BICEP2’s claimed evidence for primordial gravitational waves, as well as an exchange with Guth and Linde, both of whom defended inflation as the best of the available models.

Multiverse promotion continues apace, with Steinhardt one of a rather small number of physicists publicly objecting. On Monday Alexander Vilenkin will explain to the public at the American Museum of Natural History that “the Big Bang was not a unique event in cosmic history and that other Big Bangs constantly erupt in remote parts of the universe, producing new worlds with a great variety of physical properties” (see here). A recent story on livescience has Brian Greene on the multiverse. Over at Massimo Pigliucci’s Scientia Salon Coel Hellier is starting a multipart series arguing against multiverse skeptics with The multiverse as a scientific concept — part I. Nothing in Part I about the problematic issues (untestable claims that fundamental physics is “environmental”), maybe in Part II…

Posted in Multiverse Mania | 27 Comments

This Week’s Hype

About every three years KEK issues a hype-filled press release announcing that Jun Nishimura and collaborators have used a supercomputer to get evidence for string theory. Back in 2008, the announcement was of a numerical simulation on a supercomputer of a supersymmetric QM system that supposedly showed that superstring theory explained the properties of black holes (press release here, preprint here, blogging here). In 2011, the claim was of a numerical simulation on a supercomputer that used superstring theory to understand the birth of our universe (press release here, preprint here, blogging here). Both of these papers were published in PRL.

The 2014 press release is now out (see here), based on this preprint from last December. The latest claim is that the authors have solved the black hole information paradox, shown that we live in a hologram, and demonstrated that string theory provides a self-consistent quantization of gravity, all by doing a numerical simulation of a QM system. Even better, they have made the quantum gravity problem just as well-understood and tractable as QCD:

In short, we feel that problems involving quantum gravity have become as tractable as problems involving the strong interaction. The latter can be studied by simulating a gauge theory on a four-dimensional (4D) lattice, and such a method has recently been used to reproduce the mass spectrum of hadrons (28) and the nuclear force (29). We can now apply essentially the same method to study quantum gravity, which has been thought to be far more difficult.

This latest version of the KEK hype has gotten a lot more attention than the previous two. Late last year, for some reason, Nature covered the preprint with a story about how Simulations back up theory that Universe is a hologram, and this got a lot of media attention (see here for example).

The paper has now been published, and this time it’s not in PRL, but in Science magazine (submission there was a month after the preprint came out, could it be that PRL wouldn’t have it?). Science is giving it a high profile, including together with it a piece by Juan Maldacena, which claims the paper as “further evidence of the internal consistency of string theory”. Science provides the following one-line summary of the Maldacena piece:

A numerical test shows that string theory can provide a self-consistent quantization of gravity.

One obvious problem with this is that even if you take the most optimistic view of it all, what is being described is quantum gravity in 10d space-time. The Japanese authors deal with this problem with a footnote:

Theoretical consistency requires that superstring theory should be defined in ten-dimensional space-time. In order to realize our four-dimensional space-time, the size of the extra six dimensions can be chosen to be very small without spoiling the consistency.

Remarkably, Maldacena has another answer: the multiverse, which now he seems to take as accepted fact.

Of course, the 10-dimensional space under consideration here is not the same as the four-dimensional region of the multiverse where we live. However, one could expect that such holographic descriptions might also be possible for a region like ours.

Absurd hype about string theory is a continuing problem, and it’s not one that can be blamed on journalists, with this latest example getting help from HEP lab press releases, a highly reputable journal, and an IAS faculty member.

Posted in This Week's Hype | 18 Comments

Quick Links

Just returned from a few days in Boston, will try and catch up here on various topics:

  • This past week I was able to attend some of the talks at the conference in honor of David Vogan’s 60th birthday. I’m still trying to make sense of Bernstein’s talk on Stacks in Representation theory, where he argued that the category of equivariant sheaves on a certain stack is a better-behaved construction than the category of representations. I’ve always wondered whether this would be helpful in the case of representations of a gauge group, where the stack has something to do with equivalence classes of connections. It was his first use of Beamer, and some slides went by too fast. I noticed that a few people were documenting talks for themselves with phones/tablets taking pictures of slides/board.

    Among the other interesting talks, Jim Arthur discussed the conjectural automorphic Langlands group, along the lines of this older paper. He indulged in some speculation I’d never heard before, that Langlands functoriality might imply the Riemann hypothesis (somehow by analogy to something from Langlands about the Ramanujan conjecture appearing in Deligne’s proof of the RH in the function field case). Unfortunately the laptop being used to show his slides decided to start installing Windows Updates two-thirds of the way through his talk. For whatever reason, I didn’t manage to follow his comments at the end of the talk about something new having to do with Weil’s explicit formula in number theory. Consulting with some experts later though, I couldn’t find anyone optimistic about the Langlands implies RH speculation.

  • Also last week, the draft P5 report on a strategic plan for US HEP over the next 20 years was released, with discussion at an HEPAP meeting. Besides planned LHC upgrades, high priority goes to neutrino physics based at Fermilab, with a plan to attract more international participation. Other directions getting high priority are ongoing dark matter experiments and CMB research. A continued move of funding from research grants to construction projects will likely keep pressure on grants to university theory groups. Research into muon colliders is downplayed, with a recommendation to “consult with international partners on the early termination of MICE.”
  • Skepticism about the BICEP2 primordial gravitational wave claims continues, with for instance this story at Science, and this preprint. In retrospect, it’s curious that the possible problems with foregrounds did not get more attention at the time of the original high-profile announcement.

    See here for a Caltech workshop on the BICEP2 results. Andrei Linde’s talk started with his complaining about the popular and textbook coverage of inflation. He said that when journalists call and ask him what BICEP2 will tell us about Grand Unification, he responds “nothing”. At the end of the talk, Sean Carroll asked him about the multiverse, with Linde’s response emphasizing what a great thing it is to have a theory that can’t be disproved:

    If you cannot disprove it, then you have this powerful weapon of thinking about and explaining things around you in an anthropic way.

  • This coming week here in New York there will be lots of events associated with the World Science Festival. One that is not so much aimed at a popular audience, and that I’ll likely attend, is a day-long Symposium on Evidence in the Natural Sciences at the Simons Foundation. It will end with a discussion between Jim Baggott (author of the recent Farewell to Reality) and Brian Greene (sold out now, I fear).

Update: The Princeton crowd now has a preprint out, with the detailed argument that BICEP2 can’t distinguish “gravitational waves” from “galactic schmutz”, see here.

Posted in Langlands, Multiverse Mania, Uncategorized | 9 Comments

Walter Burke Institute for Theoretical Physics

Caltech has just announced the establishment of the Walter Burke Institute for Theoretical Physics, with Hirosi Ooguri as director. It will have a permanent endowment of around $74 million, with $30 million of that new funds from the Sherman Fairchild Foundation and the Gordon and Betty Moore Foundation.

To get some idea of the scale of this, the recent worries about HEP theory funding in the US have been due to a drop in DOE funding of university research from around $27.5 million/year to $24 million/year. So a few million/year from this endowment should help make up for that, while continuing the trend of shifting theoretical physics funding from government support to philanthropy by the .01%.

Update: In other developments from the .01%, Physics World has the news that nominations are now open for the $3 million Milner prize in physics. You can submit nominations here. Nominations close June 30, announcement of winners will be November 9.

There’s also now a website for the Milner/Zuckerberg $3 million mathematics prize. Not much info there except that it will reward “significant discoveries across the many branches of the subject.” I’m guessing that, like the other prizes, initial picks will be from Milner/Zuckerberg themselves, with those people going on to form the committee to pick future winners.

Posted in Uncategorized | 11 Comments

BICEP2 News

Recall that this past March results from BICEP2 were announced, claiming a greater than 5 sigma detection of a primordial B-mode signal in the CMB polarization. This result received a huge amount of world-wide attention (and see some commentary here). Yesterday saw a very curious situation, with Adam Falkowski at the Resonaances blog claiming that the BICEP2 foreground analysis was flawed, and that “Various rumors place the significance of the corrected signal between 0 and 2 sigma.” Adrian Cho at Science magazine has a story about this, quoting Clement Pryke, co-PI of BICEP, as saying “We stand by our paper”, while acknowledging that, with respect to a plot of Planck data they used to estimate the foreground, “It is unclear what that plot shows”.
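For readers wondering what those sigma numbers mean, here is a minimal sketch (standard Gaussian statistics, nothing specific to the BICEP2 analysis) converting a z-score in sigma to a one-sided tail probability:

```python
import math

def sigma_to_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"{sigma_to_p(2):.3g}")  # 2 sigma: about 0.023
print(f"{sigma_to_p(5):.3g}")  # 5 sigma: about 2.87e-07
```

The point of quoting “greater than 5 sigma” is that such an excess would arise from Gaussian noise alone less than about once in three million tries, while 0 to 2 sigma excesses happen all the time.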

The controversy surrounds slide 6 of this presentation, with the BICEP foreground analysis evidently relying on scraping data from this slide. The claim at Resonaances is that they didn’t take into account the “Not CIB subtracted” notation on the slide:

However, it seems they misinterpreted the Planck results: that map shows the polarization fraction for all foregrounds, not for the galactic dust only (see the “not CIB subtracted” caveat in the slide). Once you correct for that and rescale the Planck results appropriately, some experts claim that the polarized galactic dust emission can account for most of the BICEP signal.

This is backed up by David Hogg’s report of a talk by Raphael Flauger at NYU yesterday:

At lunch, Raphael Flauger (NYU) gave a beautiful talk on foreground uncertainties related to the BICEP2 results. He built his foreground models as did the BICEP2 team by scraping data out of Keynote ™ presentations posted on the web! I have to say that again: The Planck team showed some maps of foregrounds in some Keynote presentations and posted them on the web. Flauger (and also the BICEP2 team before him) grabbed those presentations, scraped them for the all-sky maps, calibrated them using the scale bars, and worked from there. The coolest thing is that Flauger also simulated this whole process to account in his analysis for the digitization (scraping?) noise. Awesome! He concludes that the significance of the BICEP2 results is much lower than stated in the paper, which makes him (and many others) sad: He has been working on inflation models that produce large signals.
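Out of curiosity, here is a hypothetical sketch of the kind of digitization Hogg describes: recovering values from a screenshot of a color-mapped plot by calibrating pixel colors against the plot’s own scale bar. Everything here (the colors, values, and tiny synthetic image) is invented for illustration; it is not the actual Planck slide or Flauger’s pipeline.

```python
import numpy as np

def calibrate_from_scalebar(scalebar_rgb, scalebar_values, image_rgb):
    """Map each image pixel to the value of the nearest scale-bar color."""
    # scalebar_rgb: (N, 3) colors sampled along the scale bar
    # scalebar_values: (N,) physical values those colors represent
    flat = image_rgb.reshape(-1, 3).astype(float)
    # squared color distance from every pixel to every scale-bar sample
    d2 = ((flat[:, None, :] - scalebar_rgb[None, :, :].astype(float)) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return scalebar_values[nearest].reshape(image_rgb.shape[:2])

# tiny synthetic example: a 3-color scale bar running from 0 to 0.2
bar = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]])
vals = np.array([0.0, 0.1, 0.2])
img = np.array([[[0, 0, 250], [250, 5, 5]]])   # two slightly noisy pixels
print(calibrate_from_scalebar(bar, vals, img))  # -> [[0.  0.2]]
```

The “digitization noise” Flauger simulated corresponds to the quantization error this nearest-color lookup introduces: the recovered map can only take the discrete values sampled from the scale bar.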

It sounds like this issue is not going to get resolved until there is something more substantial from Planck about this than a slide suitable for data scraping. In the meantime, blogs are your best source of information. Or maybe Twitter, where Erik Verlinde tweets from the Princeton PCTS workshop Searching for Simplicity that:

News from Princeton: BICEP2 polarization data are due to dust foreground and not caused by primordial gravity waves.

Update: There’s also a New Scientist story here. It should be emphasized that the BICEP team are denying that there is any need to revise what is in their paper, with New Scientist quoting John Kovac of BICEP as follows:

Kovac says no one has admitted anything. “We tried to do a careful job in the paper of addressing what public information there was, and also being upfront about the uncertainties. We are quite comfortable with the approach we have taken.”

See comments in the comment section here from Sesh Nadathur and Shaun Hotchkiss explaining why there may not be very much significance to this issue.


Update: Sesh Nadathur has a detailed post up now about this, New BICEP rumours, nothing to see here. Bottom line is:

The BICEP result is exciting, but because it is only at one frequency, it cannot rule out foreground contamination. Other observations at other frequencies are required to confirm whether the signal is indeed cosmological. One scenario is that Planck, operating on the whole sky at many frequencies but with a lower sensitivity than BICEP, confirms a gravitational wave signal, in which case pop the champagne corks and prepare for Stockholm. The other scenario is that Planck can’t confirm a detection, but also can’t definitively say that BICEP’s detection was due to foregrounds (this is still reasonably likely!), in which case we wait for other very sensitive ground-based telescopes pointed at that same region of sky but operating at different frequencies to confirm whether or not dust foregrounds are actually important in that region, and if so, how much they change the inferred value of r.

Until then I would say ignore the rumours.
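The single-frequency problem Sesh describes can be seen in a toy linear model: each channel measures a sum of a flat-spectrum CMB amplitude and a steeply rising dust amplitude, so one channel gives one equation in two unknowns, while two channels make the system solvable. The frequency scalings below are made up for illustration, not real foreground spectra.

```python
import numpy as np

# Each channel measures: amp(nu) = cmb * f_cmb(nu) + dust * f_dust(nu).
# One channel alone cannot separate the two components.
f_cmb = {150: 1.0, 353: 1.0}     # CMB units: flat by construction
f_dust = {150: 1.0, 353: 15.0}   # dust rises with frequency (invented factor)

true_cmb, true_dust = 0.3, 0.7
measured = {nu: true_cmb * f_cmb[nu] + true_dust * f_dust[nu] for nu in (150, 353)}

# With two channels, the 2x2 linear system pins down both amplitudes.
A = np.array([[f_cmb[150], f_dust[150]],
              [f_cmb[353], f_dust[353]]])
b = np.array([measured[150], measured[353]])
cmb_amp, dust_amp = np.linalg.solve(A, b)
print(cmb_amp, dust_amp)  # recovers approximately 0.3 and 0.7
```

With only the 150 GHz number, any split of the total between `cmb_amp` and `dust_amp` fits equally well, which is exactly why BICEP2 alone cannot rule out foreground contamination.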

Peter Coles also has a blog post here, with bottom line

I repeat what I’ve said before in response to the BICEP2 analysis, namely that the discussion of foregrounds in their paper is disappointing. I’d also say that I think the foreground emission at these frequencies is so complicated that none of the simple approaches that were available to the BICEP2 team are reliable enough to be convincing. My opinion on the analysis hasn’t therefore changed at all as a result of this rumour. I think BICEP2 has definitely detected something at 150 GHz but we simply have no firm evidence at the moment that it is primordial. That will change shortly, with the possibility of other experiments (specifically Planck, but also possibly SPTPol) supplying the missing evidence.

I’m not particularly keen on the rumour-mongering that has gone on, but then I’m not very keen either on the way the BICEP2 result has been presented in some quarters as being beyond reasonable doubt when it clearly doesn’t have that status. Yet.

Update: There will be a talk about this issue in Princeton tomorrow morning, see here.


Update: Slides from the Flauger talk at Princeton are here. I’ll leave discussion of the results presented to the better-informed, but will comment that this work appears to definitely involve new heights in the technology of data-scraping from Keynote presentations.

Update: Video of the Flauger talk is here. Quite interesting are the introductory remarks of Paul Steinhardt, and the concluding remarks of Lyman Page. See also new blog posts from Jester and Sesh Nadathur. Sesh (via Eiichiro Komatsu at Facebook) includes a transcription of part of Page’s comments on the situation:

This is, this is a really, peculiar situation. In that, the best evidence for this not being a foreground, and the best evidence for foregrounds being a possible contaminant, both come from digitizing maps from power point presentations that were not intended to be used this way by teams just sharing the data. So this is not – we all know, this is not sound methodology. You can’t bank on this, you shouldn’t. And I may be whining, but if I were an editor I wouldn’t allow anything based on this in a journal. Just this particular thing, you know. You just can’t, you can’t do science by digitizing other people’s images.

From looking at all this, and seeing what the people in Princeton are saying, my non-expert opinion is that the BICEP2 result should be interpreted as an observation of B-mode polarization, but that there is no convincing data yet on the crucial question of whether this is foreground or cosmological. The BICEP2 data alone could not address this, and the relevant Planck data is not yet available (other experiments will also soon provide the data needed to resolve the question). The BICEP2 press release claiming “the first direct evidence for cosmic inflation” now looks highly premature.

Update: Talks ongoing at Caltech today about this at a workshop, videos later here. On Twitter, you can follow the BICEP/Planck fight via Sean Carroll:

At Caltech CMB workshop. #BICEP2 folks seem completely unconcerned about recent worries about galactic foregrounds. Wait for Planck paper…

Zaldarriaga on CMB grav waves vs. dust: sane answer is “let’s just wait.” On the other hand… we just can’t. No scientist is that patient…

MZ: Planck hasn’t measured dust in #BICEP2 region. But extrapolating from where they did measure, apparently can fit B-mode signal.

MZ: “I’m not happy this is on Facebook and Twitter.”

Seems to me we’re now stuck with Planck saying they think this is dust, BICEP saying they think it’s not. Planck is the side that has data about dust, BICEP is the side that has something they scraped off a slide of a Keynote presentation…

Update: Excellent article about this in the Washington Post from Joel Achenbach: Big Bang backlash.


Update: Zaldarriaga: I believe the case in favor of a detection of primordial B modes is not convincing (hopefully just temporarily). See more here and here.

Posted in Uncategorized | 26 Comments

Quillen Notebooks

Daniel Quillen, one of the greatest mathematicians of the latter part of the twentieth century, passed away in 2011 after suffering from Alzheimer’s. For an appreciation of his work and an explanation of its significance, a good place to start is Graeme Segal’s obituary notice, and there’s also quite a bit of material in the AMS Notices.

It’s very exciting to see that the Clay Math Institute now has a project to make available Quillen’s research notebooks. Segal and Glenys Luke have been working on cataloging the set, producing lists of contents of the notebooks, so far from the earliest ones in 1970 up to 1977. Quillen’s work ranged very widely, and for much of the 1980s he was very much involved in what was going on at the boundary of mathematics and quantum field theory. His work on the Mathai-Quillen form provided a beautiful expression for the Thom class of a vector bundle using exactly the ingredients that formally generalize to the infinite-dimensional case, where this provides a wonderful way of understanding certain topological quantum field theories. The Mathai-Quillen paper is here, see here for a long expository account of the uses of this in TQFT.
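For orientation, the shape of the Mathai-Quillen representative of the Thom class — schematically, with normalizations and factor-of-$i$ conventions varying between sources — is a Gaussian-damped Berezin integral. For a real rank-$2m$ bundle with connection $\nabla$ and curvature $\Omega$:

$$ U \;=\; (2\pi)^{-m}\, e^{-|x|^2/2} \int d\chi\; \exp\!\Big( \tfrac{1}{2}\,\chi_a\, \Omega_{ab}\, \chi_b \;+\; i\, (\nabla x)_a\, \chi_a \Big) $$

The key point is that every ingredient here (Gaussian damping, curvature, covariant derivative) still makes formal sense when the bundle is infinite-dimensional, unlike the usual compactly-supported representatives of the Thom class.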

I’ve just started to take a look through the notebooks, and this is pretty mind-blowing. The Mathai-Quillen paper is not the most readable thing in the world; it’s dense with ideas, with motivation and details often getting little attention. Reading the Quillen notebooks is quite the opposite, with details and motivation at the forefront. I just started with Quillen’s notes from Oct. 15 – Nov. 13, 1984, in which he is working out some parts of what appeared later in Mathai-Quillen. This is just wonderful material.

Besides his own ideas, there are notes taken based on other people’s talks. See for instance these notes from a private talk by Witten in Raoul Bott’s office on Dec. 15, 1983.

I was already having trouble getting through a long list of things I am supposed to be doing. Having these notebooks available is going to make this a lot worse….

Posted in Uncategorized | 10 Comments

Quick Links

  • The Defense Department has awarded a $7.5 million grant to Steve Awodey of CMU, Vladimir Voevodsky of the IAS and others to support research in Homotopy Type Theory and the foundations of mathematics. I had thought that getting DARPA 10 years ago to spend a few million on Geometric Langlands research was an impressive feat of redirection of military spending to abstract math, but this is even more so.
  • At the opposite end of the spectrum of government spending on mathematics, there’s the story of the NSA, the largest employer of mathematicians in the US. Tom Leinster has an article in New Scientist about the ethical issues involved. More at the n-category cafe.

    Seven years after the NSA-backdoored NIST standard was discovered by Microsoft researchers, and seven months after Snowden documents confirmed this (see here), NIST has now removed the backdoored standard from its random number generator standards. As far as I know there has never been an explanation from NIST of how the backdoored algorithm was made a standard, or of why anyone should trust any of the rest of their cryptographic standards at this point. Earlier in the year they issued a Draft report on their standards development process which explained nothing about what had happened. The language about the NSA in the report is:

    NIST works closely with the NSA in the development of cryptographic standards. This is done because of the NSA’s vast expertise in cryptography and because NIST, under the Federal Information Security Management Act of 2002, is statutorily required to consult with the NSA on standards.

    which seems to indicate they have no intention of doing anything about the problem of NSA backdoors.

  • On the Langlands front, for those who don’t read French, Vincent Lafforgue has produced an English translation of the summary version of his recent work on global Langlands for function fields (already proved by his brother, but he has a way of doing things without using the trace formula).

    Langlands continues to add material to his web-site at the IAS. See for instance his long commentary on some history at the end of this section and his recent letter to Sarnak with commentary at the end of this section, where he gives his point of view on the state of the understanding of functoriality and reciprocity.

  • Sabine Hossenfelder has some interesting commentary on her experiences in the academic theoretical physics environment here.

    Mark Hannam has some related commentary on academia at his new blog here.

  • I’m still trying to finish a first draft of notes about quantum mechanics and representation theory (available here). I recently came across some similar notes which are quite good by Bernard, Laszlo and Renard.

    David Renard also has here some valuable notes on Dirac operators and representation theory.

  • Last Friday and Saturday at the University of South Carolina there was a Philosophy of the LHC Workshop, with talks here. Many of the talks were about the nature of the evidence for the Higgs and its statistical significance. James Wells talked about the supposed Higgs naturalness problem. He argues (see paper here) that you can’t base the problem on the Planck scale and quantum gravity since you don’t know what quantum gravity is (I strongly agree…). Where he loses me is with an argument that there must be lots more scalars out there than the Higgs (because string theory says so, or it just doesn’t seem right for there to only be one), and these cause a naturalness problem. Of course, once you have the naturalness problem, SUSY is invoked as the only known good way to solve it.
    Posted in Uncategorized | 17 Comments

    Raising the Bar

    If you’re looking for something to do next Tuesday evening here in New York, an event called Raising the Bar has recruited 50 people to give talks at bars around the city. There are some quite interesting talks on the list, but I’ll have to miss them, since I’m scheduled to talk about What We Don’t Know About Fundamental Physics at the Blind Tiger on Bleecker Street at 8:30pm. Not sure yet exactly what I’ll talk about, but the general idea is to start by explaining that the current situation is that we have a fundamental theory (SM + GR) that is frustratingly good in terms of agreement with experiment, but also frustratingly incomplete. I’ll see what I can do to explain the ways in which the SM and GR are incomplete, and what current prospects are for doing better.

    Posted in Uncategorized | 27 Comments

    Supersymmetry and the Crisis in Physics

    The May issue of Scientific American has a very good cover story by Joe Lykken and Maria Spiropulu, entitled Supersymmetry and the Crisis in Physics (the article is now behind their subscriber paywall, but for those with access to Nature, it will soon be here).

    Here are some excerpts:

    It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true—the theory is that compelling. These physicists’ long-term hope has been that the LHC would finally discover these superpartners, providing hard evidence that supersymmetry is a real description of the universe…

    Indeed, results from the first run of the LHC have ruled out almost all the best-studied versions of supersymmetry. The negative results are beginning to produce if not a full-blown crisis in particle physics, then at least a widespread panic. The LHC will be starting its next run in early 2015, at the highest energies it was designed for, allowing researchers at the ATLAS and CMS experiments to uncover (or rule out) even more massive superpartners. If at the end of that run nothing new shows up, fundamental physics will face a crossroads: either abandon the work of a generation for want of evidence that na­­ture plays by our rules, or press on and hope that an even larger collider will someday, somewhere, find evidence that we were right all along…

    During a talk at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, Nima Arkani-Hamed, a physicist at the Institute for Advanced Study in Princeton, N.J., paced to and fro in front of the blackboard, addressing a packed room about the future of supersymmetry. What if supersymmetry is not found at the LHC, he asked, before answering his own question: then we will make new supersymmetry models that put the superpartners just beyond the reach of the experiments. But wouldn’t that mean that we would be changing our story? That’s okay; theorists don’t need to be consistent—only their theories do.

    This unshakable fidelity to supersymmetry is widely shared. Particle theorists do admit, however, that the idea of natural supersymmetry is already in trouble and is headed for the dustbin of history unless superpartners are discovered soon…

    The authors go on to describe possible responses to this crisis. One is the multiverse, which they contrast to supersymmetry as not providing an answer to why the SM parameters are what they are, although this isn’t something that supersymmetry was ever able to do. Another is large extra dimensions as in Randall-Sundrum, but that’s also something the LHC is not finding, and few ever thought it would. Finally there’s the “dimensional transmutation” idea about the Higgs, which I wrote about here last year. About this, the authors write:

    If this approach is to keep the useful virtual particle effects while avoiding the disastrous ones—a role otherwise played by supersymmetry—we will have to abandon popular speculations about how the laws of physics may become unified at superhigh energies. It also makes the long-sought connection between quantum mechanics and general relativity even more mysterious. Yet the approach has other advantages. Such models can generate mass for dark matter particles. They also predict that dark matter interacts with ordinary matter via a force mediated by the Higgs boson. This dramatic prediction will be tested over the next few years both at the LHC and in underground dark matter detection experiments.

    It’s great to see such a high-profile public discussion of the implications of the collapse of the paradigm long-dominant in some circles which sees SUSY extensions of the Standard Model as the way forward for the field. One place where I disagree with Lykken and Spiropulu is their claim that “It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true.” Actually I think that is an exaggeration, with a large group of theorists always skeptical about SUSY models. For some evidence of this, take a look at this document from 2000, which shows a majority skeptical about SUSY at the LHC. By the way, I hear those on the right side of that bet haven’t yet gotten their cognac, with the bet renegotiated to wait for results from the next LHC run.

    Update: I hear that the 2000 bet was revised in 2011, with a copy displayed publicly at the Niels Bohr Institute. The new bet is about whether a superpartner will be found by June 16, 2016, and the losers must come up with a bottle of good cognac. There are 22 on the yes side (including Arkani-Hamed and Quigg), and 22 on the no side (including ‘t Hooft, Komargodski, Bern). Also, 3 abstentions. It explicitly is an addendum to the 2000 wager, with those who lost the last one given the option of signing again, forfeiting two bottles of cognac, or accepting that “they have suffered ignominious defeat.”

    Update: This report from the APS spring meeting includes the following about Spiropulu’s talk there:

    Supersymmetry and dark matter have become so important to particle physicists that “we have cornered ourselves experimentally,” said Spiropulu. If neither is detected in the next few years, radical new ideas will be required. Spiropulu compared the situation to the era before 1905, when the concept of ether as the medium for all electromagnetic waves could not be verified.

    You can watch the talk and see for yourself here.

    Posted in Uncategorized | 51 Comments