The End of Time

I’ve been critical of multiverse pseudo-science because it doesn’t make any testable predictions, but it seems that tonight there really is one. According to this new preprint, multiverse arguments guarantee that time will end, with the expected amount of time left before the end about 5 billion years, and:

There is a 50% chance that time will end within the next 3.3 billion years.
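As a rough sanity check on how those two numbers fit together (this is my own back-of-the-envelope remark, not a calculation from the paper): if the end of time were governed by a simple memoryless process, a mean waiting time of 5 billion years would put the median near 3.5 billion years, in the same ballpark as the quoted 3.3.

```latex
% Back-of-the-envelope consistency check, assuming (my assumption, not the paper's)
% a memoryless, exponential distribution for the time remaining T.
P(T > t) = e^{-t/\tau}, \qquad \langle T \rangle = \tau \approx 5\ \mathrm{Gyr}
\;\;\Longrightarrow\;\;
t_{1/2} = \tau \ln 2 \approx 3.5\ \mathrm{Gyr}.
```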

The argument seems to be that multiverse arguments require introducing an artificial cut-off to get finite numbers, so the cut-off must really be there, and we’re going to hit it relatively soon on cosmological time scales. The age of the universe is about 13.75 billion years, but we’re getting near the end, already entering the late-middle-age to senior-citizen time-frame. One interpretation given of this result is that:

we are being simulated by an advanced civilization with a large but finite amount of resources, and at some point the simulation will stop.

It turns out that you don’t even need the whole apparatus of eternal inflation to see that time is going to end. All you need to do is to think about sleeping and waking up, which, according to the paper, leads to the “Guth-Vanchurin” paradox:

Suppose that before you go to sleep someone flips a fair coin and, depending on the result, sets an alarm clock to awaken you after either a short time or a long time. Local physics dictates that there is a 50% probability to sleep for a short time since the coin is fair. Now suppose you have just woken up and have no information about how long you slept. It is natural to consider yourself a typical person waking up. But if we look at everyone who wakes up before the cutoff, we find that there are far more people who wake up after a short nap than a long one. Therefore, upon waking, it seems that there is no longer a 50% probability to have slept for a short time.

How can the probabilities have changed? If you accept that the end of time is a real event that could happen to you, the change in odds is not surprising: although the coin is fair, some people who are put to sleep for a long time never wake up because they run into the end of time first. So upon waking up and discovering that the world has not ended, it is more likely that you have slept for a short time. You have obtained additional information upon waking – the information that time has not stopped – and that changes the probabilities.

However, if you refuse to believe that time can end, there is a contradiction. The odds cannot change unless you obtain additional information. But if all sleepers wake, then the fact that you woke up does not supply you with new information.
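To make the shift in odds concrete, here is a toy Monte Carlo of the sleeper setup. This is purely my own illustration, not a calculation from the paper: the nap lengths, the number of sleepers and the exponential (“memoryless”) form of the cutoff are made-up parameters, chosen only to show what conditioning on waking up before the end of time does to a fair coin.

```python
import random

def simulate(n_sleepers=1_000_000, short_nap=1.0, long_nap=10.0, cutoff_rate=0.05):
    """Toy version of the Guth-Vanchurin sleeper argument (illustrative only).

    Each sleeper's nap length is set by a fair coin; the 'end of time' is
    modeled as an exponentially distributed cutoff with the given rate.
    We then look only at sleepers who actually wake up before the cutoff."""
    woke_short = woke_long = 0
    for _ in range(n_sleepers):
        nap = short_nap if random.random() < 0.5 else long_nap
        end_of_time = random.expovariate(cutoff_rate)  # assumed memoryless cutoff
        if nap < end_of_time:  # this sleeper wakes up before time ends
            if nap == short_nap:
                woke_short += 1
            else:
                woke_long += 1
    total = woke_short + woke_long
    print(f"P(short nap | woke up) = {woke_short / total:.3f}")

simulate()
```

With these (arbitrary) numbers the conditional probability of a short nap comes out around 0.6 rather than 0.5, which is the entire content of the paradox: waking up is evidence that you did not sleep through the cutoff.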

Update: Lubos doesn’t think much of the paper:

But holy crap, if physicists don’t lose all of their scientific credit by publishing this pure garbage and nothing else for years, can they lose their credibility at all? Does the institutionalized science have any checks and balances left? I think that all the people are being bullied into not criticizing the junk written by other people who are employees of the academic system, especially if the latter are politically correct activists. And be sure, some of the authors of this nonsense are at the top of it.

This is just bad. I urge all the sane people in Berkeley and other places to make it very clear to Bousso et al. – and to students and other colleagues – that they have gone completely crazy.

Update: In other pseudo-science news, the latest Scientific American features a piece by Hawking and Mlodinow based on their recent book.

Update: Not only does New Scientist think this nonsense deserves to be covered in a lead article, they also have an editorial urging us not to “roll your eyes” about this.

Posted in Multiverse Mania | 49 Comments

LHC Update

There’s been great progress recently at the LHC, with successful commissioning of “trains” of bunches, allowing significantly higher collision rates. Last night’s fill produced an integrated luminosity of 0.684 inverse picobarns (or 684 inverse nanobarns), which can be compared to the total integrated luminosity up until this week of about 3.5 inverse picobarns. The highest instantaneous luminosity reached is now at about 1/5 of the goal set for this year, with a further increase in the number of bunches planned for this weekend. For more details, there’s a message from the CERN DG here.
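A few lines of arithmetic put these numbers in perspective. The figures are the ones quoted above, except for the 2010 instantaneous-luminosity target of 10^32 cm^-2 s^-1, which is my assumption, quoted from memory:

```python
# Rough arithmetic on the LHC luminosity numbers quoted above (illustrative only).
fill_pb = 0.684    # last night's fill, inverse picobarns
total_pb = 3.5     # total integrated luminosity up to this week, inverse picobarns

print(f"Fill in inverse nanobarns: {fill_pb * 1000:.0f}")            # 1 pb^-1 = 1000 nb^-1
print(f"One fill as a fraction of the total so far: {fill_pb / total_pb:.0%}")

goal_inst = 1e32                # assumed 2010 target, cm^-2 s^-1 (my assumption)
current_inst = goal_inst / 5    # "about 1/5 the goal set for this year"
print(f"Current peak instantaneous luminosity: roughly {current_inst:.1e} cm^-2 s^-1")
```

In other words, a single good fill now adds roughly 20% of everything collected so far, which is why the recent progress is significant.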

Update: The latest fill, with 104 bunches, recently started, with an initial luminosity now at 1/3 of the goal for this year.

Update: For more information about latest events at the LHC and upcoming plans, see here. Latest luminosity plots are here, now including highest instantaneous luminosity.

Posted in Experimental HEP News | 8 Comments

Tevatron Funding

The Fermilab web-site today has a message from Director Oddone about prospects for funding an extension of the Tevatron run after FY2011, as recommended by the Physics Advisory Committee. He has asked the DOE for additional funding of $35 million/year to pay part of the cost of an extended run, with the rest coming from slowing down other planned experiments. If the DOE turns this down, it seems the plan is to shut down the Tevatron next year.

This leaves prospects for the Tevatron’s future very much up in the air, especially given the dysfunctional nature of the US federal budget process. With FY2011 about to begin, the Congress has yet to pass a budget, shows no signs of doing so anytime soon, and is in the middle of an election campaign dominated by calls for cutting federal spending. The general assumption is that they’ll deal with this by continuing funding at FY2010 levels, until finally getting around to passing appropriations bills all together as part of an “omnibus” bill sometime deep into the fiscal year. The process of dealing with the FY2012 budget starts next February with the President’s budget request, but again there’s no reason to believe there actually will be a budget until long after they’ve already started spending the money. Luckily, the Fermilab people by now have many years of experience dealing with this system.

Posted in Experimental HEP News | 9 Comments

The Shape of Inner Space

Besides the Hawking book, which was a disappointment in many ways, I recently also finished reading a much better and more interesting book which deals with some of the same topics, but in a dramatically more substantive and intelligent manner. The Shape of Inner Space is a joint effort of geometer Shing-Tung Yau and science writer Steve Nadis. Yau is one of the great figures in modern geometry, a Fields medalist and current chair of the Harvard math department. He has been responsible for training many of the best young geometers working today, as well as encouraging a wide range of joint efforts between mathematicians and physicists in various areas of string theory and geometry.

Yau begins with his remarkable personal story, starting with a childhood of difficult circumstances in Hong Kong. He gives a wonderful description of the new world that opened up to him when he came to the US as a graduate student at Berkeley, where he joyfully immersed himself in the library and a wide range of courses. Particularly influential for his later career was a course by Charles Morrey on non-linear PDEs, which he describes as having lost all of its students except for him, many of them off protesting the bombing of Cambodia.

He then goes on to tell some of the story of his early career, culminating in his proof of the Calabi conjecture. This conjecture basically says that if a compact Kahler manifold has vanishing first Chern class (a topological condition), then it carries a unique Kahler metric (one in each Kahler class) satisfying the condition of vanishing Ricci curvature. It’s a kind of uniformization theorem, saying that these manifolds come with a “best” metric. Such manifolds are now called “Calabi-Yau manifolds”, and while the ones of interest in string theory unification have six dimensions, they exist in all even dimensions, in some sense generalizing the special case of an elliptic curve (torus) among two-dimensional surfaces.
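For readers who want the precise statement, Yau’s theorem (the former Calabi conjecture) is usually stated as follows; this is the standard formulation rather than anything particular to the book:

```latex
% Yau's theorem (Calabi conjecture): existence and uniqueness of Ricci-flat Kahler metrics.
\text{Let } (M,\omega) \text{ be a compact K\"ahler manifold with } c_1(M) = 0 \text{ in } H^2(M,\mathbb{R}).
\text{Then there exists a unique K\"ahler metric } \tilde\omega \text{ on } M \text{ with }
[\tilde\omega] = [\omega] \quad\text{and}\quad \mathrm{Ric}(\tilde\omega) = 0 .
```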

Much of the early part of the book is concerned not directly with physics, but with explaining the significance of the mathematical subject known as “geometric analysis”. Besides the Calabi conjecture, Yau also explains some of the other highlights of the subject, which include the positive-mass conjecture in general relativity, Donaldson and Seiberg-Witten theory, and the relatively recent proof of the Poincare conjecture. Some readers may find parts of this heavy-going, since Yau is ambitiously trying to explain some quite difficult mathematics (for instance, trying to explain what a Kahler manifold is). Having tried to do some of this kind of thing in my own book, I’m very sympathetic to how difficult it is, but also very much in favor of authors giving it a try. One may end up with a few sections of a book that only a small fraction of its intended audience can really appreciate, but that’s not necessarily a bad thing, and arguably much better than having content-free books that don’t even try to explain to a non-expert audience what a subject is really about.

A lot of the book is oriented towards explaining a speculative idea that I’m on record as describing as a failure. This is the idea that string theory in ten dimensions can give one a viable unified theory, by compactification of six of its dimensions. When you do this and look for a compact six-dimensional manifold that will preserve N=1 supersymmetry, what you find yourself looking for is a Calabi-Yau manifold. Undoubtedly one reason for Yau’s enthusiasm for this idea is his personal history and having his name attached to these manifolds. Unlike other authors though, Yau goes into the question in depth, explaining many of the subtleties of the subject, as well as outlining some of the serious problems with the idea.
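The logical chain behind “what you find yourself looking for is a Calabi-Yau manifold” is, schematically (this is the standard textbook argument for the simplest case, with background fluxes switched off, summarized here rather than quoted from the book):

```latex
% Why N=1 supersymmetry in four dimensions leads to Calabi-Yau compactifications (schematic).
\text{unbroken SUSY} \;\Rightarrow\; \exists\,\eta \text{ with } \nabla_m \eta = 0
\;\Rightarrow\; \mathrm{Hol}(g_6) \subseteq SU(3)
\;\Rightarrow\; g_6 \text{ is K\"ahler and Ricci-flat, so } c_1 = 0 .
```

Relaxing the no-flux assumption is what leads to the non-Kahler compactifications, going back to Strominger, mentioned below.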

I’ve written elsewhere that string theory has had a huge positive effect on mathematics, and one source of this is the array of questions and new ideas about Calabi-Yau manifolds that it has led to. Yau describes a lot of this in detail, including the beginnings of what has become an important new idea in mathematics, that of mirror symmetry, as well as speculation (“Reid’s fantasy”) relating the all-too-large number of classes of Calabi-Yaus. He also explains something he has been working on recently, pursuing an idea that goes back to Strominger in the eighties: looking at an even larger class of possible compactifications, ones involving non-Kahler examples. One fundamental problem for string theorists is that there are already too many Calabi-Yaus, so they’re not necessarily enthusiastic about hearing about even more possibilities:

University of Pennsylvania physicist Burt Ovrut, who’s trying to realize the Standard Model through Calabi-Yau compactifications, has said he’s not ready to take the “radical step” of working on non-Kahler manifolds, about which our mathematical knowledge is presently quite thin: “That will entail a gigantic leap into the unknown, because we don’t understand what these alternative configurations really are.”

Even in the simpler case of Calabi-Yaus, a fundamental problem is that these manifolds don’t have a lot of symmetry that can be exploited. As a result, while Yau’s theorem says a Ricci-flat metric exists, one doesn’t have an explicit description of that metric. If one wants to get beyond calculations of crude features of the physics coming out of such compactifications (such as the number of generations), one needs to be able to do things like calculate integrals over the Calabi-Yau, and this requires knowing the metric. Yau explains this problem, and how it has hung up any hopes of calculating things like fermion masses in these models. He gives a general summary of the low level of success that this program has so far achieved, and quotes various string theorists on the subject:

But there is considerable debate regarding how close these groups have actually come to the Standard Model… Physicists I’ve heard from are of mixed opinion on this subject, and I’m not yet sold on this work or, frankly, on any of the attempts to realize the Standard Model to date. Michael Douglas… agrees: “All of these models are kind of first cuts; no one has yet satisfied all the consistency checks of the real world”…

So far, no one has been able to work out the coupling constants or mass…. Not every physicist considers that goal achievable, and Ovrut admits that “the devil is in the details. We have to compute the Yukawa couplings and the masses, and that could turn out completely wrong.”

Yau explains the whole landscape story and the heated debate about it, for instance quoting Burt Richter on landscape-ologists (he says they have “given up.. Since that is what they believe, I can’t understand why they don’t take up something else – macrame for example.”). He describes the landscape as a “speculative subject” about which he is glad that, as a mathematician, he doesn’t have to take a position:

It’s fair to say that things have gotten a little heated. I haven’t really participated in this debate, which may be one of the luxuries of being a mathematician. I don’t have to get torn up about the stuff that threatens to tear up the physics community. Instead, I get to sit on the sidelines and ask my usual sorts of questions – how can mathematicians shed light on this situation?

So, while I’m still of the opinion that much of this book is describing a failed project, on the whole it does so in an intellectually serious and honest way, so that anyone who reads it is likely to learn something and to get a reasonable, if perhaps overly optimistic, summary of what is going on in the subject. Only at a few points do I think the book goes a bit too far, largely in two chapters near the end. One of these purports to cover the possible fate of the universe (“the fate of the false vacuum”), and the book wouldn’t lose anything by dropping it. The next chapter deals with string cosmology, a subject that’s hard to say much positive about without going over the edge into hype.

Towards the end of the book, Yau makes a point that I very much agree with: fundamental physics may get (or may have already gotten…) to the point where it can no longer rely upon frequent inspiration from unexpected experimental results, and when that happens one avenue left to try is to get inspiration from mathematics:

So that’s where we stand today, with various leads being chased down – only a handful of which have been discussed here – and no sensational results yet. Looking ahead, Shamit Kachru, for one, is hopeful that the range of experiments under way, planned, or yet to be devised will afford many opportunities to see new things. Nevertheless, he admits that a less rosy scenario is always possible, in the event that we live in a frustrating universe that affords little, if anything, in the way of empirical clues…

What we do next, after coming up empty-handed in every avenue we set out, will be an even bigger test than looking for gravitational waves in the CMB or infinitesimal twists in torsion-balance measurements. For that would be a test of our intellectual mettle. When that happens, when every idea goes south and every road leads to a dead end, you either give up or try to think of another question you can ask – questions for which there might be some answers.

Edward Witten, who, if anything, tends to be conservative in his pronouncements, is optimistic in the long run, feeling that string theory is too good not to be true. Though, in the short run, he admits, it’s going to be difficult to know exactly where we stand. “To test string theory, we will probably have to be lucky,” he says. That might sound like a slender thread upon which to pin one’s dreams for a theory of everything – almost as slender as a cosmic string itself. But fortunately, says Witten, “in physics there are many ways of being lucky.”

I have no quarrel with that statement and more often than not, tend to agree with Witten, as I’ve generally found this to be a wise policy. But if the physicists find their luck running dry, they might want to turn to their mathematical colleagues, who have enjoyed their fair share of that commodity as well.

Update: I should have mentioned that the book has a web-site here, and there’s a very good interview with Yau at Discover that covers many of the topics of the book.

Update: There’s more about the book and an interview with Yau here.

Posted in Book Reviews | 16 Comments

Short Items

  • The Fermilab Physics Advisory Committee recently recommended that the Tevatron be kept running for an additional three years (until 2014). By the end of that time it should be able to accumulate a total of 20 fb-1 of data, which would give sensitivity to a standard model Higgs at the 3-sigma level over the entire interesting mass range. The cost for this would be a total of about $150 million, which would likely have to come out of Fermilab’s $810 million/year budget. While the idea of continuing to do physics at the high-energy frontier, and possibly beating the LHC to the Higgs, for less than 10% of the lab’s yearly budget (roughly $50 million per year) seems like a no-brainer, director Oddone still may not be completely sold on the idea. Keeping the Tevatron going would set back some of the projects the lab has planned for its future in a post-Tevatron world. There’s also significant concern about the future federal budget situation, and how to make sure that the best possible case is made for the future of Fermilab, in an environment where people may be looking for large, expensive programs that could be cut. For more about this, see Adrian Cho’s article Higgs or Bust? in Science.

    One huge consideration in this decision is what will happen at the LHC. CERN is facing its own budgetary problems, and has just decided to shut down during 2012 not just the LHC (for repair of magnet interconnections), but the entire accelerator complex. Work continues this year on trying to raise the luminosity of the machine, but progress is slow. They are still an order of magnitude lower than where they want to be by the end of the year, with only a few more weeks left before the machine is shut down as a proton-proton collider and reconfigured for a heavy-ion run. If all goes according to plan, by late 2011 the LHC would have 1 fb-1 of data, enough to compete with the Tevatron in the Higgs search. But, so far, plans like this have turned out to be overly optimistic, with things taking longer than expected.

    In today’s CERN Bulletin and Fermilab Today, Oddone and CERN DG Heuer issued a joint statement downplaying the competition between their labs:

    The press makes much of the competition between CERN’s LHC and Fermilab’s Tevatron in the search for the Higgs boson. This competitive aspect is real, and probably adds spice to the scientific exploration, but for us such reporting often feels like spilling the entire pepper shaker over a fine meal.

  • CheapUniverses.com is now selling a Universe Splitter iPhone app for $1.99, complementing its other products, such as the $3.95 Basic Universe:

    Using quantum physics, we split your universe into two branches, then we send you an email to inform you which branch you’re in.

    As celebrity endorser, they have Garrett Lisi explaining:

    The functioning of this app is in complete agreement with the many-worlds interpretation of quantum mechanics.

  • The author is always the last to know such things, but I’ve heard rumors that someone intends to bring out a Czech edition of Not Even Wrong.
  • High quality videos of talks from the Princeton IAS summer school on supersymmetry are available here.
  • In Langlands-related news, there’s an excellent new preprint by David Nadler about the fundamental lemma and Ngo’s proof. This is one of the most ferociously difficult topics to understand in current math research, and Nadler’s article is about the best expository piece on the subject that I’ve seen.

    This semester there’s a program on Langlands Duality in Representation Theory and Gauge Theory at Hebrew University.

    There’s a fascinating recent preprint by Kevin Buzzard and Toby Gee on The conjectural connections between automorphic representations and Galois representations. They conjecture a reciprocity-type relation between algebraic automorphic representations and Galois representations, not just for GL(n), but for arbitrary reductive groups. This involves invoking a twist by “half the sum of the positive roots”, a phenomenon that arises in various places in representation theory, often indicating that spinors are involved (“half the sum of the positive roots” is the highest weight of the spinor representation, as spelled out after this list).
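For what it’s worth, the sense in which that last statement is true is the following standard fact (my paraphrase, not something taken from the Buzzard-Gee paper): the torus acts on the root space g/t with weights the roots, and the associated spin module has weights given by half-sums of the positive roots with all possible signs, the highest of which is exactly ρ.

```latex
% rho as the highest weight of a spin module (standard fact, stated from memory).
\rho = \tfrac{1}{2}\sum_{\alpha > 0} \alpha ,
\qquad
\text{weights of the spin module on } \mathfrak{g}/\mathfrak{t}:
\;\; \tfrac{1}{2}\left(\pm\alpha_1 \pm \alpha_2 \pm \cdots \pm \alpha_N\right),
\quad \alpha_i \text{ the positive roots}.
```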

Posted in Experimental HEP News, Langlands, Multiverse Mania, Not Even Wrong: The Book | 16 Comments

Hawking Gives Up

David Gross has in the past invoked the phrase “never, never, never give up”, attributed to Churchill, to describe his view of claims that one should give up on the traditional goals of fundamental physics in favor of anthropic arguments invoking a multiverse. Stephen Hawking has a new book out this week, called The Grand Design and written with Leonard Mlodinow, in which he effectively announces that he has given up:

    We seem to be at a critical point in the history of science, in which we must alter our conception of goals and of what makes a physical theory acceptable. It appears that the fundamental numbers, and even the form, of the apparent laws of nature are not demanded by logic or physical principle. The parameters are free to take on many values and the laws to take on any form that leads to a self-consistent mathematical theory, and they do take on different values and different forms in different universes.

Thirty years ago, in his inaugural lecture as Lucasian professor, Hawking took a very different point of view. He argued that we were quite close to a final unified theory, based on N=8 supergravity, with a 50% chance of complete success by the year 2000. A few years after this, N=8 supergravity fell into disfavor when it was shown that supersymmetry was not enough to cancel possible ultraviolet divergences in the theory. There has been a recent revival of interest, as new calculational methods show unexpected and still not completely understood additional cancellations that may fully eliminate ultraviolet divergences. Hawking shows no interest in this, instead signing on to the notion that “M-theory” is the theory of everything. The book doesn’t even really try to explain what “M-theory” is; we’re just told that:

    People are still trying to decipher the nature of M-theory, but that may not be possible. It could be that the physicist’s traditional expectation of a single theory of nature is untenable, and there exists no single formulation. It might be that to describe the universe, we have to employ different theories in different situations

    The book ends with the argument that

  • Our TOE must contain gravity.
  • Supersymmetry is required to have a finite theory of gravity.
  • M-theory is the most general supersymmetric theory of gravity.
  • ergo

    M-theory is the unified theory Einstein was hoping to find. The fact that we human beings – who are ourselves mere collections of fundamental particles of nature – have been able to come this close to an understanding of the laws governing us and our universe is a great triumph.

    This isn’t exactly an air-tight argument…

    The book begins in a more promising manner, with a general philosophical and historical discussion of fundamental physical theory. There’s this explanation of what makes a good physical model:

    A model is a good model if it:

    1. Is elegant
    2. Contains few arbitrary or adjustable elements
    3. Agrees with and explains all existing observations
    4. Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out.

    The fact that “M-theory” satisfies none of these criteria is not remarked upon.

    The book is short (about 100 pages of actual text, interspersed with lots of color graphics and cartoons), and contains rather little substantive science. There are no references of any kind to any other sources. The discussion of supersymmetry and M-theory is often highly misleading. For example, we are assured that

    various calculations that physicists have performed indicate that the [super]partner particles corresponding to the particles we observe ought to be a thousand times as massive as a proton, if not even heavier. That is too heavy for such particles to have been seen in any experiments to date…

    With no references, one has no idea what these “various calculations” might be. If they are calculations of masses based on the assumption that the supersymmetry and electroweak-symmetry breaking scales are similar, they typically predict masses visible at the Tevatron or LEP. I suspect that the logic is completely backwards here: what is being referred to are calculations based on the Tevatron and LEP limits that require masses in the TeV range.
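Just to fix the scale being referred to (simple arithmetic, not a quote from the book):

```latex
% "A thousand times as massive as a proton" in collider units.
m_p \approx 0.94\ \mathrm{GeV}
\quad\Longrightarrow\quad
1000\,m_p \approx 1\ \mathrm{TeV}.
```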

    As for the fundamental problem of testability of M-theory, here’s the only thing we get:

    The theory we describe in this chapter is testable…. The amplitude is reduced for universes that are more irregular. This means that the early universe would have been almost smooth, but with small irregularities. As we’ve noted, we can observe these irregularities as small variations in the microwaves coming from different directions in the sky. They have been found to agree exactly with the general demands of inflation theory; however, more precise measurements are needed to fully differentiate the top-down theory from others, and to either support or refute it. These may well be carried out by satellites in the future.

    This looks like one of many dubious claims of “testability” of multiverse theories, which tend to founder on the measure problem and the fact that one has no idea what the underlying theory actually is. Without any details or references though, it’s hard to even know exactly what the claim is here.

    One thing that is sure to generate sales for a book of this kind is to somehow drag in religion. The book’s rather conventional claim that “God is unnecessary” for explaining physics and early universe cosmology has provided a lot of publicity for the book. I’m in favor of naturalism and leaving God out of physics as much as the next person, but if you’re the sort who wants to go to battle in the science/religion wars, why you would choose to take up such a dubious weapon as M-theory mystifies me. A British journalist contacted me about this recently and we talked about M-theory and its problems. She wanted me to comment on whether physicists doing this sort of thing are relying upon “faith” in much the same way as religious believers. I stuck to my standard refusal to get into such discussions, but, thinking about it, have to admit that the kind of pseudo-science going on here and being promoted in this book isn’t obviously any better than the faith-based explanations of how the world works favored by conventional religions.

For some reviews of the book showing a bit of skepticism, see the ones by Craig Callender, Fred Bortz, and Roger Penrose. For a much more credulous review, see for example the one by James Trefil (who evidently has his own multiverse book coming out). The Economist has a news story about this, which assures us that Hawking is

    a likely future recipient of the Nobel prize in physics (if, as expected, his 1974 theory that black holes emit radiation despite their notorious all-engulfing gravitational pull is confirmed by experiments at the Large Hadron Collider in CERN).

    Update: There’s a new posting at physicsworld.com by Hamish Johnston that brings up the issue of the potential damage caused by this to the cause of science funding in Britain:

    This morning there was lots of talk about science on BBC Radio 4’s Today Programme — but I think it left many British scientists cringing under their duvets.

    Hawking explained that M-theory allows the existence of a “multiverse” of different universes, each with different values of the physical constants. We exist in our universe not by the grace of God, according to Hawking, but simply because the physics in this particular universe is just right for stars, planets and humans to form.

    There is just one tiny problem with all this — there is currently little experimental evidence to back up M-theory. In other words, a leading scientist is making a sweeping public statement on the existence of God based on his faith in an unsubstantiated theory…

    Physicists need the backing of the British public to ensure that the funding cuts don’t hit them disproportionately. This could be very difficult if the public think that most physicists spend their time arguing about what unproven theories say about the existence of God.

    Update: Today’s Wall Street Journal has a quite positive review of the book by Sean Carroll.

    Update: See here for John Horgan’s take on the Hawking book:

    I’ve always thought of Stephen Hawking—whose new book The Grand Design (Bantam 2010), co-written with Leonard Mlodinow, has become an instant bestseller—less as a scientist than as a cosmic, comic performance artist, who loves goofing on his fellow physicists and the rest of us…

    Toward the end of the meeting [in Sweden, 1990], everyone piled into a bus and drove to a nearby village to hear a concert in a Lutheran church. When the scientists entered the church, it was already packed. The orchestra, a motley assortment of blond-haired youths and wizened, bald elders clutching violins, clarinets and other instruments, was seated at the front of the church. Their neighbors jammed the balconies and seats at the rear of the building.

    The scientists filed down the center aisle to pews reserved for them at the front of the church. Hawking, grinning ear to ear, led the way in his motorized wheelchair. The townspeople started to clap, tentatively at first, then passionately. These religious folk seemed to be encouraging the scientists, and especially Hawking, in their quest to solve the riddle of existence.

    Now, Hawking is telling us that unconfirmable M-theory plus the anthropic tautology represents the end of that quest. If we believe him, the joke’s on us.

    Posted in Book Reviews, Multiverse Mania | 83 Comments

    Researchers Discover How to Conduct First Test of “Untestable” String Theory

    A couple people this morning pointed me to today’s press release from Imperial College, headlined Researchers Discover How to Conduct First Test of “Untestable” String Theory and subtitled “New study suggests researchers can now test the ‘theory of everything'”. In case you miss the headline and subtitle, and thus the point that string theory is now testable due to the efforts of Imperial College researchers, the rather short press release repeatedly drives the point home:

    The new research, led by a team from Imperial College London, describes the unexpected discovery that string theory also seems to predict the behaviour of entangled quantum particles. As this prediction can be tested in the laboratory, researchers can now test string theory…

    Using the theory to predict how entangled quantum particles behave provides the first opportunity to test string theory by experiment…

    The discovery that string theory seems to make predictions about quantum entanglement is completely unexpected, but because quantum entanglement can be measured in the lab, it does mean that at last researchers can test predictions based on string theory…

When the preprint of this paper first appeared, I wrote a blog posting about it here, pointing out that the result worked out in the paper is just an example of a well-known piece of mathematics, one that comes down to classifying nilpotent orbits. This is based on a famous 1971 theorem of Kostant-Rallis, and Nolan Wallach worked out the specific example considered by Duff et al. in detail here, in lecture notes for a 2004 summer school. The initial preprint didn’t refer to this mathematical literature, but a revised version was soon issued in which a reference to the Wallach notes was added to the bibliography. There’s no trackback to the discussion on Not Even Wrong at the arXiv listing for the paper, due to the arXiv’s censorship policy, but perhaps one or more of the authors of the preprint are regular readers here…

    I have no idea how this paper is supposed to contain a “test” of string theory. The simple quantum mechanics problem at issue comes down to classifying orbits of a group action on a four-fold tensor product, exactly what Wallach worked out in detail in his notes, as an example of Kostant-Rallis. If you do an experiment based on this and it doesn’t work, you’re not going to falsify string theory (or Kostant-Rallis for that matter). By now there’s a long history of rather outrageous press releases being issued about the discovery of supposed “tests” of string theory. This one really takes the cake…
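For the mathematically inclined, here is the setup as I understand it, in the form worked out in the Wallach notes (a sketch, nothing new):

```latex
% Four-qubit entanglement classification as an orbit problem for a symmetric pair (sketch).
% The SLOCC group for four qubits acts on the four-fold tensor product:
G = SL(2,\mathbb{C})^{\times 4}
\;\curvearrowright\;
V = \mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2 .
% This is the isotropy representation of a symmetric pair,
\mathfrak{so}(8,\mathbb{C}) \;\cong\;
\underbrace{\mathfrak{sl}(2)^{\oplus 4}}_{\mathfrak{k}}
\;\oplus\;
\underbrace{(\mathbf{2},\mathbf{2},\mathbf{2},\mathbf{2})}_{\mathfrak{p}} ,
% and Kostant-Rallis describes the orbit structure of K acting on p in this situation.
```

Classifying four-qubit entanglement classes under SLOCC is then an orbit problem for this symmetric pair, which is the sense in which the result is a well-known piece of mathematics rather than a prediction of string theory.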

Update: The press release is having its intended effect, generating stories headlining false claims about string theory. So far today, there’s String Theory: Testing the Untestable?, New study suggests researchers can now test the ‘theory of everything’, Scientists Say They Can Now Test String Theory, and Researchers Devise the First Experimental Test of Controversial, Confusing String Theory. There’s even UK Scientists discover way to test untestable string theory, which has the test already performed:

    Scientists at the Imperial College London have managed to conduct the first string theory test, destroying previous beliefs that it was untestable….

    The discovery will please physicists, most of whom consider string theory the best available for explaining the universe.

    Unfortunately, no details on how the test turned out…

    Update: The subtitle on the press release has been changed. It used to be “New study suggests researchers can now test the ‘theory of everything'”, now it’s “New study presents unexpected discovery that string theory may predict the behaviour of entangled quantum particles.”

    Update: No press campaign for a “finally string theory is testable” claim is complete without a Slashdot story (actually, stories, here and here):

    Big news for theoretical physicists who are fed up with the inability to test String Theory…

    Update: Lisa Grossman has a story about this at Wired Science. She went to the trouble of contacting well-known string theorists for their opinion, which is unanimous that this is not a “test of string theory”:

    “Already I can imagine enemies sharpening their knives,” Duff said.

    And they are. A chorus of supporters and critics, including Nobel laureate and string theory skeptic Sheldon Glashow and string theorists John Schwarz of Caltech, James Gates of the University of Maryland, and Juan Maldacena and Edward Witten of the Institute for Advanced Study in Princeton agree that Duff’s argument is “not a way to test string theory” and has nothing to do with a theory of everything.

    I’m still trying to figure out what the supposed test of string theory is, since I can’t find such a thing in the published paper. The Wired article has a bit more explanation from Duff:

    Whether the result is some fundamental principle or some quirk of mathematics, we don’t know, but it is useful for making statements about quantum entanglement.

    As far as I can tell, we do know where their results come from, a “quirk of mathematics” known as the Kostant-Rallis theorem, applied to the invariant theory question that comes up in quantum entanglement.

    The article also contains quotes from me, saying about what you’d expect.

    Update: Science News has completely uncritical coverage of the “First Test of String Theory” claims.

    Posted in This Week's Hype | 14 Comments

    Everything is Emergent

During the past year Erik Verlinde has made a splash (most recently in the New York Times) with his claim that the reason we don’t understand gravity is that it is an emergent phenomenon, an “entropic force”. Now he and Peter Freund are taking this further, with a claim that the Standard Model is also emergent. Freund has a new paper out on the arXiv entitled “Emergent Gauge Fields”, with the abstract:

    Erik Verlinde’s proposal of the emergence of the gravitational force as an entropic force is extended to abelian and non-abelian gauge fields and to matter fields. This suggests a picture with no fundamental forces or forms of matter whatsoever.
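For context, the “entropic force” logic of Verlinde’s original proposal runs roughly as follows (my compressed paraphrase of his gravity argument, not anything from Freund’s paper):

```latex
% Verlinde's entropic-force heuristic for F = ma (compressed paraphrase).
F\,\Delta x = T\,\Delta S, \qquad
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\,\Delta x, \qquad
k_B T = \frac{1}{2\pi}\,\frac{\hbar a}{c}
\;\;\Longrightarrow\;\; F = m a .
```

Freund’s proposal is that the same sort of reasoning can be run with gauge fields and matter fields in place of gravity.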

    Freund thanks Verlinde, who evidently has much the same idea:

    I wish to thank Erik Verlinde for very helpful correspondence from which it is clear that he independently has also arrived at the conclusion that not only gravity, but all gauge fields should be emergent.

He remarks that this new theoretical idea is reminiscent of Geoffrey Chew’s failed “bootstrap program” of the sixties:

    It is as if assuming certain forces and forms of matter to be fundamental is tantamount (in the sense of an effective theory) to assuming that there are no fundamental forces or forms of matter whatsoever, and everything is emergent. This latter picture in which nothing is fundamental is reminiscent of Chew’s bootstrap approach [9], the original breeding ground of string theory. Could it be that after all its mathematically and physically exquisite developments, string theory has returned to its birthplace?

It’s very unclear to me why this is supposed to be a good thing. In his Nobel prize lecture, David Gross, a student of Chew’s, explains:

    I can remember the precise moment at which I was disillusioned with the bootstrap program. This was at the 1966 Rochester meeting, held at Berkeley. Francis Low, in the session following his talk, remarked that the bootstrap was less of a theory than a tautology…

    Posted in Uncategorized | 49 Comments

    This Week’s Hype

    I’m rather busy these days with a move to a new apartment, but maybe there’s time for a quick edition of “This Week’s Hype”.

    A commenter on the previous posting points to Amanda Peet’s recent talk entitled String Theory for the Scientifically Curious. In the question and answer section, she responds to someone who asks her to comment on Phil Anderson’s claim that string theory makes no falsifiable predictions. She describes this claim as “absolutely fundamentally completely utterly wrong” and says that Anderson should “be smacked around the head” for saying it. She then goes on to a vigorous and extensive personal attack on Lee Smolin.

    Her argument that string theory really is falsifiable is that a paper by Distler and collaborators shows this and has been published. The paper she is referring to is this one, which started off as a preprint with the title Falsifying String Theory through WW scattering, but was only published after a forced change of title to “Falsifying Models of New Physics Via WW Scattering”. One reason for this is that there’s actually nothing about string theory in the paper. Evidently Peet just saw the preprint, not the published version. If you want to know more about this particular piece of hype, see blog postings here, here, and here.

    For more Amanda Peet in action, there’s a classic video from the KITP, blogged about here.

    Posted in This Week's Hype | 17 Comments

    Geometric Langlands at the KITP

    There’s a very interesting program going on at the KITP discussing recent work of mathematical interest on 4d supersymmetric gauge theories (N=2 and N=4). These include various connections of 4d gauge theory to geometric Langlands uncovered by Witten and collaborators a few years ago, as well as last year’s conjecture by Alday-Gaiotto-Tachikawa of a relation between 4d gauge theory and 2d Liouville conformal field theory. In his introductory talk, Edward Frenkel discusses the possibility of a relationship between these ideas and the much earlier ideas about 2d conformal field theory that were inspirational at the beginnings of research on geometric Langlands (about which he has written extensively).

Yesterday and today Witten gave two talks on some new work. The first was about the very basic problem of how you quantize a finite-dimensional symplectic manifold, which he approached using the phase-space path integral. The idea was similar to that described in a 2008 paper with Gukov, where the quantum mechanical problem gets turned into a 2d topological QFT problem. The innovation here is that he does this explicitly at the level of the path integral, using the kinds of techniques (complexifying the problem, exploiting holomorphicity, and choosing appropriate path-integral integration contours) that he pioneered in his recent paper on Analytic Continuation of Chern-Simons Theory. The second talk applied these ideas to the case of Chern-Simons theory. The path integral there is somewhat like a phase-space path integral, and he expressed it in terms of a 4d QFT. He claims to be able to thus solve a well-known problem, that of how to get a QFT that computes Khovanov homology, a topological invariant whose graded Euler characteristic is the Jones polynomial. Unfortunately I got lost at the end, when he had to go to 5 dimensions and perform some duality transformations. I gather he’ll have a paper about this relatively soon, and I’ll try again to see exactly how this works then.
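In formulas, the relation he is trying to realize in quantum field theory is, up to normalization conventions:

```latex
% Jones polynomial as the graded Euler characteristic of Khovanov homology
% (normalization conventions vary).
\hat{J}(L)(q) \;=\; \sum_{i,j \in \mathbb{Z}} (-1)^{i}\, q^{j}\, \dim_{\mathbb{Q}} Kh^{i,j}(L).
```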

Perhaps a collection should be taken up to buy a new camera for the KITP. The resolution of the one they have been using for years is such that you often can’t quite read what the speaker is writing on the blackboard. Still, it’s wonderful to be able to follow along as they quickly put a lot of high-quality talks on-line.

    Posted in Langlands | 6 Comments