Next, something slightly less important: money. The Simons Foundation in recent years has been having a huge (positive, if you ask me…) effect on research in mathematics and physics. Their 2018 financial report is available here. Note that not only are they spending \$300 million/year or so funding research, but at the same time they’re making even more (\$400 million or so) on their investments (presumably RenTech funds). So, they’re running a huge profit (OK, they’re a non-profit…), as well as taking in each year \$220 million in new contributions.

Various particle physics-related news:

- The people promoting the FCC-ee proposal have put out FCC-ee: Your Questions Answered, which I think does a good job of making the physics case for this as the most promising energy-frontier path forward. I don’t want to start up again the same general discussion that went on here and elsewhere, but I do wonder about one specific aspect of this proposal (money) and would be interested to hear from anyone well informed about it.
The FCC-ee FAQ document lists the cost (in Swiss francs or dollars, worth exactly the same today) as 11.6 billion (7.6 billion for tunnel/infrastructure, 4 billion for machine/injectors). The timeline has construction starting a couple of years after the HL-LHC start (2026) and going on in parallel with HL-LHC operation over a decade or so. This means that CERN will have to come up with nearly 1.2 billion/year for FCC-ee construction, roughly the size of the current CERN budget. I have no idea what fraction of the current budget could be redirected to new collider construction while still running the lab (and the HL-LHC). It is hard to see how this can work without a source of new money, and I have no idea what the prospects are for getting a large budget increase from the member states. Non-member states might be willing to contribute, but at least in the case of the US, any budget commitments for future spending are probably not worth the paper they might be printed on.

Then again, Jim Simons has a net worth of 21.5 billion, and maybe he’ll just buy the thing for us…
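A quick back-of-the-envelope check of the construction arithmetic above (the 11.6 billion total and its split are from the FAQ as quoted; the ten-year construction window and the ~1.2 billion CERN annual budget figure are my own rough readings of the numbers in this post):

```python
# Rough FCC-ee construction arithmetic, using the figures quoted above.
tunnel_and_infrastructure = 7.6   # billion CHF, per the FCC-ee FAQ
machine_and_injectors = 4.0       # billion CHF, per the FCC-ee FAQ
total_cost = tunnel_and_infrastructure + machine_and_injectors

construction_years = 10           # assumed: construction roughly parallel to HL-LHC operation
per_year = total_cost / construction_years

cern_annual_budget = 1.2          # billion CHF, approximate current budget

print(f"Total: {total_cost:.1f} billion; per year: {per_year:.2f} billion")
print(f"As a fraction of the annual CERN budget: {per_year / cern_annual_budget:.0%}")
```

Which is the point: the required annual construction spending is essentially a second CERN budget on top of the existing one.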

- Stacy McGaugh has an interesting blog post about the sociology of physics and astronomy. His description of his experience with physicists at Princeton sounds all too accurate (if he’d been there a couple years earlier, I would have been one of the arrogant, hard-to-take young particle theorists he had to put up with).
McGaugh’s specialty is dark matter and he has some comments about that. If you want some more discouragement about prospects for detecting dark matter, today you have your choice of Sabine Hossenfelder, Matt Buckley, or Will Kinney. I don’t want to start a discussion of everyone’s favorite ideas about dark matter, but wouldn’t mind hearing from an expert whether my suspicion is well-founded that some relatively simple right-handed neutrino model might both solve the problem and be essentially impossible to test.

- Lattice 2019 is going on this week. Slides here, streaming video here.
- Strings 2019 talk titles are starting to appear here. I’ll be very curious to hear what Arkani-Hamed has to say. His talk title is “Prospects for contact of string theory with experiments (vision talk)” and while he’s known for giving very long talks, I don’t see at all how this one could not be extremely short.

On a more personal front, yesterday I did a recording for a podcast from my office, with the exciting feature of an unannounced fire drill happening towards the end. Presumably this will get edited out, and I’ll post something here when the result is available.

Next week I’ll be heading out for a two week trip to Chile, with one goal to see the total solar eclipse there on July 2. Will start out up in the Atacama desert.

If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:

Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.

but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
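For concreteness, the correlations at issue can be checked numerically. This is my own toy illustration, not anything from the Scholarpedia entry: for the spin-singlet state, the quantum correlation between spin measurements along angles a and b is E(a,b) = -cos(a-b), and at the standard measurement angles the CHSH combination exceeds the bound of 2 that any local hidden-variable description must satisfy:

```python
import math

def E(a, b):
    # Singlet-state correlation of spin measurements along directions a and b
    return -math.cos(a - b)

# Standard CHSH measurement angles (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable model gives |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (local bound: 2, quantum maximum: 2*sqrt(2) = {2*math.sqrt(2):.3f})")
```

The simple quantum description saturates 2√2; it’s only the attempted hidden-variable replacement that Bell’s theorem forces to be non-local.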

The Greenstein book is short, the author’s very personal take on the usual Bell’s inequality story, which you can read about many other places in great detail. What I like about the book though is the last part, in which the author has, at 11 am on Friday, July 10, 2015, an “Epiphany”. He realizes that his problem is that he had not been keeping separate two distinct things: the quantum mechanical description of a system, and the every-day description of physical objects in terms of approximate classical notions.

“How can a thing be in two places at once?” I had asked – but buried within that question is an assumption, the assumption that a thing can be in one place at once. That is an example of doublethink, of importing into the world of quantum mechanics our normal conception of reality – for the location of an object is a hidden variable, a property of the object … and the new science of experimental metaphysics has taught us that hidden variables do not exist.

I think here Greenstein does an excellent job of pointing to the main source of confusion in “interpretations” of quantum mechanics. Given a simple QM system (say a fixed spin 1/2 degree of freedom, a vector in **C**^{2}), people want to argue about the relation of the QM state of the system to measurement results which can be expressed in classical terms (does the system move one way or the other in a classical magnetic field?). But there is no relation at all between the two things until you couple your simple QM system to another (hugely complicated) system (the measurement device + environment). You will only get non-locality if you couple to a non-local such system. The interesting discussion generated by an earlier posting left me increasingly suspicious that the mystery of how probability comes into things is much like the “mystery” of non-locality in the Bell’s inequality experiment. Probability comes in because you only have a probabilistic (density matrix) description of the measurement device + environment.
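The density-matrix point can be made concrete with a toy example (my own illustration, not from the book): the Born rule gives measurement probabilities as p = Tr(ρP), and a pure spin-1/2 state and the maximally mixed state (what remains after tracing out an entangled environment) give different answers for the same projector:

```python
import numpy as np

# Toy illustration of the Born rule p = Tr(rho P) for a spin-1/2
# system, a vector in C^2.

# Pure state |+x> = (|0> + |1>)/sqrt(2), written as a density matrix.
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus_x, plus_x.conj())

# Maximally mixed state: the reduced description of a system
# entangled with an environment that has been traced out.
rho_mixed = np.eye(2) / 2

# Projector onto the "spin up along x" outcome.
P_plus_x = np.outer(plus_x, plus_x.conj())

for name, rho in [("pure |+x>", rho_pure), ("mixed", rho_mixed)]:
    p = np.trace(rho @ P_plus_x).real
    print(f"{name}: P(spin up along x) = {p:.2f}")
```

The pure state gives probability 1 for this outcome while the mixed state gives 1/2: the probabilistic character enters with the density-matrix description, exactly the point above.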

For some other QM related links:

- Arnold Neumaier has posted a newer article about his “thermal interpretation” of quantum mechanics. He also has another interesting preprint, relating quantum mechanics to what he calls “coherent spaces”.
- Philip Ball at Quanta magazine explains a recent experiment that demonstrates some of the subtleties that occur in the quantum mechanical description of a transition between energy eigenstates (as opposed to the unrealistic cartoon of a “quantum jump”).
- There’s a relatively new John Bell Institute for the Foundations of Physics. I fear though that the kinds of “foundations” of interest to the organizers seem rather orthogonal to the “foundations” that most interest me.
- If you are really sympathetic to Einstein’s objections to quantum mechanics, and you have a lot of excess cash, you could bid tomorrow at Christie’s for some of Einstein’s letters on the topic, for instance this one.

- For the latest news on US HEP funding, see presentations at this recent HEPAP meeting. It is rarely publicly acknowledged by scientists, but during the Trump years funding for a lot of scientific research has increased, often dramatically. This has been due not to Trump administration policy initiatives, but instead to the Republican party’s embrace of fiscal irresponsibility whenever there’s a Republican in the White House. After bitter complaints about the size of the budget deficit and demands for reduction in domestic spending during the Obama years, after Trump’s election the congressional Republicans turned on a dime and every year have voted for huge across-the-board spending increases, tax decreases, and corresponding deficit increases. Each year the Trump administration produces a budget document calling for unrealistically large budget decreases which is completely ignored, with Congress passing large increases and Trump signing them into law.
For specific numbers, see for instance page 20 of this presentation, which shows numbers for the DOE HEP budget in recent years. The pattern for FY2020 looks the same: a huge proposed decrease, and a huge likely increase (see the number for the House Mark).

The result of all this is that far greater funds are available than expected during the last P5 planning exercise, so instead of having to make the difficult decisions P5 expected, a wider list of projects can be funded.

For mathematicians:

- Michael Harris has a new article in Quanta magazine, mentioning suggestions by two logicians that the Wiles proof of Fermat’s Last Theorem should be formalized and checked by a computer. He explains why most number theorists think this sort of project is beside the point:

Wiles and the number theorists who refined and extended his ideas undoubtedly didn’t anticipate the recent suggestions from the two logicians. But — unlike many who follow number theory at a distance — they were certainly aware that a proof like the one Wiles published is not meant to be treated as a self-contained artifact. On the contrary, Wiles’ proof is the point of departure for an open-ended dialogue that is too elusive and alive to be limited by foundational constraints that are alien to the subject matter.

I don’t know who the “two logicians” Harris is referring to are, or what the nature of their concerns about the Wiles proof might be. I had thought this might have something to do with number theorist Kevin Buzzard’s Xena Project, but in a comment Buzzard describes such a formalization as currently impractical, with no clear motivation.

Taking a look at the page describing the motivation for the Xena Project, I confess to finding it unconvincing. The idea of revamping the undergraduate math curriculum to make it based on computer checkable proofs seems misguided, since I don’t see at all why this is a good way to teach mathematical concepts or motivate undergraduate students. The complaints about holes in the math literature (e.g. details of the classification of finite simple groups) don’t seem to me to be something that can be remedied by a computer.

- For some cutting-edge number theory, with no computers in sight, see the lecture notes from a recent workshop on geometrization of local Langlands.
- Finally, congratulations to this year’s Shaw Prize winner, Michel Talagrand. Talagrand in recent years has been working on writing up a book on quantum field theory for mathematicians, and I see that Sourav Chatterjee last fall taught a course based on it, producing lecture notes available here.
For a wonderful recent interview with Talagrand, see here.

I first got to know Michel when he started sending me very helpful comments and corrections on my QM book when it was a work in progress. He’s single-handedly responsible for a lot of significant improvements in the quality of the book.

I’ve recently received significant help from someone else, Lasse Schmieding, who has sent me a very helpful list of mistakes and typos in the published version of the book. I’ve now fixed just about all of them. Note that the version of the book available on my website has all typos/mistakes fixed. For the published version, there’s a list of errata.

**Update**: For more about the Michael Harris vs. Kevin Buzzard argument, see here, or plan on attending their face-off in Paris next week.

Much of the book is historical (and often Anglo-centric), beginning with Newton, a figure who made huge tightly intertwined advances in both mathematics and physics. Looking at Newton’s career, it makes no sense to characterize him as a mathematician or a physicist; he’s both in equal measure and at the same time. Farmelo then moves on to Maxwell, who also revolutionized physics while at the same time introducing important new mathematics into the subject. He tells the story of Maxwell’s 1870 talk “On the relations of mathematics to physics”, then a couple of chapters later it’s Dirac’s 1938 talk “The Relation between Mathematics and Physics”.

In Farmelo’s account:

Dirac quickly arrived at what was, in effect, a manifesto for research into theoretical physics. He proposed a new principle – the principle of mathematical beauty – which says that researchers should always strive to maximize the beauty of the mathematical structures that underpin their theories of the natural world…

He concluded that “big domains of pure mathematics will have to be brought in to deal with the advances in fundamental physics”…

Eventually the two subjects might possibly become unified, Dirac suggested, with “every branch of pure mathematics then having its physical applications, its importance in physics being proportional to its interest in mathematics.”

A major reason for Dirac taking this sort of view was the example of Einstein’s work on general relativity, which to reach fruition had required Einstein to become expert in the newly developed and rather challenging abstract mathematical machinery of Riemannian geometry. The two great pillars of modern physics, relativity and quantum theory, are deeply related to central modern ideas about mathematics, in particular, respectively, geometry and representation theory. While in the case of relativity the mathematics came first, for the case of quantum theory, the mathematics underlying the subject was mostly developed later.

Farmelo goes on to explain that relations between math and physics entered a fallow period during the 1950s and 1960s, but the advent of gauge theory and the Standard model led to a productive renewal of healthy relations during the 1970s. He does a good job of explaining how this came about, discussing in particular the central role played by Witten. Farmelo has benefited from getting to talk to Witten himself in some depth about this, and gives a nuanced portrayal of Witten’s rather complex and evolving feelings about the relations of the two subjects and the role that he and his immense talents have played in this story.

There’s a great deal of the usual sort about the history of string theory, emphasizing its points of contact with new mathematics. The next to last chapter is about Nima Arkani-Hamed and the amplituhedron story, portrayed as the latest exciting development on the math-physics front. Farmelo is clearly enthralled by Arkani-Hamed and his intense enthusiasms. The evolution of Arkani-Hamed from phenomenologist to mathematical physicist is definitely a fascinating thing to observe, and I’ve often written about it here (you might want to for instance read this posting). Farmelo also points to an excellent lecture by Greg Moore on *Physical Mathematics and the Future* (discussed here), which I think is based on a much deeper understanding of the current state of the math/physics relationship, and gives a much broader perspective than the narrow one of the amplituhedron. By the way, I see on Moore’s website that he’s writing up for the 2019 TASI school what appears to be an excellent set of notes about Chern-Simons theory and related topics.

A major problem though with this book is that it pretty much completely avoids the big problem raised by the program of pursuing progress in fundamental physics through beautiful mathematics: how do you know whether people doing this are on the right track or headed down a blind alley? Farmelo starts the book off with a very odd preliminary chapter comparing Einstein’s work at the IAS in his later years to that of his modern day successors:

[Einstein] was seeking a new theory, not in response to puzzling experimental discoveries, but as an intellectual exercise – using only his imagination, underpinned by mathematics. Although this approach was unpopular among his peers, he was pioneering a method similar to what some of his most distinguished successors are now using successfully at the frontiers of research.

The fact of the matter is that, in retrospect, Einstein’s work of this era was a huge failure, as he got stuck deep down a blind alley. He was seduced by a specific speculative idea about how to get unification out of mathematics, by using simple extensions of the differential geometry that he had such success with in the case of GR.

How does Farmelo know that string theory enthusiasts following Einstein haven’t run into the same problem he did? In essence, Farmelo just assures us that he has talked to them and they tell him that, like Einstein before them, they think they’re on the right track. The existence of skeptics is mentioned, but their writings are carefully excluded from the 200+ item bibliography. Jim Baggott, Sabine Hossenfelder and I (and our writings) appear only in a short footnote on page 6, with bloggers described as complaining that “modern physics should get back on the straight and narrow path of real science”. But the three of us are complaining about, not “modern physics”, but one small subset of it, and at least in my case, the path I argue for is almost exactly what Farmelo is arguing for: absent help from experiment, pursue the path advocated by Dirac.

Here’s part of Farmelo’s summation of the current situation in the book’s last chapter:

The great majority of today’s leading theoretical physicists are, however, confident that they are motoring steadily in the right vehicle, despite the problems they are having in trying to drive it.

In the public domain, the debate about the merits of the string framework has been raging for years, especially in print and online. Some of these onslaughts are useful correctives to the hype lavished on this programme and to the superciliousness of pronouncements made by some string theorists (although rarely by the best ones in my experience). Experts on the string framework have every reason to be proud of the progress they have made, but until such time as experiments confirm its validity, there is no room for smugness. Yet I am often troubled by the dismissiveness of some of the critical commentators, especially those who write with a confidence that belies the evident slightness of their understanding of the subject they are attacking. Opposing the view taken by leading theoreticians might be interpreted as a healthy disrespect for orthodoxy. However it may be part of the worrisomely common view that anyone can have a valid opinion on any subject, regardless of their technical knowledge and appreciation of it. In scientific matters this trend is especially regrettable.

The first part of this I think is simply not true, with most “leading theoretical physicists” these days unsure what the right direction is for how to get beyond the Standard Model. As for the last bit, I’ll just say that I think it can accurately be described as “sleazy” (by the way, Farmelo at one point came to see me in New York when he was doing research for the book, and we had a quite pleasant conversation; he’s rather charming). Besides the ad hominem attack on unidentified critics, there’s nothing anywhere in the book about the actual problems of the vehicle some people are motoring in. For instance, the problem of the landscape and the multiverse is dealt with by just ignoring it; it’s not mentioned at all. If it had been mentioned, Farmelo might have had to deal with the fact that it’s mathematically hideous, and so a direction which should be abandoned by his own arguments.

In the end, my feelings about this book are much the same as in the case of Farmelo’s biography of Dirac (see here): a wonderful book in many ways, but marred by a bizarre degree of string theory fanboyism. While there’s a lot to like about this book, and much of it makes a good case for a controversial point of view that I strongly agree with, unfortunately the problems with it are even more serious than in the case of the Dirac biography.

The IAS is having a public event next week, convening a panel to discuss the math-physics issues brought up by the book. I may add something here about this after it happens. Perhaps someone in attendance can get a show of hands from the assembled leading theorists to see if they really feel that they’re steadily motoring in the right vehicle or not.

**Update**: I’m listening to a live-streaming version of the IAS event here. After a talk by Farmelo, mainly about history, and a discussion with Karen Uhlenbeck and Freeman Dyson (moderated by Natalie Wolchover), Greg Moore is now giving a talk (available here) on TQFT and gauge theory which is quite good. I’m very much in sympathy with his take on “physical mathematics”.

**Update**: Besides an advertisement by Nima Arkani-Hamed and Thomas Lam for their work on amplitudes, the only discussion of the current state of the math/fundamental physics interface was the final short conversation between Dijkgraaf and Witten. Looking back at expectations from 30 years ago, Witten said he had expected progress on understanding the laws and principles behind string theory, but that has not occurred. Dijkgraaf tried to end with a defense of string theory as “well, by AdS/CFT it’s connected to QFT, so it somehow is connected to the real world”, to which Witten said something like “we shouldn’t give up hope for string theory as a unified theory, we might yet find the right string theory vacuum”. In the end Witten said that he found it hard to accept the possibility that the unexpected things discovered through string theory didn’t mean that it had to do with a unified theory, but admitted he didn’t have a scientific justification for this.

**Update**: There’s a review of the book by Tony Mann at Times Higher Education. It includes:

Farmelo’s book is a response to these contrarians [Hossenfelder and myself]. He is confident that the beauty of the mathematics is significant in indicating that we are on the right track, and that eventually, even if we have to wait for many years, we will be able to test string theory against new experimental evidence. As a spirited defence of the idea that beautiful mathematics should be a guide for physicists, Farmelo’s book is a timely response to critics such as Woit and Hossenfelder, defending what science writer Jim Baggott has called “fairy-tale physics”. Ultimately I am not sure, however, that he makes his case anything more than a matter of faith.

I think Mann gets it right that the Farmelo book is intended as a defense of string theory against the Baggott/Hossenfelder/Smolin/Woit critique (with the odd feature that he refuses to allow mention of our books in the bibliography, or to engage in any way with the arguments we make), and that his argument comes down to little more than “trust certain famous theorists”, especially those at the IAS. As usual, I’m inaccurately portrayed as opposed to the idea that progress can be made by following beautiful mathematics. The problem with string theory unification is that it’s a failed physical idea, with the failure indicated not just by lack of predictivity, but also by the fact that string theory models compatible with observations are horrendously ugly.

**Update**: The Dijkgraaf-Witten conversation is available here. Some extracts from near the end.

Witten: I’m actually personally reasonably confident that what we’re doing is a lasting contribution. But I’m less confident that we’ll really be able to put it on a completely solid footing. It depends on being lucky with experiment, I would say…

Dijkgraaf: Often I hear questions about whether is string theory wrong or right, but I often answer there’s no way in which we’ll ever get rid of string theory, because in some sense it’s an integral part of the theories that we already are using.

Witten: That’s true but not completely satisfying. Let’s not give up on the dream of finding the vacuum that describes the real world…The honest answer is that personally as I told you before I have confidence that the general enterprise is on the right track, but I don’t claim that the argument I’ve given is scientifically convincing.

Witten makes clear that at this point he has no likely foreseeable experimental results relevant to string theory he can point to, no “scientifically convincing” argument that it’s on the right track, just a feeling that since string theory research has turned up various points of contact with important math and physics, there must be something right about the idea.

**Update**: Siobhan Roberts at the New York Times has a piece about the centenary of the eclipse that made Einstein famous that also discusses Wednesday’s IAS event.

- Goro Shimura, one of the major figures in twentieth century number theory and arithmetic geometry, passed away on May 3 in Princeton at the age of 89. Princeton has an article about his life and work here. There’s another article about him here (in German). Back in 2008 Shimura published an autobiographical memoir, The Map of My Life, which I wrote about here.
- The Dutch publication de Volkskrant has an article asking if theoretical physics has lost its way. Sabine Hossenfelder and Avi Loeb are quoted on the “there’s a problem side”, Robbert Dijkgraaf on the “no problem here” side.
- A commenter here points out an article in le Monde about the currently unresolved question of what to do with the 100,000 or so pages of writings left by Grothendieck at the time of his death. There seems to be a consensus that someone should carry out the expensive project of having the pages cataloged and transcribed, but how to pay for this, and who should ultimately take ownership of the papers remains up in the air. Supposedly a sizable part of the documents deals with Grothendieck’s speculation about physics. The article starts off with a characterization of Grothendieck’s work as important in the story of the Higgs discovery, which is quite inaccurate (there is no significant relation between his work and the Higgs).
- For many years people at SLAC have used the database there to produce “Topcites” lists of the most heavily cited papers in HEP physics, giving some insight into what topics are the most popular in current HEP research. From 1997-2003 Michael Peskin wrote up some reviews of what was going on in HEP physics each year based on these lists, and has started doing so again (for 2017 and 2018). These lists and the reviews are now dominated by astrophysical and cosmological topics, with little about HEP theory. To get an idea of what the hot topics are in HEP theory these days, take a look at the list of most frequently cited papers by hep-th preprints in 2018.
- The series finale of The Big Bang Theory will air this week, on Thursday. Since I canceled my cable TV service a while back, I haven’t been following the latest episodes, which evidently feature a replacement for the failure of supersymmetry, called “super-asymmetry”. At some point I hope to catch up with these, and find out what happens to “super-asymmetry”.
- This week the European Strategy Update for Particle Physics is holding an Open Symposium in Granada, to discuss plans for the post-LHC future (a blog posting about this from Tommaso Dorigo is here). I’ve written here about the difficult issues that CERN and European HEP physicists are facing. Looking at one of the first talks on future colliders, I was surprised to see muon colliders listed as a potentially viable possibility, since I thought that the technology needed for those was still far in the future.

**Update:** Kenneth Chang at the New York Times today has an obituary for Shimura.

**Update**: On the obituary front, it was announced today (5/24) that Murray Gell-Mann has passed away, at the age of 89. The New York Times has an obituary written by his biographer, George Johnson.

One surprising thing I learned from the interview is that Witten learned calculus when he was 11 (this would have been 1962). He quite liked that, but then lost interest in math for many years, since no one gave him more advanced material to study. After years of studying non math/physics subjects and doing things like working on the 1972 McGovern campaign, he finally realized physics and math were where his talents lay. He ended up doing a Ph.D. at Princeton with David Gross, starting work with him just months after the huge breakthrough of asymptotic freedom, which put in place the final main piece of the Standard Model.

If only back in 1962 someone had told Witten about linear algebra and quantum mechanics, the entire history of the subject could have been quite different. It seems quite possible that within 5 years he would have picked up quantum field theory and maybe started thinking about Yang-Mills generalizations of QED, perhaps, at 16, beating Weinberg and Salam to the electroweak theory. Surely he could have figured out how to do one loop calculations in gauge theory, beating Gross/Wilczek/Politzer to asymptotic freedom and a Nobel prize, possibly a few years early. If he had done this at Princeton, he would have overlapped with John Schwarz, who surely would have then been more interested in pursuing gauge theory than string theory. So, no superstring theory or 1984 “revolution”, and who knows what different sort of path the history of the field would have taken.

A lesson for all parents: if your child is an off-the-scale genius, learning calculus at age 11, don’t even think about trying to give them a normal childhood. Push them, hard, to skip grades, get to college/grad school early. Do whatever it takes.

I did though find some of the later parts of the interview quite depressing. While acknowledging that neither he nor anyone else has been able to figure out what string theory actually is, this hasn’t shaken Witten’s faith that it’s the only viable path towards a unified theory. Most disturbing, on the topic of the landscape he says that he has gone from finding it upsetting to reconciling himself to the idea. For years, whenever asked about how evidence could be found for string theory, he would point to the naturalness arguments indicating that something like SUSY had to happen at the electroweak scale. Now that the LHC has falsified this and there’s nothing to point to as any sort of “test of string theory”, he shows no signs that this falsification has in any way shaken his faith.

Looking to the near future, he’s most optimistic about the “It from Qubit” business. Maybe he’s right and something will come of this, but I’ve seen no indication of a path to a unified theory in this direction (how do you get the Standard Model? Or has he just completely given up on that?).

I don’t have time right now to transcribe the most relevant portions of the interview, might find time later, or maybe Farmelo will make available a transcription.

**Update**: As explained in the comments, the advice to parents was not meant to be taken seriously. No, your child is not going to grow up to be Edward Witten, and they do not need to hurry up to revolutionize physics before it is too late.

Sabine Hossenfelder has posted a transcript of the interview here. I’ll add some extracts and some more comments about the interview.

About the landscape:

These two puzzles although primarily the one about gravity which was discovered first are perhaps the main motivation for discussions of a cosmic landscape of vacua. Which is an idea that used to make me extremely uncomfortable and unhappy. I guess because of the challenge it poses to trying to understand the universe and the possibly unfortunate implications for our distant descendants tens of billions of years from now. I guess I ultimately made my peace with it recognizing that the universe hadn’t been created for our convenience.

GF [00:20:43] So you come to terms with it.

EW [00:20:45] I’ve come to terms with the landscape idea and the sense of not being upset about it. As I was for many years.

GF [00:20:49] Really upset?

EW [00:20:50] I still would prefer to have a different explanation but it doesn’t upset me personally to the extent it used to.

GF [00:20:56] So just to conclude what would you say the principal challenge is all down to people looking at fundamental physics.

EW [00:21:01] I think it’s quite possible that new observations either in astronomy or accelerators will turn up new and more down to earth challenges. But with what we have now and also with my own personal inclinations it’s hard to avoid answering new terms of cosmic challenges. I actually believe that string slash M theory is on the right track toward a more deeper explanation. But at a very fundamental level it’s not well understood. And I’m not even confident that we have a good concept of what sort of thing is missing or where to find it.

If your theory is not well understood, you don’t even know what sort of thing is missing, and a multiverse is being invoked to explain away why it can’t be tested, the situation seems clear: you have a failed theory. Yes, failure may be personally upsetting to you, but that’s science.

GF [00:23:20] There’s a famous book about night thoughts of a quantum physics. are there night thoughts of a string theorists is where you have a wonderful theory list developing you know unable to test it. Does that ever bother you.

EW [00:23:31] Of course it bothers us but we have to live with our existential condition. But let’s backtrack 34 years. So in the early 80s there were a lot of hints that something important was happening in string theory but once Green and Schwarz discovered the anomaly cancellation and it became possible to make models of elementary particle physics unified with gravity. From then I thought the direction was clear. But some senior physicists rejected it completely on the grounds that it would supposedly be untestable. Or even have cracked it would be too hard to understand. My view at the time was that when we reached the energies of the W, Z and the Higgs particle we’d get all kinds of fantastic new clues.

EW [00:24:11] So. I found it very very surprising that any colleagues would be so convinced that you wouldn’t be able to get important clues that would shed light on the validity of a fundamental new theory that might in fact be valid. Now if you analyze that 34 years later I’m tempted to say we were both a little bit wrong. So the scale of clues that I thought would materialize from accelerators has not come. In fact the most important clue possibly is that we’ve confirmed the standard model without getting what we fully expected would come with him. And as I told you earlier that might be a clue concerning the landscape. I think the flaw in the thinking of the critics though is that while it’s a shame that the period of incredible turmoil and constant experiment and discovery that existed until roughly when I started graduate school hasn’t continued. I think that the progress which has been made in physics since 1984 is much greater than it would have been if the naysayers had been heeded and string theory hadn’t been done in that period.

“34 years later I’m tempted to say we were both a little bit wrong”??? No, others had good arguments and were right about this (string theory is untestable and has nothing to do with LHC-scale physics), and you had bad arguments and were quite wrong. That this clear result is not being acknowledged and is having no effect on faith in string theory is disturbing.

**Update**: For another interview with an influential theorist, Sean Carroll has an interview with Leonard Susskind. I don’t think this is a good thing, but Susskind has been very influential in blazing the path that Witten now seems headed down (invoke the multiverse to justify giving up on unifying particle physics, hope very general “it from qubit” considerations will explain gravity). The interview explains in detail Susskind’s point of view.

**Update**: Farmelo has another interview with a string theorist up, this time it’s Michael Green. When asked if he’s troubled by string theory not being experimentally testable, Green says (19:20):

I don’t think at the moment there’s anything directly to test, because we don’t know what its predictions are.

and says that string theory should really be called “string (not yet a) theory”. Earlier (16:40), he explains

The ingredients of something are there, but it’s clearly not formulated in the right language, and because it’s not formulated in the right language, we don’t really know how to even make sense of its predictions. It doesn’t have any really genuine rigorously derived predictions yet.

Green has been working on string theory for forty years, an entire professional lifetime during which string theory has gone from a relatively simple “(not yet a) theory”, with a true theory seeming not far away, to a much more complicated “(not yet a) theory”, with no progress towards an actual theory in sight. Farmelo doesn’t ask the obvious question of why people shouldn’t interpret this story straightforwardly as the story of a failed speculative idea that never worked out.

The problem here is almost exactly the same as the problem with the Symmetry article discussed in the last posting. Both authors believe that string theory is a conventionally predictive theory, one with predictions that just happen to be hard to test. According to them, critics of string theory just don’t understand that there can be value in a theory which is testable in principle, even if a practical test is far away. Unlike the Symmetry piece, McIntyre at least names critics and links to their words, writing:

If one reads these kinds of criticisms closely one finds careful phrasing that string theory “makes no predictions about physical phenomena at experimentally accessible energies” and that “at the moment string theory cannot be falsified by any conceivable result.”^{23}

But these are weasel words, born of scientists who are not used to taking seriously the distinction between saying that a theory is “currently” testable versus whether it is “in principle” testable. The practical limitations may be all but insurmountable, but philosophical distinctions like demarcation live in the difference.

The quoted words are mine, with footnote 23 referring to my 2002 article in American Scientist. Of course I was and am well aware of the distinction between testable “in principle” and “currently”. Bizarrely, the author has chosen to edit out from what I wrote the sentence that precisely addresses the issue I’m supposedly weaseling on. Here’s the full quote:

String theory not only makes no predictions about physical phenomena at experimentally accessible energies, it makes no precise predictions whatsoever. Even if someone were to figure out tomorrow how to build an accelerator capable of reaching the astronomically high energies at which particles are no longer supposed to appear as points, string theorists would be able to do no better than give qualitative guesses about what such a machine might show. At the moment string theory cannot be falsified by any conceivable experimental result.

As the deleted language makes clear, by “any conceivable experimental result” I was making a claim about “in principle”, not “currently”. Furthermore, near the beginning of the article I explain the problem of principle:

First, string theory predicts that the world has ten space-time dimensions, in serious disagreement with the evidence of one’s senses. Matching string theory with reality requires that one postulate six unobserved spatial dimensions of very small size wrapped up in one way or another. All of the predictions of the theory depend on how you do this, but there are an infinite number of possible choices, and no one has any idea how to determine which is correct.

This article started out as an early 2001 arXiv posting and was published in early 2002, about a year before the now famous KKLT claim to have a string theory model with fully stabilized moduli. Back then, the problem I was pointing to was the basic one that, to have a self-consistent string theory model that you can confront, in principle, with experiment, you need to solve the problem of “moduli stabilization”. 6d compactifications come in families with a lot of parameters (the “moduli”) governing their size and shape, and the physics depends crucially on those parameters. You need to somehow give the moduli dynamics, and get a ground state with a correct fine-tuned vacuum energy.

KKLT claimed they could do this, but with an exponentially large “landscape” of solutions that removes the ability to get well-defined predictions from the theory. Their construction is so complicated, and non-perturbative string theory so poorly understood, that it remains controversial to this day whether these are really solutions to whatever the conjectural well-defined version of string theory might be. This is what the current “Swampland” argument is about.

I’ve put together a FAQ entry answering the Doesn’t string theory make predictions at very high energy? question. What causes all the confusion here is the common claim from string theorists that “string theory is testable at high energy”. If you ask them to tell you what the “test” is, they tell you about one of the characteristic features of the perturbative superstring (Veneziano amplitude, Regge trajectories, 10 space-time dimensions). What they are really saying is “if we did experiments at a high enough energy scale and saw one of these characteristic phenomena, we would have a successful test of string theory”, which is true enough, but not a specific, falsifiable prediction. What they are not telling you is that they are ignoring the compactification problem as well as that of not having a well-defined non-perturbative theory, and that many “string theory” models wouldn’t exhibit these characteristically perturbative features.

The main point of the new book seems to be to argue that a better way to characterize science is by whether those supposedly engaging in it are exhibiting the “scientific attitude”, which

can be summed up in a commitment to two principles:

(1) We care about empirical evidence.

(2) We are willing to change our theories in light of new evidence.

It seems to me there are lots of problems with this formulation. Sticking to the string theory question, undoubtedly string theorists “care about empirical evidence” and would like to have some. The problem though is they don’t have any, and don’t have any significant prospects for getting any. As for being willing to change one’s theories in light of new evidence, if there’s no new evidence, your willingness to change your theory won’t ever get tested.

My impression is that most people, this author included, are just fundamentally unwilling to believe that, given the high scientific profile of “string theory”, it could really have a serious problem of being inherently untestable. The technical issues involved are so formidable that non-experts don’t have any hope of understanding them. But there really is a serious problem here, and those who worry about the string theory fiasco damaging the credibility of science in a dangerously post-truth world are right to be worried.

The problems with this article begin with the misleading subtitle: “Can a theory that isn’t completely testable still be useful to physics?” The problem here is not theories that aren’t “completely testable”, but theories that aren’t testable at all, that make no testable predictions at all.

The article starts out by discussing Popper and the supposed “falsifiability” criterion for what is and isn’t science, leading up to:

But where does this falsifiability requirement leave certain areas of theoretical physics? String theory, for example, involves physics on extremely small length scales unreachable by any foreseeable experiment. Cosmic inflation, a theory that explains much about the properties of the observable universe, may itself be untestable through direct observations. Some critics believe these theories are unfalsifiable and, for that reason, are of dubious scientific value.

Who are these “some critics”? Where do they say that the reason there is a problem with string theory is “unfalsifiability”? For the case of one critic I’m pretty familiar with, chapter 14 of his book is all about how “falsifiability” is not something that can be used to decide what is science and what isn’t.

We’re then told that:

At the same time, many physicists align with philosophers of science who identified flaws in Popper’s model, saying falsification is most useful in identifying blatant pseudoscience (the flat-Earth hypothesis, again) but relatively unimportant for judging theories growing out of established paradigms in science.

Unclear who “many physicists” are, who the “philosophers of science” are, and what flaw in Popper is being referred to.

In an odd move, the article then turns to the topic of SUSY, where the problem isn’t that well-advertised SUSY models (with electroweak scale SUSY breaking solving the “naturalness” problem) aren’t falsifiable, it’s that the LHC has falsified them. As usual in science, if your model gets falsified, instead of giving up and doing something else you can change your model to something less desirable that hasn’t been falsified (SUSY models with symmetry broken at higher energy scales) and keep on going. This is, though, what philosophers of science call a “degenerating research program”, which is not a good thing.

There’s more in the rest of the article, but actual critics remain invisible and their actual arguments unaddressed.

**Update**: Will Kinney has some appropriate comments.

**Update**: Massimo Pigliucci has posted here his contribution to the “Why Trust a Theory?” volume, which discusses “falsifiability” and the “String Wars”.

- I was sorry to hear today of the death on April 11 of Geoffrey Chew. Throughout the 1960s, Chew’s S-matrix/bootstrap philosophy was the dominant paradigm in high energy theory. It went into eclipse with the success of gauge theories in the early 1970s, but in recent years the (S-matrix) “amplitudes” program has to some degree revived it a bit, with hopes that it may be relevant to formulating quantum gravity.
- I thought the string wars were at times rather brutal, but it seems that they may have been a picnic compared to what astronomers get up to when there is a lot of money involved. See here for the bizarre story of what happened to Richard Easther when he started criticizing the plan for a New Zealand component of the Square Kilometer Array.
- For some recent and upcoming conference sites giving an idea of what is new in math and physics, Microsoft is hosting Physics Meets Machine Learning, the Eighth New England String Meeting had lots of interesting talks, hardly any strings to be seen, and MSRI last week hosted a “Hot Topics” workshop on Recent Progress in the Langlands Program.

For some news related to new books, there’s:

- Lee Smolin has a new book out, Einstein’s Unfinished Revolution, arguing that quantum mechanics is likely incomplete, since it continues to lack a successful “realist” version. He will be giving a public lecture about this at Perimeter tomorrow.
- John Baez advertises on Twitter a forthcoming volume about “New Spaces in Mathematics and Physics”. For some of the content, see here. Also, the original conference these articles are based on has videos here.
- I’m looking forward to seeing Graham Farmelo’s forthcoming The Universe Speaks in Numbers, about which I suspect there will be parts I’ll strongly agree with, others about which I’ll equally strongly disagree. The book evidently is based mainly on interviews, some of which Farmelo is putting up on his website. Jon Butterworth has a review this week in Nature, entitled A struggle for the soul of theoretical physics. He describes the Farmelo book as “a riposte” to critiques from a group I’m identified as being part of, but I have to keep pointing out that my point of view is not at all that the problem with string theory/supersymmetry has been “too much math”. I think progress in fundamental physics is going to require more mathematics, not less.
- There’s a new edition of the Kiritsis String theory in a Nutshell textbook available from Princeton. Looking at the introduction, I’m glad to see that Kiritsis points out the problem with the usual “string theory works, at the Planck scale” argument:

A big “hole” in string theory has been its perturbative (only) definition. With the advent of nonperturbative dualities, it was hoped that this shortcoming can be bypassed. Although the nonperturbative dualities have shed light in many obscure corners of string theory (obscured by strong-coupling physics), they never managed to bypass the Planck barrier. The Planck scale is always duality invariant, and any dual description is well defined for energies well below that Planck scale. We have no clue from string theory what happens near or above the Planck scale, as the relevant physics looks nonperturbative from any point of view.

I’ve added this to this FAQ entry.

The conference had its origins in a piece published a year earlier in Nature by George Ellis and Joe Silk, entitled Scientific method: Defend the integrity of physics. Ellis and Silk made a forceful case that widely advertised but inherently untestable string theory and multiverse research does damage to the public understanding of science and is a threat to the credibility of science at a time it is under attack. The piece suggested:

A conference should be convened next year to take the first steps. People from both sides of the testability debate must be involved.

Looking through the proceedings volume, there’s lots of abstract discussion of philosophy of science and some diversity of points of view on the multiverse. When it comes to string theory though, the organizers interpreted “people on both sides” to mean bringing in one person willing to point out that there is a problem with string theory, and an army of string theorists to defend the theory. On the issue of the problems of string theory, the volume contains nearly 100 pages of pro-string theory hype, from Polchinski (two contributions), Silverstein, Kane and Quevedo. As usual with Kane, there’s a string theory “prediction” of the gluino mass (1.5 TeV +/- 10-15%) which has already been falsified. All I could find on the side of substantive criticism of string theory was in Carlo Rovelli’s contribution (preprint version here), and mainly in a single paragraph:

String theory is a living proof of the dangers of excessive reliance on non-empirical arguments. It raised great expectations thirty years ago, promising to compute all the parameters of the Standard Model from first principles, to derive from first principles its symmetry group SU(3)×SU(2)×U(1) and the existence of its three families of elementary particles, to predict the sign and the value of the cosmological constant, to predict novel observable physics, to understand the ultimate fate of black holes, and to offer a unique, well-founded unified theory of everything. Nothing of this has come true. String theorists, instead, have predicted a negative cosmological constant, deviations from Newton’s 1/r^2 law at sub-millimeters scale, black holes at the European Organization for Nuclear Research(CERN), low-energy super-symmetric particles, and more. All this was false. Still, Joe Polchinski, a prominent string theorist, writes [7] that he evaluates the Bayesian probability of string to be correct at 98.5% (!). This is clearly nonsense.

I won’t spend more time here discussing the conference and the articles in this volume, mainly because I’ve already written a lot about this in previous posts. For a contemporaneous discussion of the conference and Polchinski’s String Theory to the Rescue paper, see here and here. There are also interesting blog posts about the conference from Massimo Pigliucci, see here, here and here, and a Quanta piece by Natalie Wolchover here. For a discussion of Sean Carroll’s Beyond Falsifiability contribution, see here (and discussion here and here). For a discussion of Eva Silverstein’s contribution, see here.

**Update**: A few more links to material about the Munich conference: Jim Baggott here and here, Andrew Gelman here, Davide Castelvecchi here, and the conference website (with videos) here.

**Update**: Looking at the Preface, I notice that the editors claim:

Additional contributions were solicited by the editors with the aim of ensuring as full and balanced presentation as possible of the various positions in the debate.

With regards to string theory, the one additional contribution in the volume is from string theorist Eva Silverstein, so evidently the editors felt that balance required yet more on the pro-string theory side….

**Update**: I mischaracterized Polchinski’s calculation of the probability that string theory is correct as 98.5%. More accurately, he claims that the probability is “over 3 sigma” (i.e. over 99.73%).

**Update**: I finally got around to watching the videos of the panel discussions at the workshop (all videos available here). What most struck me about these discussions was the heavily dominant role of David Gross, who was on two of the three panels, participating from the audience in the third. On the panels he was on, Gross spoke far more than anyone else, and rarely if ever would anyone disagree with him. Gross’s point of view is that there is a testability problem with the multiverse, but all is well with string theory (although probably not at Polchinski’s “over 99.73% sure to be true” level). He’s a powerful intellect and a forceful speaker, so it’s not surprising that no one would take him on. But on the topic of string theory I think there are very serious problems with many of the claims he makes (for his arguments of 15 years ago, see the first substantive post of this blog), and the organizers should have found someone willing to challenge him on those.