Who was “Not Even Wrong” first?

I recently heard from John Minkowski, whose father Jan Minkowski was a student of Pauli’s in the late 1940s. He asked if I knew the specific context of Pauli’s “Not Even Wrong” comment, and I told him I didn’t. I referred to this early blog post, which explains that Karl von Meyenn (editor of Pauli’s correspondence) had pointed me to a biographical memoir about Pauli by Rudolf Peierls which includes:

Quite recently, a friend showed him the paper of a young physicist which he suspected was not of great value but on which he wanted Pauli’s views. Pauli remarked sadly ‘It is not even wrong.’

Looking around for any more information about this, Wikipedia links to a 1992 letter to the editor at Physics Today from Peierls, which states

Wolfgang Pauli’s remark “Das ist nicht einmal falsch” (“That is not even wrong”) was made not as a comment on a seminar talk but as a reaction to a paper by a young theoretician, on which a colleague (I believe it was Sam Goudsmit) had invited Pauli’s opinion.

Google also turned up a translation of a talk by Peierls in this article by Mikhail Shifman, which includes:

Somebody showed to Pauli a work of a young theorist being well aware that the work was not too good but still willing to hear Pauli’s opinion. Pauli read the paper and said, with sadness: “It is not even wrong.”

Trying to guess what the article in question might have been, I’m tempted by the hypothesis that the discussion with Goudsmit was about Everett’s “Relative State” Formulation of Quantum Mechanics paper. The timing (“Quite recently”) would have been right, with the paper published in July 1957 and Pauli’s death following in December 1958. Goudsmit at the time was editor-in-chief at Physical Review, so he would have been interested in Pauli’s opinion of the paper.

Complicating this story, John Minkowski sent me some pages from his father’s 1991 book Through three wars: The memoirs of Jan Michael Minkowski, which included this (in a context describing his 1946-48 student days at ETH):

I remember a seminar in theoretical physics given by a visitor from another Swiss university. These seminars were presided over by Dr. Pauli, and after the speaker finished all eyes would turn to Pauli to pronounce the verdict in his commentary. This particular lecture was treated by Pauli with progressively faster twirling of his thumbs around and around one another and a growing benevolent smile. Bad sign, we thought. The more he smiled the more vicious he will be, we thought. And sure enough, he smiled some more and said “It isn’t even wrong.”

One possibility here is that Minkowski was mis-remembering something from forty years earlier, another is that the occasion that Peierls was referring to was not the first time Pauli had used the phrase. As evidence for the second hypothesis, see this interview with Konrad Bleuler, which points to the possibility of Stueckelberg as the “visitor from another Swiss university”:

So these seminars took place in a common seminar having also Professor Ernst Stueckelberg, then a Professor in Geneva, also Stueckelberg being a well-known theoretician, his work was very much, if I might remind you of that fact, acknowledged by Richard Feynman. For example, his idea of the particle going back in time being interpreted as an antiparticle came as far as I know originally from Stueckelberg and many other great ideas. I remember one special seminar in which, of course this seminar could be rather called. High Court, with scientific papers in the docket, sometimes really sentenced to death. From that one might record Pauli’s classification of scientific papers. There were two classes or else there were old and right. Or the other class, new and wrong. But hardly anything intermediate. If it was even worse, Pauli would have said “it’s not even wrong.” That was the kind of atmosphere. But all what is written in physics is either understood or else it’s thrown away, and not this half-and-half, what we see at present. But then in this connection it was a search for truth. And for Pauli, a lecture hall was a kind of a holy place where only truth was allowed. And a wrong statement was a sacrilege, and in that sense one should understand his rather extremely sharp remarks he might make to some lecturer who seemed not to present things in a quite logical way. But coming to that special, to another special seminar is the following: Stueckelberg always knew really special — I might say prophetic — ideas. He gave a lecture and of course Pauli — it happened very often — didn’t agree. And said “you are not allowed to say such things.” But you see, Stueckelberg being a prophet, he’s not so easily stopped uttering his prophecies. So Pauli in despair menaced Stueckelberg with a stick and it seemed — I was not present myself but I was told — that the seminar ended like the war of Troy, Pauli, rather corpulent, with his stick after Stueckelberg around the table in the lecture hall. That was the kind of attitude at this period.

I’m not sure what to make of all of this. Perhaps Pauli used the phrase both in the late 40s to criticize Stueckelberg (probably unfairly since many of Stueckelberg’s ideas were ahead of his time) and then Everett in the late 50s (in my opinion accurately, but I don’t want to start up the usual empty arguments about MWI here).

Posted in Uncategorized | 17 Comments

A Muon Collider?

The US particle physics community has been going through a multi-year process designed to culminate this fall in a 10-year strategic plan to be presented to the DOE and the NSF. In particular, this will generate a prioritized list of which projects to fund over this period. The process began with the Snowmass self-study, which concluded last year and is available here. Since last fall there have been two independent efforts going on:

  • A National Academies study has been holding meetings, materials available here.
  • A P5 (Particle Physics Project Prioritization Panel) is holding meetings, see here, and planning for a report to NSF and DOE by October.

Looking through all the materials relevant to particle theory, I see little acknowledgement of the serious problems faced by the subject, or any new ideas for how to address them. Most of the effort, though, is devoted to the experimental side, where most of the money will be spent. To a large degree, for the short term it’s clear where funding has to go (continuing to support the LHC into the HL-LHC era, and finishing the DUNE/LBNF US neutrino project). The longer term, however, is very uncertain, since it is unclear whether there’s a viable energy-frontier project that could study higher energies than those accessible at the LHC.

Last week EPP2024 and P5 held Town Hall events at Fermilab, see here and here. There’s video of the EPP2024 event here. On the question of the long-term future, one issue that is getting a lot of attention is that of whether to prioritize development of a possible muon collider. In this presentation a young physicist gives a future timeline including their likely retirement and death dates, showing that a muon collider is their only hope for new energy frontier physics during their lifetime. For those of my age the situation is a bit different, since even a muon collider is not going to do the job. At the EPP2024 event (3:28 in the video) Nima Arkani-Hamed makes the case that:

I think the subject has not been so exciting for many, many decades, and at the same time our ability to experimentally address and solidly settle some of these very big questions has never been more uncertain. I don’t think it’s a normal time, it’s an inflection point in the history of the development of our subject, and it requires urgency… The confluence of the technical expertise for doing so and the enthusiasm amongst the young people who are willing to do it exists now and I very much doubt it will exist in 10 or 15 years from now. If we are going to do it, we have to start thinking about doing it now.

While his point is more general, he’s clearly making the case for starting a new energy frontier machine project soon, with the muon collider the one possibility for getting to higher energies than the LHC.

A few weeks ago there was a workshop at the KITP devoted to the muon collider question, with a news story here (anyone know why the video of the panel discussion is password-protected?). Arkani-Hamed gave a talk aimed at other physicists here. On the European front, a couple days ago there was this meeting.

Already twenty years ago, when I was writing Not Even Wrong, it was clear that a muon collider was in principle a very attractive way to get to higher energies, and I wrote about this in the first chapter of the book. Because the muon is so much heavier than the electron, you don’t have the same synchrotron energy loss problem, so you can build a much smaller storage ring at the same energy, or get to much higher energies with the same size ring. The problem, though, is that muons have a lifetime of only 2.2 microseconds. This implies two serious difficulties:

  • You need to produce, store, accelerate and collide the muons in a very short period of time.
  • As the muons decay they’ll produce large numbers of high energy electrons and neutrinos, creating a difficult environment for detectors to operate in and significant radiation hazards.

Normally one thinks of neutrinos as virtually never interacting with anything, but the numbers and high energies of the neutrinos produced at a muon collider create a potentially significant radiation hazard, one that cannot be dealt with by shielding.
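To get a rough feel for the numbers involved here, a back-of-the-envelope sketch (the 5 TeV beam energy is just an illustrative choice, not any particular proposal): the relativistic dilation of the 2.2 microsecond lifetime, the resulting decay length, and the $(m_\mu/m_e)^4$ suppression of synchrotron radiation at fixed energy and ring radius.

```python
# Rough numbers only: a hedged back-of-the-envelope sketch, not a design study.
# The 5 TeV beam energy below is an illustrative choice, not a proposed machine.

m_mu_GeV = 0.1057        # muon mass
m_e_GeV = 0.000511       # electron mass
tau_mu_s = 2.2e-6        # muon lifetime at rest
c = 3.0e8                # speed of light, m/s

E_beam_GeV = 5000.0      # illustrative 5 TeV muon beam
gamma = E_beam_GeV / m_mu_GeV

# Time dilation stretches the lifetime a lot at TeV energies, but the muons
# still have to be produced, cooled, accelerated and collided very quickly.
tau_lab_s = gamma * tau_mu_s
decay_length_km = tau_lab_s * c / 1000.0
print(f"gamma ~ {gamma:.0f}, lab-frame lifetime ~ {tau_lab_s*1e3:.1f} ms")
print(f"mean decay length ~ {decay_length_km:.0f} km")

# Synchrotron energy loss per turn scales like 1/m^4 at fixed energy and radius,
# which is why the electron's problem largely goes away for muons.
suppression = (m_mu_GeV / m_e_GeV) ** 4
print(f"synchrotron loss suppressed by ~ {suppression:.1e} relative to electrons")
```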

While I might not be around to see the results from a muon collider, if such a thing is viable I’d strongly support the project, and would even buy a t-shirt. The US DOE HEP budget is about a billion dollars/year. One would think this should be enough to accommodate building demonstrator projects or a small collider ring on a 10-year timescale, and possibly even an energy-frontier ring on a timescale of 20 or more years. What worries me a bit is that more visible progress hasn’t happened since I looked into this 20 years ago. Why is there no demonstrator project underway? Have solutions been found for the potential radiation hazards? I’d be very curious to hear from anyone with expertise on these questions.

Posted in Uncategorized | 11 Comments

Not Quite What Happened

Quanta has an article out today about the wormhole publicity stunt, which sticks to the story that by doing a simple SYK model calculation on a quantum computer instead of a classical computer, one is doing quantum gravity in the lab, producing a traversable wormhole and sending information through it. From what I’ve heard, the consensus among theorists is that the earlier Quanta article and video were nonsense, outrageously overhyping a simulation and then bizarrely identifying a simulation with reality if it’s done on a quantum computer.

The new article is just about as hype-laden, starting off with:

A holographic wormhole would scramble information in one place and reassemble it in another. The process is not unlike watching a butterfly being torn apart by a hurricane in Houston, only to see an identical butterfly pop out of a typhoon in Tokyo.

and

In January 2022, a small team of physicists watched breathlessly as data streamed out of Google’s quantum computer, Sycamore. A sharp peak indicated that their experiment had succeeded. They had mixed one unit of quantum information into what amounted to a wispy cloud of particles and watched it emerge from a linked cloud. It was like seeing an egg scramble itself in one bowl and unscramble itself in another.

In several key ways, the event closely resembled a familiar movie scenario: a spacecraft enters one black hole — apparently going to its doom — only to pop out of another black hole somewhere else entirely. Wormholes, as these theoretical pathways are called, are a quintessentially gravitational phenomenon. There were theoretical reasons to believe that the qubit had traveled through a quantum system behaving exactly like a wormhole — a so-called holographic wormhole — and that’s what the researchers concluded.

An embarrassing development provides the ostensible reason for the new article, the news that “another group suggests that’s not quite what happened”. This refers to this preprint, which argues that the way the Jafferis-Lykken-Spiropulu group dramatically simplified the calculation to make it doable on a quantum computer threw out the baby with the bathwater, so was not meaningful. The new Quanta piece has no quotes from experts about the details of what’s at issue. All one finds is the news that the preprint has been submitted to Nature and that

the Jafferis, Lykken and Spiropulu group will likely have a chance to respond.

There’s also an odd piece of identity-free and detail-free reporting that

five independent experts familiar with holography consulted for this article agreed that the new analysis seriously challenges the experiment’s gravitational interpretation.

I take all this to mean that the author couldn’t find anyone willing to say anything in defense of the Nature article. An interesting question this raises: if all experts agree the Nature article was wrong, will it be retracted? Will the retraction also be a cover story?

The update of the original story is framed by enthusiastic and detailed coverage of the work of Hrant Gharibyan on similar wormhole calculations. The theme is that while Jafferis-Lykken-Spiropulu may have hit a bump in the road, claiming to be doing “quantum gravity in the lab” by SYK model calculations on quantum computers is the way forward for fundamental theoretical physics:

The holographic future may not be here yet. But physicists in the field still believe it’s coming, and they say that they’re learning important lessons from the Sycamore experiment and the ensuing discussion.

First, they expect that showing successful gravitational teleportation won’t be as cut and dry as checking the box of perfect size winding. At the very least, future experiments will also need to prove that their models preserve the chaotic scrambling of gravity and pass other tests, as physicists will want to make sure they’re working with a real Category 5 qubit hurricane and not just a leaf blower. And getting closer to the ideal benchmark of triple-digit numbers of particles on each side will make a more convincing case that the experiment is working with billowing clouds and not questionably thin vapors.

No one expects today’s rudimentary quantum computers to be up to the challenge of the punishingly long Hamiltonians required to simulate the real deal. But now is the time to start chiseling away at them bit by bit, Gharibyan believes, in preparation for the arrival of more capable machines. He expects that some might try machine learning again, this time perhaps rewarding the algorithm when it returns chaotically scrambling, non-commuting Hamiltonians and penalizing it when it doesn’t. Of the resulting models, any that still have perfect size winding and pass other checks will become the benchmark models to drive the development of new quantum hardware.

If quantum computers grow while holographic Hamiltonians shrink, perhaps they will someday meet in the middle. Then physicists will be able to run experiments in the lab that reveal the incalculable behavior of their favorite models of quantum gravity.

“I’m optimistic about where this is going,” Gharibyan said.

I had thought that perhaps this fiasco would cause the Quanta editors to think twice, talk to skeptical experts, and re-report the original credulous story/video. Instead, it looks like their plan is to double down on the “quantum gravity in the lab” hype.

Update: Two more related pieces of wormhole news.

  • On Friday Harvard will be hosting a talk on the non-wormhole.
  • In this preprint Maldacena argues for another example of how to do quantum gravity in the lab, by doing a QM calculation on a quantum computer that will “have created something that behaves as a black hole in the laboratory” (no wormholes, just black holes). The calculation he suggests involves not the newer SYK model, but the ancient BFSS matrix model from 27 years ago, which at the time got a lot of attention as a possible definition of M-theory.

Update: The Harvard CMSA talk about the wormholes is available here. I didn’t see anything in the slides about the Yao et al. criticism of this work. In the last minute of the video there was a question about this, and some reference to the criticism having been addressed during the talk. Supposedly there was some quick verbal summary of this response to the criticism in this last minute, but the sound was so garbled I couldn’t understand it. Here’s the automatically generated transcript:

1:16:50
so I guess I guess um we’re talking about like at the time of interpretation you do see this
1:16:56
operating ghost in kind of declare the two-point function if you’re looking for at later times you can ask about
1:17:01
different kind of scenarios one is accepting the single-sided systems what it’s doing it’s like internal reversible
1:17:07
verbal hamiltonian and you see thermalizing Dynamics in the library
1:17:12
um perhaps also the size winding uh although it’s not necessarily required
1:17:18
for all of your fermions to show size winding because you have done gravitational attractions in your model we do see impact that all the pronouns
1:17:26
have quite good size winding they’re good enough to allow them to teleport to size binding but the time and size
1:17:31
binding is clearly related to like the the rate of Decay the two-point function and so it seems to actually lend itself
1:17:38
to an even tighter kind of interpretation where would you associate different masses through different
1:17:44
permeons and this is quite consistent that is

Someone with more patience and interest in this perhaps can carefully follow the talk and report what the response to the Yao et al. criticism actually was.

Update: A response by the original authors to Yao et al. has been posted as “Comment on “Comment on “Traversable wormhole dynamics on a quantum processor” ” “. From the abstract, the claim seems to be that the results of the toy model calculation are “consistent with a gravitational interpretation of the teleportation dynamics, as opposed to the late-time dynamics”, and that this is not in conflict with the objections by Yao et al. These objections are described as “counterfactual scenarios outside of the experimentally implemented protocol.” The odd thing here is the description of the quantum computer calculation as a “factual” experimental result, part of an “experimentally implemented protocol”. The quantum computer calculation was not an experiment but a calculation, with a known-in-advance result (the calculation done previously on a classical computer). The criticisms of Yao et al. aren’t “counterfactual” to an experimental protocol, but challenging the interpretation of a calculation. As far as I can tell, this whole discussion is about how to interpret simple calculations you can do on any conventional computer, nothing to do with an “experiment”.

Posted in Wormhole Publicity Stunts | 21 Comments

Lost in the Landscape

A commenter in the previous posting pointed to an interview with Lenny Susskind that just appeared at the CERN Courier, under the title Lost in the Landscape. Some things I found noteworthy:

  • He deals with the lack of any current definition of what string theory means by distinguishing between “String theory” and “string theory”. “String theory” is the superstring in 10 dimensions somehow compactified to have some large dimensions that are either flat or AdS. This can’t be the real world

    I can tell you with 100% confidence that we don’t live in that world.

    since the real world is non-supersymmetric and dS, not supersymmetric and AdS. He describes this theory as being “a very precise mathematical structure”, which one might argue with.

    Something very different is “string theory”:

    you might call it string-inspired theory, or think of it as expanding the boundaries of this very precise theory in ways that we don’t know how to at present. We don’t know with any precision how to expand the boundaries into non-supersymmetric string theory or de Sitter space, for example, so we make guesses. The string landscape is one such guess…

    The first primary fact is that the world is not exactly supersymmetric and string theory with a capital S is. So where are we? Who knows! But it’s exciting to be in a situation where there is confusion.

  • About anthropics and the landscape, he still thinks this is the best idea out there, but acknowledges it has gone nowhere in twenty years:

    Witten, who had negative thoughts about the anthropic idea, eventually gave up and accepted that it seems to be the best possibility. And I think that’s probably true for a lot of other people. But it can’t have the ultimate influence that a real theory with quantitative predictions can have. At present it’s a set of ideas that fit together and are somewhat compelling, but unfortunately nobody really knows how to use this in a technical way to be able to precisely confirm it. That hasn’t changed in 20 years. In the meantime, theoretical physicists have gone off in the important direction of quantum gravity and holography.

  • About the swampland, like everyone else I know, he can’t figure out what the argument is that is going to relate it to the real world:

    The argument seems to be: let’s put a constraint on parameters in cosmology so that we can put de Sitter space in the swampland. But the world looks very much like de Sitter space, so I don’t understand the argument and I suspect people are wrong here.

  • His comments on Technicolor strike me as odd:

    I had one big negative surprise, as did much of the community. This was a while ago when the idea of “technicolour” – a dynamical way to break electroweak symmetry via new gauge interactions – turned out to be wrong. Everybody I knew was absolutely convinced that technicolour was right, and it wasn’t. I was surprised and shocked.

    I remember first hearing about the Technicolor idea around 1979, when Susskind and Weinberg wrote about it. It was a very attractive idea by itself, but the problem was that to match known flavor physics you needed to go to “Extended Technicolor”, which was really ugly (lots of new degrees of freedom, no predictivity). I have no idea when people supposedly were “absolutely convinced that technicolour was right”; maybe it was for the few months it took them to realize that Extended Technicolor was needed.

  • About the wormholes, he says:

    One extremely interesting idea is “quantum gravity in the lab” – the idea that it is possible to construct systems, for example a large sphere of material engineered to support surface excitations that look like conformal field theory, and then to see if that system describes a bulk world with gravity. There are already signs that this is true. For example, the recent claim, involving Google, that two entangled quantum computers have been used to send information through the analogue of a wormhole shows how the methods of gravity can influence the way quantum communication is viewed. It’s a sign that quantum mechanics and gravity are not so different.

    Unclear to me how this enthusiastic reference to the wormholes relates to his much less enthusiastic recent quote in New Scientist:

    What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best).

Posted in Multiverse Mania, Swampland | 20 Comments

Yet More on the Wormholes

The paper explaining that this Nature cover story, besides being a publicity stunt, was also completely wrong, has so far attracted very little media attention. The first thing I’ve seen came out today at New Scientist, a publication often accused of promoting hype, but in this case so far the only one reporting problems with the hyped result. The title of the article is Google’s quantum computer simulation of a wormhole may not have worked. It contains an explanation of the technical problems:

The first problem has to do with how the simulated wormhole reacted to the signals being sent through it….Yao and his colleagues found that for each individual test, the system continued to oscillate indefinitely, which doesn’t match the expected behaviour of a wormhole.

The second issue was related to the signals themselves. One of the signatures of a real wormhole – and therefore of a good holographic representation of a wormhole – is that the signal comes out looking the same as it went in. Yao and his team found that while this worked for some signals – those similar to the ones the researchers used to train a machine learning algorithm used to simplify the system – it didn’t work for others.

…it seems that for this particular quantum system, the size winding would disappear if the model was made larger or more detailed. Therefore, the perfect size winding observed by the original authors may just be a relic of the model’s small size and simplicity.

There is a response from Maria Spiropulu:

“The authors of the comment argue about the many-body properties of the individual decoupled quantum systems of our model,” she says. “We observed features of the coupled systems consistent with traversable wormhole teleportation.”

Remarkably, Lenny Susskind throws the authors of the stunt under the bus:

“What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best),” he says.

Posted in Wormhole Publicity Stunts | 13 Comments

Physics With Witten

I just noticed that last semester Edward Witten was teaching Physics 539 at Princeton, a graduate topics course. Since he’s now past the age of 70, at the IAS he is officially retired and an emeritus professor (the IAS is the only place I know of in the US with retirement at 70, presumably because it is a non-teaching institution). I don’t know whether Witten has taught other courses at the university since his move to the IAS in 1987.

Videos of the first few lectures are on YouTube here, problem sets on this web-page. It seems the course started out covering issues with causality in general relativity, following these lecture notes, then later moved on to topics in quantum information theory.

Posted in Uncategorized | 6 Comments

Some Interviews

Some interviews that readers of this blog may find of interest:

Posted in Uncategorized | 9 Comments

Latest on the Wormholes

I had thought that the wormhole story had reached peak absurdity back in December, but last night some commenters pointed to a new development: the technical calculation used in the publicity stunt was nonsense, not giving what was claimed. The paper explaining this is Comment on “Traversable wormhole dynamics on a quantum processor”, from a group led by Norman Yao. Yao is a leading expert on this kind of thing, recently hired by Harvard as a full professor. There’s no mention in the paper about any conversations he might have had with the main theorist responsible for the publicity stunt, his Harvard colleague Daniel Jafferis.

Tonight Fermilab is still planning a big public event to promote the wormhole, with no news yet on whether it’s going to get cancelled. Also, no news from Quanta magazine, which up until now has shown no sign of understanding the extent to which it was taken in by this. Finally, no news from Nature about whether the paper will be retracted, and whether the retraction will be a cover story with a cool computer graphic of a non-wormhole.

Update: Dan Garisto goes through the Jafferis et al. paper, noting “Turns out it looked good only because they used an average (a fact not specified in the article).” and ending with

The unreported averages for the thermalization and teleportation signal make a stronger case for misconduct on the part of the authors.

I don’t understand why Fermilab was planning a public lecture promoting this, and with what has now come out, it should clearly be cancelled.

Update: I like the suggestion from Andreas Karch

Quanta magazine could make a video where the wormhole authors share in vivid detail the excitement they felt when they realized that their paper isn’t just overhyped but actually wrong.

Update: Garisto has a correction, explaining that the averaging is not the problem with Jafferis et al., but rather that the teleportation signal is only there for the pair of operators involved in the machine learning training, not for other pairs of operators that should demonstrate the effect. In any case, best to consult the paper itself. If Jafferis et al. disagree with its conclusions, surely we’ll see an explanation from them soon.

Update: The Harvard Gazette promotes the wormhole publicity stunt, with “Daniel Jafferis’ team has for the first time conducted an experiment based in current quantum computing to understand wormhole dynamics.” As far as I can tell, that’s utter nonsense, with the result of the quantum computer calculation adding zero to our understanding of “wormhole dynamics”.

Update: Video of the Lykken talk now available, advertised by FNAL as Wormholes in the Laboratory.

Posted in Wormhole Publicity Stunts | 10 Comments

The Trouble With Path Integrals, Part II

This posting is about the problems with the idea that you can simply formulate quantum mechanical systems by picking a configuration space, an action functional S on paths in this space, and evaluating path integrals of the form
$$\int_{\text{paths}}e^{iS[\text{path}]}$$

Necessity of imaginary time

This section has been changed to fix the original mistaken version.
If one tries to do this path integral for even the simplest possible quantum field theory case (a non-relativistic free particle in one space dimension), the answer for the propagator in energy-momentum space is
$$G(E,p)=\frac {1}{E-\frac{p^2}{2m}}$$
Fourier transforming to real time is ill-defined (the integration contour goes through the pole at $E=\frac{p^2}{2m}$). Taking $t$ complex and in the upper half plane, for imaginary $t$ the Fourier transform is a well-defined integral. One then gets the real-time propagator by analytic continuation, as a boundary value. For a relativistic theory one has
$$G(E,p)=\frac{1}{E^2-(p^2+m^2)}$$
and two poles (at $E=\pm \sqrt{p^2+m^2}$) to deal with. Again, Fourier transforming to real time is ill-defined, but one can Fourier transform to imaginary time, then use this to get a sensible real-time propagator by analytic continuation.
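To spell out the imaginary-time statement in the relativistic case (standard conventions, up to overall signs; a worked example added here, not something from the original post): setting $E=i\omega$ gives the Euclidean propagator $\frac{1}{\omega^2+p^2+m^2}$, whose Fourier transform in $\omega$ is a convergent integral with no poles on the real axis,
$$G_E(\tau,p)=\int \frac{d\omega}{2\pi}\,\frac{e^{i\omega\tau}}{\omega^2+p^2+m^2}=\frac{e^{-\sqrt{p^2+m^2}\,|\tau|}}{2\sqrt{p^2+m^2}},$$
and the real-time propagator is recovered as the boundary value under $\tau \to it$, which is the usual $i\epsilon$ prescription in another guise.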

Trying to do the same thing for Yang-Mills theory, one again gets something ill-defined for real time, with the added disadvantage that there is no way to actually calculate it. Going to imaginary time and discretizing gives a version of lattice gauge theory, with well-defined integrals for fixed lattice spacing. This is conjectured to have a well-defined limit as the lattice spacing is taken to zero.
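To see how the imaginary-time, discretized integrals become well-defined and computable, here is a minimal sketch in quantum mechanics rather than Yang-Mills: a Metropolis Monte Carlo evaluation of the discretized Euclidean path integral for a harmonic oscillator. The parameters, lattice size and observable are all illustrative choices.

```python
import numpy as np

# Minimal sketch: the discretized Euclidean path integral of a 1D harmonic
# oscillator, sampled by Metropolis Monte Carlo (hbar = 1, periodic time).
rng = np.random.default_rng(0)

m, omega = 1.0, 1.0          # mass and frequency
N, a = 64, 0.5               # number of time slices, lattice spacing
n_sweeps, delta = 20000, 1.0 # Monte Carlo sweeps, proposal step size

x = np.zeros(N)              # the Euclidean-time path x(tau)

def local_action(x, i):
    """Pieces of the discretized Euclidean action that involve site i."""
    xp, xm = x[(i + 1) % N], x[(i - 1) % N]
    kinetic = m * ((x[i] - xp) ** 2 + (x[i] - xm) ** 2) / (2 * a)
    potential = a * 0.5 * m * omega**2 * x[i] ** 2
    return kinetic + potential

x2_samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        old, s_old = x[i], local_action(x, i)
        x[i] = old + delta * (2 * rng.random() - 1)   # propose a local update
        if rng.random() >= np.exp(-(local_action(x, i) - s_old)):
            x[i] = old                                # reject: restore old value
    if sweep > 1000:                                  # crude thermalization cut
        x2_samples.append(np.mean(x**2))

# Continuum ground-state value of <x^2> is 1/(2 m omega) = 0.5; the lattice
# estimate approaches it as the spacing a is taken to zero.
print("lattice <x^2> =", np.mean(x2_samples))
```

The same strategy (a positive weight $e^{-S_E}$, local updates, a continuum limit taken by shrinking the lattice spacing) is what lattice gauge theory implements for Yang-Mills, with group-valued link variables in place of the $x_i$.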

Not an integral and not needed for fermions

Actual fundamental matter particles are fermions, with an action functional that is quadratic in the fermion fields. For these there’s a “path integral”, but it’s in no sense an actual integral; rather, it’s an interesting algebraic gadget. Since the action functional is quadratic, you can explicitly evaluate it and just work with the answer the algebraic gadget gives you. You can formulate this story as an analog of an actual path integral, but it’s unclear what the analogy gets you.
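To make the “algebraic gadget” concrete (this is just standard Berezin integration with the usual conventions): for Grassmann variables the “integral” is defined purely algebraically, and for a quadratic action it evaluates to a determinant,
$$\int \prod_k d\bar\psi_k\, d\psi_k\; e^{-\bar\psi A \psi}=\det A,$$
in contrast to the bosonic Gaussian integral, which gives $(\det A)^{-1}$ up to normalization. Since this is the whole answer, one can simply carry the determinant (or the corresponding propagator) around instead of the “integral” itself.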

Phase space path integrals don’t make sense in general

Another aspect of the fermion action is that it has only one time derivative. For actions of this kind, bosonic or fermionic, the variables are not configuration space variables but phase space variables. For a linear phase space and quadratic action you can figure out what to do, but for non-linear phase spaces or non-quadratic actions, in general it is not clear how to make any sense of the path integral, even in imaginary time.
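The prototype of an action with only one time derivative is the first-order (phase space) form of the ordinary mechanics action, written down here just to fix what “phase space variables” means:
$$S[q,p]=\int \left(p\,\dot q-H(q,p)\right)dt,$$
with $q$ and $p$ treated as independent variables in the “path integral”. The Dirac action is of the same first-order type in its time derivatives.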

In general this is a rather complicated story (see some background in the part I post). For an interesting recent take on the phase-space path integral, see Witten’s A New Look At The Path Integral Of Quantum Mechanics.

Update: A commenter pointed me to this very interesting talk by Neil Turok. The main motivation Turok explains at the beginning of the talk (and also in the Q and A afterwards) is exactly one that I share. He argues that the lesson of the last 40 years is that one should not try to solve problems by making the Standard Model more complicated. All one needs to do is look more closely at the Standard Model itself and its foundations. If you do that, one thing you find is that there’s a “trouble with path integrals”. In Turok’s words, the problems with the path integral indicate that “the field is without foundations” and “nobody knows what they are doing”.

I do, though, very much part company with him over the direction he takes to try to get better foundations. He argues that you shouldn’t Wick rotate (analytically continue in time), but should instead complexify paths, analytically continuing in path space. For some problems the latter may be a better idea than the former, and in his talk he works out a toy QM calculation of this kind. But the model he studies (the anharmonic oscillator) doesn’t at all show that going to the imaginary-time theory is a bad idea; for some calculations that works very well. He’s motivated by defining the path integral for gravity, where Euclidean quantum gravity is a problematic subject, but I think the gravitational version of the toy model will also be problematic. The ideas I’ve been pursuing, involving the way the symmetries of spinors behave in Euclidean signature, I think give a promising new way to think about this, and you won’t get that from just trying to complexify the conventional variables used to describe geometries.

Posted in Uncategorized | 30 Comments

The Trouble With Path Integrals, Part I

Two things recently made me think I should write something about path integrals: Quanta magazine has a new article out entitled How Our Reality May Be a Sum of All Possible Realities, and Tony Zee has a new book out, Quantum Field Theory, as Simply as Possible (you may be affiliated with an institution that can get access here). Zee’s book is a worthy attempt to explain QFT intuitively without equations, but here I want to write about what it shares with the Quanta article (see chapter II.3): the idea that QM or QFT can best be defined and understood in terms of the integral
$$\int_{\text{paths}}e^{iS[\text{path}]}$$
where S is the action functional. This is simple and intuitively appealing. It also seems to fit well with the idea that QM is a “many-worlds” theory, in which one considers all possible histories. Both the Quanta article and the Zee book do clarify that this fit is illusory, since the sum is over complex amplitudes, not a probability density for paths.
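An elementary two-path example (added here, not from the article or the book) makes the point about complex amplitudes concrete:
$$\left|e^{iS_1}+e^{iS_2}\right|^2=2+2\cos (S_1-S_2),$$
so contributions from different “histories” interfere, and cannot be read as probabilities of independently realized alternatives.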

This posting will be split into two parts. The first is an explanation of the context: what I’ve learned about path integrals over the years. If you’re not interested in that, you can skip to part II, which will list and give a technical explanation of some of the problems with path integrals.

I started out my career deeply in thrall to the idea that the path integral was the correct way to formulate quantum mechanics and quantum field theory. The first quantum field theory course I took was taught by Roy Glauber, and involved baffling calculations using annihilation and creation operators. At the same time I was trying to learn about gauge theory and finding that sources like the 1975 Les Houches Summer School volume or Coleman’s 1973 Erice lectures gave a conceptually much simpler formulation of QFT using path integrals. The next year I sat in on Coleman’s version of the QFT course, which did bring in the path integral formalism, although only part-way through the course. This left me with the conclusion that path integrals were the modern, powerful way of thinking, Glauber was just hopelessly out of touch, and Coleman didn’t start with them from the beginning because he was still partially attached to the out-of-date ways of thinking of his youth.

Over the next few years, my favorite QFT book was Pierre Ramond’s Field Theory: A Modern Primer. It was (and remains) a wonderfully concise and clear treatment of modern quantum field theory, starting with the path integral from the beginning. In graduate school, my thesis research was based on computer calculations of path integrals for Yang-Mills theory, with the integrals done by Monte-Carlo methods. Spending a lot of time with such numerical computations further entrenched my conviction that the path integral formulation of QM or QFT was completely essential. This stayed with me through my days as a postdoc in physics, as well as when I started spending more time in the math community.

My first indication that there could be some trouble with path integrals came, I believe, around 1988, when I learned of Witten’s revolutionary work on Chern-Simons theory. This theory was defined as a very simple path integral, a path integral over connections with the Chern-Simons functional as action. What Witten was saying was that you could get revolutionary results in three-dimensional topology simply by calculating the path integral
$$\int_{\mathcal A} e^{iCS[A]}$$
where the integration is over the space of connections A on a principal bundle over some 3-manifold. During my graduate student days and as a postdoc I had spent a lot of time thinking about the Chern-Simons functional (see unpublished paper here). If I could find a usable lattice gauge theory version of CS[A] (I never did…), that would give a way of defining the local topological charge density in the four-dimensional Yang-Mills theory I was working with. Witten’s new quantum field theory immediately brought this problem back to mind. If you could solve it, you would have a well-defined discretized version of the theory, expressed as a finite-dimensional version of the path integral, and then all you had to do was evaluate the integral and take the continuum limit.

Of course this would actually be impractical. Even if you solved the problem of discretizing the CS functional, you’d have a high dimensional integral over phases to do, with the dimension going to infinity in the limit. Monte-Carlo methods depend on the integrand being positive, so won’t work for complex phases. It is easy though to come up with some much simpler toy-model analogs of the problem. Consider for example the following quantum mechanical path integral
$$\int_{\text {closed paths on}\ S^2} e^{i\frac{1}{2}\oint A}$$
Here $S^2$ is a sphere of radius 1, and A is locally a 1-form such that dA is the area 2-form on the sphere. You could think of A as the vector potential for a monopole field, where the monopole was inside the sphere.
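One standard consistency check on this toy model (the usual Dirac/Wu-Yang argument, a gloss added here rather than anything from the original post): for a closed path $\gamma$ bounding a region $D$,
$$\frac{1}{2}\oint_\gamma A=\frac{1}{2}\int_D dA=\frac{1}{2}\,\mathrm{Area}(D),$$
and the two choices of bounding region differ by the total area $4\pi$ of the unit sphere, so the weight $e^{\frac{i}{2}\oint A}$ is unambiguous precisely because $\frac{1}{2}\cdot 4\pi=2\pi$. This is the same integrality condition that, in geometric quantization, attaches the spin-$\frac{1}{2}$ representation to this orbit.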

If you think about this toy model, which looks like a nice simple version of a path integral, you realize that it’s very unclear how to make any sense of it. If you discretize, there’s nothing at all damping out contributions from paths for which position at time $t$ is nowhere near position at time $t+\delta t$. It turns out that since the “action” only has one time derivative, the paths are moving in phase space not configuration space. The sphere is a sort of phase space, and “phase space path integrals” have well-known pathologies. The Chern-Simons path integral is of a similar nature and should have similar problems.

I spent a lot of time thinking about this; one thing I wrote early on (1989) is available here. You get an interesting analog of the sphere toy model for any co-adjoint orbit of a Lie group G, with a path integral that should correspond to a quantum theory whose state space is the representation of G that the orbit philosophy associates to that orbit. One such path integral that looks like it should make sense is the path integral for a supersymmetric quantum mechanics system that computes the index of a Dirac operator. Lots of people were studying such things during the 1980s and early 90s, not so much more recently. I’d guess that a sensible Chern-Simons path integral will need some fermionic variables and something like the Dirac operator story (in the closest analog of the toy model, you’re looking at paths moving in a moduli space of flat connections).

Over the years my attention has moved on to other things, guided by the point of view that representation theory is central to quantum mechanics. To truly play a role as a fundamental formulation of quantum mechanics, the path integral needs to find its place in this context. There’s a lot more going on than just picking an action functional and writing down
$$\int_{\text{paths}}e^{iS[\text{path}]}$$

Posted in Uncategorized | 8 Comments