Not Quite What Happened

Quanta has an article out today about the wormhole publicity stunt, which sticks to the story that by doing a simple SYK model calculation on a quantum computer instead of a classical computer, one is doing quantum gravity in the lab, producing a traversable wormhole and sending information through it. From what I’ve heard, the consensus among theorists is that the earlier Quanta article and video were nonsense, outrageously overhyping a simulation and then bizarrely identifying a simulation with reality if it’s done on a quantum computer.

The new article is just about as hype-laden, starting off with:

A holographic wormhole would scramble information in one place and reassemble it in another. The process is not unlike watching a butterfly being torn apart by a hurricane in Houston, only to see an identical butterfly pop out of a typhoon in Tokyo.


In January 2022, a small team of physicists watched breathlessly as data streamed out of Google’s quantum computer, Sycamore. A sharp peak indicated that their experiment had succeeded. They had mixed one unit of quantum information into what amounted to a wispy cloud of particles and watched it emerge from a linked cloud. It was like seeing an egg scramble itself in one bowl and unscramble itself in another.

In several key ways, the event closely resembled a familiar movie scenario: a spacecraft enters one black hole — apparently going to its doom — only to pop out of another black hole somewhere else entirely. Wormholes, as these theoretical pathways are called, are a quintessentially gravitational phenomenon. There were theoretical reasons to believe that the qubit had traveled through a quantum system behaving exactly like a wormhole — a so-called holographic wormhole — and that’s what the researchers concluded.

An embarrassing development provides the ostensible reason for the new article, the news that “another group suggests that’s not quite what happened”. This refers to this preprint, which argues that the way the Jafferis-Lykken-Spiropulu group dramatically simplified the calculation to make it doable on a quantum computer threw out the baby with the bathwater, so the result was not meaningful. The new Quanta piece has no quotes from experts about the details of what’s at issue. All one finds is the news that the preprint has been submitted to Nature and that

the Jafferis, Lykken and Spiropulu group will likely have a chance to respond.

There’s also an odd piece of identity-free and detail-free reporting that

five independent experts familiar with holography consulted for this article agreed that the new analysis seriously challenges the experiment’s gravitational interpretation.

I take all this to mean that the author couldn’t find anyone willing to say anything in defense of the Nature article. An interesting question this raises: if all experts agree that the Nature article was wrong, will it be retracted? Will the retraction also be a cover story?

The update of the original story is framed by enthusiastic and detailed coverage of the work of Hrant Gharibyan on similar wormhole calculations. The theme is that while Jafferis-Lykken-Spiropulu may have hit a bump in the road, claiming to be doing “quantum gravity in the lab” by SYK model calculations on quantum computers is the way forward for fundamental theoretical physics:

The holographic future may not be here yet. But physicists in the field still believe it’s coming, and they say that they’re learning important lessons from the Sycamore experiment and the ensuing discussion.

First, they expect that showing successful gravitational teleportation won’t be as cut and dry as checking the box of perfect size winding. At the very least, future experiments will also need to prove that their models preserve the chaotic scrambling of gravity and pass other tests, as physicists will want to make sure they’re working with a real Category 5 qubit hurricane and not just a leaf blower. And getting closer to the ideal benchmark of triple-digit numbers of particles on each side will make a more convincing case that the experiment is working with billowing clouds and not questionably thin vapors.

No one expects today’s rudimentary quantum computers to be up to the challenge of the punishingly long Hamiltonians required to simulate the real deal. But now is the time to start chiseling away at them bit by bit, Gharibyan believes, in preparation for the arrival of more capable machines. He expects that some might try machine learning again, this time perhaps rewarding the algorithm when it returns chaotically scrambling, non-commuting Hamiltonians and penalizing it when it doesn’t. Of the resulting models, any that still have perfect size winding and pass other checks will become the benchmark models to drive the development of new quantum hardware.

If quantum computers grow while holographic Hamiltonians shrink, perhaps they will someday meet in the middle. Then physicists will be able to run experiments in the lab that reveal the incalculable behavior of their favorite models of quantum gravity.

“I’m optimistic about where this is going,” Gharibyan said.

I had thought that perhaps this fiasco would cause the Quanta editors to think twice, talk to skeptical experts, and re-report the original credulous story/video. Instead, it looks like their plan is to double down on the “quantum gravity in the lab” hype.

Update: Two more related pieces of wormhole news.

  • On Friday Harvard will be hosting a talk on the non-wormhole.
  • In this preprint Maldacena argues for another example of how to do quantum gravity in the lab, by doing a QM calculation on a quantum computer that will “have created something that behaves as a black hole in the laboratory” (no wormholes, just black holes). The calculation he suggests involves not the newer SYK model, but the ancient BFSS matrix model from 27 years ago, which at the time got a lot of attention as a possible definition of M-theory.

Update: The Harvard CMSA talk about the wormholes is available here. I didn’t see anything in the slides about the Yao et al. criticism of this work. In the last minute of the video there was a question about this, and some reference to the criticism having been addressed during the talk. Supposedly there was some quick verbal summary of this response to the criticism in this last minute, but the sound was so garbled I couldn’t understand it. Here’s the automatically generated transcript:

so I guess I guess um we’re talking about like at the time of interpretation you do see this
operating ghost in kind of declare the two-point function if you’re looking for at later times you can ask about
different kind of scenarios one is accepting the single-sided systems what it’s doing it’s like internal reversible
verbal hamiltonian and you see thermalizing Dynamics in the library
um perhaps also the size winding uh although it’s not necessarily required
for all of your fermions to show size winding because you have done gravitational attractions in your model we do see impact that all the pronouns
have quite good size winding they’re good enough to allow them to teleport to size binding but the time and size
binding is clearly related to like the the rate of Decay the two-point function and so it seems to actually lend itself
to an even tighter kind of interpretation where would you associate different masses through different
permeons and this is quite consistent that is

Someone with more patience and interest in this perhaps can carefully follow the talk and report what the response to the Yao et al. criticism actually was.

Posted in Wormhole Publicity Stunts | 14 Comments

Lost in the Landscape

A commenter in the previous posting pointed to an interview with Lenny Susskind that just appeared at the CERN Courier, under the title Lost in the Landscape. Some things I found noteworthy:

  • He deals with the lack of any current definition of what string theory means by distinguishing between “String theory” and “string theory”. “String theory” is the superstring in 10 dimensions somehow compactified to have some large dimensions that are either flat or AdS. This can’t be the real world

    I can tell you with 100% confidence that we don’t live in that world.

    since the real world is non-supersymmetric and dS, not supersymmetric and AdS. He describes this theory as being “a very precise mathematical structure”, which one might argue with.

    Something very different is “string theory”:

    you might call it string-inspired theory, or think of it as expanding the boundaries of this very precise theory in ways that we don’t know how to at present. We don’t know with any precision how to expand the boundaries into non-supersymmetric string theory or de Sitter space, for example, so we make guesses. The string landscape is one such guess…

    The first primary fact is that the world is not exactly supersymmetric and string theory with a capital S is. So where are we? Who knows! But it’s exciting to be in a situation where there is confusion.

  • About anthropics and the landscape, he still thinks this is the best idea out there, but acknowledges it has gone nowhere in twenty years:

    Witten, who had negative thoughts about the anthropic idea, eventually gave up and accepted that it seems to be the best possibility. And I think that’s probably true for a lot of other people. But it can’t have the ultimate influence that a real theory with quantitative predictions can have. At present it’s a set of ideas that fit together and are somewhat compelling, but unfortunately nobody really knows how to use this in a technical way to be able to precisely confirm it. That hasn’t changed in 20 years. In the meantime, theoretical physicists have gone off in the important direction of quantum gravity and holography.

  • About the swampland, like everyone else I know, he can’t figure out what the argument is that is going to relate it to the real world:

    The argument seems to be: let’s put a constraint on parameters in cosmology so that we can put de Sitter space in the swampland. But the world looks very much like de Sitter space, so I don’t understand the argument and I suspect people are wrong here.

  • His comments on Technicolor strike me as odd:

    I had one big negative surprise, as did much of the community. This was a while ago when the idea of “technicolour” – a dynamical way to break electroweak symmetry via new gauge interactions – turned out to be wrong. Everybody I knew was absolutely convinced that technicolour was right, and it wasn’t. I was surprised and shocked.

    I remember first hearing about the Technicolor idea around 1979 when Susskind and Weinberg wrote about it. It was a very attractive idea by itself, but the problem was that to match known flavor physics you needed to go to “Extended Technicolor”, which was really ugly (lots of new degrees of freedom, no predictivity). No idea when people supposedly were “absolutely convinced that technicolour was right”, maybe it was for the few months it took them to realize you needed Extended Technicolor.

  • About the wormholes, he says:

    One extremely interesting idea is “quantum gravity in the lab” – the idea that it is possible to construct systems, for example a large sphere of material engineered to support surface excitations that look like conformal field theory, and then to see if that system describes a bulk world with gravity. There are already signs that this is true. For example, the recent claim, involving Google, that two entangled quantum computers have been used to send information through the analogue of a wormhole shows how the methods of gravity can influence the way quantum communication is viewed. It’s a sign that quantum mechanics and gravity are not so different.

    Unclear to me how this enthusiastic reference to the wormholes relates to his much less enthusiastic recent quote in New Scientist:

    What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best).

Posted in Multiverse Mania, Swampland | 20 Comments

Yet More on the Wormholes

The paper explaining that this Nature cover story, besides being a publicity stunt, was also completely wrong, has so far attracted very little media attention. The first thing I’ve seen came out today at New Scientist, a publication often accused of promoting hype, but in this case so far the only one reporting problems with the hyped result. The title of the article is Google’s quantum computer simulation of a wormhole may not have worked. It contains an explanation of the technical problems:

The first problem has to do with how the simulated wormhole reacted to the signals being sent through it….Yao and his colleagues found that for each individual test, the system continued to oscillate indefinitely, which doesn’t match the expected behaviour of a wormhole.

The second issue was related to the signals themselves. One of the signatures of a real wormhole – and therefore of a good holographic representation of a wormhole – is that the signal comes out looking the same as it went in. Yao and his team found that while this worked for some signals – those similar to the ones the researchers used to train a machine learning algorithm used to simplify the system – it didn’t work for others.

…it seems that for this particular quantum system, the size winding would disappear if the model was made larger or more detailed. Therefore, the perfect size winding observed by the original authors may just be a relic of the model’s small size and simplicity.

There is a response from Maria Spiropulu:

“The authors of the comment argue about the many-body properties of the individual decoupled quantum systems of our model,” she says. “We observed features of the coupled systems consistent with traversable wormhole teleportation.”

Remarkably, Lenny Susskind throws the authors of the stunt under the bus:

“What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best),” he says.

Posted in Wormhole Publicity Stunts | 13 Comments

Physics With Witten

I just noticed that last semester Edward Witten was teaching Physics 539 at Princeton, a graduate topics course. Since he’s now past the age of 70, at the IAS he is officially retired and an emeritus professor (the IAS is the only place I know of in the US with retirement at 70, presumably since it is a non-teaching institution). I don’t know whether Witten has taught other courses at the university since his move to the IAS in 1987.

Videos of the first few lectures are on Youtube here, problem sets on this web-page. It seems like the course started out covering issues with causality in general relativity, following these lecture notes, then later moved on to topics in quantum information theory.

Posted in Uncategorized | 6 Comments

Some Interviews

Some interviews that readers of this blog may find of interest:

Posted in Uncategorized | 9 Comments

Latest on the Wormholes

I had thought that the wormhole story had reached peak absurdity back in December, but last night some commenters pointed to a new development: the technical calculation used in the publicity stunt was nonsense, not giving what was claimed. The paper explaining this is Comment on “Traversable wormhole dynamics on a quantum processor”, from a group led by Norman Yao. Yao is a leading expert on this kind of thing, recently hired by Harvard as a full professor. There’s no mention in the paper about any conversations he might have had with the main theorist responsible for the publicity stunt, his Harvard colleague Daniel Jafferis.

Tonight Fermilab is still planning a big public event to promote the wormhole, no news yet on whether it’s going to get cancelled. Also, no news from Quanta magazine, which up until now has shown no sign of understanding the extent they were taken in by this. Finally, no news from Nature about whether the paper will be retracted, and whether the retraction will be a cover story with a cool computer graphic of a non-wormhole.

Update: Dan Garisto goes through the Jafferis et al. paper, noting “Turns out it looked good only because they used an average (a fact not specified in the article).” and ending with

The unreported averages for the thermalization and teleportation signal make a stronger case for misconduct on the part of the authors.

I don’t understand why Fermilab was planning a public lecture promoting this, and with what has now come out, it should clearly be cancelled.

Update: I like the suggestion from Andreas Karch

Quanta magazine could make a video where the wormhole authors share in vivid detail the excitement they felt when they realized that their paper isn’t just overhyped but actually wrong.

Update: Garisto has a correction, explaining that the averaging is not the problem with Jafferis et al., rather that the teleportation signal is only there for the pair of operators involved in the machine learning training, not there for other pairs of operators that should demonstrate the effect. In any case, best to consult the paper itself. If Jafferis et al. disagree with its conclusions, surely we’ll see an explanation from them soon.

Update: The Harvard Gazette promotes the wormhole publicity stunt, with “Daniel Jafferis’ team has for the first time conducted an experiment based in current quantum computing to understand wormhole dynamics.” As far as I can tell, that’s utter nonsense, with the result of the quantum computer calculation adding zero to our understanding of “wormhole dynamics”.

Update: Video of the Lykken talk now available, advertised by FNAL as Wormholes in the Laboratory.

Posted in Wormhole Publicity Stunts | 10 Comments

The Trouble With Path Integrals, Part II

This posting is about the problems with the idea that you can simply formulate quantum mechanical systems by picking a configuration space, an action functional S on paths in this space, and evaluating path integrals of the form
$$\int_{\text{paths}} e^{iS[\text{path}]}$$

Necessity of imaginary time

This section has been changed to fix the original mistaken version.
If one tries to do this path integral for even the simplest possible quantum theory (a non-relativistic free particle in one space dimension), the answer for the propagator in energy-momentum space is
$$G(E,p)=\frac {1}{E-\frac{p^2}{2m}}$$
Fourier transforming to real-time is ill-defined (the integration goes through the location of the pole at $E=\frac{p^2}{2m}$). Taking $t$ complex and in the upper half plane, for imaginary $t$ the Fourier transform is a well-defined integral. One gets the real-time propagator then by analytic continuation as a boundary value. For a relativistic theory one has
$$G(E,p)=\frac{1}{E^2-p^2-m^2}$$
and two poles (at $E=\pm \sqrt{p^2+m^2}$) to deal with. Again Fourier-transforming to real-time is ill-defined, but one can Fourier transform to imaginary time, then use this to get a sensible real-time propagator by analytic continuation.

Trying to do the same thing for Yang-Mills theory, again one gets something ill-defined for real time, with the added disadvantage of no way to actually calculate it. Going to imaginary time and discretizing gives a version of lattice gauge theory, with well-defined integrals for fixed lattice spacing. This is conjectured to have a well-defined limit as the lattice spacing is taken to zero.
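To make the imaginary-time, discretized setup concrete, here’s a minimal Metropolis sketch, using the 1d harmonic oscillator in place of Yang-Mills (the lattice size, spacing and update scheme here are illustrative choices of mine, not the methods of any particular lattice code):

```python
import numpy as np

# Metropolis sampling of the discretized imaginary-time path integral for a
# 1d harmonic oscillator (m = omega = hbar = 1).  The Euclidean weight
# exp(-S_E) is positive, so ordinary Monte Carlo applies.
rng = np.random.default_rng(0)
N, a = 32, 0.5                      # time slices, lattice spacing
x = np.zeros(N)                     # periodic path x(tau)

def delta_S(x, i, xnew):
    """Change in the Euclidean action when site i moves to xnew."""
    ip, im = (i + 1) % N, (i - 1) % N
    def S_site(xi):
        # kinetic term linking site i to its neighbors, plus potential
        return ((x[ip] - xi) ** 2 + (xi - x[im]) ** 2) / (2 * a) + a * xi ** 2 / 2
    return S_site(xnew) - S_site(x[i])

samples = []
for sweep in range(6000):
    for i in range(N):
        xnew = x[i] + rng.uniform(-1.0, 1.0)
        dS = delta_S(x, i, xnew)
        if dS < 0 or rng.random() < np.exp(-dS):
            x[i] = xnew
    if sweep >= 1000:                # discard thermalization sweeps
        samples.append(np.mean(x ** 2))

print(f"<x^2> = {np.mean(samples):.3f}")
```

Because the imaginary-time weight is positive, this works: the printed $\langle x^2\rangle$ should come out near the continuum value $1/2$ (up to lattice-spacing corrections). The real-time version of the same sum, with weight $e^{iS}$, is exactly what Monte-Carlo cannot handle.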

Not an integral and not needed for fermions

Actual fundamental matter particles are fermions, with an action functional that is quadratic in the fermion fields. For these there’s a “path integral”, but it’s in no sense an actual integral, rather an interesting algebraic gadget. Since the action functional is quadratic, you can explicitly evaluate it and just work with the answer the algebraic gadget gives you. You can formulate this story as an analog of an actual path integral, but it’s unclear what this analogy gets you.

Phase space path integrals don’t make sense in general

Another aspect of the fermion action is that it has only one time derivative. For actions of this kind, bosonic or fermionic, the variables are not configuration space variables but phase space variables. For a linear phase space and quadratic action you can figure out what to do, but for non-linear phase spaces or non-quadratic actions, in general it is not clear how to make any sense of the path integral, even in imaginary time.

In general this is a rather complicated story (see some background in the part I post). For an interesting recent take on the phase-space path integral, see Witten’s A New Look At The Path Integral Of Quantum Mechanics.

Update: A commenter pointed me to this very interesting talk by Neil Turok. The main motivation that Turok explains at the beginning of the talk (and also in the Q and A afterwards) is exactly one that I share. He argues that the lesson of the last 40 years is that one should not try and solve problems by making the Standard Model more complicated. All one needs to do is look more closely at the Standard Model itself and its foundations. If you do that, one thing you find is that there’s a “trouble with path integrals”. In Turok’s words, the problems with the path integral indicate that “the field is without foundations” and “nobody knows what they are doing”.

I do though very much part company with him over the direction he takes to try and get better foundations. He argues that you shouldn’t Wick rotate (analytically continue in time), but should complexify paths, analytically continuing in path space. For some problems doing the latter may be a better idea than doing the former, and in his talk he works out a toy QM calculation of this kind. But the model he studies (anharmonic oscillator) doesn’t at all prove that going to the imaginary time theory is a bad idea, for some calculations that works very well. He’s motivated by defining the path integral for gravity, where Euclidean quantum gravity is a problematic subject, but the gravitational version of the toy model I think will also be problematic. The ideas I’ve been pursuing involving the way the symmetries of spinors behave in Euclidean signature I think give a promising new way to think about this, and you won’t get that from just trying to complexify the conventional variables used to describe geometries.

Posted in Uncategorized | 30 Comments

The Trouble With Path Integrals, Part I

Two things recently made me think I should write something about path integrals: Quanta magazine has a new article out entitled How Our Reality May Be a Sum of All Possible Realities and Tony Zee has a new book out, Quantum Field Theory, as Simply as Possible (you may be affiliated with an institution that can get access here). Zee’s book is a worthy attempt to explain QFT intuitively without equations, but here I want to write about what it shares with the Quanta article (see chapter II.3): the idea that QM or QFT can best be defined and understood in terms of the integral
$$\int_{\text{paths}} e^{iS[\text{path}]}$$
where S is the action functional. This is simple and intuitively appealing. It also seems to fit well with the idea that QM is a “many-worlds” theory involving considering all possible histories. Both the Quanta article and the Zee book do clarify that this fit is illusory, since the sum is over complex amplitudes, not a probability density for paths.
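For a case where the “sum over all paths” can be made completely concrete — a free particle in imaginary time, where the sum is an honest damped integral — one can time-slice and carry out the sum as repeated matrix multiplication. A minimal sketch (conventions $m=\hbar=1$; the grid and step sizes are arbitrary choices of mine):

```python
import numpy as np

# Time-slicing the imaginary-time sum over paths for a free particle:
# the kernel for one small step a is a Gaussian, and integrating over all
# intermediate positions is just repeated matrix multiplication.
L, dx = 8.0, 0.05
xs = np.arange(-L, L + dx, dx)                 # spatial grid
a, nsteps = 0.2, 5                             # total time T = a * nsteps = 1

diff = xs[:, None] - xs[None, :]
step = np.sqrt(1 / (2 * np.pi * a)) * np.exp(-diff ** 2 / (2 * a)) * dx

K = np.linalg.matrix_power(step, nsteps) / dx  # kernel K(x, y; T)

i0 = int(np.argmin(np.abs(xs)))                # index of x = 0
exact = np.sqrt(1 / (2 * np.pi * 1.0))         # exact free kernel K(0, 0; T=1)
print(K[i0, i0], exact)
```

The composed kernel reproduces the exact heat kernel to high accuracy. The point of Part II below is that the same construction with the real-time weight $e^{iS}$, an undamped pure phase, is where the trouble starts.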

This posting will be split into two parts. The first will be an explanation of the context of what I’ve learned about path integrals over the years. If you’re not interested in that, you can skip to part II, which will list and give a technical explanation of some of the problems with path integrals.

I started out my career deeply in thrall to the idea that the path integral was the correct way to formulate quantum mechanics and quantum field theory. The first quantum field theory course I took was taught by Roy Glauber, and involved baffling calculations using annihilation and creation operators. At the same time I was trying to learn about gauge theory and finding that sources like the 1975 Les Houches Summer School volume or Coleman’s 1973 Erice lectures gave a conceptually much simpler formulation of QFT using path integrals. The next year I sat in on Coleman’s version of the QFT course, which did bring in the path integral formalism, although only part-way through the course. This left me with the conclusion that path integrals were the modern, powerful way of thinking, Glauber was just hopelessly out of touch, and Coleman didn’t start with them from the beginning because he was still partially attached to the out-of-date ways of thinking of his youth.

Over the next few years, my favorite QFT book was Pierre Ramond’s Field Theory: A Modern Primer. It was (and remains) a wonderfully concise and clear treatment of modern quantum field theory, starting with the path integral from the beginning. In graduate school, my thesis research was based on computer calculations of path integrals for Yang-Mills theory, with the integrals done by Monte-Carlo methods. Spending a lot of time with such numerical computations further entrenched my conviction that the path integral formulation of QM or QFT was completely essential. This stayed with me through my days as a postdoc in physics, as well as when I started spending more time in the math community.

My first indication that there could be some trouble with path integrals came, I believe, around 1988, when I learned of Witten’s revolutionary work on Chern-Simons theory. This theory was defined as a very simple path integral, a path integral over connections with action the Chern-Simons functional. What Witten was saying was that you could get revolutionary results in three-dimensional topology, simply by calculating the path integral
$$\int_{\mathcal A} e^{iCS[A]}$$
where the integration is over the space of connections A on a principal bundle over some 3-manifold. During my graduate student days and as a postdoc I had spent a lot of time thinking about the Chern-Simons functional (see unpublished paper here). If I could find a usable lattice gauge theory version of CS[A] (I never did…), that would give a way of defining the local topological charge density in the four-dimensional Yang-Mills theory I was working with. Witten’s new quantum field theory immediately brought back to mind this problem. If you could solve it, you would have a well-defined discretized version of the theory, expressed as a finite-dimensional version of the path integral, and then all you had to do was evaluate the integral and take the continuum limit.

Of course this would actually be impractical. Even if you solved the problem of discretizing the CS functional, you’d have a high dimensional integral over phases to do, with the dimension going to infinity in the limit. Monte-Carlo methods depend on the integrand being positive, so won’t work for complex phases. It is easy though to come up with some much simpler toy-model analogs of the problem. Consider for example the following quantum mechanical path integral
$$\int_{\text {closed paths on}\ S^2} e^{i\frac{1}{2}\oint A}$$
Here $S^2$ is a sphere of radius 1, and A is locally a 1-form such that dA is the area 2-form on the sphere. You could think of A as the vector potential for a monopole field, where the monopole was inside the sphere.
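The sign problem mentioned above is easy to see in a toy computation: sample from a positive Gaussian weight and average an oscillating phase. The exact answer decays like $e^{-k^2/2}$, while the Monte-Carlo noise stays at the $1/\sqrt{n}$ level, so for even modest $k$ the signal is buried (a generic sketch, not tied to any particular model):

```python
import numpy as np

# Why Monte Carlo fails for oscillating weights: estimate <cos(k x)> over a
# Gaussian ensemble.  The exact answer, exp(-k^2/2), decays much faster than
# the statistical noise, which only shrinks like 1/sqrt(n).
rng = np.random.default_rng(1)
n = 200_000
xsamples = rng.normal(size=n)

for k in (1, 3, 6):
    vals = np.cos(k * xsamples)
    est, noise = vals.mean(), vals.std() / np.sqrt(n)
    print(f"k={k}: estimate {est:+.2e} +- {noise:.1e}, "
          f"exact {np.exp(-k ** 2 / 2):.1e}")
```

At $k=6$ the exact answer is around $10^{-8}$ while the statistical error is around $10^{-3}$: the estimate is pure noise. A path integral over phases, like the sphere toy model above or Chern-Simons itself, has this problem in every one of its (infinitely many) integration variables.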

If you think about this toy model, which looks like a nice simple version of a path integral, you realize that it’s very unclear how to make any sense of it. If you discretize, there’s nothing at all damping out contributions from paths for which position at time $t$ is nowhere near position at time $t+\delta t$. It turns out that since the “action” only has one time derivative, the paths are moving in phase space not configuration space. The sphere is a sort of phase space, and “phase space path integrals” have well-known pathologies. The Chern-Simons path integral is of a similar nature and should have similar problems.

I spent a lot of time thinking about this, one thing I wrote early on (1989) is available here. You get an interesting analog of the sphere toy model for any co-adjoint orbit of a Lie group G, with a path integral that should correspond to a quantum theory with state space the representation of G that the orbit philosophy associates to that orbit. One such path integral that does look like it should make sense is the path integral for a supersymmetric quantum mechanics system that gives the index of a Dirac operator. Lots of people were studying such things during the 1980s-early 90s, not so much more recently. I’d guess that a sensible Chern-Simons path integral will need some fermionic variables and something like the Dirac operator story (in the closest analog of the toy model, you’re looking at paths moving in a moduli space of flat connections).

Over the years my attention has moved on to other things, with the point of view that representation theory is central to quantum mechanics. To truly play a role as a fundamental formulation of quantum mechanics, the path integral needs to find its place in this context. There’s a lot more going on than just picking an action functional and writing down
$$\int_{\text{paths}} e^{iS[\text{path}]}$$

Posted in Uncategorized | 8 Comments

What’s Going Right in Particle Physics

Since I had a little free time today, I was thinking of writing something motivated by two things I saw today, Sabine Hossenfelder’s What’s Going Wrong in Particle Physics, and this summer’s upcoming SUSY 2023 conference and pre-SUSY 2023 school. While there are a lot of ways in which I disagree with Hossenfelder’s critique, there are some ways in which it is perfectly accurate. For what possible reason is part of the physics community organizing summer schools to train students in topics like “Supersymmetry Phenomenology” or “Supersymmetry and Higgs Physics”? “Machine Learning for SUSY Model Building” encapsulates nicely what’s going wrong in one part of theoretical particle physics.

To begin my anti-SUSY rant, I looked back at the many pages I wrote 20 years ago about what was wrong with SUSY extensions of the SM in chapter 12 of Not Even Wrong. There I started out by noting that there were 37,000 or so SUSY papers at the time (SPIRES database). Wondering what the current number is, I did the same search on the current INSPIRE database, which showed 68,469 results. The necessary rant was clear: things have not gotten better, the zombie subject lives on, fed by summer schools like pre-SUSY 2023, and we’re all doomed.

But then I decided to do one last search, to check the number of articles by year (e.g. search “supersymmetry and de=2020”). The results were surprising, and I spent some time compiling numbers for the following table:

These numbers show a distinct and continuing fall-off starting after 2015, the reason for which is clear. The LHC results were in, and a bad idea (SUSY extensions of the SM) had been torpedoed by experiment, taking on water and sinking after 20 years of dominance of the subject. To get an idea of the effect of the LHC results, you can read this 2014 piece by Joe Lykken and Maria Spiropulu (discussed here), by authors always ahead of the curve. No number of summer schools on SUSY phenomenology are going to bring this field back to health.
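For anyone who wants to redo these per-year counts, here’s a sketch of scripting them against what I believe is the public INSPIRE REST API. The endpoint, the “de=year” date-search syntax (copied from the search above), and the JSON layout are all assumptions to check against the current API documentation:

```python
import json
import urllib.parse
import urllib.request

# Assumed public INSPIRE-HEP REST endpoint; verify against inspirehep.net docs.
API = "https://inspirehep.net/api/literature"

def count_url(query):
    # size=1: we only want the total hit count, not the records themselves
    return API + "?" + urllib.parse.urlencode({"q": query, "size": 1})

def yearly_counts(term, years, fetch=None):
    """Map each year to the number of records matching `term` in that year."""
    if fetch is None:
        fetch = lambda url: json.load(urllib.request.urlopen(url))
    return {y: fetch(count_url(f"{term} and de={y}"))["hits"]["total"]
            for y in years}
```

Calling something like `yearly_counts("supersymmetry", range(2000, 2023))` and plotting the result should reproduce the fall-off after 2015 described here (the `fetch` argument just makes the network call swappable for testing).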

Going back to INSPIRE, I realized that I hadn’t needed to do the work of creating the above bar graph, the system does it for you in the upper left corner. I then started trying out other search terms. “string theory” shows no signs of ill-health, and “quantum gravity”, “holography”, “wormhole”, etc. are picking up steam, drawing in those fleeing the sinking SUSY ship. With no experimentalists to help us by killing off bad ideas in these areas, some zombies are going to be with us for a very long time…

Update: A couple of things from Michael Peskin are relevant to this and might be of interest: an extensive 2021 interview, and a very recent “vision statement” about particle physics. Peskin’s point of view is, I think, exactly what Hossenfelder is arguing against. He continues to push strongly for a very expensive near-term collider project, and doesn’t seem to have learned much of anything from a long career of working on failed ideas. I remember attending a 2007 colloquium talk here at Columbia where he put up a slide showing all the SUSY particles with masses of a few hundred GeV that the LHC was about to find, and assured us that over the next 5-10 years we’d be seeing evidence of a WIMP from several different sources. According to the interview, in 2021 he was working on Randall-Sundrum models (see here) and wondering about the “little hierarchy problem” of why all the new physics was just above the LHC-accessible scale rather than at the electroweak breaking scale. I very much agree with some of his vision statement (the Higgs sector is the part of the SM we don’t fully understand, and the argument for colliders is that they are the only way to study it), but his devotion to failed ideas (not just Randall-Sundrum, but also string theory) as the way forward is discouraging. In the interview he admits that post-LHC abandonment of particle physics looks like the most likely future:

I think if you were a betting man, you would bet that LHC will be the last great particle accelerator, and the whole field will dissipate after the LHC is over.

Most of his prominent colleagues have the same attitude that the subject is over, and have already moved on, unfortunately to things like creating wormholes in the lab.

Posted in Uncategorized | 19 Comments

Various and Sundry

A few things that may be of interest:

  • Fermilab is continuing to push the wormhole publicity stunt, with Joe Lykken, the lab’s Deputy Director for Research, giving a public lecture on Wormholes in the Laboratory on the 17th. The promotional text goes way out of its way to mislead about the science:

    A wormhole, also known as an Einstein-Rosen bridge, is a hypothetical tunnel connecting remote points in spacetime. While wormholes are allowed by Albert Einstein’s theory of relativity, wormholes have never been found in the universe. In late 2022, the journal Nature featured a paper co-written by Joe Lykken, leader of the Fermilab Quantum Institute, that describes observable phenomenon produced by a quantum processor that “are consistent with the dynamics of a transversable wormhole.” Working with a Sycamore quantum computer at Google, a team of physicists was able to transfer information from one area of the computer to another through a quantum system utilizing artificial intelligence hardware.

    The “utilizing artificial intelligence hardware” seems to be an incoherent attempt to add more buzzwords to the bullshit. If you know anyone with any influence at the lab, you might want to consider contacting them and asking them to try and get this embarrassment canceled.

  • On another embarrassment-to-science front, Fumiharu Kato is announcing on Twitter the publication of the paperback edition of his book promoting the IUT proof of the abc conjecture. In his talk about this at the Simons Center he dealt with the problems with the proof by pretending they didn’t exist, but (from what I can make out using Google Translate) he says he’ll deal with this in the paperback edition. His point of view seems to be that once PRIMS (chief editor S. Mochizuki) decided to accept the IUT papers, no one should be writing things like this. Perhaps he’s just trying to point out that this is potentially a huge embarrassment for PRIMS and RIMS in general, which is undeniable. But he appears to be going down the truly unfortunate path of making this not about mathematics but about Japanese national honor, with one tweet getting translated as:

    Some non-Japanese mathematicians questioned, “This is an insult to the Japanese mathematics world! Why don’t Japanese mathematicians say anything after being so insulted?” I also think the question is valid.

  • Continuing with the difficult and depressing, there’s the ongoing Russian war of aggression in Ukraine. The New York Times reports on the efforts of the Simons Foundation to help sustain Ukrainian science. The Guardian has an excellent article on the problems the LHC experiments are having because Russian physicists make up a significant part of the collaborations. I had heard this story back in September from John Ellis, whom I met for the first time in London (at an event which now has a video of a discussion I was involved in). Tommaso Dorigo has an article about this on his blog, where he takes a point of view that is appealing (no borders or nationalism in science), but I don’t think it’s so simple.

    I’ve been wondering if there is a historical parallel to look to, with one possibility the situation in 1938-39 when Hitler invaded Czechoslovakia. By this point (as now) a lot of scientists had fled to the West, and the issue must have arisen of how scientists in the West should deal with their German colleagues who were staying in Germany.

  • Turning to something much more pleasant, Michael Harris points me to a video of a talk by Manjul Bhargava that has finally appeared, one of a series of talks at a 2018 conference in honor of Barry Mazur.
  • This week in my graduate class I’m talking a bit about Howe duality, and just discovered that Howe’s original unpublished articles on the subject are now available online, see here and here.
  • Finally, I only recently learned about the volume of Sidney Coleman’s correspondence that has just appeared, under the title Theoretical Physics in Your Face. It is especially fun to read for those like me who remember the era at Harvard when Coleman was at the center of activity. One quote, his 1985 opinion evaluating the grant to the Princeton theory group:

    If I have any serious criticism of this group at all, it is that their recent concentration on superstrings seems to me a tactical error, too much devotion of effort to a line of development that (at least to an outsider’s eye) is not that promising. However, I could well be wrong in this, and, even if I am right, they’ll soon discover they’ve drilled a dry hole and be off exploring other fields next year.

    Unfortunately the last part of this was very wrong…

Update: Nature has an article on the resolution of the LHC Russian authorship issue.

Posted in Uncategorized, Wormhole Publicity Stunts | 9 Comments