Not Quite What Happened

Quanta has an article out today about the wormhole publicity stunt, which sticks to the story that by doing a simple SYK model calculation on a quantum computer instead of a classical computer, one is doing quantum gravity in the lab, producing a traversable wormhole and sending information through it. From what I’ve heard, the consensus among theorists is that the earlier Quanta article and video were nonsense, outrageously overhyping a simulation and then bizarrely identifying a simulation with reality if it’s done on a quantum computer.

The new article is just about as hype-laden, starting off with:

A holographic wormhole would scramble information in one place and reassemble it in another. The process is not unlike watching a butterfly being torn apart by a hurricane in Houston, only to see an identical butterfly pop out of a typhoon in Tokyo.

and

In January 2022, a small team of physicists watched breathlessly as data streamed out of Google’s quantum computer, Sycamore. A sharp peak indicated that their experiment had succeeded. They had mixed one unit of quantum information into what amounted to a wispy cloud of particles and watched it emerge from a linked cloud. It was like seeing an egg scramble itself in one bowl and unscramble itself in another.

In several key ways, the event closely resembled a familiar movie scenario: a spacecraft enters one black hole — apparently going to its doom — only to pop out of another black hole somewhere else entirely. Wormholes, as these theoretical pathways are called, are a quintessentially gravitational phenomenon. There were theoretical reasons to believe that the qubit had traveled through a quantum system behaving exactly like a wormhole — a so-called holographic wormhole — and that’s what the researchers concluded.

An embarrassing development provides the ostensible reason for the new article: the news that “another group suggests that’s not quite what happened”. This refers to this preprint, which argues that the way the Jafferis-Lykken-Spiropulu group dramatically simplified the calculation to make it doable on a quantum computer threw out the baby with the bathwater, and so was not meaningful. The new Quanta piece has no quotes from experts about the details of what’s at issue. All one finds is the news that the preprint has been submitted to Nature and that

the Jafferis, Lykken and Spiropulu group will likely have a chance to respond.

There’s also an odd piece of identity-free and detail-free reporting that

five independent experts familiar with holography consulted for this article agreed that the new analysis seriously challenges the experiment’s gravitational interpretation.

I take all this to mean that the author couldn’t find anyone willing to say anything in defense of the Nature article. An interesting question this raises: if all experts agree the Nature article was wrong, will it be retracted? Will the retraction also be a cover story?

The update of the original story is framed by enthusiastic and detailed coverage of the work of Hrant Gharibyan on similar wormhole calculations. The theme is that while Jafferis-Lykken-Spiropulu may have hit a bump in the road, claiming to be doing “quantum gravity in the lab” by SYK model calculations on quantum computers is the way forward for fundamental theoretical physics:

The holographic future may not be here yet. But physicists in the field still believe it’s coming, and they say that they’re learning important lessons from the Sycamore experiment and the ensuing discussion.

First, they expect that showing successful gravitational teleportation won’t be as cut and dry as checking the box of perfect size winding. At the very least, future experiments will also need to prove that their models preserve the chaotic scrambling of gravity and pass other tests, as physicists will want to make sure they’re working with a real Category 5 qubit hurricane and not just a leaf blower. And getting closer to the ideal benchmark of triple-digit numbers of particles on each side will make a more convincing case that the experiment is working with billowing clouds and not questionably thin vapors.

No one expects today’s rudimentary quantum computers to be up to the challenge of the punishingly long Hamiltonians required to simulate the real deal. But now is the time to start chiseling away at them bit by bit, Gharibyan believes, in preparation for the arrival of more capable machines. He expects that some might try machine learning again, this time perhaps rewarding the algorithm when it returns chaotically scrambling, non-commuting Hamiltonians and penalizing it when it doesn’t. Of the resulting models, any that still have perfect size winding and pass other checks will become the benchmark models to drive the development of new quantum hardware.

If quantum computers grow while holographic Hamiltonians shrink, perhaps they will someday meet in the middle. Then physicists will be able to run experiments in the lab that reveal the incalculable behavior of their favorite models of quantum gravity.

“I’m optimistic about where this is going,” Gharibyan said.

I had thought that perhaps this fiasco would cause the Quanta editors to think twice, talk to skeptical experts, and re-report the original credulous story/video. Instead, it looks like their plan is to double down on the “quantum gravity in the lab” hype.

Update: Two more related pieces of wormhole news.

  • On Friday Harvard will be hosting a talk on the non-wormhole.
  • In this preprint Maldacena argues for another example of how to do quantum gravity in the lab, by doing a QM calculation on a quantum computer that will “have created something that behaves as a black hole in the laboratory” (no wormholes, just black holes). The calculation he suggests involves not the newer SYK model, but the ancient BFSS matrix model from 27 years ago, which at the time got a lot of attention as a possible definition of M-theory.

Update: The Harvard CMSA talk about the wormholes is available here. I didn’t see anything in the slides about the Yao et al. criticism of this work. In the last minute of the video there was a question about this, and some reference to the criticism having been addressed during the talk. Supposedly there was some quick verbal summary of this response to the criticism in this last minute, but the sound was so garbled I couldn’t understand it. Here’s the automatically generated transcript:

1:16:50
so I guess I guess um we’re talking about like at the time of interpretation you do see this
1:16:56
operating ghost in kind of declare the two-point function if you’re looking for at later times you can ask about
1:17:01
different kind of scenarios one is accepting the single-sided systems what it’s doing it’s like internal reversible
1:17:07
verbal hamiltonian and you see thermalizing Dynamics in the library
1:17:12
um perhaps also the size winding uh although it’s not necessarily required
1:17:18
for all of your fermions to show size winding because you have done gravitational attractions in your model we do see impact that all the pronouns
1:17:26
have quite good size winding they’re good enough to allow them to teleport to size binding but the time and size
1:17:31
binding is clearly related to like the the rate of Decay the two-point function and so it seems to actually lend itself
1:17:38
to an even tighter kind of interpretation where would you associate different masses through different
1:17:44
permeons and this is quite consistent that is

Someone with more patience and interest in this perhaps can carefully follow the talk and report what the response to the Yao et al. criticism actually was.

Update: A response by the original authors to Yao et al. has been posted as “Comment on “Comment on “Traversable wormhole dynamics on a quantum processor” ” “. From the abstract, the claim seems to be that the results of the toy model calculation are “consistent with a gravitational interpretation of the teleportation dynamics, as opposed to the late-time dynamics”, and that this is not in conflict with the objections by Yao et al. These objections are described as “counterfactual scenarios outside of the experimentally implemented protocol.” The odd thing here is the description of the quantum computer calculation as a “factual” experimental result, part of an “experimentally implemented protocol”. The quantum computer calculation was not an experiment but a calculation, with a known-in-advance result (the calculation had been done previously on a classical computer). The criticisms of Yao et al. aren’t “counterfactual” to an experimental protocol, but challenges to the interpretation of a calculation. As far as I can tell, this whole discussion is about how to interpret simple calculations you can do on any conventional computer, nothing to do with an “experiment”.

This entry was posted in Wormhole Publicity Stunts.

21 Responses to Not Quite What Happened

  1. Anonymous says:

    I would like to point out the parallel to the room-temperature superconductivity debacle, which to me appears to be a similar instance of the postmodern approach to doing physics research.

  2. Peter Woit says:

    Anonymous,

    If you’re referring to the recent room-temperature superconductivity controversy, it’s interesting to notice that the same Quanta writer (Charlie Wood) who wrote this hype about “quantum gravity in the lab” wrote what seems to me a much more even-handed article about that; see here
    https://www.quantamagazine.org/room-temperature-superconductor-discovery-meets-with-resistance-20230308/
    A story about an experimental development is somewhat different from one about theoretical developments. Another group can try to reproduce an experiment, and people promoting non-reproducible work will run into skepticism. For theorists there’s no such thing: you can keep repeating the same nonsense about quantum gravity forever with impunity.

    The odd thing about the “quantum gravity in a lab” hype is that all the top Quanta physics writers and editors have gone all in (Natalie Wolchover, Thomas Lin, Charlie Wood). I’ve heard from several experts who tried to explain to them the problems with their coverage, but they’re not listening to skeptical voices, preferring (like the director of the IAS) to take the word of a small number of people like Maldacena that quantum gravity in the lab is a real thing and the future of the subject.

    Built into the DNA of the Simons Foundation and Quanta, I think, is a faith in the traditional elite institutions of the US theoretical physics world. That people at the IAS/Harvard/Caltech/Stanford could just be wrong about string theory/quantum gravity/it from qubit, etc. is, to this world-view, nearly inconceivable.

  3. Peter Shor says:

    Don’t a lot of the latest quantum gravity theories say that quantum gravity goes beyond the standard rules of quantum mechanics? For example, the AMPS paper (Black Holes: Complementarity or Firewalls?) seems to imply that something really unusual must happen. But if you can do quantum gravity in the lab, then doesn’t quantum gravity have to satisfy the standard rules of quantum mechanics?

    Do any of the people writing these papers see any contradictions here?

  4. Rusty says:

    Is there anything going on in this experiment/calculation that can’t be explained by ordinary, non-relativistic QM?

  5. Peter Woit says:

    Peter Shor/Rusty,
    That’s the obvious problem with all “quantum gravity in the lab” claims. The “lab” setup is described by conventional, well-tested QM/QED, and you’re not planning on testing that. So, as a matter of physics, you know exactly what is going to happen: there’s no need to do the “experiment”, and it can’t tell you anything new about physics.

    The only sense in which you don’t know what’s going to happen is the same sense in which any numerical calculation may tell you something new about solutions to an equation. But if I solve an equation describing a cow on my laptop, that doesn’t mean I created a cow, or that I’m doing lab experiments on cows.

  6. Sabine says:

    I think the underlying problem here isn’t Quanta, it’s Nature. Nature has a bias for publishing anything that can be experimentally done, regardless of how meaningless the theoretical interpretation. This isn’t the first time this has happened. Nature has published several previous papers on this wormhole stuff (without the teleportation part). They also keep publishing anything about ultracold gases so long as someone claims it’s got something to do with black holes or the early universe. (Don’t get me started.)

    On the other hand, you’d never see them publish a theory paper in that area, probably for the better, because most of it is nonsense. So you get those theorists teaming up with experimentalists to get their stuff published in Nature, and chances are the theory part is never really scrutinized.

    This doesn’t really excuse the Quanta editors’ seeming inability to admit to themselves that they’re promoting bullshit, but with Nature behind them they know they’ll get away with it.

    The issue with the superconductor claim is, I think, somewhat different. It doesn’t originate in the journal but in the lab itself, and there’s only so much peer review can do even in the best case.

  7. John says:

    Sabine,
    I don’t think Nature is completely innocent in the superconductivity story, given the history of retractions from the same authors.

  8. André says:

    To be fair, the whole issue of claiming that doing a simulation is somehow equal to studying the real thing isn’t really new. There has been similar hype coming from the analog gravity community for years (obviously, I know a lot of really decent researchers in analog gravity who are very careful about stating precisely the relevance of what they are doing, but there are also those who are very loudly claiming that they are doing “gravity in the lab” when looking at water waves or BECs).
    I always ask those people what their experiments tell us that you couldn’t learn from numerical simulations, and most of the time the answer is nothing – with the additional caveat that numerical simulations have smaller statistical errors.

    I also do not understand what it is that science communicators (journalists/nature editors/…) find so unattractive about the story: We have these complicated equations that are thought to tell us something about gravity and now, thanks to quantum computers, we can learn much more about their solutions which we didn’t previously understand. [Notwithstanding that the second part of this statement is probably not true yet, and the first may be not true, period.]

    My PhD supervisor used to say (not sure if his words or an unattributed quotation): Don’t trust an experiment until verified by theory.

  9. jack morava says:

    It occurs to me that the best way to understand the quantum computer emulation of a wormhole is as an example of the law of similarity in

    https://en.wikisource.org/wiki/The_Golden_Bough/Sympathetic_Magic

  10. Dimitris Papadimitriou says:

    André

    I don’t think that there is really an analogy between the analog gravity papers/experiments and the recent “wormhole” case.
    In the former, there is an actual similarity in the math description.
    In the latter, there is a conjectured duality between two different descriptions.
    The case for analog gravity, although it has its limitations, is much more solid and not as controversial as the other.

  11. Peter Woit says:

    Dimitris,
    The problem with analog “gravity in the lab” claims is a different one, but it’s just as much nonsense, since your experiment again is not looking at gravitational degrees of freedom. You’re testing whether an equation does a good job of modeling the analog system; that the same equation appears in a theory of gravity is irrelevant.

    If I’m studying cows and one of my equations modeling the cows is the same as one describing something happening near a black hole, I’m still studying cows, not black holes.

  12. Sabine says:

    John,

    “I don’t think Nature is completely innocent in the superconductivity story, given the history of retractions from the same authors.”

    Of course they knew this too. What do you expect them to do? Reject a paper just because of prior history? That wouldn’t be proper treatment either.

  13. Jesko Sirker says:

    Sabine,
    the analysis of Hirsch and Marsiglio has raised some very serious doubts about the first retracted Nature paper on room-temperature superconductivity. Their analysis seems to suggest that raw, measured data might never have existed, but rather might have been constructed later from the published curves. I think in such a case the onus is on the authors to provide more information about the experiment and the data analysis to the physics community before other papers by the same group can be considered for publication. Besides, the new publication is marred by the same issues as the previous one: most prominently, zero resistance is not measured but rather defined by again subtracting some ‘background’. This is not standard in the field. Several studies have already been published in which none of the results in the new Nature paper could be reproduced. This whole story is very unfortunate and could have been easily avoided if Nature had acted more responsibly.

  14. Peter Woit says:

    All:
    Enough about the high-Tc story, which has nothing to do with the topic of this posting.

  15. Peter Woit says:

    Anon,
    Thanks!
    I guess we’ll have to now wait for “Comment on “Comment on “Comment on….

    Or, maybe Quanta can contact its five anonymous experts to weigh in.

  16. anon says:

    I can simulate the Navier-Stokes equation on a computer by discretizing the equation and putting it on a grid. There is a good theory of how the errors introduced in the discretization process scale with the size of the grid. There are proofs that the errors go to zero as the grid size goes to zero.

    I don’t think the wormhole people have any similar kind of theory for how to estimate the errors in their approximations. So it feels to me like this conversation is going to degenerate into “feelings” about whether some output from the computer feels like this or that.

    If they were calculating finite N SYK, then they could estimate their errors (the differences between finite N and infinite N). But they switch to a fully commuting Hamiltonian that “looks like” finite N SYK. I don’t think they have any control over the errors that that step introduces.
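
    To make the error-control point concrete, here is a minimal sketch in Python (my own illustration, not anything from the wormhole papers: it uses the 1d heat equation rather than Navier-Stokes, and a textbook FTCS finite-difference scheme) of the kind of convergence check you can do when discretization errors are under control. For a second-order scheme, halving the grid spacing should cut the error by a factor of about four, and you can verify that it does:

    import numpy as np

    def solve_heat(nx, t_final=0.1):
        # Explicit FTCS scheme for the heat equation u_t = u_xx on [0,1],
        # boundary conditions u(0,t) = u(1,t) = 0, initial data sin(pi x).
        x = np.linspace(0.0, 1.0, nx + 1)
        dx = x[1] - x[0]
        # Stability requires dt <= dx^2/2; tying dt to dx^2 also makes the
        # O(dt) time-stepping error scale like dx^2, matching the spatial error.
        nsteps = int(np.ceil(t_final / (0.25 * dx**2)))
        dt = t_final / nsteps
        u = np.sin(np.pi * x)
        for _ in range(nsteps):
            u[1:-1] += (dt / dx**2) * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return x, u

    # The exact solution is u(x,t) = exp(-pi^2 t) sin(pi x), so the error
    # is directly computable at each resolution.
    t_final = 0.1
    for nx in (20, 40, 80):
        x, u = solve_heat(nx, t_final)
        exact = np.exp(-np.pi**2 * t_final) * np.sin(np.pi * x)
        print(f"nx={nx:3d}  max error = {np.max(np.abs(u - exact)):.2e}")
    # Each doubling of nx should shrink the error by roughly 4x. It is exactly
    # this kind of estimate that seems unavailable once the model is replaced
    # by a commuting Hamiltonian that merely "looks like" finite N SYK.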

  17. Matthew Foster says:

    Peter,

    “If I’m studying cows and one of my equations modeling the cows is the same as one describing something happening near a black hole, I’m still studying cows, not black holes.”

    This is true of course, but misses a potential advantage of such “analog” simulations. If the same equations govern both cows and black holes, and you do the experiment with cows, you may discover new emergent or nonlinear phenomena of the cows that you didn’t anticipate or understand how to extract from the equations. Most of us humans remain pretty feckless in the face of strong nonlinearity. Assuming that the same equations approximately apply in a corresponding regime of validity for black holes, one might anticipate or propose new physical phenomena in the latter.

    Some of the insights with respect to SYK and its holographic dual(s) have emerged this way, with numerics replacing actual experiments, e.g. the effects of the sum over diffeomorphism fluctuations.

  18. André says:

    Matthew,

    “If the same equations govern both cows and black holes, and you do the experiment with cows, you may discover new emergent or nonlinear phenomena of the cows that you didn’t anticipate or understand how to extract from the equations.”

    That is correct, of course, but I think the relevant question is not whether there might be such new phenomena that you don’t find from analytically studying the equations, but if studying real cows gives you any advantages (precision, practicality, …) compared to a numerical simulation of said equations. If the answer is no, then I see no point in studying the cows.

  19. eitan bachmat says:

    Dear Peter
    you say
    “Built into the DNA of the Simons Foundation and Quanta, I think, is a faith in the traditional elite institutions of the US theoretical physics world. That people at the IAS/Harvard/Caltech/Stanford could just be wrong about string theory/quantum gravity/it from qubit, etc. is, to this world-view, nearly inconceivable.”

    I am not sure that’s reading the DNA of the Simons Foundation correctly, if there is one. If you look at all their MPS collaborations across all domains, there seem to be a few general themes. One main theme seems to be the teaming of people and ideas from math, physics (both theory and applied), computer science and engineering (both theory and applied): in short, trying to build interdisciplinary bridges. Another less dominant, but still important, theme seems to be symmetries and dualities as powerful tools in math, physics and engineering. A faint theme, but one which still seems to have some effect, is classifying objects, which is a very typical mathematical activity. I think this accounts for much of their activity, for better or worse. Within your field of interest, this leads to a thematic bias towards the more mathematically powerful among practitioners, even if the physics is completely off: dualities and other ideas that arise commonly in string theory, or quantum field theory more generally, are extremely powerful in relating different pieces of math or even computer science, and this bias leads indirectly to your perception of the choices.

  20. Peter Woit says:

    eitan bachmat,
    The context of that comment is one of trying to understand why a highly competent journalistic operation like Quanta would be taken in by this hype. Contrast the Quanta wormhole coverage to that at the NY Times or other prominent places.

    Also, the comment was not about the topics the Simons Foundation favors (on the question of topics, I think there’s a prejudice towards computational projects, but the Foundation is now funding a wide range of different sorts of things). From what I have seen over the years, Simons likes to fund elite institutions (places like Berkeley, the IAS, IHES, etc). Other donors often take a different approach, sometimes explicitly trying to build up non-elite institutions. On the whole, I’m somewhat of an elitist myself, believing that historically elite institutions have been the source of great work, and that their health is important to maintain for the health of the whole field. But, when elite institutions are captured by failed or over-hyped research programs, that’s a big problem, and one that I think Quanta is ill-equipped to recognize.

Comments are closed.