For many years now the consensus in a dominant part of the theoretical physics community has been that the center of attention should be on the problem of quantizing gravity, and that conventional notions of quantum theory and space-time geometry need to be abandoned in favor of something radically different. The slogan version of this is “Space-Time is Doomed.”

Ever since my student days long ago, I’ve spent a lot of time looking into the problems of quantum gravity and what people have tried to do to address them. The highly publicized attempts I’ve seen to get known physics out of radically different degrees of freedom haven’t seemed to be making any progress, remaining very far from anything like known physics. In the case of string theory, which also claimed to be able to get particle physics, there was at one point a (highly over-hyped) relatively well-defined proposal that one could discuss, but that’s no longer the case.

Recently things have changed as I’ve become convinced of the promise of certain specific ideas about four-dimensional geometry involving twistors and Euclidean space-time signature. I’ve written about these here and on the blog, and have given some talks (see here and here). These ideas remain speculative and incomplete, but I think they provide some new ways of thinking about the problems of quantizing gravity and unifying it with the other forces.

The existence of a yearly essay competition gave me an excuse to write something about this, which I just finished yesterday and sent in, with the title Is Space-Time Really Doomed?. After spending some time on a diversion into arithmetic geometry, I’ve been getting back to thinking seriously about this topic, and am looking forward to having time in the coming months to concentrate on it. I hope the essay will encourage others not to give up on 4d geometry as doomed and unquantizable, but to realize that much is still there waiting to be explored.

**Update**: The essay is now on the arXiv here.

**Update**: Awards for this announced here. I got an honorable mention.

Apart from the metric and the Cartan tetrad formalisms of geometry mentioned in your article, there is probably a third formalism worth exploring, the geometric algebra formalism of geometry, which uses real Clifford algebras.

The geometric nature of quantum field theory and the Standard Model, the gauge theoretic nature of gravity, and the relation between the mathematical structures of four-dimensional geometry, general relativity, and quantum field theory were already made explicit in the geometric algebra formalism in Chris Doran and Anthony Lasenby’s 2003 textbook titled Geometric Algebra for Physicists:

https://www.cambridge.org/core/books/geometric-algebra-for-physicists/FB8D3ACB76AB3AB10BA7F27505925091

Furthermore, the book includes a chapter section showing that the geometric algebra formalism can also be used for twistor geometry and a twistor-based formalism.

The “space-time is doomed” claim, put forward by Nima Arkani-Hamed, is based on two arguments:

1. In order to perform an infinitely accurate measurement you need an infinitely large instrument; otherwise quantum uncertainty imposes an accuracy limit. If you try to fit an infinitely large instrument into a laboratory room, it collapses into a black hole and no measurement is performed.

2. In order to measure something very small (below the Planck length) you need a very energetic particle, which, again, would result in a black hole.

I think both arguments are fallacious.

1. Even if the physics is classical you need an infinitely sized memory to record a result with infinite accuracy. Forcing that memory inside a room would lead to a black hole. So quantum uncertainty is a red herring here. And why would you expect to be able to perform an infinitely accurate measurement with finite resources? Why is this premise supposed to be true in the first place?

2. One can in principle measure distances smaller than the Planck length with interferometry. No need for ultra-energetic particles. LIGO measures distances with an accuracy of about 10^-20 m using laser light with a wavelength of about 10^-6 m.
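For what it’s worth, a rough shot-noise estimate (order-of-magnitude numbers assumed here, not official LIGO specifications) shows how an interferometer can resolve lengths far below its laser wavelength by averaging over many photons:

```latex
% Phase resolution from photon shot noise, \delta\phi \sim 1/\sqrt{N},
% translates into a length resolution far below the wavelength \lambda:
\delta L \;\sim\; \frac{\lambda}{2\pi}\,\delta\phi \;\sim\; \frac{\lambda}{2\pi\sqrt{N}}
% With \lambda \approx 10^{-6}\,\mathrm{m} (a 1064 nm laser) and
% N \approx 10^{24} detected photons (high circulating power integrated
% over seconds), \delta L \sim 10^{-6}/(2\pi \cdot 10^{12})
% \approx 2 \times 10^{-19}\,\mathrm{m}.
```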

So, Nima’s arguments fail.

@Andrei,

John Baez wrote a nice article explaining all kinds of issues that arise when thinking of the universe as being modelled by infinite-precision reals, even using boring old Newtonian mechanics:

“Struggles with the Continuum”, https://arxiv.org/abs/1609.01421

Can your approach accommodate quantum superpositions of different spacetimes? This looks (to my nonexpert eyes) like something that should be included in any successful theory of quantum gravity. Or else, you would have to abandon some basic tenets of relativity (spacetime) or of quantum theory (quantum superpositions).

Mark,

There’s an argument for replacing the use of differential forms and the exterior algebra with the Clifford algebra; for one thing, it makes introducing spinors better motivated. But that works in any dimension, and in this essay I wanted to aim at certain structures specific to 4d.

Andrei,

In any case the argument that you need to consider black hole states doesn’t imply doom for space-time. After all, they’re space-time geometric objects themselves.

Pascal,

Sure, in a quantum theory of space-time you should have superpositions of geometries, just like in QED you have superpositions of EM fields.

One thing I’m pointing out is that the fact that we haven’t found a convincing quantum dynamics for the degrees of freedom given by the geometry of space-time does not mean that those are completely the wrong fundamental degrees of freedom, mere emergent epiphenomena. This problem is fine as a motivation for looking for different fundamental degrees of freedom, but that hasn’t led to anything successful either.

I agree with the sentiment. I also have a bold proposal, not about a physical theory, but a sociological one: it’s time to stop listening to the people in the community who have been so wrong for decades now, hyping and overselling *their doomed ideas*. For the good health of the field, this has to stop. The names are so obvious that there’s no need to spell them out explicitly; we all know who they are, what they have been saying for decades, and with what language, tone, and bullying methods. But the problem is not even one of academic ethics: it’s simply that the ideas have failed miserably to deliver their promised ‘El Dorado’ of physics, and have misdirected a whole field and its young researchers to nowhere.

I don’t think the comments above do justice to Arkani-Hamed’s arguments regarding spacetime. There’s a big difference between classical and quantum mechanics regarding measurements.

In classical mechanics measurements are treated no differently from any other process, and thus the ability to perform an accurate measurement is of no consequence for defining the theory. The physical degrees of freedom are defined independently of how they are measured.

However in quantum theory the whole formalism is an engine to compute the probabilities associated to sharp measurements. Arkani-Hamed’s point is that in a setting where we treat gravity and cosmology realistically it’s not possible to fulfill the conditions quantum theory assumes measurements obey within a finite region and thus it seems difficult to have well-defined local observables.

Of course in quantum information we can model unsharp measurements with POVMs, but the space of operators still includes sharp PVMs. If PVMs aren’t possible at all locally the theory would seem to need to take a very different form.

Of course the above does not “prove” this line of argumentation is correct, Arkani-Hamed has been clear that this is just his intuition about where to go. However you can’t argue it’s wrong by ignoring the completely different nature of measurements in quantum and classical theory.

Darran,

I guess I just don’t see the relevance of any such arguments to the claim being made (that space-time degrees of freedom cannot be what a fundamental theory is based on, they should be emergent degrees of freedom of an effective theory).

Peter,

Essentially because spacetime degrees of freedom would be local ones if they retain their character from GR. If there don’t seem to be any local observables, due to how measurements work in a realistic cosmological setting, it is hard to see how the proper quantum theory could involve spacetime in its conventional form.

His isn’t the only view where considerations of treating measurement realistically result in a weakening or replacement of the notion of spacetime. Rovelli’s relational quantum mechanics is an example.

Peter,

when you say spacetime may not be doomed in QG, which one of the following options are you thinking of?:

1) Both classical manifold and metric, the pair (M,g), are somehow still there at the fundamental physical reality, even after quantization.

2) Only the manifold M survives at the Planck scale.

3) Both of these concepts in the pair (M,g), and their mathematical machinery, are still needed to formulate the quantum theory via quantization of (general relativistic) classical field theories. Now, at the physical fundamental level, that doesn’t mean that there’s a classical pair (M,g) waiting for us at the Planck scale.

4) Physical quantum spacetime at the Planck scale may be some sort of noncommutative geometry, where basic geometric notions such as metric and fiber bundles can be suitably generalized into their noncommutative counterparts, and hence our basic geometric ideas can still be implemented there (although, in their noncommutative incarnations).

5) Other?

Thanks.

Darran,

To sensibly discuss this, I think one needs to get much more specific. I don’t know what it means for a quantized degree of freedom to “retain its character” with respect to the classical theory, or exactly what locality means in this context.

Alex,

The essay was trying to point to a relatively specific proposal: standard 4d geometry is formulated in terms of standard geometrical degrees of freedom (SD/ASD spin connection/curvatures, vierbeins, torsion, half-spinors, twistors). Can one build a fundamental theory by quantizing these degrees of freedom? In a path integral formalism you want to find an action/path integral measure written in terms of these variables that makes sense. I know the various problems people have run into trying to do this in the past, and want to understand whether there is some way to overcome these problems in the somewhat different specific setup I’m proposing, which includes the SM degrees of freedom.

Peter,

Basically, if we imagine spacetime surviving in the resulting quantum theory, then you would expect some structures from classical GR, or more fundamentally just basic geometric notions (e.g. connections), to appear in quantised form.

Regardless of what geometric entities are quantised, quantising classical variables gives you self-adjoint operators. Self-adjoint operators are just weighted sums of projectors, so essentially quantising gives PVMs with weights.
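In standard notation (finite-dimensional case for simplicity; the general statement is the spectral theorem), the decomposition behind this claim is:

```latex
% Spectral decomposition: a self-adjoint operator is a weighted sum of
% mutually orthogonal projectors summing to the identity, i.e. a PVM:
A = \sum_i \lambda_i P_i, \qquad
P_i^\dagger = P_i, \quad P_i P_j = \delta_{ij}\, P_i, \quad \sum_i P_i = \mathbb{1}
% A POVM relaxes this to "effects" E_i \ge 0 with \sum_i E_i = \mathbb{1},
% which need not be projectors (E_i^2 \ne E_i in general), modelling
% unsharp measurements.
```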

However it seems that no realistic observation in a finite region actually corresponds to a PVM/Self-Adjoint operator. Only POVMs are possible, to use quantum information language. Thus quantisations of classical quantities don’t seem to lie within a cosmologically realistic quantum theory. Hence a quantisation of classical geometry can’t be fundamental.

Asher Peres has a nice paper “What’s wrong with these observables?” where he discusses how most POVMs don’t correspond to quantised versions of classical observables. Arkani-Hamed doesn’t use Quantum Info terms like POVM, but it’s just a formal way of expressing a lack of sharp observables.

Quantised classical variables correspond to idealised measurement processes. In non-relativistic QM and Minkowski QFT we can get away with this idealisation by recourse to limiting infinitely massive devices, but Arkani-Hamed’s point is that in a treatment that properly respects cosmology we cannot. Thus “quantised geometry” would not be the way to go.

Peter,

okay, then I think that would fall under point 3) of my list. Personally, that’s my opinion too. But what insurmountable problem do you see with other approaches that take a similar view on this, e.g. LQG? Of course, it’s a canonical quantization approach rather than path integral, but the view on the role of classical geometry is similar, and some versions of it, like Thiemann’s Master Constraint Programme, seem quite advanced by now, although still with some problems. (Not that I’m trying to sell this approach, I have my doubts too, but I’m just mentioning it.) Also, why path integrals, given their infamous ill-defined measures over infinite-dimensional field configuration spaces?

Anyway, I have a simple reason to stick to 4d standard geometry rather than über-speculative theories: it’s physics that actually *works* and describes our reality! And science tends to advance by quasi-static changes to acquired knowledge, despite what the simplified versions of Kuhn’s view that one often hears would suggest. I think point 4) on page 7 of this article by Rovelli (https://arxiv.org/abs/1805.10602) puts it very well, whatever views one may have on his own approach to (L)QG.

Alex,

There’s a list of serious technical problems that you run into if you try to pursue either path integral or canonical quantization of these degrees of freedom. I think it’s quite possible these can be overcome with some new ideas, in particular by working in twistor space with Euclidean space-time signature (and a distinguished time direction). In any case, this seems to be a lot more promising than the idea that you’re going to solve the problem with quantum information theory.

Peter,

Who is trying to solve these problems with Quantum Information Theory? Arkani-Hamed isn’t for example, in case I gave that impression.

Darran,

That was a reference not to Arkani-Hamed, but to the well-publicized and well-funded “It from Qubit” stuff.

These days Arkani-Hamed typically uses the “spacetime is doomed” argument as motivation for his work on reformulating amplitudes calculations, but I don’t think the relation is much more than vaguely motivational.

Dear Peter,

I read your essay with great interest, thanks for sharing.

Since you are interested in the unique nature of 4D, I wonder if it is also worth mentioning that in four dimensions the Hodge star and Riemann curvature operations commute precisely when the metric is “Einstein”, as described in Professor Krasnov’s book that you reference.

Related to this (I think, although I am not sure) is one of my favorite results in the form of Lovelock’s theorem and just how general one can make the left-hand side of the Einstein equations. Although it can be generalized to any number of dimensions, in four it shows the unique role played by the cosmological constant. In the light of the cosmological constant problem, I personally think that this is something not discussed enough.

Thanks and best,

Jack.

Jack,

The commutativity of * and curvature in 4d is exactly what allows one to express GR in terms of just self-dual connections. It seems to me quite remarkable and important (and this was one of the main points I wanted to make in the essay) that 4d GR depends on only half the variables one expects, i.e. half the variables that almost all descriptions of the theory use. For this to work nicely though, you need to be in Euclidean signature.
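To spell out the signature dependence (these are standard facts about the Hodge star, not specific to the essay): on 2-forms of a 4-manifold the star squares to +1 in Euclidean signature but to -1 in Lorentzian signature, so only in the Euclidean case do the real 2-forms split into real self-dual and anti-self-dual pieces:

```latex
% Hodge star acting on 2-forms of a 4-manifold:
\star : \Lambda^2 \to \Lambda^2, \qquad
\star^2 = +1 \ \text{(Euclidean)}, \qquad \star^2 = -1 \ \text{(Lorentzian)}
% Euclidean: real eigenvalues \pm 1 give a real decomposition
\Lambda^2 = \Lambda^2_+ \oplus \Lambda^2_-
% Lorentzian: eigenvalues \pm i, so (anti-)self-dual 2-forms are
% necessarily complex and the self-dual formulation must be complexified.
```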

Hi Peter,

Can you please elaborate on this (taken from your essay)

> The only known non-perturbative definition of Yang-Mills theory is in Euclidean space-time

What is the definition of Yang-Mills theory in Euclidean space-time that you’re referring to?

Prof. Legolasov,

The lattice gauge theory definition. This very much requires Euclidean signature to make sense.
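For context, a sketch of the standard Wilson lattice formulation (textbook form; the coupling β and gauge group are parameters of the discretization). The Euclidean path integral is over compact group-valued link variables with Haar measure, and the weight e^{-S} is real and positive, which is what makes the definition well defined non-perturbatively; in Lorentzian signature the weight would be the oscillatory e^{iS}:

```latex
% Wilson action: sum over plaquettes p of the lattice, with U_p the
% ordered product of SU(N) link variables around the plaquette.
S[U] = \beta \sum_{p} \left( 1 - \frac{1}{N}\,\mathrm{Re}\,\mathrm{Tr}\, U_p \right),
\qquad
Z = \int \prod_{\ell} dU_\ell \; e^{-S[U]}
% dU_\ell is the normalized Haar measure on each link: since the gauge
% group is compact and e^{-S} is bounded, Z is a finite, well-defined
% integral on any finite lattice -- no gauge fixing required.
```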

I have a very basic layman question on the subject. Why is Einstein-Cartan theory so very much out of favor? More broadly, could it be that the difficulty of quantizing gravity lies in the fact that the classical theory is incomplete? (And that the missing terms don’t make much of a contribution on experimentally accessible energy scales?)