There are two workshops going on this week that you can follow on video, getting a good idea of the latest discussions going on at two different ends of the spectrum of particle theory in the US today.
At the KITP in Santa Barbara there’s Black Holes: Complementarity, Fuzz or Fire?. As far as I can tell, what’s being discussed is the black hole information paradox reborn. It all started with Joe Polchinski and others last year arguing that the consensus that AdS/CFT had solved this problem was wrong. See Polchinski’s talk for more of this argument from him.
If thinking about and discussing deep conceptual issues in physics without much in the way of mathematics is your cup of tea, this is for you (and so, I fear, not really for me). As a side benefit you get to argue about science-fiction scenarios of whether or not you’d get incinerated falling into a black hole, while throwing around the latest buzz-words: holography, entanglement, and quantum information. If you like trendy, and you don’t like either deep mathematics or the nuts and bolts of the experimental side of science, it doesn’t get much better than this. One place you can follow the latest is John Preskill’s Twitter feed.
Over on the other coast, at the opposite intellectual extreme of the field, LHC phenomenologists are meeting at the Simons Center this week at the SEARCH (SUSY, Exotics and Reaction to Confronting Higgs) workshop. They’re discussing very much those nuts and bolts, those of the current state of attempts to analyze LHC data for any signs of something other than the Standard Model. Matt Strassler is there, and he is providing summaries of the talks at his blog (see here and here). At this workshop, still no deep mathematics, but extremely serious engagement with experiment. One thing that’s apparent is that this field of phenomenology has become a much more sober business than a few years ago, pre-LHC and pre-no-evidence-for-SUSY. Back then workshops like this featured enthusiastic presentations about all the wonderful new particles, forces and dimensions the LHC was likely to find, with one of the big problems under discussion being the “LHC inverse problem” of how people were going to disentangle all the complex new physics the LHC would discover. Things have definitely changed.
One anomaly at the SEARCH workshop was Arkani-Hamed’s talk on naturalness, which started off in a promising way: he said he would give a different talk than his recent ones, discussing various ideas about solving the naturalness problem (ideas that didn’t work, but might be inspirational). An hour later he was deep into the same generalities and historical analogies about naturalness as in his other talks, headed into 15 minutes of promotion of anthropics and the multiverse. He ended his trademark 90-minute “one-hour” talk with a 15-minute or so discussion of a couple of failed ideas about naturalness, and for these I’ll refer you to Matt here.
Arkani-Hamed and others then went into a panel discussion, with Patrick Meade introducing the panelists as having “different specialties, ranging from what we just heard to actually doing calculations and things like this.”
Update: Scott Aaronson now has a blog posting about the KITP workshop here.
Update: A summary of the situation from John Preskill is here.
>> if the breakdown would take something like ~2^10^60 years to reveal
Maybe I misunderstood the Harlow-Hayden paper, but I thought the main point was that Alice most likely cannot complete the calculation before jumping into the black hole (and thus cannot create a contradiction), while Charlie, remaining (infinitely long) outside the black hole, finds no issue with unitarity.
This seems different to me than what you just wrote.
In regards to your statement about gauge coupling unification: this happens generically to within 1-3% so long as the superpartners are not too heavy, and it is not very sensitive to the number of parameters in the MSSM. The 1-3% discrepancy is usually attributed to unknown GUT threshold corrections, although the discrepancy does appear to be smaller for heavier squarks. For example, see http://cds.cern.ch/record/478820/files/0011356.pdf
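To make the claim concrete, here is a minimal one-loop sketch of MSSM gauge coupling running. The inputs at M_Z are illustrative round numbers, with no superpartner thresholds or two-loop corrections, so this shows the mechanism rather than reproducing the precision analysis in the cited paper:

```python
import math

# One-loop RGE: 1/alpha_i(Q) = 1/alpha_i(MZ) - (b_i / 2*pi) * ln(Q / MZ)
MZ = 91.19                        # Z mass in GeV
b = (33 / 5, 1.0, -3.0)           # MSSM one-loop beta coefficients (GUT-normalized U(1))
inv_alpha_MZ = (59.0, 29.6, 8.5)  # approximate 1/alpha_i at M_Z

def inv_alpha(i, Q):
    """One-loop running inverse coupling at scale Q (GeV)."""
    return inv_alpha_MZ[i] - b[i] / (2 * math.pi) * math.log(Q / MZ)

# Scale where alpha_1 = alpha_2 (solve the two linear running equations):
t = 2 * math.pi * (inv_alpha_MZ[0] - inv_alpha_MZ[1]) / (b[0] - b[1])
M_GUT = MZ * math.exp(t)

# Fractional mismatch of alpha_3 with alpha_1 at that scale:
mismatch = abs(inv_alpha(2, M_GUT) - inv_alpha(0, M_GUT)) / inv_alpha(0, M_GUT)
print(f"M_GUT ~ {M_GUT:.2e} GeV, alpha_3 mismatch ~ {100 * mismatch:.1f}%")
```

With these rough inputs the couplings meet near 10^16 GeV with a percent-level mismatch in alpha_3, which is the kind of small discrepancy the comment attributes to GUT threshold corrections.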
Scott, there was what I think is a fairly important LQG paper in Physical Review Letters 110, 211301 (2013) by R. Gambini and J. Pullin. I’ve only seen the version they posted in February on the arXiv.
Pullin presented the paper in July at the GR20 conference. It was the lead item of the main Loops session. Ashtekar spoke in a GR20 joint session with string and pheno people (Bob Wald, Don Marolf and Gary Horowitz also took part) which was specifically about BH evaporation and the same stuff as the KITP conference. In doing so Ashtekar drew on this type of fairly unambiguous Loop BH result.
Loop quantization of the Schwarzschild black hole
Rodolfo Gambini, Jorge Pullin
(Submitted on 21 Feb 2013 (v1), last revised 10 May 2013 (this version, v2))
We quantize spherically symmetric vacuum gravity without gauge fixing the diffeomorphism constraint. Through a rescaling, we make the algebra of Hamiltonian constraints Abelian and therefore the constraint algebra is a true Lie algebra. This allows the completion of the Dirac quantization procedure using loop quantum gravity techniques. We can construct explicitly the exact solutions of the physical Hilbert space annihilated by all constraints. New observables living in the bulk appear at the quantum level (analogous to spin in quantum mechanics) that are not present at the classical level and are associated with the discrete nature of the spin network states of loop quantum gravity. The resulting quantum space-times resolve the singularity present in the classical theory inside black holes.
4 pages, Revtex, version to appear in Physical Review Letters
Here is the abstract of Ashtekar’s July 2013 talk at GR20, in the special joint session on quantum mechanics of BH evaporation
ABHAY ASHTEKAR (20+5 MINUTES)
TITLE: Quantum Space-times and Unitarity of BH evaporation
There is growing evidence that, because of the singularity resolution, quantum space-times can be vastly larger than what classical general relativity would lead us to believe. We review arguments that, thanks to this enlargement, unitarity is restored in the evaporation of black holes. In contrast to AdS/CFT, these arguments deal with the evaporation process directly in the physical space-time.
You asked Lee and Greg if they regarded info going into a new expanding region as essentially a prediction of loop/spinfoam. Hopefully they will answer, or you can figure it out from what I just quoted. As a nonexpert I would say YES, essentially a prediction. AdS/CFT cannot be right if it sticks with a single asymptotic region (“boundary”) while a hole develops in the bulk. That is topology change. In Loop it leads to a bounce and a new expanding region which would have its OWN asymptotic region.
Therefore the boundary acquires a new component and the boundary observables algebra must be enlarged. The “firewall” kerfuffle is probably just nature warning us that the boundary has been enlarged (by a baby universe) and only part of the info is coming back out to us in Hawking radiation. My best guess as non-expert.
Here is abstract of Jorge Pullin’s talk at the main Loop session of GR20. It was the first paper of the session. He presented the February Gambini Pullin result.
Complete quantization of vacuum spherically symmetric gravity
We find a rescaling of the Hamiltonian constraint for vacuum spherically symmetric gravity that makes the constraint algebra a true Lie algebra. We can implement the Dirac quantization procedure finding in closed form the space of physical states. New observables without classical counterpart arise. The metric can be understood as an evolving constant of the motion defined as a quantum operator on the space of physical states. For it to be self adjoint its range needs to be restricted, which in turn implies that the singularity is eliminated. One is left with a region of high curvature that tunnels into another portion of space-time. The results may have implications for the current discussion of ”firewalls” in black hole evaporation.
http://gr20-amaldi10.edu.pl/userfiles/book_07_07_2013.pdf see page 218 of file.
He is saying they get a new expanding spacetime region from where the singularity used to be. It’s pretty unambiguous. And he notes possible implications for the “firewall” discussion.
A very interesting discussion indeed.
From what I can gather, the firewall paradox is deeply connected to quantum gravity (?).
But string theory claims to be a successful quantum theory of gravity. (From what I know, the successful quantization of gravity is said to be one of the “well established” triumphs of string theory, unlike the landscape nonsense).
In that case, why doesn’t ST provide a clear resolution to the firewall paradox ?
Apologies in advance if this has been discussed elsewhere.
Thanks very much for your question. Let me be clear first of all: there are so far no exact results in full LQG (i.e. the full 3+1d spacetime diffeo-invariant QFT) as to the fate of black hole singularities. What there are are results about models which reduce the full diffeo gauge symmetry to 1+1 dimensional diffeo symmetry. These give different results, so there is no clear message, except that there may not be a single universal answer.
The region to the future of the resolved singularity can reconnect with infinity after a period of evolution through quantum geometries that have no classical description. In this case you get what you want. This is shown in detail in LQG analyses of the 1+1 dimensional CGHS model by Ashtekar, Taveras and Varadarajan, arXiv:0801.1811, recently studied also by Ashtekar, Pretorius and Ramazanoğlu: arXiv:1011.6442 and arXiv:1012.0077. I am surprised that these papers are not better known.
A different kind of 1+1 model is studied by Modesto, who found evidence for a bounce leading to new asymptotic regions, as discussed also above by Marcus in his discussion of the recent very interesting model of Gambini and Pullin.
The picture seems to be that while the elimination of singularities is universal the fate of the resulting quantum region to the future of the resolved singularity depends on details of the dynamics of the quantum geometry and hence requires detailed computations of models such as I’ve discussed.
I want to reply to your general comments too; I’ll do that in a later message.
The issues you raise are subtle, partly because there is not a formulation of QFT on curved spacetime that shares the coordinate and diffeomorphism invariance of classical GR. So, at the very least, beware of claims and intuitions based on one choice of coordinates. The thermalization of Hawking radiation appears to be fully explained by projecting out a subsystem of an entangled pure state. Remember these are free fields (there are no interactions of the modes at the horizon with each other), so there is no physical basis for rapid mixing. The other system the Hawking photons are entangled with consists of modes that fall through the horizon and approach another boundary: the singularity in Hawking’s original calculation, and whatever lies post-singularity when the singularity is resolved. That is the physics as we best understand it.
I’d like then to address your statement: “Namely, I would like the laws of physics to uphold the holographic entropy bound, that the total number of qubits in any bounded region should be upper-bounded by the region’s surface area in Planck units.”
That is a statement of what we can call the “strong holographic bound”. We can distinguish it from a weak form of the holographic bound (hep-th/0003056) which might be stated, “the total number of qubits measurable on any surface should be upper-bounded by the region’s surface area in Planck units.”
I would argue that all the evidence we have is that the weak form is correct. I give several arguments in hep-th/0003056 for the weak form over the strong form as best explaining the evidence we have from Bekenstein and Hawking’s original arguments as well as since. Moreover, recent work deriving black hole thermodynamics from quantum gravity by Bianchi, both perturbative (arXiv:1211.0522) and non-perturbative (arXiv:1204.5122) shows that the black hole entropy is best understood as an entanglement entropy. I would suggest that this be taken seriously as it is the only calculation of the BH entropy that gets the 1/4 right without any parameter fixing for a generic non-extremal black hole.
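For reference, the “1/4” in question is the coefficient in the Bekenstein-Hawking area law, written here with constants restored:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B c^3 A}{4 G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^2},
\qquad \ell_P^2 \equiv \frac{G\hbar}{c^3}
```

Any microscopic derivation of black hole entropy has to reproduce exactly this factor of 1/4.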
Scott, the sentiment is ok, but one must choose between other universes (ridiculous, of course) and the (quantum) ‘universes’ of other observers. In the latter case, one might say that unitarity was preserved for any given observer, but in the context of black hole information one probably needs to account for multiple observers. There could easily be (non local) information that violates unitarity, but then that does not necessarily conflict with your view, if you modify the wording.
Lee Smolin, a number of people have examined those papers by Bianchi and they say that he is not counting black hole microstates, that it’s just a version of the original Bekenstein-Hawking calculation.
Thank you for your reply.
“Actually, several speakers (including Stephen Hawking, who’s been saying similar things since he conceded the information loss bet in 2004; and Samir Mathur and several other ‘fuzzball’ people) explicitly advocated replacing black holes by some kind of unitary ‘black-hole-like object.’”
I am happy to see Mathur’s fuzzball proposal getting more serious attention. I have followed it for many years, and had read many of the earlier papers, but I was not up to date on all the more recent papers.
As to Hawking. I could not find a published paper clearly advocating an object without horizon or singularity. I tried, but could not understand his Fuzz or Fire workshop talk, so I don’t know if he mentioned it there. (Maybe a transcript or a paper will emerge later?)
I reread his concession paper arXiv:hep-th/0507171 and scanned his more recent work, but I didn’t find a clear endorsement of such black hole replacements. If a good reference exists where Hawking supports an unconventional black hole replacement, I would like to hear of it.
If someone has an objection to the claims of Bianchi’s papers I’d be glad to discuss in detail, as I’m sure would also Bianchi. Indeed he has spent a lot of time in discussion with people who were originally skeptical, some of whom conceded in the end he was right.
In any case, please explain your statement, as on the face of it it’s confusing. First of all, there are two papers by Bianchi; which one are you referring to, the perturbative or the non-perturbative one? Second, please explain what you mean by “a version of the original Bekenstein-Hawking calculation”? I don’t know what that could refer to, as the two of them did very different things. In any case both were semiclassical and treated the metric geometry classically, which Bianchi’s first paper clearly does not.
Mitchell and Lee,
This is starting to give me bad flashbacks to endless 2006 blog comment arguments about LQG vs string theory black hole entropy calculations, which seemed to me highly unenlightening. Claims about what “a number of people” say about this definitely are not enlightening. Unless this has a lot more to do with the discussions at the KITP workshop than it seems, it would be best to just give references here to good sources of information about this question.
I meant Hawking’s semiclassical calculation, not Bekenstein’s. The claim is that a true explanation of black hole entropy should involve state counting, and that Bianchi’s 1204.5122 does not. There was a discussion of this at Bianchi’s entropy result… when the paper first appeared.
I had not seen the later paper, 1211.0522. I now gather that the philosophy of the two papers is as follows. 1204.5122 should not be regarded as a microscopic explanation of black hole entropy, it really is just the Hawking semiclassical calculation in a LQG framework. It’s 1211.0522 which proposes a microscopic picture, based on entanglement entropy. For a non-LQG counterpoint to that discussion, I suggest 0905.0932, e.g. section 2.5.
The thread on Bianchi’s work that I just mentioned is still open, and would be a suitable venue for continuing with this topic.
Here’s the relevant paragraph from Hawking’s concession speech.
“Information is lost in topologically non-trivial metrics like black holes. This corresponds to dissipation in which one loses sight of the exact state. On the other hand, information about the exact state is preserved in topologically trivial metrics. The confusion and paradox arose because people thought classically in terms of a single topology for spacetime. It was either R^4 or a black hole. But the Feynman sum over histories allows it to be both at once. One can not tell which topology contributed to the observation, any more than one can tell which slit the electron went through in the two slits experiment. All that observation at infinity can determine is that there is a unitary mapping from initial states to final and that information is not lost.”
I should confess that I don’t understand this argument (and apparently I’m not alone — even Preskill, to whom Hawking conceded, said he didn’t understand it!). But Hawking does seem to be clearly asserting that the solution to information loss involves there being a nonzero amplitude for the black hole never forming in the first place. (Though an obvious issue is that he doesn’t say how large the amplitude is: if it were nonzero but exponentially small, that wouldn’t seem to help much.)
Thanks very much!
I read that paragraph and didn’t pick up that it is a reference to a black hole substitute which is topologically trivial. In my book, that should eliminate both the singularity and the so-called “horizon” which is more like a “brink”. That is my favored configuration, so this pleases me very much.
But in my thinking, the topologically trivial configuration should be dominant.
Are there relevant ideas in PRL 110, 101301 (2013)?
The paper is available for free download.
I see Scott Aaronson has a blog posting about this now, at
so if you want to discuss these issues with someone who actually was at the workshop and knows what is going on, Scott’s blog is your best bet.
To me, the amazing thing is that the theory of black hole complementarity was taken seriously by a lot of people for two decades before AMPS showed that this theory doesn’t hold water (at least, not as it was originally presented). I suspect that the moral is that if somebody writes physics papers with very few equations, lots of words, and a moderate amount of handwaving, you should ignore them (unless they are named Einstein).
It really seems to be a case of The Emperor’s New Clothes. As Scott Aaronson says on his blog:
I don’t believe he was the only one who thought he didn’t understand it.
The conclusion that a black hole is surrounded at the event horizon by a wall of fire from the disintegration of infalling matter was first proposed in an article I published in 2001 in Zeitschrift fuer Naturforschung 56a, 889 (2001), under the title “Gamma Ray Bursters and Lorentzian Relativity”. It is Lorentzian relativity which resolves the black hole information paradox, with no information loss or violation of unitarity. In Lorentzian relativity, SRT and GRT remain extremely good approximations for energies small compared to the Planck energy. My paper is cited in “An Apologia for Firewalls” by Almheiri, Marolf, Polchinski, Stanford and Sully: arXiv:1304.6483v2 [hep-th], 21 Jun 2013.
There is a point that is puzzling me, and which is also puzzling me in Bianchi’s computation: in your paper GR as the equation of state of SF, you consider a finite piece R of the near horizon region observed by a family of accelerated observers. You further require that R has the Unruh property, namely that there exists a state in the Hilbert space associated with R which is thermal with temperature a/2π (a being the acceleration of the observer). I do not really understand in which context this assumption makes sense. More precisely:
1. In the framework of algebraic quantum field theory (AQFT), we have shown with Rovelli in Class. Quant. Grav. 20 (2003) 4919-4932 that a uniformly accelerated observer in a finite double-cone region of space-time of size L sees the vacuum at a temperature which, at first order in 1/L, equals the Unruh temperature a/2π. But our result heavily relies on:
a) an extended acceptance of the KMS condition as a local equilibrium condition;
b) the requirement that the QFT under investigation is conformally invariant. Otherwise (e.g. for a massive theory), it is a longstanding open problem in AQFT whether there exist observers in a finite region of spacetime for which the vacuum state is indeed thermal (technically speaking: unlike the wedge case, the modular group associated to a non-conformal QFT defined on a finite region of Minkowski spacetime may not have a geometrical action).
2. Besides AQFT, there are some proposals for an “Unruh temperature for finite-lifetime observers”. For instance J. Louko (see e.g. his work with Satz here) considers a finite-time interaction of an Unruh-DeWitt detector with the vacuum. As far as I can remember, it is not at all clear that one finds a temperature equal to the Unruh temperature at first order in the time of interaction. There is for instance a non-trivial dependence on the shape of the function describing the switching on/off of the interaction.
So it is not so obvious to me what the Unruh property for a finite region of spacetime means, even at first order in the size of the region.
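For reference, the Unruh temperature written above as a/2π uses natural units; with constants restored,

```latex
T_U \;=\; \frac{\hbar a}{2\pi c k_B},
```

which reduces to a/2π when ħ = c = k_B = 1.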
In your paper you use the result of Bianchi, who showed that R was Unruh. From what I understand following various discussions with people expert in these topics, the idea is to assume that an observer stationary near the horizon locally sees the same vacuum as the one seen by an eternally accelerated observer in the Rindler wedge. What I do not understand is why this is enough to justify that the finite region is Unruh? My point is the following: in the wedge, the computation of the Unruh temperature relies not only on the fact that the Unruh-DeWitt detector interacts with the vacuum, but also on this interaction being integrated all along the trajectory of an eternal observer, that is, from τ=-∞ to τ=+∞ (similarly in AQFT: to obtain the Unruh temperature one has to consider the algebra of local observables associated to the whole wedge, not to a sub-region of it).
Alternatively, the notion of “local equilibrium state” is far from obvious.
For instance Buchholz and Solveen have extensively discussed it, and proposed a definition different from the one we used with Rovelli (and again, in this context the meaning of a finite Unruh region is not so obvious to me).
Of course I am not claiming that these objections invalidate the BH entropy computation. But it seems to me that the proof that the finite region R is Unruh rests on some “hidden” assumptions that are not completely clear.
Sorry for the length of the message, but I would be interested to hear your opinion on these points.