There are two workshops going on this week that you can follow on video, getting a good idea of the latest discussions going on at two different ends of the spectrum of particle theory in the US today.

At the KITP in Santa Barbara there’s Black Holes: Complementarity, Fuzz or Fire?. As far as I can tell, what’s being discussed is the black hole information paradox reborn. It all started with Joe Polchinski and others last year arguing that the consensus that AdS/CFT had solved this problem was wrong. See Polchinski’s talk for more of this argument from him.

If thinking about and discussing deep conceptual issues in physics without much in the way of mathematics is your cup of tea, this is for you (and so, I fear, not really for me). As a side benefit you get to argue about science-fiction scenarios of whether or not you’d get incinerated falling into a black hole, while throwing around the latest buzz-words: holography, entanglement, and quantum information. If you like trendy, and you don’t like either deep mathematics or the nuts and bolts of the experimental side of science, it doesn’t get much better than this. One place you can follow along the latest is John Preskill’s Twitter feed.

Over on the other coast, at the opposite intellectual extreme of the field, LHC phenomenologists are meeting at the Simons Center this week at the SEARCH (SUSY, Exotics And Reaction to Confronting Higgs) workshop. They’re discussing very much those nuts and bolts: the current state of attempts to analyze LHC data for any signs of something other than the Standard Model. Matt Strassler is there, and he is providing summaries of the talks at his blog (see here and here). At this workshop there is still no deep mathematics, but extremely serious engagement with experiment. One thing that’s apparent is that this field of phenomenology has become a much more sober business than a few years ago, pre-LHC and pre-no-evidence-for-SUSY. Back then, workshops like this featured enthusiastic presentations about all the wonderful new particles, forces and dimensions the LHC was likely to find, with one of the big problems under discussion being the “LHC inverse problem”: how people were going to disentangle all the complex new physics the LHC would discover. Things have definitely changed.

One anomaly at the SEARCH workshop was Arkani-Hamed’s talk on naturalness, which started off in a promising way: he said he would give a different talk than his recent ones, discussing various ideas about solving the naturalness problem (ideas that didn’t work, but might be inspirational). An hour later he was deep into the same generalities and historical analogies about naturalness as in other talks, headed into 15 minutes of promotion of anthropics and the multiverse. He ended his trademark 90-minute “one-hour” talk with a 15-minute or so discussion of a couple of failed ideas about naturalness; for these I’ll refer you to Matt here.

Arkani-Hamed and others then went into a panel discussion, with Patrick Meade introducing the panelists as having “different specialties, ranging from what we just heard to actually doing calculations and things like this.”

**Update**: Scott Aaronson now has a blog posting about the KITP workshop here.

**Update**: A summary of the situation from John Preskill is here.

Who is Arkani-Hamed referring to at 37:50?

“And this is why very serious, sober theorists in the late 80’s and early 90’s were writing papers with super-partners discovered at LEP. They were *not* the people we won’t talk about here who for 20 years were saying that supersymmetry is six months around the corner, like a translationally invariant statement. *Not* those people. Much more reasonable people.”

MathPhys,

I also was wondering who he had in mind. The only thing certain is that he wasn’t including Gordon Kane, who surely was part of the “people we won’t talk about here.”

Actually, I’m curious if there are any examples at all of theorists who in the 90s argued that naturalness meant SUSY at LEP, and then when it didn’t show up, stopped promoting SUSY. All the examples I can think of just translated arguments for SUSY at LEP to arguments for SUSY at the LHC.

Hi Peter,

I’m wondering if there’s a reasonable interpretation of the first few split SUSY proposals of the early/mid 2000’s that follows that line of thinking — we didn’t find superpartners at LEP, so the naturalness argument for SUSY isn’t a very good one, but coupling unification and the like are still compelling theoretical reasons to take it seriously.

Of course, you might think the latter argument isn’t very compelling, but I suspect it’s people who continued to hang onto naturalness while allowing incrementally more fine-tuning to count as “natural” who are the people he’d be talking about. Allowing a few percent more fine-tuning everytime you don’t see anything while continuing to take naturalness seriously would lead to the kind of “every 6 months” predictions he ridicules. The people willing to largely give up the naturalness argument in favor of something like split SUSY would then be the “serious theorists”. The fact that he sometimes complains about people having spent far too much time and effort arguing over “how much tuning is acceptable” lends a little credibility to this.

These wouldn’t be people who gave up on SUSY after LEP, but at least people who gave up on the naturalness argument for SUSY after LEP.

CU Phil,

Yes, that’s one interpretation, with Arkani-Hamed giving himself as a main example. He’s right that nearly ten years ago he was arguing for anthropics, the string landscape and non-natural split SUSY. For an example, see this talk at a Templeton-funded event on science and religion:

http://www.aaas.org/spp/dser/events/archives/lectures/2005/02_Lecture_2005_0428.shtml

However, from what I remember of talks like that, a big emphasis was on his “sharp experimental predictions for physics at the Large Hadron Collider”, which concerned a very long-lived gluino.

Still, I’m curious if there are any examples of people who not only noticed that LEP killed the main argument for SUSY, but drew the conclusion that SUSY probably wasn’t correct, instead of just moving on to “non-natural” SUSY.

I remember Guido Altarelli being very disappointed about not finding SUSY after the first LEP energy increase.

Must say, never quite appreciated what Peter might mean when he talks about “without much in the way of mathematics.” Checked up on Mark van Raamsdonk’s talk, and as far as I could tell there wasn’t really any math at all. I mean he used some words, drew a few pictures, but wow. I mean 15 minutes into the talk, and he hasn’t even filled up two blackboards, and he’s writing BIG. Actually, that’s true 25 minutes in. At which point he has a couple of arrows indicating maps, and one tensor product sign. Is this typical? Do physicists just not like to waste chalk? Is there any actual content, or is he really just waving his hands and saying “wow, it’s cool, these things look like they might possibly be related and wouldn’t that be neat.”

Hi from Santa Barbara. I’m at the KITP firewall workshop (the only non-physicist participant, I think), where I’m having a very nice time. In my own talk (which dealt with Harlow and Hayden’s work on the computational complexity of decoding the Hawking radiation), I took the opportunity to crack some jokes about the extreme level of handwaving that reigns here, and the airy unreality of some of the discussions.

It’s true that most of the talks have surprisingly little math in them (and, of course, zero input from any recent experiment): it’s mainly just conceptual arguments illustrated by simple cartoons. (Obviously, the cartoons convey vastly more information to Susskind, Maldacena, and the like than they do to me — but they do look funny to an outsider.)

At the same time, you (Peter) have often complained that particle theory has been dominated for ~30 years by a few ideas that haven’t worked out so well. Here, at least, there’s an enormous diversity of ideas on the table, and lots of tolerance (and encouragement) of dissent. And if you think *all* the ideas being discussed are bad ones, then there’s the obvious retort, as John Preskill pointed out in his talk: “OK, let’s hear your better ideas!”

As I understand it, the issue is actually pretty simple. Do you agree that

(1) the Hawking evaporation process should be unitary, and

(2) the laws of physics should describe the experiences of an infalling observer, not just those of an observer who stays outside the horizon?

If so, then you seem forced to accept

(3) the interior degrees of freedom should just be some sort of scrambled re-encoding of the exterior degrees, rather than living in a separate subfactor of Hilbert space (since otherwise we’d violate unitarity).

But then we get

(4) by applying some suitable unitary transformation to the Hawking radiation of an old enough black hole *before* you jump into it, you ought to be able, in principle, to completely modify what you experience when you *do* jump in—an apparent gross violation of locality.

So, there are a few options: you could reject either (1) or (2). You could bite the bullet and accept (4). You could say that the “experience of an infalling observer” should just be to die immediately at the horizon (firewalls). You could argue that for some reason (e.g., gravitational backreaction, or computational complexity), the unitary transformations required in (4) are impossible to implement even in principle. Or you could go the “Lubosian route,” and simply assert that the lack of any real difficulty is so obvious that, if you admit to being confused, then that just proves you’re an idiot. (Yes, I did see your comment about Lubos’s dismissal of the issue making you think there might be something to it after all!) AdS/CFT is clearly relevant, but as Polchinski pointed out, it does surprisingly little to solve the problem.

At any rate, thinking about the “Hawking radiation decoding problem” already led me to some very nice questions in quantum computing theory, which remain interesting even if you remove the black hole motivation entirely. And that helped convince me that something new and worthwhile might indeed come out of this business, despite how much fun it is. (Hopefully whatever *does* come out won’t be as garbled as Hawking radiation.)

I find this a bit funny, because Arkani-Hamed is the one person more than anyone else in the field I would pick out as guilty of making regular claims about imminent spectacular signals of new physics – mm-size extra dimensions, stopped gluinos, lepton jets, GeV-scale dark forces… how is split SUSY not a version of ‘SUSY is just around the corner’?

@Scott

Since super string theory is indeed the theory of everything, and in particular the only theory which predicted gravity, it will tell us any minute now the correct answer …

Btw I for one would go with “the unitary transformations required in (4) are impossible to implement even in principle” …

Or a Motl-approved paper (“not self-evidently wrong”)

http://arxiv.org/abs/1308.4121

Scott,

What you described seems to be the EPR “paradox” in a fancy setting, unless I’m not understanding something. You could, for example, have two entangled electrons on opposite sides of a lab. You measure the spin of one of them, then walk across the room and measure the spin of the other, (which you already know the result of because you measured the first spin). In this case, your first measurement has “altered” the second, and you did so non-locally. I don’t really see how that would be different than what you just described, so my feeling is that there is something important about the event horizon that you haven’t mentioned.

(To be fair, the EPR paradox was trying to show that QM implied faster than light influences, not non-local ones, but the resolution seems to be the same: the first measurement only allows you to predict the second measurement, not change it)

DimReg: No, it’s completely different from EPR. EPR is about entanglement, which (as you correctly point out) cannot be used for faster-than-light signalling. By contrast, if you believe in black hole complementarity, then you don’t believe that the interior degrees of freedom are entangled with the exterior ones: you believe they’re the SAME degrees of freedom! So in particular, you should in principle be able to signal “nonlocally” from the interior of the hole to the exterior or vice versa. And that seems to be the source of the problem.
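To make the contrast concrete: for the ordinary entangled pair DimReg describes, Bob’s unconditional state is the maximally mixed state no matter which basis Alice measures in, so her choice can never signal to him. A minimal pure-Python check of this standard no-signaling fact (the function name is mine, for illustration):

```python
import math

def bob_state_after_alice(theta):
    """Alice measures her half of the Bell pair (|00> + |11>)/sqrt(2) in a
    basis rotated by angle theta. Return Bob's *unconditional* 2x2 density
    matrix, averaging over Alice's two (equally likely) outcomes."""
    c, s = math.cos(theta), math.sin(theta)
    # Outcome "+": Bob collapses to c|0> + s|1>; outcome "-": to -s|0> + c|1>.
    plus = [[c * c, c * s], [c * s, s * s]]
    minus = [[s * s, -c * s], [-c * s, c * c]]
    return [[0.5 * (plus[i][j] + minus[i][j]) for j in range(2)]
            for i in range(2)]

# Whatever basis Alice picks, Bob is left with the maximally mixed state I/2.
for theta in (0.0, 0.3, math.pi / 4, 1.2):
    rho = bob_state_after_alice(theta)
    assert all(abs(rho[i][j] - (0.5 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```

In the complementarity picture Scott describes, the interior and exterior are the *same* degrees of freedom rather than entangled ones, so this averaging argument no longer protects locality.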

Anonyrat: So, which part of the argument I tried to summarize does that paper reject?

Peter: I can take these discussions to my own blog if you consider them off-topic…

Scott,

This post wasn’t intended as a criticism of the KITP workshop, or this area of research in general, or at all a claim that these are bad ideas being discussed. The world is full of all sorts of interesting and valuable things being done that are “not my cup of tea”. All I was doing was explaining part of why personally I haven’t been willing to put the time into thinking about such things and being able to comment knowledgeably on them (the other part is the same generic reasons my interest in all QG research is limited). I’ve seriously got no idea how useful the insights about quantum gravity coming out of thinking along these lines might be. One definite positive thing one can say is it seems to be keeping some of the participants busy who otherwise would be out putting the torch to science (i.e., promoting the multiverse).

Thanks for the very lucid summary of the paradox. I’m happy to have you answer questions from people about this here if you want to, that’s on-topic and not something I’m equipped to do. If you find you’d rather use your blog for this, that’s fine, will redirect traffic over there.

@Scott,

The question about the physics at the horizon can be asked in two ways: one way for the hypothetical universe where AdS/CFT is true and describes black holes, and the other way for our universe with the laws of physics that we already know and have tested. For obvious reasons, the second version is more interesting and important. The no-nonsense answer there is to reject (1) and get a peaceful in-fall through the horizon and await eventual spaghettification near the singularity.

By now, the case for this point of view will have already been made, I’m sure, in the talks by Bill Unruh and Bob Wald.

I’ve been listening to the Hawking talk from the fuzz-or-fire workshop and the sound quality is TERRIBLE. I have just enough time to wonder aloud whether the other skeptical voice (Unruh) will be subject to similar distortion.

Michael Welford,

I took a look and also soon gave up trying to listen to that. Perhaps a transcription could be arranged…

@Igor Khavkine:

But if you admit that rejecting (1) is the correct answer, so that black hole evaporation is not unitary, you’ve just refuted one of the main contributions of AdS/CFT, the crown jewel of string theory. Who wants to do that?

Igor Khavkine:

Yes, Bill Unruh just gave a talk today where he advocated rejecting (1) *extremely* forcefully and entertainingly! (Unfortunately I fly back today and will miss Bob Wald’s talk.)

But while I don’t understand some of the assumptions bandied about at this workshop (especially those coming from QFT), I *do* understand how central unitarity is to physics’ current conception of the world, and what a drastic step it would be to get rid of it. So AdS/CFT or no AdS/CFT, string theory or no string theory, I certainly understand not wanting to give up on unitarity without an extremely hard fight.

(Incidentally, in my summary of the AMPS paradox, I forgot to say something. As I understand it, what AMPS added to the simple logical argument that I outlined was really to make the consequence (4) more “concrete” and “vivid”—by describing something that, in principle, you could actually do to the Hawking radiation before jumping in, such that after you jumped in, if there *wasn’t* anything dramatic that happened—something violating local QFT and the equivalence principle—then you’d apparently observe a violation of the monogamy of entanglement, a basic principle of quantum mechanics. Probably the bare logic (1)-(4) was known to many people before AMPS. I certainly knew it, but I didn’t call it a “paradox,” I just called it “I don’t understand black hole complementarity.” 🙂 )

Any women at KITP besides Eva?

There were maybe 4 or 5 other female students and researchers.

Dear Peter and Scott,

The argument Scott summarizes requires that the evolution be unitary as seen by an observer at infinity.


The view that (1), so modified, is wrong has a straightforward justification consistent with everything we know about both general relativity and quantum theory. This is the hypothesis that quantum gravity effects eliminate the singularity, so that quantum evolution proceeds to a non-classical region of spacetime to the future of where the singularity would have been. The evolution as observed by an observer at infinity will not be unitary if that new region doesn’t reconnect with infinity, but no principle of quantum mechanics is violated. The evolution as a whole can be unitary even if there is no observer who can reconstruct a pure quantum state from their observations.


This commonsense solution has been discussed since the 1970’s. There were some papers raising issues about remnants but as we discussed in detail with Hossenfelder in our paper arXiv:0901.3156, those are not convincing.


The lesson, in my opinion, is that the key issue in quantum black holes and the information problem is not at the horizon, it is at the singularity. It is unreasonable to expect any new physics at horizons, where the curvatures are small, but necessary to find new physics at the approach to singularities. The focus on the firewall problem is in my view a consequence of insufficient appreciation of this point. It can be seen as a reductio ad absurdum of the assumption that the problem can be resolved without investigating how quantum gravity effects eliminate the singularity, and without taking on board the consequences of the resulting evolution to the future of where the classical singularity would have been.


Thanks,

Lee

Scott Aaronson,

They say quantum states for the black hole with firewalls exist, but are generally not accessible to external observers. Specifically, they say in their model:

So I guess they are saying that (4) is mistaken.

Lee:

As I understand it, the problem with assuming that all evolution is unitary (considering both the interior and the exterior of the black hole) is that from the point of view of an outside observer, nothing ever falls into the black hole, so from this observer’s viewpoint, evolution over the entire universe (which just consists of the exterior of the black hole) is not unitary.

Scott:

I wouldn’t dismiss DimReg’s comment so quickly. The ER=EPR conjecture of Maldacena and Susskind (arXiv:1306.0533) is an attempt to resolve the firewall paradox by making something like DimReg’s comment precise. The starting point is the thermofield double formalism of Israel (1976). One introduces a second copy of the physical Fock space. The two copies are then related to the black hole interior and exterior, and they are entangled in just the right way to be consistent with complementarity and black hole thermodynamics. We make peace with the apparent non-locality of (4) because the Hawking radiation you acted on was EPR entangled with the black hole interior.

@Peter Shor: 😉

@Scott: If everything in the world were unitary, there would be no place for things like the Lindblad equation. Similarly, for the same reason that a room with an open window is allowed to violate unitarity, so is the exterior of a black hole at intermediate times after its formation. On the other hand, at very long times (where “complete evaporation” is supposed to take place), there is nothing but a HUGE question mark. And a question mark does not a paradox make. Lee has made a similar point above.

Scott,

Thanks for clarifying, that makes more sense! For some reason, I got the impression that it was all about entangling internal and external states.

@Igor: The Lindblad equation does describe non-unitary evolution of a quantum state, but this non-unitarity is not fundamental. As I understand things, it typically arises because you trace out the Hilbert space of an environment (like some thermal bath) which interacts with your system of interest and that you don’t have a way of measuring precisely in practice. In principle, you could include the evolution of the bath in your description and recover unitary evolution. There is currently no reason to expect that this “in principle” statement is not correct, and I think this is what Scott was referring to in his reply.
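For reference, the general form of the Lindblad master equation being discussed here, in units with $\hbar = 1$, is

```latex
\frac{d\rho}{dt} = -i[H,\rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)
```

where the $L_k$ are the jump operators encoding the system-bath coupling. Tracing out a bath that evolves unitarily together with the system yields dynamics of this form (under the usual Markovian approximations), which is the sense in which the non-unitarity is effective rather than fundamental.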

Lee Smolin and Greg Egan: Thanks for the comments. I agree that baby universes are an option, although I can’t comment on the merits of specific scenarios. (Unruh mentioned baby universes in his talk as another alternative to information loss, though he didn’t dwell on them.) Preskill also made the point that, with all this fuss about the horizon, strikingly little was being said about the singularity—in order to motivate his and Seth Lloyd’s proposal (building on Horowitz and Maldacena’s), which *does* involve the singularity.

While I’m obviously far from an expert, where I think I part ways from you and Unruh is on the following. We’re pretty sure black holes have an *entropy*, which goes like the area of the event horizon in Planck units. We’re pretty sure that, from an external observer’s perspective, infalling stuff gets “pancaked” on the event horizon and scrambled beyond recognition, never making it through to the interior. Finally, we’re pretty sure that the external observer ultimately sees the black hole evaporate, through Hawking radiation that emerges (appears to emerge?) from the horizon. To me, these facts would seem like an intolerable coincidence if the black hole didn’t have microstates—“stored,” one wants to imagine, on or near the event horizon—and if the Hawking radiation didn’t carry away the information about those microstates. Otherwise, what a waste for Nature to “come so close” to upholding unitarity, only to chicken out at the last moment! 🙂

@Chris Cesare, you have elaborated my implicit point about the Lindblad equation. For a black hole, the role of the “environment” or “bath”, or even more precisely “whatever degrees of freedom are neglected”, is exactly played by the interior of the black hole.

@Scott, as a frequent voice of reason in an ocean of doofocity, I hope you re-examine the certainty with which two of the statements from your last post are held: “the external observer *ultimately* sees the black hole evaporate” and “black holes have an *entropy*” (where “entropy” is specifically used in the sense of log(Omega)). Snag… I have to run and can’t expand on this at the moment! I mention only that anyone interested in black hole physics should personally and critically examine the arguments by which these claims have been arrived at.

Scott wrote:

This account isn’t just observer-dependent, it’s *coordinate-dependent*! There are plenty of respectable coordinate systems that an external observer can use in which it’s false.

In contrast to the “pancake” picture, it’s an objective fact (proof here) that if a stationary external observer drops something into a black hole, after a finite amount of proper time has elapsed for that observer, it will be physically impossible for them to chase after the infalling matter and catch it before it crosses the horizon. (Ditto for the infalling matter reaching the singularity — even if you’re feeling suicidal, once you drop something you have a finite time to go after it, before you lose any chance of ever touching it again, even inside the hole.)

About the strongest objective statement I’m aware of that favours the “pancake” picture is the fact that — in the idealised case where you can detect arbitrarily red-shifted light — there is no upper bound on how long it might be before a stationary external observer receives radiation emitted from a given piece of infalling matter. But the red shift factor grows exponentially, so I’m not sure that this counts for much, even in principle.
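To put a rough number on “grows exponentially” (my gloss, not Egan’s): for a Schwarzschild hole of mass $M$, light emitted by infalling matter and received by a distant observer at late time $t$ is redshifted roughly as

```latex
1 + z \;\sim\; \exp\!\left(\frac{c^3 t}{4GM}\right),
```

with an e-folding time of order $4GM/c^3$, about 20 microseconds for a solar-mass black hole. So the infalling matter becomes undetectable in practice extremely quickly, even though in this idealised sense it never quite disappears.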

Dear Scott,

Thanks, but either I don’t understand your argument or else it is circular. What do you suppose happens to the singularity as well as to the quantum state of the star whose collapse formed the black hole in the first place? If the singularity is eliminated then the Hilbert space in the future is a direct product of a factor spanned by observables which describe degrees of freedom to the future of where the singularity would have been and a factor spanned by observables external to the horizon. The evolution onto this product can be assumed to be unitary but (I feel silly telling you this) it cannot be when restricted to either of its factors. Hence the observer at infinity describes a density matrix gotten by tracing out the degrees of freedom in the baby universe inaccessible to them.

Isn’t this a completely reasonable option, especially because it avoids the otherwise paradoxical implications of the firewall argument?

The pancake is a non sequitur: why does it matter what information does or doesn’t get to infinity, or when, if infinity is not the only place information goes to? So to refer to it seems to assume what you are claiming to demonstrate.

Many thanks,

Lee

@Lee

as you wrote, your proposal is not new, and thus its problems are known.

One problem with simply allowing a non-unitary evolution in the exterior is that particle physics becomes non-unitary as soon as you (have to) include virtual black holes.

Wolfgang,

That is not a convincing argument and it is partly addressed in the paper I mentioned. The basic point is that there is no reason one has to include contributions from “virtual black holes.” When one looks at it carefully it becomes not at all clear what would be meant by that in a well defined background independent formulation of quantum gravity. The intuition that any process should have large or even divergent contributions from “virtual black holes” is based on an incorrect use of effective field theory, as discussed in section 4 of the paper with Hossenfelder I mentioned above.

Another reason is that there is no reason to think that horizons make sufficient sense in terms of quantum geometry at the Planck scale to give meaning to the semiclassical intuition of a virtual or Planck-scale black hole. If quantum geometry is discrete at Planck scales then there are no horizons, curvatures or singularities at those scales, and no way to give meaning to a Planck-scale black hole. There is no contradiction in believing that quantum gravity is simply unitary at small scales while real astrophysical black holes create baby universes.

Thanks,

Lee

Wolfgang:

Is anything fundamentally wrong with including virtual black holes, and making particle physics non-unitary? While it is much more difficult to think about, and nobody seems to have a specific satisfactory theory of how non-unitary particle physics might work, I don’t think there is any reason to believe such theories couldn’t exist.

Particle physics could be non-unitary at the Planck scale, and have some built-in quantum error correction properties which keep it unitary at observable scales. For a two-dimensional condensed matter model for something like this, Google “Kitaev honeycomb model”.
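As a toy illustration of the kind of error correction Peter Shor has in mind — the 3-qubit bit-flip code rather than the honeycomb model itself, and with function names of my own choosing — microscopic errors show up in parity checks and can be undone without ever looking at the encoded amplitudes:

```python
def encode(alpha, beta):
    """Encode a|0> + b|1> into the 3-qubit code state a|000> + b|111>.
    States are dicts mapping computational-basis bitstrings to amplitudes."""
    return {"000": alpha, "111": beta}

def apply_x(state, k):
    """Bit-flip (Pauli X) error on qubit k."""
    flip = lambda bs: bs[:k] + ("1" if bs[k] == "0" else "0") + bs[k + 1:]
    return {flip(bs): amp for bs, amp in state.items()}

def syndrome(state):
    """Parities Z0Z1 and Z1Z2. For X errors on a code state, every branch
    of the superposition gives the same parities, so measuring them
    locates the error without disturbing the encoded amplitudes."""
    bs = next(iter(state))
    return (int(bs[0]) ^ int(bs[1]), int(bs[1]) ^ int(bs[2]))

def correct(state):
    """Undo a single bit-flip error identified by the syndrome."""
    location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(state))
    return apply_x(state, location) if location is not None else state

noisy = apply_x(encode(0.6, 0.8), 1)   # X error on the middle qubit
assert correct(noisy) == encode(0.6, 0.8)
```

The point of the analogy: the noise acts at the “microscopic” level (individual qubits), while the logical information survives untouched at the level accessible to coarse observations.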

Lee:

Your quote “there is no contradiction in believing that quantum gravity is simply unitary at small scales while real astrophysical black holes create baby universes.” is reminiscent of the early quantum physicists’ division of physics into the “microscopic”, where quantum effects are important, and the “macroscopic”, where classical physics holds. We now realize that this was a big mistake.

Regards,

Peter

My guess is that the firewall problem is due to an idealization. A lot of fuss is made about the role of the BH singularity and physics at infinity – both of which I regard as unphysical.

My suggestion: one should think about an appropriate analogue model realizable in the laboratory. There have been interesting suggestions to measure the Hawking effect in black hole analogues in Bose-Einstein condensates, for instance. There should be no violations of unitarity in such systems, right? It would be interesting to see a firewall analogue in a Bose-Einstein condensate :-). Googling suggests that such models have not been worked out yet. Oh, by the way, such lab systems are full of surprises, e.g. bosenovas.

@Peter Shor

>> I don’t think there is any reason to believe such theories couldn’t exist

sure, but until it is written down and works, I prefer to go with the findings of Scott and Harlow-Hayden that Alice cannot actually perform the calculation necessary to create a paradox …

Peter, I think you would be interested in reading this post by a world class mathematician (anonymous, but so I am told) on the interaction between string theory and mathematics. I wonder if your assessment agrees with his?

Bill,

I’m generally sympathetic with the views of that blogger, whoever he or she might be. But that’s a large and complicated topic completely unrelated to this posting. Anyone who wants to discuss it should do it over at that blog.

To reduce it to a slogan: Give up Black Holes, Save Unitarity

Baby universes were mentioned, but I think the idea of replacing black holes deserves much more serious study than it has received. The problems discussed at the recent “Black Holes: Complementarity, Fuzz or Fire?” workshop emphasize only some of the serious problems with the black hole concept. I realize the workshop was predicated on acceptance of the black hole concept as well as the general accuracy of the Hawking radiation paradigm, but neither of these ideas has strong experimental support. Gravitationally collapsed objects similar but not identical to black holes could alleviate many issues if they were unitary and did not involve either horizons or singularities.

Jim Graber:

“I realize the workshop was predicated on acceptance of the black hole concept … Gravitationally collapsed objects similar but not identical to black holes could alleviate many issues if they were unitary and did not involve either horizons or singularities”

Actually, several speakers (including Stephen Hawking, who’s been saying similar things since he conceded the information loss bet in 2004; and Samir Mathur and several other “fuzzball” people) explicitly advocated replacing black holes by some kind of unitary “black-hole-like object.” Though fwiw, my preference would be simply to DEFINE “black hole” to mean “that entity, whatever it is, that behaves from the outside more-or-less like the black hole of classical GR.”

@Wolfgang: my characterization of what Harlow-Hayden is saying is that “Nature is inconsistent, but because we only have limited computational power, we will never be able to catch Her in an inconsistency, so we might as well pretend that She’s consistent”. I really don’t think this idea holds up to close inspection.

Of course, this is essentially what Bohr’s Complementarity Principle said before people worked out the mathematical justification behind it. But there is something fundamentally different about hiding “inconsistencies” with the Uncertainty Principle and hiding them with computational complexity.

@Peter Shor

>> there is something fundamentally different about hiding “inconsistencies” with the Uncertainty Principle and hiding them with computational complexity

I see it the other way around: One reason I found Scott’s talk so amazing is because it suddenly hit me how fundamental computational complexity really is.

Of course, Scott was preaching this on his blog for long time …

Apologies – this is wholly off-topic.

Does anyone know if the “unification” of the couplings in SUSY still holds in view of the latest LHC results? I’ve been hunting around for some time now and couldn’t find anything. Also, how unique are the “unification” solutions, i.e. are they a generic feature of SUSY at a mass of X TeV?

Sorry for going off topic… I shan’t post on this again. Hopefully the question will anyway be of use to a lot of people reading this blog.

Dear Stephen,

Yes, gauge coupling unification in the MSSM still holds after the latest LHC results. In fact, it works better for heavy squarks and sleptons in the 5–10 TeV range.

Hi Eric,

Thanks.

Is there a ref for this? (Apologies to Peter – no more posts from me on this.)

Stephen,

One recent reference is this:

http://arxiv.org/abs/1212.6971

When thinking about whether different versions of the MSSM do a better or worse job of coupling constant unification, you probably should remember that you’re talking about a theory with more than a hundred undetermined parameters to play with….
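For readers who want to see where claims like Eric’s come from, the basic check is a standard one-loop exercise. The sketch below is my own illustration, not from any comment above: it uses approximate inputs at M_Z, the usual SU(5) normalization for hypercharge, the textbook one-loop beta coefficients, and no threshold corrections, then runs the three inverse couplings up to high scales in the SM and the MSSM.

```python
import math

# One-loop running of the inverse gauge couplings:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / (2*pi)) * ln(mu / M_Z)
# Inputs at M_Z are approximate and illustrative, not a precision fit;
# alpha_1 uses the SU(5) normalization alpha_1 = (5/3) * alpha_Y.
MZ = 91.19                          # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]    # alpha_1^-1, alpha_2^-1, alpha_3^-1

B_SM   = [41/10, -19/6, -7]         # Standard Model one-loop coefficients
B_MSSM = [33/5, 1, -3]              # MSSM one-loop coefficients

def run(alpha_inv, b, mu):
    """Run the inverse couplings from M_Z to scale mu (in GeV) at one loop."""
    t = math.log(mu / MZ)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(alpha_inv, b)]

def spread(vals):
    """Max minus min: how closely the three couplings meet at a given scale."""
    return max(vals) - min(vals)

mu_gut = 2e16  # GeV, the scale where the MSSM couplings roughly meet
sm = run(alpha_inv_MZ, B_SM, mu_gut)
mssm = run(alpha_inv_MZ, B_MSSM, mu_gut)
print("SM   at 2e16 GeV:", [round(a, 1) for a in sm])
print("MSSM at 2e16 GeV:", [round(a, 1) for a in mssm])
```

At this crude level the three MSSM lines converge to roughly alpha^-1 ≈ 24 near 2×10^16 GeV, while the SM lines miss one another by several units of alpha^-1 — which is the whole content of the “unification works in the MSSM” claim. Whether heavy sparticle thresholds improve or degrade the fit is exactly the kind of detail the reference above addresses.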

Peter Shor:

> Nature is inconsistent, but because we only have limited computational power, we will never be able to catch Her in an inconsistency

No, I don’t think that’s what Harlow and Hayden are saying at all. It’s more like: “yes, semiclassical field theory might have to break down and get replaced by a consistent quantum theory of gravity, even in a low-energy regime where physicists thought that QFT would work fine. But if the breakdown would take something like ~2^10^60 years to reveal, then maybe we don’t have to worry all that much, if our goal is to reassure ourselves that we already more-or-less understood what happened at low energies.”

Lee Smolin and Greg Egan: Thanks for the comments and clarifications.

When I referred to “pancaking” (my choice of image, and maybe a bad one), I mostly had in mind a series of lectures that Lenny Susskind gave at PI, where he described the process by which quantum information is believed to get rapidly “scrambled” at or near the horizon, even giving detailed quantitative bounds on the rate at which the scrambling is thought to take place. So my impression was that, even independent of AdS/CFT and so forth, we knew something about the “rapid mixing” that a faraway observer would believe to take place, just because of the extreme temperature in the stationary frame at the stretched horizon, or something like that. If so, then the argument wouldn’t be circular, since nothing in it would presuppose that the information ultimately comes out, but it would nevertheless support the intuition that it does come out. But maybe I took Lenny’s calculations about the mixing near the horizon to be better-established than they are.

I forgot to mention an additional reason why I’d personally be happy if the information comes out of the hole, rather than into a baby universe. Namely, I would like the laws of physics to uphold the holographic entropy bound, that the total number of qubits in any bounded region should be upper-bounded by the region’s surface area in Planck units. But if the interior of the region could contain a “portal” to another universe of unbounded size, then isn’t the universality of that bound called into question?
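For a sense of scale, the bound Scott invokes is easy to put in numbers. The snippet below is my own back-of-the-envelope illustration (standard constants, Schwarzschild geometry; the solar-mass example is not from the comment): the Bekenstein-Hawking entropy is S = A / (4 l_P^2) in nats, so the qubit count is that divided by ln 2.

```python
import math

# Holographic (Bekenstein-Hawking) bound on the information content of a
# region, evaluated for a solar-mass black hole horizon.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
l_P = 1.616e-35    # Planck length, m
M_sun = 1.989e30   # kg

r_s = 2 * G * M_sun / c**2           # Schwarzschild radius (~3 km)
A = 4 * math.pi * r_s**2             # horizon area
qubits = A / (4 * l_P**2) / math.log(2)  # entropy in nats -> qubits

print(f"r_s = {r_s:.3e} m, A = {A:.3e} m^2, qubits ~ {qubits:.2e}")
```

The answer comes out around 10^77 qubits for a solar mass, the familiar figure. Scott’s worry is then sharp: a finite surface area caps the qubit count, but a baby universe behind the horizon could hold arbitrarily many.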

So, here’s a question for either or both of you. Suppose that, for whatever reasons, I thought that upholding unitarity was a very big deal, maybe an order of magnitude more important than any other principle at stake in the black hole debate. And suppose that by “unitarity,” I meant that an observer in our universe should in principle be able to reconstruct the infalling qubits (even if after 2^10^60 years or whatever), rather than that the qubits should continue to exist in a baby universe. Then my question is: could LQG (or spin foams, or other non-string quantum gravity approaches) give me what I wanted? Or do you regard the information’s falling into baby universes as essentially a prediction of LQG?