Sabine Hossenfelder is on a tear this week, with two excellent and highly provocative pieces about research practice in theoretical physics, a topic on which she has become the field’s most perceptive critic.

The first is in this month’s Nature Physics, entitled Science needs reason to be trusted. I’ll quote fairly extensively so that you get the gist of her argument:

But we have a crisis of an entirely different sort: we produce a huge amount of new theories and yet none of them is ever empirically confirmed. Let’s call it the overproduction crisis. We use the approved methods of our field, see they don’t work, but don’t draw consequences. Like a fly hitting the window pane, we repeat ourselves over and over again, expecting different results.

Some of my colleagues will disagree we have a crisis. They’ll tell you that we have made great progress in the past few decades (despite nothing coming out of it), and that it’s normal for progress to slow down as a field matures — this isn’t the eighteenth century, and finding fundamentally new physics today isn’t as simple as it used to be. Fair enough. But my issue isn’t the snail’s pace of progress per se, it’s that the current practices in theory development signal a failure of the scientific method…

If scientists are selectively exposed to information from likeminded peers, if they are punished for not attracting enough attention, if they face hurdles to leave a research area when its promise declines, they can’t be counted on to be objective. That’s the situation we’re in today — and we have accepted it.

To me, our inability — or maybe even unwillingness — to limit the influence of social and cognitive biases in scientific communities is a serious systemic failure. We don’t protect the values of our discipline. The only response I see are attempts to blame others: funding agencies, higher education administrators or policy makers. But none of these parties is interested in wasting money on useless research. They rely on us, the scientists, to tell them how science works.

I offered examples for the missing self-correction from my own discipline. It seems reasonable that social dynamics is more influential in areas starved of data, so the foundations of physics are probably an extreme case. But at its root, the problem affects all scientific communities. Last year, the Brexit campaign and the US presidential campaign showed us what post-factual politics looks like — a development that must be utterly disturbing for anyone with a background in science. Ignoring facts is futile. But we too are ignoring the facts: there’s no evidence that intelligence provides immunity against social and cognitive biases, so their presence must be our default assumption…

Scientific communities have changed dramatically in the past few decades. There are more of us, we collaborate more, and we share more information than ever before. All this amplifies social feedback, and it’s naive to believe that when our communities change we don’t have to update our methods too.

How can we blame the public for being misinformed because they live in social bubbles if we’re guilty of it too?

There’s a lot of food for thought in the whole article, and it raises the important question of why the now long-standing dysfunctional situation in the field is not being widely acknowledged or addressed.

For some commentary on one aspect of the article by Chad Orzel, see here.

On top of this, yesterday’s blog entry at Backreaction was a good explanation of the black hole information paradox, coupled with an excellent sociological discussion of why this has become a topic occupying a large number of researchers. That a large number of people have been working on something for years without showing signs of finding anything that looks interesting has always seemed to me a good reason not to pay much attention, which is why I’m not that well-informed about exactly what has been going on in this subject. When I have thought about it, it has seemed to me that there is no way to make the problem well-defined as long as one lacks a good theory of quantized space-time degrees of freedom that would tell one what is going on at the singularity and at the end-point of black hole evaporation.

Hossenfelder describes the idea that what happens at the singularity is the answer to the “paradox” as the “obvious solution”. Her take on why it’s not conventional wisdom is provocative:

What happened, to make a long story short, is that Lenny Susskind wrote a dismissive paper about the idea that information is kept in black holes until late. This dismissal gave everybody else the opportunity to claim that the obvious solution doesn’t work and to henceforth produce endless amounts of papers on other speculations.

Excuse the cynicism, but that’s my take on the situation. I’ll even admit having contributed to the paper pile because that’s how academia works. I too have to make a living somehow.

So that’s the other reason why physicists worry so much about the black hole information loss problem: Because it’s speculation unconstrained by data, it’s easy to write papers about it, and there are so many people working on it that citations aren’t hard to come by either.

I hope this second piece too will generate some interesting debate within the field.

**Note**: It took about 5 minutes for this posting to attract people who want to argue about Brexit or the political situation in the US. Please don’t do this, any attempts to turn the discussion to those topics will be ruthlessly deleted.

It has always seemed obvious to me why pure maths is not afflicted with the sociological issues discussed in the post. The reward system in maths incentivizes quality over quantity, i.e. working on hard problems that take a long time to solve rather than publishing lots of superficial ‘me too’ papers on the latest fad. The way it does this is through the existence of top quality journals. Publishing in one of those journals confers much greater reward than the average or mediocre journals. E.g. a single paper in Ann. Math. is worth more for a mathematician’s career than countless publications in average journals. So pure mathematicians are incentivized to work on more difficult and deeper problems that can lead to publications in the top journals.

Compare this with the situation in physics. Actually HEP theory has its own unique reward system, different from the rest of physics and academic science in general. In HEP theory, the number of publications and citations is not actually all that important as long as the person has a respectable number of them. People don’t get hired, get grants, or advance their careers by having more publications or citations than others. Instead, it all depends on how they are viewed by the ‘important people’ in the field. Doing good work will of course help them to be viewed positively, but it is not the only factor. In practice it matters a lot that a big shot in the field feels they have something personal at stake in whether the person succeeds or not.

This reward system in HEP theory incentivizes people to try to maximize how favorably they are viewed by the ‘important people’. The first step is to get the attention of those people. This means working on whatever topics those people are working on, trying to do a PhD or postdoc at the institutions where those people work, etc. Obviously, people who work on other topics will have a hard time in this reward system. Jobwise they will have to try to survive on whatever scraps are left over after the ‘favored’ folks have been accommodated.

Note the difference with pure maths: journals play essentially no role in the reward system in HEP theory. In the major HEP journals, JHEP and Phys.Rev.D, papers making major advances are published side by side with superficial ‘me too’ papers. Quality control and standards for getting published are pretty minimal. So the journals that a HEP theorist publishes in say nothing about the quality of his/her work.

Even the supposedly top physics journal PRL has uneven quality in practice, and publishing in PRL will do nothing for a HEP theorist’s career prospects. Without the backing of important people, PRL publications count for flat zero in HEP theory. And for someone who does have the backing of important people, it doesn’t matter where they publish…

The different reward system in pure maths makes it possible for a mathematician to prove him/herself meritorious and worth supporting regardless of the topic he/she works on or whether he/she is known and viewed favorably by important mathematicians. It is enough that the person produces work of high enough quality to be published in a top maths journal such as Ann. Math. Then the career rewards will be conferred, regardless of other factors such as fashionability of the topic or how well connected the person is.

Working at a prestigious institution and having connections to important people will no doubt help a mathematician to have his/her papers taken seriously by top maths journals, but it is not essential. Unknown/unconnected mathematicians working on unfashionable topics can and do occasionally manage this too, and get the same career rewards. This possibility does not exist in HEP theory; it is simply not part of the reward system.

As for the rest of physics, and academic science in general, from what I’ve seen the reward system is based on a mixture of bean counting (number of publications and citations, h-index), status of the journals the person has published in (impact factors), and the views of important people in the field. At top universities the views of important people carry more weight, while at average research universities the bean count matters more. Hype and fashion seem to play a big role in getting papers published in the journals with high impact factors (Nature, Science, PNAS, PRL etc) – a very different situation than with the top maths journals. This reward system incentivizes academics to treat research as a video game where the objective is to maximize their score. This means doing safe ‘me too’ research in hot fashionable areas that can lead to lots of publications and citations.
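As a concrete aside on what the bean counting actually measures, the h-index is simple enough to compute in a few lines. This is a minimal sketch; the citation counts below are invented purely for illustration:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # still have i papers with >= i citations
        else:
            break
    return h

# Hypothetical citation records for two researchers:
deep = [25, 18, 12, 9, 7, 6, 5, 4]               # fewer, better-cited papers
prolific = [6, 5, 5, 4, 4, 3, 3, 2, 2, 2, 1, 1]  # many lightly cited papers

print(h_index(deep))      # 6
print(h_index(prolific))  # 4
```

Note that the metric itself mildly rewards depth over sheer paper count, but once hiring depends on it, the easiest way to raise it is to accumulate citations in a hot area – which is exactly the video-game dynamic described above.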

It seems pretty obvious that pure maths is the only field with a well-functioning reward system that creates good incentives, and I find it amusing how little interest there is in HEP theory or the rest of academic science in considering replicating the maths system in place of the obviously flawed existing systems.

One reason for this is obvious: the people who have risen to the top under the existing reward systems have no interest in replacing them with another system. They think the present system is fine – after all, it allowed them to rise to the top, so it must be a good one, right? 🙂

Since they are the ones running the show, there is no practical chance that things will change any time soon.

As for Bee’s proposals for how to modify/improve the reward system (written on her blog, not so much in the Nature article), adding a bunch of additional metrics on which to assess people, alongside the ones currently in use, seems to me to just be more bean counting that won’t change the “research as a video game” problem. People will game those metrics like they do the current ones.

Dear Reader,

I am afraid I don’t understand your logic. As we showed with Sabine in the article I mentioned, the arguments we were aware of against long-lived remnants are fallacious, as they make unwarranted assumptions. This would apply also to black holes in AdS.

You assert, “Putting aside anything to do with AdS/CFT, AdS acts like a finite sized box, so a state of radiation of total energy M_Pl has a small maximum entropy.” But this is contradicted by the possibility that small black holes have large interiors, as in bag of gold solutions. You go on to conclude that, “there is no way for a Planckian remnant to decay into a gas of (arbitrarily) high entropy quanta whose state would purify that of the Hawking radiation emitted earlier.” Why? To the extent that the overall state is purified, the entropy is low. In any case there is no problem if the remnant doesn’t decay. For example, a baby universe could form, and the quantum information ends up there. (Is there a calculation that shows that the state of a CFT cannot be dual to a baby universe?)

In any case, the published examples of singularity elimination resolving the problem, by Ashtekar et al and Rovelli et al, do not involve slow decay of remnants. Rather, as shown in detail in Rovelli et al, the Planck star explodes on a time scale of M^2, which is very short compared to the time of M^3 that the Hawking evaporation would have taken. (But still long on astrophysical time scales.) So when the Planck star explodes only 1/M of the Hawking photons have been emitted. So the process is nearly reversible and the problem of information loss never arises.
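The timescale comparison here follows from a quick back-of-the-envelope estimate in Planck units (a heuristic sketch, not a substitute for the detailed calculations in the cited papers):

```latex
% Hawking evaporation in Planck units (G = \hbar = c = 1):
\frac{dM}{dt} \sim -\frac{1}{M^{2}}
\;\Rightarrow\;
t_{\mathrm{evap}} \sim M^{3}.
% If the Planck star instead explodes after t \sim M^{2},
% the energy radiated by that time is
\Delta M \sim \frac{t}{M^{2}} \sim 1,
% and since each Hawking quantum carries energy of order T_{H} \sim 1/M,
N_{\mathrm{emitted}} \sim \frac{\Delta M}{T_{H}} \sim M,
\qquad
\frac{N_{\mathrm{emitted}}}{N_{\mathrm{total}}} \sim \frac{M}{M^{2}} = \frac{1}{M}.
```

So for a macroscopic hole (M huge in Planck units) almost all of the information-carrying quanta come out in the explosion, not in the prior Hawking radiation.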

I don’t think we know which of these outcomes arising from singularity resolution is correct. But I fail to see how semiclassical arguments, which assume the singularity remains, have any force once one finds that, as the calculations I mention show, quantum gravity eliminates the singularity. One cannot avoid the fact that there is a region to the future of where the singularity would have been, which contains quanta entangled with photons in the exterior. There is no singularity and this means no information is lost, whatever the fate of the region containing it. This does not contradict QM, rather it is a consequence of taking QM seriously.

Thanks,

Lee

I’d like to disagree with X, having seen a number of tenure cases in pure math in a highly ranked math department. We don’t just count Annals papers, we ask for letters from other highly-ranked mathematicians – i.e., “important people”. And when we do that, the same problematical sociological issues can easily muddy the waters. Somebody working on an unfashionable topic is going to get overlooked unless some miraculous discovery that affects the rest of mathematics comes out of this topic. And to make matters worse, people working on unfashionable topics are not going to get published in the Annals.

On the other hand, it’s clear that the evaluation system in mathematics is working a lot better than that in fundamental physics. But I think that’s because HEP is currently “broken” in several unfortunate ways, not because the evaluation system in mathematics is different and better.

Lee,


I should have been more clear. I hope the following clarifies matters.


First, the following scenario is clearly impossible: Hawking’s result is accurate until the black hole radiates down to Planckian mass, and then, dictated by quantum gravity, the remaining Planckian object evaporates completely into outgoing radiation in a Planckian time, restoring the information. The obstacle is as follows. The early radiation can have a huge entropy; the total state of the radiation is pure; therefore the radiation produced by the Planckian object must also have this huge entropy. But it is impossible for a state of radiation of Planckian energy extended over a Planckian scale to have such a large entropy. A large entropy from a small energy can only be obtained if the radiation has a large spatial extent, corresponding to the long-lived remnant scenario. The same comments pertain to the decay of a black hole in AdS into a pure radiation state: no such state of Planckian energy can have the needed entropy.
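The entropy mismatch in this obstacle can be made quantitative with ordinary radiation thermodynamics (a heuristic estimate in Planck units, assuming a photon-gas equation of state):

```latex
% Thermal radiation of energy E in a region of size R:
E \sim R^{3} T^{4}, \qquad S \sim R^{3} T^{3}
\;\Rightarrow\;
S \sim (E R)^{3/4}.
% A Planckian object (E \sim 1, R \sim 1) can carry only S \sim O(1),
% while purifying the early radiation requires the Bekenstein-Hawking value
S \sim S_{\mathrm{BH}} \sim M^{2} \gg 1.
% With E \sim 1 fixed, demanding S \sim M^{2} forces
R \sim S^{4/3} \sim M^{8/3},
% i.e. a large spatial extent: the long-lived remnant scenario.
```

This is why a large entropy at small energy requires a large spatial extent, as stated above.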


As you correctly state, one possibility is that the black hole singularity is resolved, resulting in a stable remnant that carries the information. But here there are two options. 1) The information in the remnant is hidden behind a horizon or is otherwise inaccessible to the outside world. This is really no different from information loss, as far as the outside world goes, although indeed information is preserved at large. 2) The information in the remnant is accessible to the outside world. Here one runs into the problems of infinite pair production and the like: if the remnant states interact with external degrees of freedom, then it would seem that these remnants will be produced in collisions.


The scenario of Rovelli: this involves a breakdown of semi-classical physics in a regime where the spacetime curvature is arbitrarily small. This is a logical possibility, but clearly far more needs to be done to establish that it is a viable alternative.


Ashtekar et al.: it’s not clear to me what their bottom line is: a stable remnant, a long-lived one…?


I avoided AdS/CFT so far, but you brought up whether it is compatible with information going into a baby universe. A closed baby universe has zero energy and would need to possess an arbitrarily large number of states to resolve the paradox. So the CFT would need to similarly possess a large number of zero energy states. This is not a property of, say, N=4 SYM as far as anyone knows. So AdS/CFT seems to be incompatible with remnant/baby universe scenarios, although admittedly this is not a watertight argument.

In reply to ‘Mathematical Paradise’:

“people working on unfashionable topics are not going to get published in the Annals”

No doubt it’s rare, but it does happen. I know an example, someone who was a former colleague in the maths dept where I worked. His PhD was from a good but not top university and afterwards he got a job at a 4-year teaching college. While there he did the work that resulted in an Annals publication (joint with his former PhD advisor). The field was Discrete/Computational Geometry, which is not a fashionable high-powered area as far as I’m aware. His work no doubt made a major advance in that area, and may have had implications for nearby areas such as algebraic geometry, but I doubt it “affected the rest of mathematics”. Afterwards he was rewarded by a job at a pretty decent research university.

This kind of outcome simply can’t happen in HEP theory/fundamental physics for the reasons I discussed.

“The reward system in maths incentivizes quality over quantity, i.e. working on hard problems that take a long time to solve rather than publishing lots of superficial ‘me too’ papers on the latest fad.”

I think this is far too romantic a view. The reward structure in the math profession results in different problems than that of physics, but it has problems nonetheless.

Math is much more fragmented than physics. My advisor–who has worked in a number of different fields–told me that most mathematicians are idiots when confronted with math from outside their speciality (and he/she said this applies to him/herself). This problem is exacerbated by the fact that research math these days doesn’t allow you the time to develop a broad mathematical culture: a functional analyst will probably not be able to afford learning algebraic number theory for fun, even if he/she might enjoy it and benefit from it.

So when institutions make decisions about funding, hiring, tenure, etc., it often degenerates into wrangling between various specialities: do the harmonic analysts get to hire another harmonic analyst, or do the homotopy theorists get this one?

And this is purely a question of academic politics, not of merit: are there more harmonic analysts in the department; are they cleverer bureaucratic infighters than the homotopy theorists; do they bring more grant money to the department; are they chummier with the deans, etc.

The truth-value of a theorem is objective: it is either true or false (let’s leave aside undecidability for the purposes of the discussion). But the value of that theorem is subjective. Even within a particular discipline, there’s not always expert consensus on the value of someone’s work.

Regarding X’s suggestion that elite journals encourage ambitious and deep work, I don’t think it’s true. Many, even most top mathematicians never manage to publish in such a distinguished journal as the Annals. This is exacerbated by the fact that the editors of such elite journals are very conscious of their elite status. So they’re very conservative: they won’t accept a paper unless they’re sure it’ll benefit the journal’s reputation. That means many great papers are never published in those journals. Not because the work isn’t good enough, but because for some reason or other, it just doesn’t fit with what the editors want.

So it’s a high-risk strategy to try to work on a problem famous enough to warrant publication in a top journal, especially if you don’t have tenure. A young mathematician, unless he/she is very brilliant and has a high tolerance for taking risks, is not advised to try that route.

A far safer strategy is to prove more modest technical results extending the highly specialized line of research already set out by experts in your narrow field. That way those experts will recognize you, they’ll praise you to their colleagues, they’ll ensure your papers are published in their specialist journals, they’ll write recommendations for you, they’ll put your name on grant applications, and you’ll be more likely to get tenure. Of course, it also means you won’t break new ground or innovate very much. But if the goal is to have a career, that’s the path you should probably take.

Young people are supposed to be brave and innovative and willing to gamble on big ideas because they have nothing to lose, but the incentives of academic mathematics encourage just the opposite. The pressure to publish trivial or incomplete work quickly is high and getting higher; the freedom to take time to do really deep and thorough creative work is quickly diminishing; technical prowess is increasingly valued over insight and conceptual clarity. There are exceptions to these rules, of course, but I am speaking of the norm.

The point is not whether the system produces superstars: there will always be people like Terry Tao and Peter Scholze who manage to break through to the highest level by dint of brilliance, hard work, and the luck to find the right problem at the right time. The point is whether the system can sustain a large “middle class”: those researchers who are not superstars, but who make deep and important contributions.

Even those very top people depend on the contributions of many hundreds of mathematicians whose names are rarely cited. Not even Grothendieck could do it all alone!

And if the system becomes so dysfunctional that that “middle class” dies off, then eventually even the superstars won’t have the material they need to produce great work. (An example is Euler’s discovery of the addition formula for elliptic integrals, which was inspired by the work of Fagnano, a much lesser figure).

No doubt math is fragmented, but it is very hard to believe it’s ‘much more fragmented than physics’. Unless physics is understood to mean only fundamental theoretical physics…

I know of quite a few mathematicians from non-illustrious backgrounds whose research areas are not particularly fashionable but who have still been able to have successful careers by publishing in high quality journals. It doesn’t have to be Ann. Math.; there are other journals one or two notches below that, such as Adv. Math. and J. Reine Angew. Math., which are more accessible but still recognized as high quality. When a mathematician publishes in such a journal it confers a quality stamp that is recognized throughout the maths community, including by those whose areas of specialization are completely different.

A person who has these quality stamps will generally get hired and promoted ahead of another person with better pedigree or more fashionable research area who hasn’t managed to publish in journals of the same quality. This is a *great* feature of the maths reward system. Yes it does incentivize people to work on deeper problems. I’ve worked in maths depts, shared offices with pure mathematicians and seen first hand how trying to get published in the best journals they can is one of their main motivations when doing research.

That doesn’t mean they are all trying to solve famous problems to publish in Ann. Math. But when faced with a choice between cranking out a bunch of fairly trivial papers or using the time instead to try to get one paper in a high quality journal, they choose the latter option from what I’ve seen. Maybe not so much at PhD level, but at postdoc level and beyond this seems to be the case.

If such ‘quality stamps’ were introduced in theoretical HEP/fundamental physics it would fix the sociological problems IMO.

Dear Reader,

“Hawking’s result is accurate until the black hole radiates down to Planckian mass” is exactly what is not true here. Hawking radiation is only approximately thermal. There are many ways to show this, including following Hawking’s original calculation and not taking the exact limit of t->infinity, which is anyway unrealistic for any physical external observer. Radiation becomes exactly thermal only at t = infinity, but a finite size black hole will disappear before that. Therefore, Hawking radiation was never quite thermal to begin with.

However, the real question is whether these small deviations from thermality are able to unitarize the process of black hole formation and subsequent evaporation. If there is a singularity at the center, then you are removing part of the initial state by hand. Then indeed, you really need some serious abracadabra to recover unitarity. If there is no singularity, then you do not trace out the ingoing modes. Then these modes contribute to the corrections to the original Hawking result. In that case, the deviations from thermality are able to unitarize the whole process. The corrections are locally very small, but their integrated effect is large and sufficient to purify the density matrix.

No amount of downstairs gossip or griping about those comfortably ensconced upstairs will change the social order, save perhaps revolution, and even that can have no long term effect as far as stratification and exclusionary practices are concerned. The dust always settles in the same place. How many “important people” are involved in this debate? Peter and Lee are well-known, influential, and well established, but they are not the kind of “important people” discussed here. Their tomes questioning the worth of string theory pleased a lot of people – like me – but the response of the “important people” was a shrug. I’ve seen at least two articles extolling the virtues of being theoretically lost: it’s the journey that matters, not the goal. The disruptions of a century ago were driven by mountains of incontestable data from many sources. We no longer have that, and the bombs those early pioneers gave rise to got the attention of the real power brokers. Money rained down, attracting great numbers of individuals who would otherwise never have dreamed of a life in theoretical physics. The power of vested interests increases in proportion to population. That power has withstood all attacks for 40 years.

Unitarity,

If it were that simple people would not have been arguing about this for over 40 years. First, barring a gross breakdown of locality, the presence/absence of the singularity cannot possibly affect the outgoing radiation until the last stages, simply because the singularity (or whatever replaces it) is not in causal contact with the outgoing quanta, including their causal past. Of course, this Hawking radiation is not exactly thermal, but it carries a very large entropy and no small corrections to physics at the horizon can change this (a result proven by Mathur). Getting the information out in the early Hawking radiation (before the hole has shrunk to Planckian size) requires some radical new effect: nonlocality, firewall, Rovelli’s bounce scenario, etc. Trouble is, all these scenarios are invented purely to solve this problem, and have little support or evidence beyond that.
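For reference, the result attributed to Mathur here can be stated schematically (my paraphrase of the theorem, as I understand it):

```latex
% If each emission step deviates from the leading-order Hawking state
% by a correction bounded in norm by a small \epsilon, then the
% entanglement entropy of the radiation increases at every step:
\delta S_{\mathrm{ent}} \;\geq\; \ln 2 - 2\epsilon \;>\; 0
\qquad \left(\epsilon < \tfrac{1}{2}\ln 2\right),
% so S_ent grows monotonically, and small corrections alone can never
% bring it back to zero, as unitarity of complete evaporation requires.
```

This is why evading the conclusion requires either O(1) modifications at the horizon or new physics in the late stages, as listed above.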

Dear Reader,

The absence of the singularity makes all the difference you need. Imagine that an object of mass M collapsed, formed an apparent horizon, but not a singularity at the center. Presumably it trapped everything within the region of the size of 2GM. If there were a singularity at the center, you would have to discard all of this trapped stuff and wait until the black hole reaches the Planck size for some unknown effects to release information about it. However, if there is no singularity, the stuff is just trapped by strong gravitational fields, and it is not lost forever. So when a black hole emits dM of its mass, only stuff within the 2G(M-dM) radius will be trapped, the rest will be released right then (without waiting for Planckian physics). Therefore, information is released progressively, at first very slowly, but then faster and faster as the rate of evaporation grows and the trapped region shrinks. Nothing non-local is needed here.

About Mathur’s result that you mentioned, he is a very smart guy with a great amount of knowledge about this topic. The result he proved is very important, however, it is incomplete. He assumes that only the members of the Hawking virtual pair are entangled, and not the members of different pairs. The off-diagonal elements of the density matrix are exactly the interaction terms that he neglected.

Unitarity,

Let’s put it this way. Up until the very last stages, Hawking’s computation can be carried out while making absolutely no reference to, or assumptions about, regions of large spacetime curvature. So as long as we have approximate locality and causality, any assumptions about the (non)singularity will have negligible effect on the outgoing radiation until the last stages. Therefore, if you want all the information to come out with the Hawking radiation you have two options: 1) arrange for the last gasp of radiation to carry a huge entropy to purify the full state. This is the long lived remnant scenario. 2) modify the early time radiation so that it carries a much lower entropy than Hawking predicts, noting as above that this modification is not coming from effects of high curvature, at least not in a way that anyone understands. This is the firewall/fuzzball/nonlocality/bounce scenario. They are all radical because they need to modify Hawking’s computation by an O(1) amount, even though his computation appears trustworthy at face value. Nothing I am saying here is controversial.

Dear Reader,

I mostly agree with you, up to some details that are important. Hawking flux does not have to be significantly modified. So in a way his result is robust. However, the density matrix is modified by an O(1) amount. Any local modification (say by adding the interaction terms between two emitted quanta) is small, but the integrated modification is of the order O(1), since every emitted particle is eventually entangled with any other emitted particle.

The assumption about the high curvature region is implicit. In the standard picture with the singularity, when an entangled pair is created at the horizon, the member that fell in disappears from the picture. You can’t use it anymore. That is why you get the exactly thermal outgoing state (because you traced over part of the state). But if the ingoing modes are still there, even if they are classically disconnected from the outside region, they still contribute to the emission of new Hawking quanta. After all, everyone agrees that the Hawking pair is entangled across the horizon. If the near horizon region is just Rindler, then nearby particles feel each other even across the horizon. So new Hawking quanta must “feel” what is inside the horizon, unless the black hole is empty space with a singularity at the center.

X,

In some areas (such as number theory), it is enough to make decent progress, like the resolution of a new special case, to be published in the Annals or other top journals. In other areas, your paper had better solve a major open problem to even be sent to the referees. In some areas (dynamical systems, for example), a large influential name can make publishing in the top journals a completely normal state of affairs. All life is social. The good news is that none of this matters and there will always be hungry good young people pushing the envelope.

Reader: “More seriously perhaps, it seems impossible to make this work for a black hole formed in AdS. Putting aside anything to do with AdS/CFT, AdS acts like a finite sized box, so a state of radiation of total energy M_Pl has a small maximum entropy. In this case there is no way for a Planckian remnant to decay into a gas of (arbitrarily) high entropy quanta whose state would purify that of the Hawking radiation emitted earlier.”

A large black hole would not evaporate away (to remnant or not) in AdS, so is there a real information paradox for large black holes in AdS? (I am not talking about firewall like arguments about smoothness of horizon, which are related, but do not concern remnants.)

If instead you are talking about a small black hole, it has an upper bound on its mass (and therefore entropy) precisely so that it can decay to radiation while increasing entropy, no? So in principle the remnant scenario could work?

I am no fan of remnants, but just trying to see where exactly the loopholes are. Thanks.


Question,

A large black hole in AdS does not evaporate because the Hawking radiation reflects off the boundary back into the black hole. But one can change the boundary conditions to allow energy to flow in and out of AdS. This corresponds to coupling to an auxiliary system that collects the Hawking radiation. So one can turn on this coupling, collect the radiation until the black hole becomes small, and then turn the coupling off again. Details can be found in section 4 of arXiv:1304.6483.

reader,

But wouldn’t that violate the premise that you wanted, which, if I understand correctly, was that there is only limited phase space available for the radiation in the box and therefore limited entropy? Isn’t the system that you couple the boundary theory to a proxy for “the region outside the box”?

On p. 20 of the paper you mentioned, they seem to have another (distinct?) argument. They keep the black hole large by pumping in energy, but the Hawking radiation couples to the auxiliary theory and eventually increases its entanglement unboundedly, which they argue is impossible. To me this sounds like enlarging the box and finding that the new box also has a finite (even if bigger) phase space available. But in any event I fail to see how this is related to remnants – the black hole stays large throughout their thought experiment.

Maybe I am missing something basic.

In the same blog post by Dr. Hossenfelder mentioned above, she writes in a response to gilkalai: “The so-called firewall problem, which isn’t a problem but just a mathematical mistake”, so this is yet another cynical description of the current situation in fundamental theoretical physics. (In an older blog post she explained why she considers this “problem” to be a mistake: http://backreaction.blogspot.co.il/2015/10/black-holes-and-academic-walls.html )

Could you comment on that?

Miki Weiss,

I have even less of an opinion on the “firewall problem” than on the black hole information paradox itself. If you’re interested in that, and on Sabine’s views about it, you should discuss it with her at her blog.