My understanding is that the most difficult and contentious decision, that of how and whether to go forward with a new energy frontier collider, has been put off until 2026, when there will be a new update. In the meantime, design work will emphasize studies for the leading contender: a new large circular electron-positron machine. Studies of a linear collider design (CLIC) will continue at a reduced rate. New work will begin on the possibility of a muon collider, as well as other advanced accelerator technologies that might someday be usable.
There will be some movement in the direction of the US program, which has abandoned the energy frontier, including more participation in the US and Japanese neutrino programs. A “scientific diversity program”, Physics Beyond Colliders, will receive new support. This program will try to come up with new experiments that don’t require a new energy frontier machine. For more about it, see this CERN report and this article in Nature.
In other news from CERN, work at the LHC should resume this summer, with the ongoing LS2 extended by a few months because of the COVID shutdown, putting beams back in the LHC late next summer. There will likely be no significant new data coming from the LHC during 2021. The extended shutdown may provide the time for the magnet quench training needed to bring the machine to its design energy of 7 TeV/beam.
The headline news is that this backs the FCC plan: a 100km new ring, first run as an electron-positron collider, then as a much higher energy proton-proton collider. There are however a whole bunch of very significant caveats:
No plan for how to finance this very expensive proposal.
The press release mentions a construction start timescale of “less than 10 years after the full exploitation of the HL-LHC, which is expected to complete operations in 2038”. This is twenty years or so away, a very long time.
The main near-term goal mentioned is work on designing the magnets needed for the proton-proton machine, to know by 2026 whether a pp machine is feasible. If the design of appropriate magnets with an acceptable cost for the pp machine is not possible, the implication is that there would be no point in building the large ring and ee machine.
The main competitor to the FCC plan, CLIC, has not been canceled; work on it will continue.
A new project to try to design a muon collider will be funded, with a decision planned for 2026 about whether to move forward on a test facility. The technology for this does not yet exist (muons decay very quickly…), but if such a collider were feasible, it would be much smaller and likely much cheaper than something like the FCC project.
So, those who want to argue one way or another about whether it’s a good idea to spend a lot of money on building a new collider should rest assured that the future holds many, many more years in which to conduct such arguments…
Update: I find it very frustrating to see that the online discussion of this is dominated by a pointless argument about whether, as reported, CERN should be going ahead and spending more than \$20 billion or so on a new machine. THEY ARE NOT DOING THIS. What has happened is that, after a lot of work, they have identified the best possible way forward at the energy frontier (the FCC proposal) and decided not to go ahead with it now but to keep studying it and the required technologies. If the cost of this proposal had been a few billion dollars, they likely would have tried to come up with a plan to allocate much of the over \$1 billion/year CERN budget in future years to the project and start construction. Instead, for the next six years they are allocating 0.1–0.2% of the CERN budget to further studies of the proposal. Those who have been loudly complaining that this is too expensive a proposal for the HEP community to afford should declare victory, not go to war over this.
Update: The CERN press release has been changed, with “construction” starting within ten years after 2038 changed to “operation” starting within ten years after 2038. This makes more sense; the earlier version seemed absurdly far in the future. My understanding is that the current plan is essentially to put off to 2026 a decision about going ahead with FCC. By 2027 the HL-LHC will be in place, freeing up some money for a new project, possibly the FCC. A 2027 start to FCC construction would allow a start of operations within ten years after the 2038 HL-LHC end date.
However, CERN Director-General Fabiola Gianotti emphasizes that no commitment has been made to build a new mammoth collider, which could cost $20 billion. “There is no recommendation for the implementation of any project,” she says. “This is coming in a few years.”
Way back in the 1980s and 1990s I was, for obvious personal reasons, paying close attention to the job situation for young HEP theorists. It was not good at all: way more talented young theorists than jobs, with many if not most Ph.D.s who wanted to continue in the field unhappily spending many years in various postdocs before giving up and doing something else. By the later part of the 1990s I had found a satisfying permanent position in math, so this problem seemed much less interesting. When I was writing “Not Even Wrong” I did spend quite a bit of time gathering numbers to try to quantify the problem, and wrote about them in the book.
Since then I haven’t paid a lot of attention to the HEP theory job situation, and had hoped that it might have gotten a bit better as the wave of physicists hired during the 1960s hit retirement age, opening up some permanent positions. Today someone sent me a link to a personal statement on Facebook (sorry, but you need to log in to a Facebook account to see this) from a young theorist (Angnis Schmidt-May) who has recently decided to leave the field, for reasons that she explains. These include:
We are put in competition with each other from day one, and only very few of us will be given prestigious positions in the end. Most of us never see a permanent contract, keep jumping from place to place and eventually need to find a second career after having sacrificed our entire 20s and 30s to academia. After having made it through the worst part of this and more or less securing my career, it still made me sick to see young physicists entering this spiral. I felt terrible about encouraging them to continue on this path because it is impossible to tell who will make it in the end and who will end up miserable with regrets…
Science itself is severely suffering from the poor working conditions and lack of genuine career prospects. I personally found it extremely hard to focus on the science while constantly being worried about the duration and location of my next contract. #PublishOrPerish. Interactions with and among colleagues are often dominated by the drive to “show off”. Very few people focus on removing misunderstandings or ask honest questions in order to fill their knowledge gaps. The general atmosphere is dominated by doubt instead of trust. We constantly need to outshine our peers. Better to demonstrate superficial knowledge of broad subjects than to focus on the details of a deep problem. Your next result needs to be “groundbreaking”, otherwise you’re out of a job. But produce it and have it published at least one year before your contract ends because that’s when you need to apply for a new one. Science has become a show…
I see absolutely no chance that any of the above will change any time soon.
She also makes important points about the personal cost of this system:
During the last 10 years, I was forced to constantly move around, losing contact to people who meant a lot to me and not being able to establish new lasting relationships.
Sadly, it seems pretty much nothing at all has changed in the last 30-40 years, and I continue to believe this is one reason the subject has been intellectually stagnant during this period. About the only positive suggestion I can make for anyone who wants to try to do anything about this is to take a look at the analogous job situation in mathematics. My knowledge of this is mostly anecdotal, but my impression is that while, like most academic fields, the career path for a new math Ph.D. is not easy, the situation is not at all as bad as the one in HEP theory described above.
Completely Off-Topic: XENON1T has reported new results today. This seems to me unlikely to be new physics (extraordinary claims require extraordinary evidence), so if you want to follow this story, you should be consulting Jester, not me.
Available at the arXiv this evening is something quite fascinating. Jim Cline has posted course notes from Feynman’s last course, given in 1987-88 on QCD. There are also some audio files of a few of the lectures available here. The course was interrupted by Feynman’s final illness, with the last lecture given just a couple weeks before Feynman’s death in February of 1988. There’s an introduction to the notes by Cline in which he explains more about the course and how the notes came to be.
The course was given over thirty years ago, and many textbooks have appeared since then, but it seems to me this has held up well as an excellent place for a student to go to learn the subject.
There’s a new article at Quanta today promoting representation theory, Kevin Hartnett’s The ‘Useless’ Perspective that Transformed Mathematics. Representation theory is a central, unifying theme in modern mathematics, one that deserves a lot more attention than it usually gets, with undergraduate math majors often not exposed to the subject at all. My book on quantum mechanics is very much based on the idea that the subject is best understood in terms of representation theory. Unfortunately, physics students typically get even less exposure to representation theory than math students.
While I think the article is a great idea, and well-worth reading, I do have two quibbles, one minor and one major. The minor quibble is that one example given of a group, the real numbers with multiplication, is not quite right: you need to remove the element 0, since it has no inverse. If the group law is the additive one, then the real number line with nothing removed truly is a group.
The major quibble is with the theme of the article that a group representation can be thought of as a simplification of something more complicated, the group itself. This is a good way of thinking about one aspect of the use of representation theory in number theory, where representations provide a tractable way to get at the much more complicated structure of the absolute Galois group of a number field. The talk by Geordie Williamson linked to in the article (slides here) explains this well, but Williamson also gets right the more general context, where the group can be easy to understand and the representations complicated. For a simple example of this, consider the circle group $S^1$: the group itself is very easy to understand, while its representation theory (the theory of Fourier series) is much more complicated (and much more interesting).
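To make the $S^1$ example concrete (this is the standard textbook statement, not anything specific to the article): the irreducible representations of the circle group are all one-dimensional, labeled by the integers, and decomposing a function on the circle into them is exactly its Fourier series:

```latex
% Irreducible representations of S^1: for each integer n,
% a one-dimensional representation
\pi_n\!\left(e^{i\theta}\right) = e^{in\theta}, \qquad n \in \mathbb{Z}.

% Decomposing a function on the circle into these pieces
% is precisely its Fourier series:
f(\theta) = \sum_{n \in \mathbb{Z}} c_n\, e^{in\theta},
\qquad
c_n = \frac{1}{2\pi} \int_0^{2\pi} f(\theta)\, e^{-in\theta}\, d\theta .
```

So while the group itself is just the circle, its representation theory already contains all of harmonic analysis on the circle.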
As Williamson explains, a good way to think about what is going on is that representation theory does simplify something by linearizing it, but it’s not the group, it’s a group action. When people talk about the importance of the study of “symmetry” in mathematics, physics, and elsewhere, they often make the mistake of only paying attention to the symmetry groups. The structure you actually have is not just a group (the abstract “symmetries”), but an action of that group on some other object, the thing that has symmetries. When you talk about “rotational symmetry” you have a rotation group, but also something else: the thing that is getting rotated. Representation theory is the linearization of this situation, often achieved by going from the group action on an object to the corresponding group action on some version of functions on the object. Once linearized, the group action becomes a problem in linear algebra, with the group elements represented as matrices, which act on the vectors of the linearization.
To further add to the confusion, “symmetry” is often described in popular accounts as meaning “invariance”. In typical examples given, “invariance” just means that you have a group action, since the group is taking elements of the set to other elements of the set (e.g. rotations not of an arbitrary object, but of a sphere). In representation theory, you have a different notion of invariance. For instance, for the representation of rotations on functions on the sphere, the constant functions are a one-dimensional invariant subspace, giving a trivial representation. But there are lots of more interesting invariant subspaces of higher dimensions. These subspaces are spanned by the spherical harmonics, and they carry the irreducible representations.
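Written out (again, the standard result, nothing new here): the square-integrable functions on the sphere decompose under rotations into finite-dimensional invariant subspaces, one for each non-negative integer $l$, spanned by the spherical harmonics:

```latex
% Decomposition of functions on the sphere under the rotation group SO(3):
L^2(S^2) = \bigoplus_{l=0}^{\infty} V_l ,
\qquad \dim V_l = 2l + 1 ,

% where V_l is spanned by the spherical harmonics
Y_l^m(\theta, \phi), \qquad m = -l, \dots, l .

% Rotations map each V_l to itself, and each V_l carries an
% irreducible representation of SO(3). V_0 is the space of
% constant functions, the trivial representation mentioned above.
```

Each $V_l$ is invariant in the representation-theoretic sense: rotations move the individual spherical harmonics around inside $V_l$, but never out of it.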
I never thought I would see this happen: a university PR department correcting media hype about its research. You might have noticed this comment here a week ago, about a flurry of media hype about neutrinos and parallel universes. A new CNN story does a good job of explaining where the nonsense came from. The main offender was New Scientist, which got the parallel universe business somehow from Neil Turok and from here.
The ANITA scientists and their institution’s PR people were not exactly blameless, having participated in a 2018 publicity campaign to promote the idea that they had discovered not a parallel universe, but supersymmetry. They reported an observation here, which led to lots of dubious speculative theory papers, such as this one about staus. The University of Hawaii in December 2018 put out a press release announcing that UH professor’s Antarctica discovery may herald new model of physics. One can find all sorts of stories from this period about how this was evidence for supersymmetry, see for instance here, or here.
It’s great to see that the University of Hawaii has tried to do something at least about the latest “parallel universe” nonsense, putting out last week a press release entitled Media incorrectly connects UH research to parallel universe theory. CNN quotes a statement from NASA (I haven’t seen a public source for this), which includes:
Tabloids have misleadingly connected NASA and Gorham’s experimental work, which identified some anomalies in the data, to a theory proposed by outside physicists not connected to the work. Gorham believes there are more plausible, easier explanations to the anomalies.
The public understanding of fundamental physics research and the credibility of the subject have suffered a huge amount of damage over the past few decades, due to the overwhelming amount of misleading, self-serving BS about parallel universes and failed speculative ideas put out by physicists, university PR departments and the journalists who mistakenly take them seriously. I hope this latest is the beginning of a new trend of people in all these categories starting to fight hype, not spread it.
She has the following comments on the current state of HEP theory:
What do you see as the future of theoretical physics? Where is the field headed?
Well, I think it’s headed towards insanity [laugh] by itself. I mean, no, if we don’t have experiments, people can let their imaginations run wild, and invent anything without it being verified or disproven. So I think it—I mean, if we want to understand more about what happens at higher energies, we have to have higher energy colliders. I don’t think—well, cosmology is tied to particle physics, and that’s probably something from—I mean, there is a lot of data coming from cosmology. And there is some data that will be coming from very low energy precision physics. But I don’t think that theory by itself—it needs to be kept in line by [laugh] experiments.
And so what advice do you have sort of globally for people entering the field in terms of the kinds of things they should study, and the way they should study those things?
That’s a [laugh]—actually, I often advise people to go into astro-particle physics just because I think that it has more promise of getting data because I don’t—I mean, I strongly believe you can’t go forward without good data, and unless—well, of course, if they do have another generation of colliders, that would be great. I just don’t know if that’s going to happen…
Another very recent interview I found interesting was that of Vipul Periwal. Periwal arrived as a Ph.D. student at Princeton around the time I was on my way out, starting his career right about when string theory hit in late 1984. He worked as a string theorist for quite a few years, ending up in a tenure-track faculty position at Princeton, but then left the HEP field completely, starting a new career in biology. Here are some extracts from his interview:
And what was David [Gross]’s research at this point? What was he pursuing?
String theory. He was just 100 percent in string theory. Right? They just did the heterotic string, and so everyone was — every seminar at Princeton at that time was all string theory. It was all string theory. Curt was working on it, David was working on it. Edward was working on it. Larry Yaffe was probably the only person — no, two people, Larry Yaffe and Ian Affleck were not doing string theory. Not that they couldn’t, but they just would not do it.
So you mean, despite at this point all of the work on string theory, there were still existential questions about what string theory was, that remained to be answered?
There still are.
No one has ever figured out what is string theory. I mean, if you go ask all the eminent string theorists, none of them can answer for you this one simple question. Can you show me a consistent string theory, where supersymmetry is broken?
Was it good for your research? Was it a good time for you [his postdoc at the IAS]?
I don’t think I did particularly interesting research. I did — I mean, I did okay, but I’m not particularly proud of anything I did there, except for one little paper I wrote, in which [laughs] — see, this is called the contrarian part — is I showed — people were very excited about the large N limit, so I took this toy model, and I showed that in the large N limit, it actually produced something nonanalytic, as in like, you could not, in any order of 1 over an expansions, ever see what the answer was that was exact at N equals infinity. So, in other words, it was to me a cautionary tale. Like, you think you’re doing large N and then getting an intuition for finite N. But here’s this very simple model where you can do the calculation exactly, and you can do all your 1 over N expansion as far as you want, and it’ll never tell you [laughs] about what’s going to happen at N equals infinity. But you know, it’s a — at this point, string theory was already at that time pretty much a sociological thing.
What do you mean “sociological”?
So, it’s something that was borne home to me gradually, that there’s no experimental proof. Like, are you a good physicist or a bad physicist? Who’s going to tell? How do you know? Right?
I mean, I’d go and give a talk somewhere, and I remember this very clearly. I went and gave a talk at SUNY Stony Brook, what’s now called, I guess, Stony Brook University. And at the end of the talk, I was talking to one of the faculty there who’d invited me. And he said, “So, what does XYZ think of this work?” And I was just taken aback. I was like, wait, you’re a physicist. I’m a physicist. Why do we need to know what XYZ thinks of this?
Right? That’s what I mean by sociology.
I see. It’s as much about what a certain group of peers thinks about the theory.
Yeah, and this really perturbed me. As far as I was concerned, after the string perturbation theory diverges thing, I was not interested in doing perturbative calculations. So, what the solution was that people did was: okay, we’ll work on various supersymmetric theories where there is no higher contribution, and under the assumption that there is supersymmetry, you can use holomorphicity to deduce things from the structure of the fact that there’s so much supersymmetry. And this really bothered me, as in okay, there’s this really amazingly beautiful structure, and lots of very pretty mathematical results that are coming out — mathematical results that are suggested by these correlations. But I just don’t get — as a physicist, I don’t to want to have to worry about, “What does XYZ think about what I’m doing?”
Yeah, because you’re pursuing a truth, and it’s either true or it’s not. It doesn’t really matter what other people think about it.
Right. I really don’t care. I mean, no matter how much I respect — and I do — Edward, or David, or whoever, I really don’t need to know what they think about my work. Right? I just — anyhow —
How does that attitude serve you in an academic setting, though? Right?
How does that attitude affect you in terms of tenure considerations and things like that?
Yeah, so when I was — no, so I actually — I mean, when I was — well, I have no — I’m really stupid sociologically, as in, I have no instinct for self-preservation. So, I could see I had role models in front of me of how people with tenure…
…succeed, not just getting tenure at Princeton, but getting tenure at very good places after Princeton, too. And I paid zero attention to all this. So, while I was at Princeton, I tried doing some lattice gauge theory.
With this attitude, it’s not surprising that Periwal didn’t get tenure at Princeton. He didn’t soon get job offers elsewhere in HEP theory, and decided in 2001 that it was better to try another field than to keep going in the one he was in. The interview ends with:
Alright. So, really, the last question. What does the big breakthrough moment look like for you? How would you conceptualize this in terms of putting all of this together? What does that big breakthrough look like?
If I could make a prediction that was clinically testable, that would make me very happy.
Do you think you’ll get there? It’s the thing that motivates you.
Yeah. I want — you know, I said this once. We had someone visiting when I was managing the physics seminar at Princeton once, as an assistant professor. So, this guy asked me, “So, Vipul, what are you working on?” And I was very jaundiced at that time about making a prediction. So, I said, “Well, lattice gauge theory,” which, you know, nobody at Princeton did lattice gauge theory. You were all supposed to be doing string theory. I said, “Yeah, I want a number before I die.” [laughs] People are looking at me like, “What kind of lunatic is this?” But you know, a number. That would be nice.
Looking through the old interviews, I found one of very personal interest, that of Gerald Pearson, who worked with my grandfather Gaylon Ford at Bell Labs. Some of his stories mentioned work with my grandfather (whose main expertise was in the design and construction of vacuum tubes) at Bell Labs during the 1930s. During this period both studied at Columbia, where my grandfather got a master’s degree in physics.
Gaylon Ford worked with Johnson. When Kelly was head of the tube department, he worked in that area. And then they had a big shakeup after which the job was no longer available. Much against his desires, he came over to work with us.
In 1938 you were moved over from Johnson’s group into Becker’s. In fact, you and Sears seem to have changed places.
Before that took place, I remember Johnson called me into his office one day and he wanted to know if I would like to work on… well, Buckley had sent a memorandum asking for temperature regulators for buried cable. Johnson wanted to know if I would like to work in this area. Of course, no one likes to change their jobs but I said, “Fine” and we agreed that I would spend a portion of my time on this problem and that’s where thermistors came from. This continued on and it was very successful. Then it was decided that the work fit in better with Becker’s area than it did with Johnson’s. And, well you asked me about Ford. He was the one who was brought from the tube shop to work on this. And then he later went to work on something else.
Let’s see if we can date that time. Ford wasn’t working with you yet. Ford is here with you in 1934. But this move didn’t take place until ’38.
Yes, that’s what I was saying. He first came over to work on change of resistance with temperature. And he was working with a sulfide compound. And then, let’s see, what happened to him. He went someplace else and Johnson called me into his office and asked me if I would like to carry on Ford’s work and we agreed that I should do it part time and still work on noise. But I said I didn’t want to work with sulphur, it smelled too bad. I said if I work in that area, I’m going to use some other materials. So I made a study of that. First I worked on boron and then on a combination of oxides. A lot of my patents are on such materials and devices. These devices are still used today in the buried cable system as volume regulators.
Our semester here at Columbia is finally over, and I’ve put the lecture notes on Fourier analysis that I wrote up in one document here. A previous blog posting explained the origin of the notes: they cover the second half of this semester’s course, from the point at which the course became an online course due to the COVID-19 situation.
Not much blogging going on here, mainly since everyone staying home seems to have kept news of much interest to a minimum.
Defenders of certain failed speculative theories like to accuse those who point to their failure of being “Popperazzi”, relying on mistaken and naive notions about predictions and falsifiability due to Karl Popper. That’s never been the actual argument for failure, and two excellent pieces have just appeared that explain some of the real issues.
Sabine Hossenfelder’s latest blog entry is Predictions are overrated, a critique of the naive view that you can evaluate a physical theory simply by the criterion “Does it make predictions?”. She goes over several important aspects of the underlying issues here, making clear this is a complex subject that resists people’s desire for a simple, easy to use criterion for evaluating a scientific theory.
Over at Aeon, Jim Baggott writes about this under the headline How science fails, focusing on the life and work of philosopher of science Imre Lakatos. I wish I had been aware of the ideas of Lakatos when I wrote a chapter about the complexities of evaluating scientific success or failure in my book Not Even Wrong, since he was concerned with exactly the sorts of issues I was grappling with there.
One of the main ideas of Lakatos is that you should conceptualize the problem in terms of characterizing a research program as “progressive” or “degenerating”. As relevant new experimental and theoretical results come in, is the research program showing progress towards greater explanatory power or is it instead losing explanatory power, for instance by adding new complex features purely to avoid conflict with experiment? One way I like to think of this is that it’s hard to come up with an absolute measure of success of a research program, but you can more easily evaluate the derivative: is some new development positive or negative for the program?
I don’t think there’s any question but that supersymmetry, GUTs, and string theory are classic examples of degenerating research programs. In 1984-5 there was great hope for a certain idea about how to get a unified theory out of string theory (compactification on Calabi-Yaus), but everything we have learned since then has made this hypothesis one with less and less explanatory power.
The Lakatos framework has the feature that there is no absolute notion of failure. It always remains possible that the derivative will change: for instance the LHC will find superpartners, or a simple compactification scheme that looks just like the real world will be found. The not so easy question to answer is when to give up on a degenerating research program. I think right now prominent string theorists are taking the attitude that it’s past time to give up work on the idea of string theory unification (and they already have), but not yet time to admit failure publicly (since, you never know, a miracle may happen…).
And now, for something completely different: If you want something more entertaining to read about particle physics, I highly recommend Tommaso Dorigo’s Anomaly! (see review here). The one problem with that book was that it stopped in the middle of the story (end of Tevatron Run 1). He now is making available some chapters (see here and here) he wrote that cover the later, Run 2, part of the story.
If like most other people you’re stuck at home, and having trouble concentrating on the projects you thought the current situation would cause you to finally find the time to complete, one thing you could do is watch a lot of talks about mathematics online. As far as I can tell, mathematicians are doing much better than any other field right now in dealing with this, since they have a wonderful site developed at MIT called mathseminars.org. It contains a fairly comprehensive set of listings of Math seminars now being run online.
If there’s anything similar in the physics community, I’d be interested to hear about it.
A new book about the problems of fundamental physics has recently appeared, David Lindley’s The Dream Universe: How Fundamental Physics Lost its Way. I’ve been thinking for a while about whether to write about it here, have held off mainly because I felt I didn’t have much interesting to say. Today I see that Sabine Hossenfelder has written a review of the book which I mostly agree with, so you should read what she has to say.
There are a couple of places where I significantly disagree with her. For one thing, unlike Hossenfelder, I’m a great fan of Lindley’s much earlier book on this topic, his 1993 The End of Physics. This was written a very long time ago, at a time when writing for the public about fundamental physics was uniformly positive about the glories of string theory. Unlike all those books, this one has held up well. Reading it at the time it came out, I was struck to find someone else seeing the same problems with the field that seemed to me obvious, providing a very helpful indication that “no, I’m not crazy, there really is something wrong going on here.”
It’s interesting to read Hossenfelder’s take on the way Lindley makes “mathematical abstraction” the villain in this story:
The problem in modern physics is not the abundance of mathematical abstraction per se, but that physicists have forgotten mathematical abstraction is a means to an end, not an end unto itself. They may have lost sight of the goal, alright, but that doesn’t mean the goal has ceased existing.
Here is where I definitely part company with Lindley, and to some extent with Hossenfelder. The current problems with fundamental physics have nothing to do with mathematical abstraction, but with the refusal to give up on bad physical ideas that don’t work. Thirty-six years ago Witten and many other leaders of the field fell in love not with a mathematical abstraction, but with a bad physical idea: replace fundamental particles with fundamental strings. One reason they fell in love with this idea was that it could be fit together with two other bad ideas they had been dallying with at the time, that there are new forces mixing leptons and quarks (GUTs), and that you can relate bosons and fermions with the square root of translation symmetry (SUSY).
Unfortunately it seems to me that many theorists have now drawn the wrong conclusion from the sorry story of the last forty-some years, deciding that what they need to do is to stay away from unwholesome mathematics, and stick to the wholesome experimentally observable and testable. But what if the underlying reason you got in a bad relationship with a seriously flawed love interest was that there weren’t (and aren’t) any experimentally testable ones to be found? Maybe what you need to do is to work on yourself and why you stay in bad relationships: the mathematically abstract love of your life might still be out there.
Witten yesterday posted a definitely not mathematically abstract paper on the arXiv, Searching for a Black Hole in the Outer Solar System. It’s basically a proposal for finding a physical black hole we could then go and get into a relationship with. I can’t help thinking the probabilities are that getting into a healthy relationship with a new mathematical abstraction is more likely to work out than this.
There has been a remarkable discussion going on for the past couple weeks in the comment section of this blog posting, which gives a very clear picture of the problems with Mochizuki’s claimed proof of the Szpiro conjecture. These problems were first explained in the 2018 Scholze-Stix document Why abc is still a conjecture.
In order to make this discussion more legible, and provide a form for it that can be consulted and distributed outside my blog software, I’ve put together an edited version of the discussion. I’ll update this document if the discussion continues, but it seemed to me to now be winding down.
Depending on one’s background, one will be able to get more or less out of trying to follow this discussion, but it seems to me that it makes an overwhelmingly convincing case that Mochizuki’s articles do not contain a proof of the conjecture and should not be published by PRIMS. No one involved in the discussion claims that there is an understandable and convincing proof in the articles. The discussion is rather about Scholze’s argument that there is no way that the kind of thing Mochizuki is doing can possibly work. While Scholze may not have a fully rigorous, loophole-free argument (and given the ambiguous nature of many of Mochizuki’s claims, this may not be possible), the burden is not on him to provide one.
To justify the PRIMS decision to publish the proof, one needs to assume that the referees have some understood and convincing counterargument to that of Scholze, one that nobody has made publicly anywhere. If this really is the case, the editors of PRIMS need to make public these counterarguments, and those mathematicians who find them convincing need to be able to explain them.
A note on comments: if someone has further technical comments on the mathematical issues being discussed at the earlier posting, they should be submitted there. For discussion of issues surrounding publication of the claimed Mochizuki proof, this would be the right place (and I’ve moved a couple recent ones to here). For comments about Szpiro and his conjecture, the posting about him would be an appropriate one.
Update: I hear that the editors of PRIMS are aware of the recent discussion of the problems with the Mochizuki proof, but have decided to go ahead with the publication of the proof anyway. They do not seem to intend to release any information about their editorial process, in particular what counter-arguments to Scholze’s they considered. In effect, they are taking the stand that they have convincing evidence that Scholze is wrong about the mathematics here, but cannot make it public for confidentiality reasons.
Note that the discussion in the comment thread itself has some later entries after the ones gathered in the pdf document I created.