I’ve seen reports today (see here and here) that indicate that Mochizuki’s IUT papers, which are supposed to contain a proof of the abc conjecture, have been accepted by the journal Publications of the RIMS. Some of the sources for this are in Japanese (e.g. this and this) and Google Translate has its limitations, so perhaps Japanese-speaking readers can let us know if this is a misunderstanding.
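For reference, the abc conjecture (due to Masser and Oesterlé) asserts that for every $\varepsilon > 0$, there are only finitely many triples $(a, b, c)$ of coprime positive integers with $a + b = c$ such that

```latex
c > \operatorname{rad}(abc)^{1+\varepsilon}
```

where $\operatorname{rad}(n)$ denotes the product of the distinct primes dividing $n$.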

If this is true, I think we’ll be seeing something historically unparalleled in mathematics: a claim by a well-respected journal that they have vetted the proof of an extremely well-known conjecture, while most experts in the field who have looked into this have been unable to understand the proof. For background on this story, see my last long blog posting about this (and an earlier one here).

What follows is my very much non-expert understanding of the current status of this proof. It seems likely that there will soon be more stories in the press, and I hope we’ll be hearing from those who best understand the mathematics.

The papers at issue are *Inter-universal Teichmuller Theory I, II, III, IV*, available in preprint form since September 2012 (I blogged about them first here). Evidently they were submitted to the journal around that time, and it has taken over 5 years to referee them. During this 5-year period Mochizuki has logged the changes he has made to the papers here. Mochizuki has written survey articles here and here, and Go Yamashita has written up his own version of the proof, a 400 page document that is available here.

My understanding is that the crucial result needed for abc is the inequality in Corollary 3.12 of IUT III, which is a corollary of Theorem 3.11, the statement of which covers five and a half pages. The proof of Theorem 3.11 essentially just says “The various assertions of Theorem 3.11 follow immediately from the definitions and the references quoted in the statements of these assertions”. In Yamashita’s version, this is Theorem 13.12, listed as the “main theorem” of IUT. There its statement takes 6 pages and the proof, in toto, is “Theorem follows from the definitions.” Anyone trying to understand Mochizuki’s proof thus needs to make their way through either 350 pages of Yamashita’s version, or IUT I, IUT II and the first 125 pages of IUT III (a total of nearly 500 pages). In addition, Yamashita explains that the IUT papers are mostly “trivial”: what they do is interpret and combine results from two preparatory papers (this one from 2008, and this one from 2015, the last of a three-part series):

in summary, it seems to the author that, if one ignores the delicate considerations that occur in the course of interpreting and combining the main results of the preparatory papers, together with the ideas and insights that underlie the theory of these preparatory papers, then, in some sense, the only nontrivial mathematical ingredient in inter-universal Teichmueller theory is the classical result [pGC], which was already known in the last century!

Looking at these documents, the daunting task facing experts trying to understand and check this proof is quite clear. I don’t know of any other sources where details are written down (there are two survey articles in Japanese by Yuichiro Hoshi available here).

As far as I know, the current situation of understanding of the proof has not changed significantly since last year, with this seminar in Nottingham the only event bringing people together for talks on the subject. A small number of those close to Mochizuki claim to understand the proof, but they have had little success in explaining their understanding to others. The usual mechanisms by which understanding of new ideas in mathematics gets transmitted to others seem to have failed completely in this case.

The news that the papers have gone through a confidential refereeing process I think does nothing at all to change this situation (and the fact that it is being published in a journal whose editor-in-chief is Mochizuki himself doesn’t help). Until there are either mathematicians who both understand the proof and are able to explain it to others, or a more accessible written version of the proof, I don’t think this proof will be accepted by the larger math community. Those designing rules for the Millennium prizes (abc could easily have been on the prize list) faced this question of what it takes to be sure a proof is correct. You can read their rules here. A journal publication just starts the process. The next step is a waiting period, such that the proof must “have general acceptance in the mathematics community two years after” publication. Only then does a prize committee take up the question. Unfortunately I think we’re still a long way from meeting the “general acceptance” criterion in this case.

One problem with following this story for most of us is the extent to which relevant information is sometimes only available in Japanese. For instance, it appears that Mochizuki has been maintaining a diary/blog in Japanese, available here. Perhaps those who read the language can help inform the rest of us about this Japanese-only material. As usual, comments from those well-informed about the topic are welcome, comments from those who want to discuss/argue about issues they’re not well-informed about are discouraged.

**Update**: Frank Calegari has a long blog post about this here, which I think reflects accurately the point of view of most experts (some of whom chime in at his comment section).

New Scientist has a story here. There’s still a lack of clarity about the status of the paper, whether it is “accepted” or “expected to be accepted”, see the exchange here.

**Update**: It occurred to me that I hadn’t linked here to the best source for anyone trying to appreciate why experts are having trouble understanding this material, Brian Conrad’s 2015 report on the Oxford IUT workshop.

**Update**: Curiouser and curiouser. Davide Castelvecchi of Nature writes here in a comment:

Got an email from the journal PRIMS: “The papers of Prof. Motizuki on inter-universal Teichmuller theory have not yet been accepted in a journal, and so we are sorry but RIMS have no comment on it.”

**Update**: Peter Scholze has posted a comment on Frank Calegari’s blog, agreeing that the Mochizuki papers do not yet provide a proof of abc. In addition, he identifies a particular point in the proof of Corollary 3.12 of IUT III where he is “entirely unable to follow the logic”, despite having asked other experts about it. Others have told him either that they don’t understand this either, or if they do claim to understand it, have been unable to explain it/unwilling to acknowledge that more explanation is necessary. Interestingly, he notes that he has no problem with the many proofs listed as “follows trivially from the definitions” since the needed arguments are trivial. It is in the proof of Corollary 3.12, which is non-trivial and supposedly given in detail, that he identifies a potential problem.

**Update**: Ivan Fesenko has posted on Facebook an email to Peter Scholze complaining about his criticism of the Mochizuki proof. I suppose this makes clear why the refereeing process for dealing with evaluating a paper and its arguments is usually a confidential one.


I’ll be fascinated to follow the technical and other discussions I’m sure this development will provoke. For my part, an observation about peer review in mathematics, which is now an active topic of both historical and sociological research.

One often refers in evaluating math papers to three criteria associated with GH Hardy: is it (a) true, (b) new, and (c) interesting? The key point here is that *none* of these is a binary yes/no question for most mathematical papers. There are always degrees of novelty and interest to a paper, and in most cases it is impossible to absolutely verify every claim of a paper. So an editor (and yes it would look better if the editor-in-chief were not the author) must always make a judgement call weighing those three factors (among other considerations): is it “true enough,” “new enough,” and “interesting enough.” So I wouldn’t find it unreasonable for an editor to say, under the circumstances, that the work’s novelty and (especially) interest make tolerable a weaker consensus about validity to justify publication. Now, it’s another question whether struggling to communicate the proof so far should count against the “interesting” criterion, but it’s hard to say the proof hasn’t generated a lot of interest!

Michael Barany,

There’s no question this is “new” and no question a proof of abc would be interesting. So, this is all about the “true” question: is this a proof or not? Math journals are not supposed to be publishing papers that are not “true”, even if new and interesting. Checking a proof of an important result is a critical role of the math refereeing process.

The real problem here though is that there is an important criterion you haven’t mentioned, quality of exposition: is the paper “well-written”? No matter how new, true and interesting a paper is, if it’s too badly written the journal should return it to the author and tell them they have to rewrite and do better. There’s a good argument that the IUT papers are not readable and checkable by experts in the usual way, so should have been rejected on those grounds.

All, I don’t want to moderate a general discussion of the refereeing system. Comments should be relevant to this story of the IUT papers.

Could someone knowledgeable comment on the effectivity of Mochizuki’s (purported) proof? Brian Conrad posted a detailed comment to an earlier post here explaining that Mochizuki needed a reduction step involving Belyi maps that made the implied constant non-effective; after that comment was made, however, Vesselin Dimitrov posted a paper on arXiv claiming to replace this reduction with a constructive one. Has Mochizuki commented on this? Is he now claiming to have an effective estimate on the constant (and hence effective bounds for the Mordell conjecture)?

Edward Frenkel finds the possible conflict of interest worrying enough that he deleted his tweet @edfrenkel saying the publication was a ‘big deal’.

“Nottingham the only event bringing people together for talks on the subject”

There was an additional conference in Kyoto which supposedly went a little better.

https://www.maths.nottingham.ac.uk/personal/ibf/files/kyoto.iut.html

Stephen,

I was referring to “since last year”. The Kyoto meeting was July 2016 and I wrote about the situation then here:

http://www.math.columbia.edu/~woit/wordpress/?p=8663

As far as I can tell, little has changed since then. There was a meeting in Vermont Fall 2016, but for whatever reason over the last year there has been very little in the way of attempts to bring people together to discuss IUT.

From old pages found on archive.org it appears that S. Mochizuki became the editor-in-chief of PRIMS some time between March and May 2012. So most likely he was the editor-in-chief already when the paper was submitted.

If it is indeed true that the papers are going to be published in PRIMS, I think there needs to be some sort of explanation about how the refereeing process was handled.

I’m a philosopher, not a mathematician, so I am unable to assess M.’s work on its merits. (Although it sounds like none of you can, either.) But I have thought a bit about how this story has developed over the last few years, and I worry that it is more consonant with a hoax than a mathematical discovery of the first rank.

Consider: (1) All the new jargon in the proof; (2) the inability of top scholars to even understand the proof’s strategy; (3) the convoluted ways in which M.’s papers refer to each other and to themselves; (4) M.’s unwillingness to explain his work; and (5) the many “meta” comments in the papers (about, e.g., people who examined the proof and found it compelling).

And now (6): We get what the community has long desired–submission of the work for peer review. But of the dozens of respectable journals which M. could have chosen to undertake this task, he chose the one that is HOUSED IN HIS OWN RESEARCH CENTER. Indeed, M. is the journal’s EDITOR-IN-CHIEF. This is, of course, unacceptable even in pedestrian circumstances–which these most certainly are not.

I am confident that the truth of the matter will, in time, come out. It may be that M.’s genius is so profound, and so sui generis, that the conventions that apply to everyone else (Tao, Villani, et al.) do not apply to him. I mean that seriously. I am only suggesting, here, that there are reasonable worries.

This could become very interesting in terms of the sociology of mathematics. What if, purportedly using Mochizuki’s work, other well known results/conjectures are re-proved/proved in a manner that is understood? This will still not be an acceptable proof to (most of) the mathematical community but it would raise eyebrows. Take it up one more notch: what if, again purportedly using Mochizuki’s methods from his abc work, new conjectures/results are created that can be proved or re-proved using other well accepted techniques? There is a possibility that a path to understanding Mochizuki’s techniques could be interpreted through their mathematical application.

Re “new,” “interesting” and “true”: it is possible to have the “true” part decided by an automated proof-verifier system such as MIZAR, removing all doubt, and providing a far higher standard of truth and believability than has occurred over the vast majority of all prior history of mathematics. (Well, you could still worry that the proof-verifier software, or hardware, is buggy, but I nevertheless claim any proof passing MIZAR is probably more rigorous and more valid than essentially anything in all prior mathematical history.) That is the good news. The bad news is, Mochizuki would have to codify his work in the MIZAR language (or another proof-description language, several are available), which is probably an immensely long and hard task.

Normally “formal proofs” are somewhere between 5 and 30 times longer than papers giving “informal proofs.” It used to be 30 but over time these systems have acquired giant libraries of already-proved lemmas, and by using them your proof usually can be shortened. So if Mochizuki’s stuff can be proved informally in 500 pages then the formal proof would be expected to be 2500 to 15000 pages.

One of the very few people who has ever been willing to work that hard is Thomas Hales, who proved the “Kepler conjecture” that the densest packing of equal balls in 3-space is the FCC packing (and various co-equally dense ones), and eventually did so “formally.”

Formal proofs tend to be great at demonstrating correctness, but horrible at conveying understanding to other humans. In Hales’ case the ideas behind his proof are capable of being understood by other humans, and have been, but the full details require very large computations no unaided human could perform. Mochizuki’s proof presumably/supposedly is nicer than Hales’ in the sense that everything is doable by an unaided human.

Tom,

I think the “hoax” scenario is highly implausible. To create a hoax of this kind on purpose, one that would resist experts’ attempts to see their way through it, would arguably be even more difficult than coming up with a plausible proof the author believes. All indications are that there’s a well-thought-out strategy of a proof here, based on ideas that Mochizuki has been developing, with some success, for a long time. The problem is not that he doesn’t explain the strategy of the proof; it is more that it is so complex and uses such unfamiliar ideas that others have great trouble understanding it in the deep sort of way that is needed to be able to convince oneself that the logic is air-tight. I’m sure Mochizuki is convinced he has an air-tight argument; the problem is that other experts need to be convinced, and that isn’t happening. Part of the problem here is that some of the usual ways these things get resolved haven’t happened due to Mochizuki’s unwillingness to travel. For instance, a more usual scenario would be that someone claiming such a proof would accept one or more invitations to speak at a home institution of other experts, then give a series of lectures there where such experts could interact with him, going through the proof at whatever pace was necessary for them to follow it.

As for publishing this in the journal he is editor in chief of, presumably that was done using whatever their standard mechanism is for editors to publish in their own journal, removing themselves from the refereeing process. Not a great idea in this case though, especially since arguably whoever the editor was should have been telling the author the papers needed to be significantly rewritten before they could be checked. Without knowing the details of the refereeing process though, it’s hard to know whether it really was problematic.

ET,

A large part of the problem here is that the methods used to supposedly prove abc have not found other applications. Normally one would expect a new set of methods like this to find several different applications. The most straightforward way to see there is a problem with a theorem is by looking at the things it implies, and finding one known to be not true by other methods. This bypasses the need to check every detail of the argument, you then know that somewhere one such detail is wrong.

Warren D. Smith,

I suspect the effort needed to rewrite this proof in a machine checkable form is many times larger than what it would take to rewrite it in a form experts can more conventionally check (which would be much more useful anyway).

I think the referees’ names should be made public; otherwise, if they are not willing to stand personally and publicly by the acceptance of the papers for publication, the papers shouldn’t be accepted. I feel this is a stronger position than standing up at a conference and saying “I’ve spent 700 hours studying IUT and I think it’s ok”. At least two of Wiles’ referees are known, as one of them is on public record as saying how they found the initial mistakes in the proof of FLT.

Peter,

You say, “Without knowing the details of the refereeing process though, it’s hard to know whether it really was problematic.” I do not agree.

Let us consider the best-case scenario: M. “received” the paper. He then sent it to two top mathematicians (Lurie, Okounkov). They reviewed the 1,000 pp. (or whatever), and judged everything to be sound. They reported that to M. with their highest recommendation for publication. There’s STILL a problem! Why? Reviewers don’t accept papers–editors do. And you can’t be objective about your own work.

In contrast, suppose we find out that Go Yamashita was one of the referees. Now we have a problem. Unless the rules are different in math, this work can never be published in another peer reviewed journal. For that reason, and because it is impenetrable, it’s never going to receive outside scrutiny.

Why is it being handled like this? The whole situation makes no sense. In circumstances like this, it is important to consider the incentives that people face, and to ask whether there are simple explanations.

Pingback: The abc conjecture – The nth Root

Tom:

There are standard protocols in place for an editor to be entirely removed from the evaluation process of a paper in which there is a conflict of interest (e.g., the submission doesn’t go to that person, and the final decision can be made by some designated set of editors not including the one with the conflict of interest). The editor-in-chief need not have any more or less significant role in the decision process than other editors; their position may be purely for bureaucratic rather than scientific purposes.

I am an editor at a journal for which the editor-in-chief does not get involved in the scientific evaluation or the acceptance process at all; their role is to handle other aspects of the journal’s mission. So though it does indeed not look so good, in principle that doesn’t mean it must taint the process. However, you do raise the very apt point that the handling editor(s) bear just as much if not more responsibility than the referee(s) for the acceptance of the papers (and in particular for not forcing the author to carry out a substantial rewriting of the work to improve the clarity for experts in arithmetic geometry).

David Roberts:

Your suggestion wouldn’t address the core issue that the writing style of the papers (which remains essentially the same as 5 years ago) does not communicate key ideas and insights in a sufficiently understandable manner to the wider community of arithmetic geometry experts. Having referees attest to their confidence in the correctness in a public manner in their role as referees won’t help since no explanations of ideas are thereby conveyed. So abandoning the important principle of confidential refereeing would not accomplish anything in this case.

What I wrote just over 2 years ago in paragraphs 5 through 8 of section 6 of my notes on the Oxford workshop at https://mathbabe.org/2015/12/15/notes-on-the-oxford-iut-workshop-by-brian-conrad/ seems as apt today as it did back then.

Re: formal verification:

The formalization of the Feit-Thompson Odd Order Theorem in Coq took about six years, though a lot of that was R&D into how to formalize a proof that complicated, something that hadn’t been done before.

I would vaguely guess that a formalization in Coq of Mochizuki’s work by people who understood it and had a reasonable background in formal verification might take a comparable period — the proof is significantly longer, but proof engineering (and that’s a real thing) is much better than it was when Gonthier began the Feit-Thompson effort. (This is, of course, only a guess, one only finds out by trying.)

(Note that Mizar is quite old; I don’t think I would even attempt to use a system for this that wasn’t, like Coq, based on some sort of modern type theory. Coq and Isabelle/HOL are probably the only realistic choices, I’d say, though perhaps Lean might be mature enough at this point; I haven’t tried using it for anything real.)
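To give non-specialists a sense of what machine-checked mathematics looks like, here is a toy example in Lean 4 (chosen purely for illustration; it has nothing to do with IUT). The proof assistant’s kernel verifies every step mechanically, which is the standard of certainty the formalization efforts discussed above aim at:

```lean
-- Toy example: the sum of two even natural numbers is even.
-- Each step is checked by Lean's kernel; nothing is taken on trust.
theorem sum_of_two_evens (m n : Nat)
    (hm : ∃ a, m = 2 * a) (hn : ∃ b, n = 2 * b) :
    ∃ c, m + n = 2 * c := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      -- m + n = 2*a + 2*b = 2*(a + b), using distributivity
      exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

A formalization of a 500-page argument would of course be many orders of magnitude larger than this, which is the point of the page estimates given earlier in the thread.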

Pingback: The ABC conjecture has (still) not been proved | Persiflage

BCnrd: I would hope that the added profile of the referees’ responsibility in asserting correctness would make them extra careful in their assessment. If the referee reports were also public, so we do not have to rely on M’s cryptic comments of the form “Corrected slightly erroneous phrasing at the beginning of the second sentence of the statement of Corollary 2.4, (i)” as posted to his website (along with such banalities as “Corrected a misprint (“(from above)” —> “[from above]”) in Step (x) of the proof of Corollary 3.12″), then we would have a better idea of how they purportedly analysed and “understood” the proof.

I do not think the anonymity of referees is such a sacred cow that, in the case of what could be a result of immense import, there cannot be openness in the process. It would give people a mud-map of how others worked through the material if the documents were not secret, and give some idea how to proceed. If the referees are in fact people who have already gone on record as stating they think the theorem is correct, and said people are in M’s orbit, then that also gives people an idea of how much to trust the process.

On a different note, I saw recently some updated comments released on M’s website that introduced some terminology for something that was an existing concept (a certain topological group being second-countable was given some opaque adjective, like being “Galois”, and then every single peculiar object that relied on this was decorated with the adjective). In principle one could go through the background papers and remove all the cruft, or just rename objects to something less “creative”. Some kind of dependency graph as the Stacks project has for the various theorems/definitions would illustrate what is really needed, and what is not. Crowd-sourcing efforts like these I feel would be more productive than wringing one’s hands and saying “it’s impenetrable”.

David Roberts:

Based on 5 years of experience of talking with very many top professional number theorists, I can say with extremely high confidence that the referees have to be among those in M’s orbit: there is nobody else out there who has been willing to say even off the record that they are confident the proof is complete. Indeed, even some of the speakers at the Oxford IUT workshop who have put a lot of effort into the papers remain uncertain if the proof is complete, particularly confused about the crucial 3.12 in IUT3. So although I can understand the perspective from which you hold out hope that identifying the referees may lead to a clarification about the mathematics, I am sorry to say that I am very confident that it will not help at all in the end.

The unusual terminology for existing notions is a distraction, but if that were the primary difficulty then this unusual saga would have been resolved long ago. I’m doubtful that dependency graphs or crowd-sourcing are going to help either. Maybe it’s a reflection of my Luddite personality, but I’ve never understood the purpose of dependency graphs for learning or measuring the complexity of a serious piece of mathematics: the way to learn is to sit down and read and follow back the references to earlier results until one is done (this is how I read EGA, for example: begin at the interesting stuff and go backwards), and to evaluate difficulty one should simply know the ideas and proofs. For example, when I look at dependency graphs in the Stacks Project for results I know well, I find them to be quite irrelevant to measuring difficulty. (To be honest, I don’t understand at all what real purpose those are meant to serve in terms of understanding mathematical ideas. I’d love to be enlightened about that.) Likewise, I don’t see how crowd-sourcing can be expected to reveal ideas in a high-level paper on arithmetic geometry when experts at the level of Faltings are completely mystified. It is a much harder task than DeepBlue or alphaGo defeating the world master. 🙂

Frank Calegari’s blog post from this evening does an excellent job of summarizing the current view of many professional number theorists:

https://galoisrepresentations.wordpress.com/2017/12/17/the-abc-conjecture-has-still-not-been-proved/

Regarding the question of the paper being accepted: In my view, it adds essentially no new information (regarding their correctness) to learn that the papers have been refereed and accepted. Because if the refereeing process was meaningful, then it was undertaken by experts who are known to the number theory community. (I don’t mean that the identity of the referees is known; just that the referees had to have come from a known group of experts. There is no outside group of referees who can adjudicate the situation separate from the various experts who have already tried to come to grips with the papers.) And in this case, the number theory community essentially knows where every expert stands on this matter. Most regard the situation as ambiguous at best, and are not confident in the correctness of the papers. Some experts do claim that the papers are correct, and that they understand the mathematics — but they have not been able to explain the papers, or their understanding, to a wider audience of experts, and so this wider audience remains unconvinced. It seems almost a necessary conclusion that the referees came from this second group of experts, given that they did certify the correctness of the papers, and so there is no reason to give their views any more credence just because they are now being expressed a second time in the guise of the opinions of the anonymous referees.

The bottom line: no expert who claims to understand the arguments has succeeded in explaining them to any of the (very many) experts who remain mystified. This situation hasn’t changed just because some of the former experts have now weighed in wearing the hat of an anonymous referee.

In this comment I would like to make three somewhat perhaps bizarre observations on the matter at hand. The first is that I think the publication of these papers doesn’t matter. The second is that all the fallout from the Mochizuki ABC situation has already happened. The third is that it was unavoidable that these papers would be published.

The publication of these papers doesn’t matter as we will just continue exactly as we have with other published papers that contain errors and/or have incomplete proofs. We will continue to try and prove the ABC conjecture exactly as we were doing before the publication of these papers. The results of these papers will not be used more than they already are; in fact I would think, because of the brouhaha involved, there is much less chance of referees overlooking a reference to these papers than there is of referees overlooking a reference to one of the (small but nonzero) number of completely erroneous papers in the literature. It is harder still to catch the use of an erroneous lemma in an otherwise completely correct math paper, especially if the authors are famous.

The fallout has already happened, since if we could use the ideas and methods in these papers to successfully prove something as important as ABC, then number theorists and arithmetic algebraic geometers would already have figured out how to use it to prove more. In fact, you’d have many people working on refining the method, reformulating it, and improving the precise bounds found by M. Especially recently we have seen that a breakthrough very quickly leads to a much improved exposition as well as much improved bounds. Two examples are: (1) “bounded gaps between primes” proved by Yitang Zhang which led to a very successful polymath project, and (2) “bounds for cap sets” by Croot-Lev-Pach which led to a paper by Ellenberg-Gijswijt and then via Tao to slice rank, etc, etc. The stereotypical lone math researcher working in an attic still exists, but I would say more and more of the advancement of highly technical fields in mathematics (such as arithmetic algebraic geometry) relies on having larger groups of mathematicians distilling, refining, combining, and adding pieces to construct a larger whole.

It was unavoidable that these papers should be published. In 2012 M convinced himself he had proven the ABC conjecture (as predicted by M several years earlier), after working very hard and long (starting around 2000 or 2002). Of course getting the papers published was then the next logical step. Not getting the papers published would be a public humiliation. I think the outcome we have now is strange but we can deal with it easily and it is best for M. My hope for the future is many more papers of M such as his amazing papers on the Grothendieck conjecture.

A colleague of mine who reads Japanese tells me that the article does not explicitly say that the papers have been accepted, only that they are “forecast”（見通し）to be accepted.

I am afraid your posts related to Mochizuki’s theory contain many incorrect statements.

I do not have time to go through everything. Here are just few examples.

“while most experts in the field who have looked into this have been unable to understand the proof.”

Who do you call an “expert”? Someone who knows her/his narrow area of vast arithmetic geometry or number theory? What makes such a person an “expert” in Mochizuki’s theory or anabelian geometry?

Have you read Fesenko’s interview with the AMS?

I’ve talked with several experts; they understand the proof and agree it is correct.

How many experts (who at least know anabelian geometry) have you talked with, to make your statement?

“Looking at these documents, the daunting task facing experts trying to understand and check this proof is quite clear.”

The task is daunting if you lack enthusiasm. If you lack enthusiasm, it is better to switch to another area.

Do you expect that the most fundamental achievement in modern mathematics will be simplified in the first years of its existence to the extent that everyone can easily read it?

How many people understood Deligne’s proof of GRH in positive characteristic (the Weil conjecture) when it was published?

How many people understood Voevodsky’s proof when it was published?

I’ve heard about several PhD students who have already studied Mochizuki’s theory in full. I see that several talks at the two international conferences on IUT were given by students.

“the current situation of understanding of the proof has not changed significantly since last year”

What is the source of your information? Have you talked with several experts?

I’ve heard that the number of experts is now 13-17.

“A small number of those close to Mochizuki claim to understand the proof”

What do you mean by “close”? There are 5 or more experts outside Japan.

What is “a small number” for you and what do you compare it with?

Are you aware that there are already more (maybe, two times more) people who understand Mochizuki’s theory than the number of people who understood Deligne’s proof of GRH in positive characteristic or Voevodsky’s proof of the Bloch-Kato conjecture at the time of their publication?

“but they have had little success in explaining their understanding to others.”

Have you talked with those young researchers who have mastered IUT by learning from the other experts? Do you imply that everyone who has understood Mochizuki’s theory should stop doing everything else and dedicate her/his full time to explaining the theory to others?

Have you read the excellent survey by Mochizuki (2016)?

I’ve heard about 5-6 reviews of parts of IUT being written for the proceedings volume of the RIMS workshop on IUT.

“The usual mechanisms by which understanding of new ideas in mathematics gets transmitted to others seem to have failed completely in this case.”

What is your evidence? How do you measure the failure? Have you at least talked to people who attended the RIMS workshop on IUT?

I am afraid your post demonstrates a lot of disrespect to the truth, to Mochizuki, and to those people who have worked hard to become experts in Mochizuki’s theory.


Mochizuki should just go back to working and not focus on what the occidental community thinks about what he’s done. The math is there, and if it is useful it will be used. The subject is too interesting, and a young mind somewhere, not burdened by hardness, will eventually take it up and clarify things for those that seem incapable of working outside of their comfort zone. I think he wastes his time now trying to be a google math translator for intractable, inflexible thinkers.

Re: formal verification. I would like to try and refute a conjecture made by Perry Metzger above. Metzger says

“I would vaguely guess that a formalization in Coq of Mochizuki’s work by people who understood it and had a reasonable background in formal verification might take a comparable period” [6 years, the approximate time it took to verify Feit-Thompson in Coq].

Unfortunately the two situations are completely incomparable. The reason Feit-Thompson was verifiable in Coq in finite time was that there is a very well-written two-book project (Bender-Glauberman, published 1994, and then Peterfalvi, published 2000) which gives all the details of the Feit-Thompson proof at a level understandable by a graduate student in group theory without any “extra training”. In other words, the books are undoubtedly _mathematics_. This part is the job of humans, and humans did it well. Humans “pre-formalised” the proof, if you like, making it clear how each step could be checked from the axioms of mathematics without any pretence of requiring 300 hours of training.

Because this had already been done, a team of computer scientists and mathematicians could then go on and really formalise the proof in a computer proof verification system. The paper describing the formalisation of Feit-Thompson in Coq explicitly explains this essential prerequisite on its first page.

Humans have not yet finished the pre-formalisation part of Mochizuki’s work. The papers contain claims which many members of the number theory community regard as unclear. A team of computer scientists cannot work with a document containing assertions that can only be unravelled by asking a small group of people who claim to understand the proof, especially if the response is that the computer scientists should just go away and do 300 hours of training. Because the pre-formalisation is not yet finished, the formalisation cannot yet begin, and until the pre-formalisation is finished I think it would be very unwise to speculate about the length of a possible future formalisation project.

I speak as someone who has been formalising his first year mathematics undergraduate lectures in Lean this term, and can verify first hand that whilst learning the language of these systems is a barrier to entry, if you fully and properly understand an argument as a mathematician there seems to be no barrier in formalising the argument in Lean other than the simple fact that it can sometimes take quite a long time to convince a computer that something is “obvious”.
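To make concrete what “convincing a computer that something is obvious” looks like in practice, here is a minimal Lean 4 sketch (the lemma name `zero_add'` is my own, chosen to avoid clashing with the library’s `Nat.zero_add`): one direction of a trivial identity closes instantly, while its mirror image forces an explicit induction.

```lean
-- `n + 0 = n` holds by definition in Lean 4 (addition recurses on
-- the second argument), so `rfl` closes it immediately.
example (n : Nat) : n + 0 = n := rfl

-- The equally "obvious" `0 + n = n` is *not* definitional: the
-- proof assistant demands an explicit induction on `n`.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Neither fact is deep, but only one of them comes for free; scaled up to a 500-page argument, this gap between “obvious to a mathematician” and “accepted by the kernel” is exactly why the pre-formalisation step matters.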

I don’t want to get into an argument in comments, and so I will most likely not follow-up if there are any responses to this comment. That being said, I would like to address some of the assertions and implications in a couple of the comments above.

The reason for this is that there are many people (including e.g. science journalists, and their lay readers) who are interested in the status of ABC, and who are not in a position to independently evaluate it, and so who are relying on the reports and commentary of those closer to the action to help them in forming their own opinions and making their assessments. Because of this, I think it is important to give some indication of how the situation with ABC is different to that with other previous breakthroughs.

I am not old enough to have seen the reception of Weil I and Weil II first hand, but I don’t think there can be any comparison with the reception of IUT. As Persiflage discusses in his blog post, very soon after the Grothendieckian revolution in algebraic geometry began, there were many centres of expertise in the theory, all over the world. Deligne’s strategy for proving the Weil conjectures incorporated many known methods, and could be explained in those terms: sheaf-theoretic methods building on the classical technique of Lefschetz pencils, combined with other ideas such as tensor-product amplification, and certain methods of estimation familiar from analytic number theory. The proof could be broken down and explained in these terms (and there is a lovely article of Nick Katz that does this). There is no doubt that the proof was immediately accepted by the community at large, and immediately recognized as a breakthrough.

I was a graduate student when Voevodsky’s work first appeared, and saw first-hand the process of it being disseminated in seminars at Harvard. Many people, some junior, some senior, attended these seminars. His methods were almost immediately taken up and further developed by other experts, and there were existing frameworks in which they could be explained and understood: they combined and built upon methods from the theory of algebraic cycles, methods from sheaf-theory and homological algebra, and methods from homotopy theory. Experts from these various areas were quickly able to grasp various of the essential aspects of the arguments, and further develop them. (This is the phenomenon that has *not* happened regarding IUT — the methods have not been understood and taken up by a panoply of other researchers.)

The suggestion that the experts who do not understand IUT are not *true* experts, but are instead hidebound and lazy, and should not have their judgement listened to, is not reasonable. It is close to a no true Scotsman fallacy.

To give another example: when the papers of Wiles and Taylor–Wiles appeared announcing the proof of modularity and FLT, they used methods from the arithmetic of modular forms and the deformation theory of Galois representations. The best-known experts on these methods, who had written papers on these precise topics (besides Taylor and Wiles themselves) were perhaps Gross, Hida, Mazur, and Ribet. Faltings was a celebrated expert in arithmetic geometry and neighbouring topics, but had not written papers that were as close in technique and subject matter to the W-TW arguments as these latter named experts. Nevertheless, he was certainly able to read their papers, understand the arguments, and he famously contributed one of the first simplifications to the crucial TW patching argument.

If the situation with IUT were comparable to the situation with previous breakthroughs, there would be plenty of experts outside the immediate orbit of its author, and outside the specific field of anabelian geometry, who would have successfully engaged with the arguments, and could comment with certainty on various aspects of the proof. This is manifestly not the case, and the situation with IUT is manifestly not comparable to the earlier situations with the Weil conjectures, the Milnor conjecture, or modularity/FLT.

Stephane Hubl:

I too have spoken with some experts who are convinced the proof is complete, and I am sure that they are sincere in their conviction. But I have also spoken with a rather larger number of experts in anabelian geometry as well as some participants and speakers at the RIMS workshop on IUT who have all invested tremendous amounts of time in the study of the IUT papers and remain frustrated by the task of extracting essential ideas from the papers in their current form (though in time some of them are making modest progress in parts). In particular, there are very well-informed speakers from multiple IUT workshops who remain puzzled by key steps. This cannot be disregarded.

The bottom line is that there has been a serious breakdown of the usual modes of communication of mathematical ideas. This has nothing to do with the question of whether or not the IUT papers contain a complete proof or at least the crucial ideas which yield such a proof, and what follows is not focused on that, but rather on the unprecedented communication breakdown that is the primary concern of many. (Just to be clear, I think that Mochizuki is a tremendously talented mathematician, with a remarkable work ethic.) Matt Emerton has addressed this well in his comment above, and I’d like to share some of my own thoughts on this (and then I too will probably bow out from further comments here).

Stories of PhD students who have “studied Mochizuki’s theory in full” cannot be taken to mean (in the absence of more specific information) that they have understood much when it comes to the difficulties that have confronted experts in anabelian geometry. We all know plenty of examples of well-intentioned PhD students who delude themselves into thinking they understand a piece of mathematics much better than they really do. V. Dimitrov was quite up front in his own presentation at an IUT workshop that he was treating the details as a black box while seeking to make some refinement to get a better conclusion. His experience doesn’t indicate anything about the status of IUT (as I am sure he would be the first to acknowledge). So likewise even if a PhD student makes some new observation about one facet of things, that fact itself has no bearing on the overall concerns that persist. Moreover, the fact of a student — or anyone! — giving a talk at such a workshop says nothing about the person’s grasp of the overall theory or the technical details. It makes good copy for a journalist to write about energetic bright young people making great progress while the experts complain about being confused, but such stories have little to do with reality.

What is making this situation so different from that of other big advances used by many mathematicians who haven’t understood the technical details of the proof is that even after many years have passed there still hasn’t been the wider grasp of some key new insights and how they plausibly fit together overall to yield a result of the desired type. It is of course a weaker kind of understanding than technical mastery of the entire argument, but is the kind of “reality check” that mathematicians have always used when judging things and it is usually addressed by the Introduction. The reason this has been rather more difficult than is usually the case is likely due to some singular communication aspects (the unconventional writing style and the decision by the creator not to travel widely to give talks about it) that can hopefully be addressed in time as other explanations are created and especially as the ideas within are used to do other things.

The comparison to Deligne’s GRH is based on a flawed perception of the situation at that time (and it is also not a fair standard anyway since, as Matt Emerton notes, that proof was done within a framework that had existed for a decade and had already been worked through in seminars around the world). At the time of Deligne’s publication of his proof of GRH in positive characteristic, experts in algebraic geometry from around the world understood the proof right away, certainly far more than at present for IUT. Although some specific details about the proof of the Lefschetz trace formula hadn’t yet been published (which apparently delayed Deligne’s Fields Medal from 1974 to 1978), those ideas had been disseminated in seminars at a variety of places. And more importantly, the key insights of Deligne’s method were rapidly understood and within a few years Deligne had gone much further with his Weil II paper, and progress kept going in those directions (with new applications emerging, etc.).

An ongoing point of frustration, even among many leading experts in anabelian geometry, has been to extract essential new insights from the IUT work, to get a clearer sense of what is making it tick (so to speak). This is of course asking for much less than a complete command of the entire proof. To the extent that doesn’t happen and the methods are not used to do other things, the result can be treated as was Hironaka’s resolution of singularities. That proof at the time of its publication was fully understood in its details by almost nobody, but (i) the broad outlines of the method were grasped, including key new ideas (such as normal flatness in commutative algebra) that were used to do other things, and (ii) it was written up in an entirely conventional writing style. Together, (i) and (ii) probably explain why the result was quickly accepted as having been proved, even though Hironaka’s intricate nested inductive arguments with delicate invariants were too exhausting for most to get through at the time (even defeating energetic brilliant young people such as Deligne). The reliance on resolution of singularities in proofs of other results was tracked similarly to how one tracks reliance on a widely believed conjecture, and in time a small but dedicated group of experts in the details did emerge and gradually streamline and improve the techniques over many years until an understanding was reached that could be more widely disseminated. One can hope that eventually something similar will happen in the present case. In the meantime, I share the sentiment at the end of Lucifer’s comment above and I hope that the modes of communication that have broken down in this circumstance can eventually be repaired.

Brian, Matt, Kevin, why are the three of you not seriously studying IUT? Why did you not attend the Kyoto conference on IUT?

Matt, you are not presenting the history of Voevodsky’s proof correctly.

Brian, you write, “What is making this situation so different from that of other big advances used by many mathematicians who haven’t understood the technical details of the proof is that even after many years have passed there still hasn’t been the wider grasp of some key new insights and how they plausibly fit together overall to yield a result of the desired type.” – but surely you know many examples of this kind of thing, e.g. class field theory was viewed as impenetrable by most number theorists for at least 25 years.

“An ongoing point of frustration, even among many leading experts in anabelian geometry, has been to extract essential new insights from the IUT work, to get a clearer sense of what is making it tick (so to speak).” – who are these “many leading experts”? Do you know if they managed to read the 2016 survey of Mochizuki?

“The bottom line is that there has been a serious breakdown of the usual modes of communication of mathematical ideas.” – we live, we learn, we change, we adapt. Number theorists have been tremendously slow in adapting to new things in comparison to almost everyone else. Smart phones did not exist 20 years ago, but various number theorists still think about mathematics as if they lived 20 years ago. Adapt. Learn. Improve.

His blog contains personal thoughts and philosophy as opposed to mathematics. I don’t think it is informative about the contents of his papers.

He claims that the world needs walls, and hearts that can see through walls. Without walls, it is a Tower of Babel or Russell’s Paradox. He feels he needs a wall of privacy between him and many things, including English speakers and the English language. The wall in his Inter-Universal Teichmuller Theory is the Teichmuller symmetry, and the heart that sees through it is the étale symmetry.

From the Internet Encyclopedia of Philosophy:

No True Scotsman

This error is a kind of Ad Hoc Rescue of one’s generalization in which the reasoner re-characterizes the situation solely in order to escape refutation of the generalization.

Example:

Smith: All Scotsmen are loyal and brave.

Jones: But McDougal over there is a Scotsman, and he was arrested by his commanding officer for running from the enemy.

Smith: Well, if that’s right, it just shows that McDougal wasn’t a TRUE Scotsman.

I see many non-mathematicians posting comments here. For those who do not understand the mathematical comments posted here, an informal Japanese intro to IUT is now available for free at http://live.nicovideo.jp/watch/lv303564022#7:26:33

It is given by Kato of the Tokyo Institute of Technology. He states that his talk is accessible to high school students. No translation is available.

Yamashita’s corresponding theorem is 13.12, not 3.12.

Dale,

Thanks for pointing that out, fixed.

To make a different comparison, for the people who aren’t number theorists.

The most spectacular advances in arithmetic geometry since Grothendieck/Deligne are being made *now*, by Scholze (building on foundational work of Fontaine, in collaboration or conversation with many — Fargues, Bhatt, Kedlaya-Liu, …)

In particular, as just one example, we now have a road map for local Langlands (it is geometric Langlands on the ‘curve’ built out of the period rings for p-adic Hodge theory).

The new techniques that make this possible are explainable at every level, from that of high school student (a bit vaguely, but this is a comment not a blog post: we are able to treat the prime p like a variable x, by also considering all possible p^n’th roots of p), to the professional mathematician (say, the definition of the v-topology, and how passing to the perfection kills Ext groups).

This is all new to all of us, and learning seminars — on Scholze’s papers, and on the background — are happening at almost every institution on the planet.

They are difficult papers. It is a joy to read them — the ideas sing, the implications are staggering, and Scholze also writes really well.

Nothing like this has happened with Mochizuki’s papers. Not a single new idea has made it out to the rest of us, nor even old ideas (with the one exception of BConrad’s rephrasing in one of the blog posts after the Oxford conference).

This is sad for those of us who haven’t the time to invest in reading his proof — we all *want* his proof to be both correct and new — and annoying as hell for the ones who have tried reading his proof.

[let me just say beforehand that I certainly give place to the expertise of the number theorists here on the status of the proof]

My thoughts about referees should be viewed as more sociological in nature: either we are pleasantly surprised that someone outside the inner circle has apparently understood the work, and we can ask for an explanation in ordinary language of what they think is happening, or, for all the world knows, a handful of junior colleagues at the same university refereed the paper. Experts suspecting who the referees are doesn’t make it clear what is going on. I expect it is closer to the latter option…

Regarding dependency graphs etc., I would like to draw a comparison between “just read back through all the references” in this case and another: that of FLT and the perennial question (now settled in print by McLarty) of whether the proof used universes. One expert here said several times on MO that they aren’t used, because he has read back through all the references from Wiles to EGA. Is it left to the rest of us to take him on faith, or duplicate that effort? Or better, record somehow exactly which bits of thousands of pages of mathematics are or aren’t actually used in a given proof. To the case at hand, apparently the full theory of Frobenioids is not used in the IUT papers. Being able to know what to skip from the prerequisite papers would be a head start for someone trying to figure out what is going on. And I would like to know what on earth happened, whether even any of the prerequisite papers are good for something other than IUT. I would ask M’s inner circle what problems Frobenioids, for example, are able to solve that one can’t do in ordinary Arakelov geometry. Or, since the number theorists are floating about here, up to which point are people confident in what is going on in M’s work? Is the reduction of Szpiro to the special case sound? Is all the stuff with the étale theta function legit? As a category theorist there is simultaneously a huge amount of naïvety in the IUT papers (isomorphism classes of functors? Totally unused constructions with Grothendieck universes? Why??) and interesting results (equivalences of categories of certain topological groups and of arithmetic objects), but I can’t tell if these are being used arithmetically in any way beyond “I can reconstruct stuff from data. Tadaa!”

I think that some “amateur mathematicians” here are being misled into a false sense of understanding of how mathematics research works, and what is actually going on in this particular saga. As someone who has been in a position to observe arithmetic geometry research rather closely over the past few years, but who is also blissfully free from the title of “arithmetic geometry expert”, I think I can be a little more blunt.

This is not an issue of mathematicians being too lazy, or too attached to traditional methods of dissemination and communication, or Mochizuki’s proof being too difficult. The community has seen many spectacular and difficult breakthroughs over the past few decades, all of which were accompanied by various signs of legitimacy. One such sign is that the paper’s introduction should be able to summarize a strategy which is both new and at least plausible. Furthermore, as one reads the paper one should have the feeling of building up contentful observations. And those purporting to understand the solution should be able to give talks and answer questions in a way that seems to convey knowledge.

It should be mentioned that Mochizuki’s earlier work was brilliant and difficult, and was absorbed by the usual process of mathematical dissemination. Unfortunately, the paper presently in question fails to exhibit these signs of legitimacy. It introduces an incredible amount of jargon, which is explained in terms of more jargon. The talks “explaining” the paper continue this trend of seemingly empty verbiage. It is unprecedented (for legitimate work) that after dozens of hours of talks, and hundreds of “survey” pages, not a single meaningful idea has been absorbed by the wider community of experts. Although people are too tactful to publicly state this, the situation has led to a strong impression that the reason why nothing is being communicated is that there is no real content to communicate.


Because there are nonmathematicians, physicists, amateurs and the like reading this blog, it is perhaps worth pointing out that posters like Brian Conrad and Matthew Emerton are very good mathematicians with a lot of specific and relevant technical expertise and who generally speak/write with a lot of care. While arguments based on appeals to authority are anathema to the scientific/mathematical spirit, they are all that is available to the ignorant, and for the ignorant the question is to properly pick the authorities. For me at least, I give a lot of credit to what those two write because when they write about things I do understand, my judgment is that they write interesting things and communicate them well. So I take their opinions on the Mochizuki situation seriously.

New discoveries in science and mathematics are usually made by standing on the shoulders of giants. By building on what came before and adding a new piece or idea that opens up new vistas to explore. Surely, Mochizuki is similar. His ideas must have started with a kernel of an idea clearly relating to known work. He must have taken that first step into the unknown from known territory. Why can he not explain how his thinking evolved from that first step and trace back to the known ideas?

Thank you all for this very interesting conversation. A viewpoint somewhat relevant to it might be “On proof and progress in mathematics” (http://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994-00502-6/S0273-0979-1994-00502-6.pdf or https://arxiv.org/abs/math/9404236), by William Thurston (Peter Woit had posted a link to it a few years ago).

Got an email from the journal PRIMS: “The papers of Prof. Motizuki on inter-universal Teichmuller theory have not yet been accepted in a journal, and so we are sorry but RIMS have no comment on it.”

This seems to implicitly admit that the papers _are_ under review at PRIMS. Otherwise how would the journal feel like it can comment on the status of papers under review somewhere else? Oh, but the editor-in-chief of PRIMS is Mochizuki himself, so perhaps he is making the journal speak for him?

Given PS’ comment and BCnrd’s corroboration, I thank them for being open about this, and retract my previous bluster and confidence. It’s a pity it took this long for the specific sticking point to become public, though.

I was very fortunate to take Algebraic Geometry from Hironaka. He presented resolution of singularities, but I completely failed to understand it. He saved my life, however, with this comment: “You are an undergrad. Undergrads know nothing! As a grad student, you will learn one thing – your thesis. Then, as a Professor, you will start to learn Mathematics.”

I can not thank him enough for that perspective.

David Roberts:

Although the concern about the proof of Corollary 3.12 in IUT3 has been known to quite a few number theorists for some time, and in particular had to be known to all near to IUT (e.g., some speakers at IUT workshops are very well aware of this and have been trying to sort it out), there didn’t seem to be a compelling mathematical reason to make this public before now (say by mentioning it somewhere on the Internet). The reason is that the referee process should be permitted to run its course without undue public pressure.

In particular, if the referees had identified this matter as something needing clarification and if it had then been addressed (prior to acceptance) then there would have been no purpose in having made a public statement about the fact that such a non-trivial concern had arisen. (Indeed, the identification of gaps/errors and their correction during the referee process is entirely standard, so it would not be a reasonable policy that such concerns are announced in public when they are found prior to acceptance.) The assumption of many number theorists was that the referees had to be going very carefully through the papers, so they must have highlighted this known point of concern, and hence in due time before acceptance it would somehow be clarified.

But then there emerged the recent (ultimately incorrect) news that the papers were accepted, yet there had been no change in that proof which adequately addresses the concern (there have been some minor changes in the writing in the proof, but not relevant to this issue). That situation, together with the fact that more than 5 years have passed since the original release of the papers, exhausted the patience of many number theorists; the time had arrived to make this concern more widely known. What seems a pity to me is not that it took until now to make this long-standing concern more public (ideally it would have been clarified N years ago and then would never need to have been discussed publicly at all), but rather that after so much time it still has not been adequately addressed even though so many more trivial changes have been made in IUT3.

BCnrd,

Given the speed at which errors were identified on MathOverflow then rectified, imagine if the “mysterious proof” had been publicly identified and all eyes could have been focussed on that for the past n years. Has anybody told Mochizuki directly that 3.12 needs serious attention and that it’s not for lack of understanding the previous “trivial” sections? If I was in a position to do so (eg being a number theorist of good standing) I would have done so already.

David Roberts:

I prefer not to get involved in further discussion on the points you are raising; I have said all that I wish to say on these matters in my previous comments here and on Frank Calegari’s blog.

Just in case it might help, although probably experts have looked at that too, there have been two review papers in Japanese by Hoshi, and the second one http://www.kurims.kyoto-u.ac.jp/~yuichiro/intro_iut_continued.pdf touches a little bit upon that 3.12 issue.

Namely, at the end of page 92, just before paragraph 26, it says (thanks to Google Translate) that “due to the existence of the compatible isomorphism $^{\dagger 0}\mathfrak{R}_{Frob}\xrightarrow{\sim} {}^{\ddagger 0}\mathfrak{R}_{Frob}$ the log-volume of the target object $^{\dagger 0}\Theta$ cannot help but be less than $\text{vol}(^{\ddagger 0}\underline{\underline{\Theta}})$, hence the inequality …”

I’m sure people have extensively tried convincing Mochizuki to do this, but… mentioning anyways since the situation is so perplexing:

Fesenko says: “his theory is so vast that to explain his work, ordinary visits and lectures are not enough. For instance, prior to such lectures, the audience should already have studied the prerequisites, i.e. those 1,000 pages. It simply does not make sense to come to give a one hour or even ten hours of talks on IUT–people won’t understand much, I am afraid.”

I can understand thinking that eg a world tour of 2hr lectures is useless, but… 10hrs is useless? really?

and even if one accepts that 10hrs is insufficient, 2hr lectures 2x per week for 1.5 months = 24hrs of lectures. I don’t care how complex IUT is, I can’t imagine 24hrs of lectures at a major research center not having a positive impact on the dissemination of IUT (especially if it is video recorded). 1.5 months isn’t that long to be abroad either, especially if it is all spent in one place rather than constantly going through the annoyance of shuttling around the world.

I have absolutely no idea why such a 1.5 month long lecture series (video recorded or not) hasn’t occurred.

(The good news is that maybe with the public attention drawn to Corollary 3.12 the mathematical community will be able to confirm/deny the proof without such a lecture series ….. hopefully.)