Davide Castelvecchi at Nature has talked to some of the mathematicians at the recent Kyoto workshop on Mochizuki’s proposed proof of the abc conjecture, and written up a summary under the appropriate title Monumental proof to torment mathematicians for years to come. Here’s the part that summarizes the opinions of some of the experts there:
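For readers who want the precise statement under discussion (this is the standard Oesterlé–Masser formulation, not Mochizuki's own phrasing): the abc conjecture says that for every \(\varepsilon > 0\) there are only finitely many triples of coprime positive integers \(a + b = c\) with

\[
c > \operatorname{rad}(abc)^{1+\varepsilon},
\]

where \(\operatorname{rad}(n)\) is the product of the distinct primes dividing \(n\). For example, \(\operatorname{rad}(72) = \operatorname{rad}(2^3 \cdot 3^2) = 6\).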

Mochizuki is “less isolated than he was before the process got started”, says Kiran Kedlaya, a number theorist at the University of California, San Diego. Although at first Mochizuki’s papers, which stretch over more than 500 pages, seemed like an impenetrable jungle of formulae, experts have slowly discerned a strategy in the proof that the papers describe, and have been able to zero in on particular passages that seem crucial, he says.

Jeffrey Lagarias, a number theorist at the University of Michigan in Ann Arbor, says that he got far enough to see that Mochizuki’s work is worth the effort. “It has some revolutionary new ideas,” he says.

Still, Kedlaya says that the more he delves into the proof, the longer he thinks it will take to reach a consensus on whether it is correct. He used to think that the issue would be resolved perhaps by 2017. “Now I’m thinking at least three years from now.”

Others are even less optimistic. “The constructions are generally clear, and many of the arguments could be followed to some extent, but the overarching strategy remains totally elusive for me,” says mathematician Vesselin Dimitrov of Yale University in New Haven, Connecticut. “Add to this the heavy, unprecedentedly indigestible notation: these papers are unlike anything that has ever appeared in the mathematical literature.”

Kedlaya’s opinion is the one likely to carry most weight in the math community, since he’s a prominent and well-respected expert in this field. Lagarias has a background in somewhat different areas, not in arithmetic algebraic geometry, and Dimitrov I believe is still a Ph.D. student (at Yale, with Goncharov as thesis advisor).

My impression, based on this and on what I’ve heard elsewhere, is that the Kyoto workshop was more successful than last year’s workshop at Oxford, perhaps largely because of Mochizuki’s direct participation. Unfortunately it seems that we’re still not at the point where others besides Mochizuki have enough understanding of his ideas to convincingly check them, with Kedlaya’s “at least three years” estimate amply justifying the title of the Nature piece.

Organizer Ivan Fesenko has a much more upbeat take here, although I wonder about the Vojta quote “now the theorem proved by someone in the audience” and whether that refers to Mochizuki’s IUT proof of the Vojta conjecture over number fields (which implies abc), or the Vojta conjecture over complex function fields (such as in Theorem 9 of the 2004 paper http://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1413.pdf), or something else. The reference to Dimitrov as discussing “applications of IUT” might be better worded as “would-be applications of IUT”.

There will be a conference at the University of Vermont in September, billed as “An introduction to concepts involved in Mochizuki’s work on the ABC conjecture, intended for non-experts.”

**Update**: Fesenko has updated his report on the conference (see here) to include a more accurate characterization of talks by Vojta and Dimitrov (you can see changes to that report here). Between this and the Nature quotes, there seems to be a consensus among the experts quoted (Kedlaya, Dimitrov, Vojta, Lagarias) that they still don’t understand the IUT material well enough to judge whether it will provide a proof of abc or not. Unfortunately it still seems that Mochizuki is the one person with a detailed grasp of the proof and how it works. I hope people will continue to encourage him to write this up in a way that will help these experts follow the details and see if they can come to a conclusion about the proof, in less than Kedlaya’s “at least three years”.

**Update**: New Scientist has a piece about this which, as in its typical physics coverage, distinguishes itself from *Nature* by throwing caution to the wind. It quotes Fesenko as follows:

I expect that at least 100 of the most important open problems in number theory will be solved using Mochizuki’s theory and further development.

Fesenko also claims that “At least 10 people now understand the theory in detail”, although there’s no word on who they are (besides Mochizuki), or why, if they understand the theory in detail, they are having such trouble explaining it to others, such as the experts quoted in the *Nature* article. He also claims that

the IUT papers have almost passed peer review so should be officially published in a journal in the next year or so. That will likely change the attitude of people who have previously been hostile towards Mochizuki’s work, says Fesenko. “Mathematicians are very conservative people, and they follow the traditions. When papers are published, that’s it.”

I think Fesenko here seriously misrepresents the way mathematics works. It’s not that mathematicians are very conservative and devoted to following tradition. The ethos of the field is that it’s not a proof until it’s written down (or presented in a talk or less formal discussion) in such a way that, if you have the proper background, you can read it for yourself, follow the argument, and understand why the claim is true. Unfortunately this is not yet the case, as experts have not been able to completely follow the argument.

If it is true that a Japanese journal will publish the IUT papers as is, with Mochizuki and Fesenko then demanding that the math community must accept that this is a correct argument, even though experts don’t understand it, that will create a truly unfortunate situation. Refereeing is usually conducted anonymously, shielding that process from any examination. Lagarias gives some indication of the problem:

It is likely that the IUT papers will be published in a Japanese journal, says Fesenko, as Mochizuki’s previous work has been. That may affect its reception by the wider community. “Certainly which journal they are published in will have something to do with how the math community reacts,” says Lagarias.

While refereeing of typical math papers can be rather slipshod, standards have traditionally been higher for results of great importance like this one. A good example is the Wiles proof of Fermat, which was submitted to Annals of Mathematics, after which a team of experts went to work on it. One of these experts, Nick Katz, finally identified a subtle flaw in the argument (the proof was later completed with the help of Richard Taylor). Is the refereeing by the Japanese journal being done at this level of competence, one that would identify the sort of flaw that Katz found? That’s the question people will be asking.

In some sense the refereeing process for these papers has already been problematic. A paper is supposed to be not just free of mistakes, but also written in a way that others can understand. Arguably any referee of these papers should have begun by insisting that the author rewrite them first to address the expository problems experts have identified.

**Update**: Fesenko is not happy with the Nature article, see his comment here.


Can anyone give some insight as to how someone can create something in one lifetime that is so complex it cannot be explained to other experts in that field in less than three years? From a non-mathematician’s perspective, it is incredibly hard to imagine such a situation. If I start from the same knowledge base as someone else, and then spend a day or two creating something new on top of it, I can’t imagine it taking more than 20 or 30 minutes to explain to that other person, a creation-to-explanation ratio of perhaps 50:1. I would also think that ratio would increase with the scope of the creation, due to the various dead-ends that a larger project would have encountered that then would not need to be explained.

Thanks in advance for any insight anyone can offer!

Scott Lange,

Well, Mochizuki hasn’t spent all his time since the proof came out explaining to experts, and they haven’t spent all that time listening. On the other hand, it is four years, not three… This really is an extremely unusual situation, I know of no other like it. One thing that I think is fascinating about it is that by understanding what has gone wrong, you get insight into the complexities of how things work normally, when things go right, and understanding gets transmitted.

It sounds like the plot of the Taming of the Shrew where the father (Mochizuki) won’t allow any of the suitors of his popular daughter (ABC proof) to marry her until they find a husband for his unpopular and difficult daughter (IUT theory). If he wants as many people to put the effort into learning IUT as possible, a reduction of the ABC proof to existing mathematics would be a disaster for him.

Two properties of Mochizuki’s texts repel people. First, his too frequent and too often repulsive terminology, much of which is about things one usually does not need new words for (mathematicians usually have a better sense for words and take better care of them, esp. Grothendieck and his school). Then, a lack of expository structure: usually one gives an overview of what is done where and with which ideas, so that readers can decide which parts to read, in which order, and in which way.

I think that Mochizuki’s case has some (distant) similarities with Louis de Branges’ claim to have proved the Riemann hypothesis. He also had a track record of solving important problems, and he too developed his own nonstandard mathematical language that other mathematicians found indigestible. In contrast to de Branges’ case, though, people are taking Mochizuki’s proof seriously and some are putting a lot of effort into reading it. See this story from several years ago:

http://www.lrb.co.uk/v26/n14/karl-sabbagh/the-strange-case-of-louis-de-branges

In the past, there have been cases of theories that (even though much less grandiose and elaborate) took a long time for people to understand — and eventually had to be reformulated in a different language. When I wrote my profile of Mochizuki back in October, some of my sources mentioned Newton’s Principia as an example. And just the other day, one source told me: “It was mentioned at the conference that sometimes things take a long time to be accepted. Two examples mentioned at the conference were Galois theory and Class Field Theory, each of which took about 40 years.”

Cohen’s proof of the independence of the continuum hypothesis using forcing seemed at first nearly incomprehensible, and alien, to most mathematicians. It took several years before its ideas were distilled and made accessible. (I still find forcing to be somewhat magical for that matter.)

ehrenweist,

Cohen’s proof was quickly judged as correct by Gödel (as well as other prominent logicians in the US) and in a few years after Cohen’s breakthrough there was an avalanche of new applications of forcing. In fact Shoenfield’s 1967 textbook contains an exposition of forcing, just 3 years after Cohen’s papers, so how is this in any way similar to Mochizuki’s case where experts can’t understand even the overarching strategy after 4 years?

ehrenweist,

as mahmoud says, Cohen’s proof was understood very quickly, and what confusion and misunderstanding there was, mostly had to do with the fact that it was so unexpectedly “mathematical”, involving, in its more natural, non-syntactic form, surprisingly little particularly logical machinery in any substantial sense. It took a while, but not very long, before set theorists and logicians sorted out what was essential, what was just idiosyncratic technical scaffolding, and so on, resulting in the now familiar unramified partial order and Boolean valued formulations. There were plenty of easy cases to try forcing on, and a vast array of immediately apparent possibilities for further application and refinement — collapsing cardinals, iterated forcing, class forcing, and so on, and so on — so that people were able to quickly develop a sense for the sort of mathematics that the new stage of set theory would turn out to be. As Kreisel (who apparently was somewhat miffed he hadn’t come up with forcing himself, and even, somewhat implausibly, suggested it was nothing new, really, pointing to all manner of in hindsight analogous technical whatsits in constructive and intuitionistic mathematics) and others were at pains to point out, the basic ideas and proofs were very simple and straightforward by any mathematical standards, even if they did at points give out a slight whiff of magical out-of-hat-pullery, and there certainly wasn’t years worth of new terminology, visions, grand theory, to digest. Indeed, it was all positively pedestrian in comparison to, say, what proof theorists back in the day were up to.

All,

Please, no more comments using this as an excuse to discuss a completely unrelated topic.

The thing stopping me from going near his papers are the language and notation as highlighted by Dimitrov in the quote you have. Faltings said that he didn’t think that Mochizuki’s work would be taken seriously until “he wrote a readable paper”—recall that Faltings was Mochizuki’s advisor.

There might be a need for a “Rosetta stone” or a dictionary in order to speed up IUT learning. Fortunately such a stone seems to be around. Mochizuki writes the following in his paper “Bogomolov’s proof of the geometric version of the Szpiro conjecture from the point of view of inter-universal Teichmüller theory” (Res. Math. Sci. 3 (2016), 3:6):

“aspects of inter-universal Teichmüller theory may be thought of as arithmetic analogues of the geometric theory surrounding Bogomolov’s proof. Alternatively, Bogomolov’s proof may be thought of as a sort of useful elementary guide, or blueprint [perhaps even a sort of Rosetta stone!], for understanding substantial portions of inter-universal Teichmüller theory.”

http://www.kurims.kyoto-u.ac.jp/%7Emotizuki/Bogomolov%20from%20the%20Point%20of%20View%20of%20Inter-universal%20Teichmuller%20Theory.pdf

I think Castelvecchi is quite right in his comparison to de Branges; what he did not mention is de Branges’ earlier and correct(!) proof of the Bieberbach conjecture, which also was presented in the form of a lengthy and mostly indigestible manuscript that most mathematicians dismissed at the time.

So the Mochizuki affair could be summarised by asking: are we dealing with another case of de Branges’ proof of the Riemann hypothesis, or of his proof of Bieberbach’s conjecture?

For the interested non-mathematician – what even are the objects or concepts involved in IUT? What are the universes that are supposedly inter-connected by Teichmüller theory? I suppose a more basic question would be about Teichmüller theory itself… but has anyone constructed any examples of these spaces and the relations between them in terms of more familiar things?

Davide’s comparisons to Newton and Galois seem spot-on from my perspective as a historian and sociologist of mathematics (though it will take a very long time indeed to say whether IUT has anything approaching the importance of the calculus or Galois theory). Some historical references for those who are interested: on Galois, if you read French you should definitely track down the recent work of Caroline Ehrhardt, who has come up with some real breakthroughs in how historians understand the interpretive history of Galois’s very confusing texts. On Newton, Niccolo Guicciardini’s Reading the Principia is an accessible and insightful place to look. A now-canonical comparison-point for sociologists is the classification of finite simple groups, though that was a large collective project rather than a proposal from an individual; see Alma Steingart’s important article “A Group Theory of Group Theory.”

Radioactive,

The problem here is that such questions as “what are the basic new objects and concepts?” have baffled experts, so the non-expert doesn’t have a chance. I don’t think knowing anything about the usual Teichmüller theory helps.

Apparently the 500+ page introductory material is too complex and time-consuming to digest. Is the only way out that someone (or a group of people) digest the material and re-write it to more closely follow mainstream mathematics? And should they re-write just the parts essential to the ABC proof, or the whole material? In the beginning, there was not enough motivation to do that, as no outsider knew if the re-write was worth the effort. It appears that the attitude might be changing.

Dale C.,

I’m not sure what “introductory material” you mean. The problem I think is basically that

1. Mochizuki refuses to rewrite material, on the grounds that this should not be necessary, people should just spend the time it takes to understand what he has written.

2. The small number of people who were supposed to have digested this material and rewritten it in this manner (e.g. Go Yamashita) have not been able to complete this task.

3. Others have been unable to digest the material. If you can’t understand yourself why a proof works, you’re not going to be able to rewrite it for others to understand.

That, four years later, no one else has been able to write up their own version of the proof is the central mystery here.

By “introductory material” I was referring to “Mochizuki, S. Inter-universal Teichmüller Theory I–IV”, which apparently the ABC proof is based on. It looks very much like no outsider understands the material, despite 4 years since the release of the claimed proof.

I suppose Mochizuki re-writing material has its own problems, as probably according to his understanding the material is complete already. If this is the case, then somebody else has to invest time to make the translation to a more mainstream formulation. The problem might be that the material is so large and so far away from mainstream mathematics that it will take years to understand and re-write it.

Peter,

What do you think about Lagarias’ comment that Mochizuki’s work has “some revolutionary new ideas”? This seems really odd. Aren’t they still trying to figure out if the alleged proof is right or wrong? How do they then separate a possible collection of crackpotish ideas from “revolutionary” ones? Is this a case where, no matter what the outcome, the tools he invented (which?) are already useful?

Also, shouldn’t the burden be on Mochizuki’s shoulders to prove his work is worthy? Why aren’t these guys reacting like “You have a proof? Make it readable”? His refusal to do so sounds a bit like he’s hiding behind formalism.

It’s not like they can do an experiment to verify it so…

Dale C.,

Those papers are where the details of the proof are supposed to be, and it’s exactly those that everyone is having trouble with.

Bernhard,

There are definitely lots of “new ideas”, the problem is understanding them and whether they really are powerful enough to give a proof. They are not being just dismissed, partly because Mochizuki has a serious track record, partly because as people work on them, they are not finding that they are wrong.

I think there is a strong feeling among a lot of people in the field that it is Mochizuki’s responsibility to do a better job of communicating his ideas, so they’re not going to spend more time on this now.

Here is what Mochizuki thinks of the situation, December 2014:

“Indeed, I have been participating for over 20 years now, as author, referee, editor, and editor-in-chief, in the refereeing of countless papers for mathematical journals, and, as far as I can see, the verification activities on the part of the three researchers discussed above already exceed, by a quite substantial margin — i.e., in their content, thoroughness, and meticulousness — the usual level of refereeing for a mathematical journal. Moreover, although I have received comments not only from the “core three” researchers, but also from other researchers as well, concerning numerous superficial technical oversights that may be repaired immediately (i.e., a routine aspect of the refereeing process),

I have yet to hear of even a single problem that relates to the essential thrust or validity of the theory.”

…

“My understanding, at present, concerning the verification of IUTeich is that

at least with regard to the substantive mathematical aspects of such a verification, the verification of IUTeich is, for all practical purposes, complete; nevertheless, as a precautionary measure, in light of the importance of the theory and the novelty of the techniques that underlie the theory, it seems appropriate that a bit more time be allowed to elapse before a final official declaration of the completion of the verification of IUTeich is made.

On the other hand, I should also state that, although such precautionary measures may serve a meaningful role for a limited amount of time, I am not of the opinion that such precautionary measures should be maintained for periods of, say, the order of 20 ∼ 30 years. That is to say, although there are perhaps numerous approaches to the issue of computing an appropriate length of duration for such precautionary measures, my current sense is that the length of duration of such precautionary measures should not exceed 10 years, i.e., counting from the time of the first oral presentation of the theory (i.e., in October 2010) and the posting of the series of papers on the theory (i.e., in August 2012). Put another way, my current sense is that some date during the latter half of the 2010’s would be an appropriate time for the termination of such precautionary measures.”

http://www.kurims.kyoto-u.ac.jp/~motizuki/IUTeich%20Verification%20Report%202014-12.pdf

From reading that it does sound like, if he hadn’t claimed to solve a famous problem, the refereeing process would have been done and dusted a long time ago. And it does seem that he has made quite a bit of effort to disseminate and teach the theory to people and recognizes the need for more exposition. Thus it must be frustrating to him to see such statements as the title of this article.

I suppose the proliferation of specialties in math can lead to situations like this — especially when important proofs come from unexpected sources — and rather than exposing some fault of Mochizuki it exposes some weaknesses in the mathematical community. With everyone an expert in their own subdomain, and with no time to carefully examine new approaches, if the one with a revolutionary result is, understandably, not willing to drop everything and publicize it, then a state of confusion will persist.

Though I suppose someone with a good expository style will eventually learn what a Frobenoid is and write a textbook.

Peter,

Is there likely to be a high-level blog post from someone who was there this time around? Or perhaps an interview with such a person with a serious mathematical audience in mind? (i.e. not merely educated layperson-level coverage, nice though that is)

David Roberts,

I agree that that would be a great service and really helpful for the field and for the math community in general. I hope it will happen (and hear rumors it might…).

I am curious about your comment that Mochizuki is “the one person with a detailed grasp of the proof and how it works.” What about Hoshi and Yamashita? Admittedly they have not produced expositions that satisfy others, but then again neither has Mochizuki.

You also said, “That, four years later, no one else has been able to write up their own version of the proof is the central mystery here.” This does not seem so mysterious. Writing up your own version requires a lot of talent in its own right. Perhaps Hoshi and Yamashita don’t have that particular talent. As for others, few if any made any concerted attempt to understand the papers for the first three years.

As someone who left the field of mathematical research many years ago, I only recently started following the ABC conjecture/Mochizuki saga and found it to be absolutely fascinating. Given that several mathematicians have claimed to have gone through the IUT theory thoroughly, and Mochizuki’s reputation as a meticulous and brilliant mathematician, it is unlikely that IUT theory is complete rubbish. Of course, it is still possible that there are technical mistakes in the proof of the ABC conjecture that have not been caught due to the massive amount of new ideas and techniques/concepts/terminology.

One interesting point I have seen raised in another thread of discussion is the lack of HARD analytical number theory type of argument in Mochizuki’s claimed proof. This seems to be addressed in the most recent survey (though 100 pages long) article by SM:

http://www.kurims.kyoto-u.ac.jp/~motizuki/Alien%20Copies,%20Gaussians,%20and%20Inter-universal%20Teichmuller%20Theory.pdf

Section 1 is accessible to anyone with a basic calculus background and makes for very interesting reading, as SM explains how to do the Gaussian integral for a fictitious high school student. Through section 1, one can get some cursory sense of what SM is trying to do with IUT and perhaps also understand his frustration. The following passage on page 7 in particular is clearly meant to address some of the criticism pointedly:

“the idea that meaningful progress could be made in the computation of such an exceedingly difficult integral simply by considering two identical copies of the integral — i.e., as opposed to a single copy — struck the student as being utterly ludicrous.

Put another way, the suggestion of Step 3 was simply not the sort of suggestion that the student wanted to hear. Rather, the student was keenly interested in seeing some sort of clever partial integration or change of coordinates involving “sin(−)”, “cos(−)”, “tan(−)”, “exp(−)”, “1/(1+x^2)”, etc., i.e., of the sort that the student was used to seeing in familiar expositions of one-variable calculus.”
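For readers curious about the trick the parable alludes to: the classical evaluation of the Gaussian integral really does proceed by taking two identical copies of the integral and passing to polar coordinates,

\[
I = \int_{-\infty}^{\infty} e^{-x^2}\,dx, \qquad
I^2 = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy
= \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2}\, r\,dr\,d\theta = \pi,
\]

so \(I = \sqrt{\pi}\). Whether the analogy between this doubling trick and the “mutually alien copies” of IUT is more than suggestive is, of course, exactly what is in dispute.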

Timothy Chow,

I disagree that writing up a proof requires unusual talent; this is what mathematicians are trained to do. This is what we do with students: try to teach them why something is true, then see if they understand it by asking them to give the argument in their own words. Until they can do this we don’t recognize them as someone who understands the argument.

To apply this to the current situation is of course oversimplifying and being rather harsh on Hoshi and Yamashita. The situation with this proof is quite unusual due to its complexity and use of unfamiliar concepts. But it seems to be a fact that they have not been able either to write out a version of the proof that others can understand, or to give lectures that transmit understanding. Why this is so is a big mystery; I just don’t believe that the problem lies with laziness of the audience.

Joe Yang,

The Gaussian integral business is I think a good indication of the problem with Mochizuki’s survey (the latest one, and similar early documents). It is advertised as an explanation to other mathematicians of his ideas, but the Gaussian integral explains nothing at all about them since it’s just a very vague analogy. Bringing in the high school student makes explicit that what this is is a parable, and in the parable experts in the subject are being compared to ignorant, unsophisticated high school students. I think experts trying to get something out of this document find this beginning part of it not only useless, but insulting, it doesn’t exactly encourage them to read on.

Peter,

Yes, the tone used in the high school student analogy can certainly be taken as an insult by some. On the other hand, I don’t see anything wrong with using the Gaussian integral as a motivating analogy (even a “vague” one, as you put it) in a survey paper to explain the concept behind “mutually alien copies”. It should not be “useless” if indeed the idea behind mutually alien copies is similar to taking two identical copies of an integral. Having met and worked with “experts” in various mathematical fields, I find it a common occurrence that many who are brilliant at understanding ideas close to their own research are not necessarily so fast at grasping “alien” concepts. More often than not, the “experts” would like to see an analogy with concepts that they are familiar with, and as such a simple analogy can often be a good starting point.

Joe Yang,

I’m actually normally a big fan of mathematicians including more motivational material in what they write, including even vague analogies like this one. Too much of the math literature is only readable by experts, including no help to the nonexpert about how to get oriented, how to get the general picture of what is happening. The problem here is a very unusual one, that there’s plenty of such motivational material, but the more precise material experts need to understand exactly what is going on is missing from the survey, and in the long IUT documents buried in a huge array of new definitions and formalism, often depending on earlier papers. There’s somewhat of a standoff here, with Mochizuki’s attitude: “it’s all there, you just have to work harder”, a lot of experts saying: “we’ve tried that, couldn’t get anywhere, you need to write this up in a more conventional form, an outline of the proof with precise, checkable statements”. I can’t recommend Brian Conrad’s blog post highly enough as a serious explanation of what the problem is.

The workshops at Oxford and Kyoto have been the best attempt so far to overcome this standoff. This is a difficult situation and I think lots of people are honestly working hard to understand the potential new mathematics here. I do think though that it would help if Mochizuki (or those close to him) made more of an effort to write something that would address the concerns experts have pointed out. Responding to these with even implicit insults is not helpful.

Peter, I may have misunderstood what you meant by “writing up their own version of the proof.” Hoshi and Yamashita have, after all, given lectures and produced written text that is not simply a verbatim reproduction of Mochizuki’s writings. I took you to mean producing a fundamentally new exposition that meets the popular demand for a high-level sketch that conveys the main ideas of the proof in more conventional language that the experts are already accustomed to. That sort of thing requires talent and doesn’t always happen within a few years. Consider, for example, the account of etale cohomology in SGA4, or Hironaka’s paper on resolution of singularities. These have a reputation for being formidable and inaccessible texts, and it took a very long time for alternative expositions to appear.

Timothy Chow,

As far as I know the only English-language written texts available from Hoshi and Yamashita are sketchy slides from presentations (Yamashita is supposed to be writing a conventional document but it hasn’t emerged, Hoshi has some notes on IUT in Japanese which I gather he is translating into English). Among those slides, I don’t see anything like an outline of the abc proof, they appear to be addressing more specialized topics that are somehow part of the proof. They’re meant to complement lectures, undoubtedly those in attendance at the lectures could get more out of them. But the bottom line seems to be that no one in attendance at those lectures, or readers of those slides, has come away saying “now I understand the proof, at least in outline”.

I wasn’t asking for the kind of thing you describe, readable by many mathematicians. That would be great, and hopefully someday we’ll have such a thing, but, sure, that’s the result of often a lot of time boiling down the initial version. What doesn’t seem to exist is something readable by experts.

SGA4 is maybe a good example. When it came out (1972? [actually much earlier, see Brian Conrad’s comment]) it was already the product of not just one person, but multiple experts who had absorbed Grothendieck’s ideas. A more readable version, SGA 4.5, did take 5 years to appear, and something like that for IUT is still a long ways away. By the way, my guess is that if we ever do have a version of Mochizuki’s ideas, integrated with the rest of mathematics, he may be as unhappy with the way they are written up as Grothendieck was with SGA 4.5…

Peter:

Roughly speaking, SGAn was the seminar in 196n (so SGA1 in 1961, etc.). The Springer-Verlag edition of the SGA4 volumes was widely published in the early 1970’s, but the original copies were distributed to a more limited set of math departments in the mid-1960’s (e.g., in the Harvard math library the original SGA’s are massive volumes in yellow binding consisting of typewriter-generated text on paper that must by now be very brittle; laid end to end, these volumes occupy much of an entire shelf and constitute a very daunting sight!)

But your main point about others in geographical vicinity to Grothendieck picking up the ideas and pushing them into new terrain is apt: in those days one had to be in one of a handful of places (e.g., Paris [Serre/Grothendieck/Deligne/etc.], Moscow [Shafarevich/Manin], Princeton [Katz], Bonn [Harder], or Boston [Artin/Mazur/Mumford/Tate]) to acquire a “working knowledge” of the new concepts by talking with local experts, that being much more efficient than trying to read on one’s own, and things really spread that way. (I have been told by a top expert from those days that much was learned through seminars and talking with people, rather than by direct study of EGA on one’s own, etc.)

For example, already by the late 1960’s the students of Mike Artin, Barry Mazur, David Mumford, and John Tate were making very creative use of the ideas of SGA4 (e.g., Knutson’s 1968 PhD thesis on algebraic spaces, Jim Milne’s 1967 thesis using etale and crystalline methods to prove BSD for constant abelian varieties, Tadao Oda’s 1967 thesis on Dieudonne theory for abelian varieties, Larry Roberts’ 1968 thesis on flat cohomology of finite group schemes, Friedlander’s 1970 thesis on etale homotopy theory). And that’s just the PhD students in Boston, to say nothing of what was being done by the senior faculty there (Deligne-Mumford, Artin-Verdier, Artin-Mazur, Artin-Tate, etc.) and by both PhD students and senior faculty elsewhere.

The publication of more accessible references such as SGA 4.5 in 1977 (let alone Hartshorne’s textbook at a more basic level in the same year, whose impact on the explosion of the subject can hardly be overstated) helped tremendously in spreading the understanding of these ideas for those who weren’t able to attend lectures by Katz, Deligne, etc. But well before the huge increase in accessibility of the subject through written alternatives to the foundational references, within a handful of years after SGA4 a growing number of people were picking up the principles through the oral tradition and producing high-level research based on these ideas.

In more recent times one can see something similar happening with perfectoid spaces (which were introduced only in 2012 and have already led to tremendous advances in many directions never anticipated at the inception), thanks in no small part to the creator of the subject traveling early on to numerous places to give substantive lectures and producing insightful surveys on the basis of which (along with original papers) seminars have been run all over the world. Of course, the available options for dissemination of ideas are far greater now than in the 1960’s.

New Scientist article here reports I. Fesenko as having made the absurdly ambitious statement (2nd paragraph from end):

“I expect that at least 100 of the most important open problems in number theory will be solved using Mochizuki’s theory and further development.”

Unless New Scientist has fabricated the “100 problems” quote by Fesenko, it seems high time to put him in the crackpot camp together with de Branges. Also, didn’t Mochizuki himself say before that IUT is likely to be a one-hit wonder without any major applications besides abc?

mahmoud: It is correct that when an audience member asked Mochizuki during one of the Q&A Skype sessions at the Oxford workshop whether he was aware of other applications of his IUT work to other problems in mathematics, his reply was a forthright “No” (which is of course quite fine).

He also explained in very reasonable terms why he had explored trying to use it to attack RH. The point is that his notion of Frobenioid was inspired by the search for a replacement for Frobenius maps that is useful in characteristic 0, and the IUT work provided a context for applying that notion in some specific instances. Hence, it seemed quite natural to follow up the IUT work by considering if Frobenioids could be applied to characteristic-0 versions of problems for which Frobenius maps have been very fruitful in characteristic p, such as RH.

He was very straightforward in then saying that after trying this for some time (maybe a couple of years?) and not making much progress, he wasn’t looking into that approach to RH anymore.

On Go Yamashita’s page (http://www.kurims.kyoto-u.ac.jp/ja/list/yamashita.html) you can find this:

近年は, 望月新一氏による宇宙際幾何学のさらなる発展の方向性で同氏と共同研究をしている. 望月新一氏の計算においてabc予想の誤差項にRiemannゼータ関数との関連性を示唆する1/2が現れる. 一方, 同氏の宇宙際Teichmüller理論においてテータ関数が中心的役割を果たすのであるが, テータ関数はMellin変換によってRiemannゼータ関数と関係する. さらに, 宇宙際Teichmüller理論において宇宙際Fourier変換の現象が起きている. これらのことから, 長期的な計画であるが”宇宙際Mellin変換” の理論ができればRiemannゼータ関数と関係させることができるのではないかと期待して共同研究を進めている.

I have a translation:

In recent years, I have been collaborating with Shinichi Mochizuki on directions for the further development of his inter-universal geometry. In Mochizuki’s computations, a factor of 1/2 appears in the error term of the abc conjecture, suggesting a connection with the Riemann zeta function. On the other hand, the theta function plays a central role in his inter-universal Teichmüller theory, and the theta function is related to the Riemann zeta function via the Mellin transform. Moreover, a phenomenon of an inter-universal Fourier transform arises in inter-universal Teichmüller theory. For these reasons, although it is a long-term project, we are pursuing this joint research in the hope that if a theory of an “inter-universal Mellin transform” can be developed, it may become possible to relate it to the Riemann zeta function.

Sadly the second link to what I gather is Fesenko’s Facebook feed (in the sentence “you can see changes to that report [here]”) requires a FB account. Can anyone with access let us know what it says?

David Roberts,

That’s not very important, just the edit history of that post, showing the half-dozen or so different versions as the author edited and added some material, most substantively clarifying what Vojta and Dimitrov had to say (I assume you can see the latest version).

Conrad and Alan,

Glancing at Mochizuki’s papers, the connection to RH is actually discussed at some length in IV: Log-volume Computations and Set-theoretic Foundations: Remark 2.2.1 on page 47 is essentially an elaboration of what Alan translated from Yamashita. Despite his answer over Skype that Conrad reported, it appears he hasn’t given up on that approach yet (it isn’t clear when that remark was written; he added new remarks to the paper in June 2016, so 2.2.1 might be a new addition).

Anyway, this suggests a way forward for experts in number theory who are unwilling to go through the whole terminological mayhem around IUT: study the computations (especially the “log-volume estimate” 1.10) and take it on faith from Mochizuki that the method should apply to (arbitrary Dedekind) zeta functions; if anything emerges that contradicts known bounds on these functions, it would throw serious cold water on the reliability of (this application of) IUT.

mahmoud:

Thanks for the clarification; perhaps something changed between December and June. Due to discussions I’ve had with some who have looked in detail at the computations (including around IV.1.10), it seems that those computations are not going to shed much light in the direction you mentioned. Hopefully there will eventually appear an essay by an audience participant at the Kyoto meeting who can address what mathematical insights from the IUT papers were communicated to the audience.

In case you may have wondered, the question and answer at http://mathoverflow.net/questions/245438/mochizukis-gaussian-integral-analogy has essentially no mathematical content and does not address statements which are precise. Nothing helpful is likely to come out of Math Overflow Q&A’s at this stage (not that one should have expected otherwise, given the circumstances).

@Brian Conrad, perhaps what’s changed is the last two slides from https://www.maths.nottingham.ac.uk/personal/ibf/files/dimitrov.pdf ?

marshall flax: No, that definitely has nothing to do with it (and I recommend not to spend time wondering what someone else may or may not achieve in the direction of RH). The appearance of an L-function there is related to Siegel zeros, and the connection between a suitable formulation of ABC and Siegel zeros has been known for some time since work of Granville & Stark (as Dimitrov notes on his slides too), well before 2012. Those slides reflect considerations of Dimitrov done on his own, treating the methods of IUT as a black box (as one can see from how the slides are written); the core arguments in his slides are applications of the standard methods of Diophantine analysis.

I’m not the first to say this by any means, but I sometimes wonder whether a change in culture is needed, a change particularly highlighted by this example. Proofs have two functions: to certify the truth of mathematical statements, and to give insight into why they are true. If one mathematician provides a long and almost incomprehensible, but correct, proof of a statement (perhaps checkable by only a handful of experts), we tend to say that that mathematician has solved the problem; but if another mathematician comes along and finds a shorter proof that everybody understands, that is often much more valuable. I think there’s a case for saying that Mochizuki hasn’t really solved the abc conjecture even if his proof is correct. However, if someone uses his ideas to produce a comprehensible proof, then obviously Mochizuki will have played a major — probably the major (but that depends on the details of how the comprehensible proof is found) — role in the solution.

Just a quick comment: Fesenko is wrong about publication and what it means. Plenty of things have been published which no one in the field treats seriously. It depends on where it appears, which essentially has to do with how good the refereeing is. Annals for Wiles is the classic example, but there are plenty of others. Incorrect stuff gets published plenty, but experts usually know it’s wrong and just ignore it. The fact that no one can (yet) explain this to people who should be able to understand it is a giant red flag, in my opinion. Nothing has ever been like that, ever. Grothendieck confused plenty of people, but as Brian Conrad points out, quite a few people were using his stuff to generate all sorts of amazing results within a short period.

I’m closing comments on this for now, since I’m on vacation and don’t have time to moderate them. In addition, this has become kind of depressing, with some people seemingly intent on choosing pro or anti Fesenko or Mochizuki positions and arguing them. I’d urge everyone to stop this and stick to the question of whether or not this is a proof. I’ve seen what can happen to your field when, lacking such a standard, people start choosing sides and arguing. It’s not pretty, and mathematics should not go there.