I’ve stolen the title of this posting from Michael Harris, see his posting for a discussion of the same topic.

A big topic of discussion among mathematicians this week is the ongoing workshop at Oxford devoted to Mochizuki’s claimed proof of the abc conjecture. For some background, see here. I first wrote about this when news arrived more than three years ago, with a comment that has turned out to be more accurate than I expected: “it may take a very long time to see if this is really a proof.”
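Since the conjecture itself never gets stated here: it says that for coprime positive integers with a + b = c, the radical rad(abc) (the product of the distinct primes dividing abc) can be much smaller than c only finitely often; precisely, for every ε > 0 only finitely many triples satisfy c > rad(abc)^(1+ε). A minimal sketch in Python computes the “quality” log c / log rad(abc) of a triple (the helper names `radical` and `quality` are ad hoc, not library functions):

```python
from math import gcd, log

def radical(n):
    """Product of the distinct primes dividing n (the 'radical' of n)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n  # whatever remains after trial division is prime
    return r

def quality(a, b, c):
    """Quality log(c)/log(rad(abc)) of an abc triple a + b = c, gcd(a, b) = 1."""
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(radical(a * b * c))

# A modest triple: 1 + 8 = 9, with rad(1*8*9) = rad(72) = 6
print(round(quality(1, 8, 9), 4))  # → 1.2263

# The highest-quality triple known (due to Reyssat): 2 + 3^10 * 109 = 23^5
print(round(quality(2, 3**10 * 109, 23**5), 4))  # → 1.6299
```

The conjecture asserts that qualities above any bound 1 + ε occur only finitely often; no triple with quality above 1.63 has ever been found.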

While waiting for news from Oxford, I thought it might be a good idea to explain a bit how this looks to mathematicians, since I think few people outside the field really understand what goes on when a new breakthrough happens in mathematics. It should be made clear from the beginning that I am extremely far from expert in any of this mathematics. These are very general comments, informed a bit by some conversations with those much more expert.

What I’m very sure is not going to happen this week is “white smoke” in the sense of the gathered experts there announcing that Mochizuki’s proof is correct. Before this can happen a laborious process of experts going through the proof looking for subtle problems in the details needs to take place, and that won’t be quick.

The problem so far has been that experts in this area haven’t been able to get off the ground, taking the first step needed. Given a paper claiming a proof of some well-known conjecture that no one has been able to prove, an expert is not going to carefully read from the beginning, checking each step, but instead will skim the paper looking for something new. If no new idea is visible, the tentative conclusion is likely to be that the proof is unlikely to work (in which case, depending on circumstances, spending more time on the paper may or may not be worthwhile). If there is a new idea, the next step is to try and understand its implications, how it fits in with everything else known about the subject, and how it may change our best understanding of the subject. After going through this process it generally becomes clear whether a proof will likely be possible or not, and how to approach the laborious process of checking a proof (i.e. which parts will be routine, which parts much harder).

Mochizuki’s papers have presented a very unusual challenge. They take up a large number of pages, and develop an argument using techniques very different from those people are used to. Experts who try to skim them quickly find themselves lost in a huge forest of unrecognizable features. There definitely are new ideas there, but the problem is connecting them to known mathematics to see if they say something new about it. The worry is that what Mochizuki has done is create a new formalism with all sorts of new internal features, but no connection to the rest of mathematics deep enough and powerful enough to tell us something new.

Part of the problem has been Mochizuki’s own choices about how to explain his work to the outside world. He feels that he has created a new and different way of looking at the subject, and that those who want to understand it need to start from the beginning and work their way through the details. But experts who try this have generally given up, frustrated at not being able to identify a new idea powerful enough in its implications for what they know about to make the effort worthwhile. Mochizuki hasn’t made things easier, with his decision not to travel to talk to other experts, and with most of the activity of others talking to him and trying to understand his work taking place locally in Japan in Japanese, with little coming out of this in a form accessible to others.

It’s hard to overstate how incredibly complex, abstract and difficult this subject is. The number of experts is very small and most mathematicians have no hope of doing anything useful here. What’s happening in Oxford now is that a significant number of experts are devoting the week to their best effort to jointly see if they can understand Mochizuki’s work well enough to identify a new idea, and together start to explore its implications. The thing to look for when this is over is not a consensus that there’s a proof, but a consensus that there’s a new idea that people have now understood, one potentially powerful enough to solve the problem.

About this, I’m hearing mixed reports, but I can say that some of what I’m hearing is unexpectedly positive. It now seems quite possible that what will emerge will be some significant understanding among experts of a new idea. And that will be the moment of a real breakthrough in the subject.

**Update:** Turns out the “unexpectedly positive” was a reaction to day 3, which covered pre-IUT material. Today, when things turned to the IUT stuff, it did not go well at all. See the link in the comments from lieven le bruyn to a report from Felipe Voloch. Unfortunately it now looks quite possible that the end result of this workshop will be a consensus that the IUT part of this story is just hopelessly impenetrable.

**Update**: Brian Conrad has posted here a long and extremely valuable discussion of the Oxford workshop and the state of attempts to understand Mochizuki’s work. He makes clear where the fundamental problem has been with communication to other mathematicians, and why this problem still remains even after the workshop. The challenge going forward is to find a way to address it.

Hi Peter,

Thanks for this post. I think what is so confusing about this situation actually comes from how many mathematicians portray the field (at least to the outside world). In this fantasy description, mathematics is a uniquely objective subject and the only thing that matters is the proof. To my mind, Mochizuki is taking this portrayal quite literally. In so doing, he is revealing to the wider world what many mathematicians are reluctant to admit to the public, if not to themselves.

In fact, the mathematical literature is very human in that it is filled with ambiguous statements, logical gaps, wrong proofs, and attribution errors. With that said, there is *something* objective about it. In general, when well-regarded(1) people from well-known research institutions(2) with little history of making big false claims(3) are willing to write up something explicit(4) which is not too long(5) in roughly familiar terminology(6) in English/French/Russian or other major languages(7) and talk about their work to experts in seminars(8), they trigger a process that produces a level of scrutiny which is usually equal to the task of validating the claim in a matter of days to months. There is also the perverse requirement that the author not develop too much deeply original thinking(0) on her or his way to a proof if this process is to complete quickly.

Now, the problem here is that Mochizuki is pushing enough on various parts of 0-8 that it exposes what mathematicians know but are not always happy to admit: that the field, if it is objective, is not objective in the simple way conveyed to the world. Rather, there is an economics of mathematics, a politics of math, and numerous failure modes of math which dominate the short run. In fact many senior mathematicians are of two minds, holding somehow that the field is both completely objective and non-objective.

As a former insider turned outsider, I personally think this situation is terrific. In one simple stroke, Mochizuki has shown us that the field is driven by incentives, culture, and human failing. I hope that this proof is correct so that in the second act, he can show us that after the politics, recriminations, and resistance, the field usually ends up with something pretty close to objective truth. Over the long run things are much better, at least as far as the proofs are concerned (if not always the narrative and attribution).

The mood appears to have changed today. Felipe Voloch’s daily update is just in:

ABC day 4 : https://plus.google.com/106680226131440966362/posts/LLHPN3QLoqX

Something opposite – Math Quartet Joins Forces on Unified Theory at Quanta: https://www.quantamagazine.org/20151208-four-mathematicians/

My two cents, as a mathematician: Peter has the process for this kind of thing exactly right. I watched it happen with Perelman’s proof of Thurston’s geometrization conjecture. I was at a conference maybe a week after it was released, and various experts in the field were there (my field is pretty close) – they had already been working on it feverishly the whole time. And they already were saying it looked likely to hold up. Of course that was a very different setup, since Perelman was doing work which followed in the footsteps of Hamilton (who teaches with Peter at Columbia), and basically what he did was to figure out a way to get through the wall which Hamilton hit. This seems way different, unprecedented more or less. I’m trying to think of a case where there was a completely new way of approaching something, and no one could understand it, and I can’t. Plenty of times someone comes up with a really new way to look at things, but it’s always the case that people catch on pretty quickly, at least in the examples I can think of. Usually, if a bunch of very good mathematicians look at something, and tell you it doesn’t make any sense, it actually doesn’t make any sense.

Given that

1) It’s basically in an entirely new subfield, and it’s incredibly rare for someone to move from one area to another

2) At least four people already understand a relatively young theory

3) No mathematician understands the vast majority of extant proofs

Is it maybe not that big a deal if experts working in other areas never understand this proof?

Rubbernecker,

This is supposed to be part of arithmetic algebraic geometry, and say new things about that subject. Many of the people at the workshop are among the best arithmetic algebraic geometers in the world.

What went wrong yesterday is that two of the four people (Mok, Hoshi) who supposedly understand IUT were unable to explain it to anyone else. A third (Yamashita) will try today, but his talks were supposed to assume material from the other two.

Yes, few people understand the details of many complicated proofs. As I was trying to explain here, that’s not what’s going on. What the experts are trying to find is a new idea that says something about arithmetic algebraic geometry.

The danger here is that IUT is a subject disconnected from the rest of mathematics. If you spend more time studying it, you will learn lots of new definitions, be able to prove lots of theorems relating them, but learn nothing new about other mathematics.

z,

Opposite indeed. That’s an inspiring story—another great article from Quanta.

The Scottish legal system has a verdict which may be applicable in this case: Not Proven.

The contrast between this conjecture and, say, the recent Polymath projects is stark.

Eric Weinstein: Your comment seems to imply that there is resistance to Mochizuki’s work due to non-objective factors. There are of course various heuristics (usually with good reasons) which mathematicians use in deciding whether it is worthwhile for them to invest large amounts of time trying to understand something new. In Mochizuki’s case, we have a conference of leading experts, including more than one Fields Medalist, getting together for a week to do their best to understand his work. One can’t really say that his work isn’t getting a fair hearing.

Hi Michael,

Thanks for this. I do not mean to imply that Mochizuki is running into something untoward. I mean to imply that we do a disservice to mathematics when we confuse the objective nature of the underlying subject matter with the way in which humans attempt to do mathematics. I don’t mind the heuristics if they are recognized not to be fundamental truths.

Of course there is a resistance here. He is (to the best of my understanding) not bending over backward to decrease the burden on the proof checkers. I would find that annoying were I an expert in this field. But I would never pretend that math is uniquely objective as a profession. Proofs may be objective, but proof checking is not.

Let me make an analogy. In computer software, there is a difference between a code review done by a subjective human doing their best and a code review done by the compiler. What I find by turns amusing and annoying is when mathematicians pretend to be compilers. The compiler doesn’t care about commenting code properly. It doesn’t see whether a style guide has been adhered to or whether readability mattered to the developer. It just compiles or it fails.

Mochizuki seems not to care particularly for decreasing the burden on his proof checkers. For the subset of those proof checkers who are open about how prone to error and delay this process is, I believe he is acting sub-optimally and should be pushed to be more helpful. But, for those mathematicians who pretend that the profession is blind to all but the objective truth….I think this is a splendid reveal. No compiler takes this long.

Warmly,

Eric

Eric,

What I was trying to get at in my posting is that “proof checking” isn’t really what this is about (at some later date “proof checking” comes into play, but other things have to happen first: you can’t sensibly check a proof you don’t understand). What mathematicians actually do is something much less mechanical, trying to individually embody understanding of mathematical ideas, and share this understanding as a community. The problem here is that Mochizuki is not successfully communicating mathematical ideas to the rest of the community. “What we’ve got here is failure to communicate…” Why this is happening is a fascinating question, involving both the standard ways the community operates, as well as some very unusual special features of this case. I don’t actually know of any other comparable example, and also it’s not at all clear where this is going to end up.

Mochizuki has made minimal effort to engage with the community at large. In that case, workshops of this sort set a dangerous precedent. The idea that simply because he has already demonstrated he is a first-rate mathematician we should take this thing seriously, without real and proper involvement on his part, is silly. Until he begins making more of an effort it should be ignored. And the description given by Voloch of Kuehne’s talk on day 2 shows, I think, that this thing is an embarrassment to the organisers and to Oxford by proxy.

There appears to be a black hole: if someone understands Mochizuki, then he can’t be understood by anyone else.

“The danger here is that IUT is a subject disconnected from the rest of mathematics.”

This is not unprecedented. In 1968, P. S. Novikov and Adian published a (negative) solution of the famous Burnside problem. The proof was more than 300 pages long, which is impressive but not unheard of even in the 1960s. Its peculiarity is that it does not use any standard methods; the authors pretty much developed a whole new theory from scratch.

Not being an expert, I can’t say to what degree this theory (extremely hard to master) is detached from the rest of mathematics. But it looks like thus far it has only been used in a very narrow area of research, more or less close to the original problem.

Admittedly, the “Burnside theory” is built on a rather elementary basis, while Mochizuki started from the height of arithmetic geometry, but I would not call it a radical difference.

If the analogy, and the proof itself, is correct, then eventually we will see half a dozen experts in this theory who understand it, while the rest of mathematicians, even from close fields, won’t bother. Not terribly promising, but we do not always have what we dream of, in life or in mathematics.

Under an earlier post, a commenter said that Mochizuki is a family name associated with the samurai class.

A Scientific American article, “Japanese Temple Geometry,” by Tony Rothman, with assistance from Hidetoshi Fukagawa, May 1, 1998, says that during Japan’s period of national seclusion, (1639 – 1854), there was a tradition in which samurai and others would prove mathematical theorems, usually about Euclidean geometry, and inscribe them on delicately colored wooden tablets called sangaku, and hang them under the roofs of temples, as an offering to the ancestors. Perhaps Mochizuki views his work in this spirit.

I am a complete outsider to this area, but it seems to me that a very important thing that is lacking is some kind of story that explains why Mochizuki’s machinery might be appropriate for proving the ABC conjecture. If I contrast it with another area I don’t understand — Grothendieck-style algebraic geometry — the latter comes out far more favourably (in this one respect — I’m not talking about the fact that it has obviously been checked by thousands of mathematicians), because there are all sorts of accounts of how the more abstract way of looking at varieties is a fruitful thing to do. I know that if I did want to learn it, I wouldn’t be told that I had to become comfortable with a vast array of complicated definitions before any benefits fed back into what I already know about.

The case with Perelman was again very different: I don’t understand his proof at all, but I did understand accounts for the non-expert that explained about Ricci flow and what it was supposed to achieve.

What I would want to see from Mochizuki and his followers is a baby result that can be proved by his methods, that points the way towards more complicated ones.

On this MathOverflow page Minhyong Kim, who is one of the organizers, has just written the following:

Update (12 December, 2015): I’ve written a brief summary of the Oxford workshop on IUTT rather rapidly, so as to save people the trouble of circulating rumours. This seemed to be a reasonable place to put it. All errors in it are my own: http://people.maths.ox.ac.uk/kimm/papers/iutt=clay.pdf

That link seems to be broken (also on the MathOverflow page)?

Best, 🙂

Marko

The link is probably only temporarily broken … Kim explains it in MathOverflow; he took it down while getting permission to quote Mochizuki.

Gowers,

There are cases when it is impossible. The Novikov-Adian theory I mentioned is an example of a huge machinery designed specifically for an extremely difficult but very narrow target, which is difficult to use for anything else. No baby results there.

As I know nothing about IUT I can’t say if that is the case here, but it may be a possibility.

Gavrilov, that’s very interesting, but it raises an obvious question. If the only way to hit a narrow target is via a huge, elaborate, and seemingly irrelevant theoretical apparatus that has no other applications, then how does anybody discover that apparatus in the first place? There must be some story. The least Mochizuki could do is tell us that story. What is the story in the Novikov-Adian case? It cannot be that, just for fun, they developed an incredibly complicated theory and then observed to their great surprise that it solved precisely one interesting problem. Sometimes the story is that a theory is developed for another purpose but turns out to be useful for the given problem, but you imply that that is not the case for the Novikov-Adian theory.

I realize that your point is that there aren’t baby results along the way. Maybe I should generalize my requirement and say that there should be a path from not understanding the theory at all to understanding it completely that does not require huge leaps of faith that there is some point to what one is learning.

I’m far from an expert on the Burnside problem, but describing the proof as isolated and without baby results does not seem correct to me. As I have understood it, Novikov and Adian set out to understand and classify periodic words and cancellation in groups, and as a result of that work could prove that there are infinite Burnside groups. So before they got to their famous result they also had relevant “smaller” results.

Gowers, you ask questions I would like to know the answers to myself.

Apparently, the idea of a solution first came to Novikov in the 1950s, but I do not know how far it was from the final theory, or what the *story* was. Much less what the path “from not understanding the theory at all to understanding it completely” is in this case. These are interesting questions, but probably they could only be answered by an expert.

For those who are interested, I have found a nice piece about the solution of the general Burnside problem on MathOverflow, by Mark Sapir.

http://mathoverflow.net/questions/48184/a-synopsis-of-adyan-s-solution-to-the-general-burnside-problem

In particular, it is said there that there is no “short description” of the Novikov-Adian work.

Thank you Chris Austin for the sangaku reference. That was an interesting read:

“Many of the problems are elementary and can be solved in a few lines; they are not the kind of work a professional mathematician would publish. Fukagawa has found a tablet from Mie Prefecture inscribed with the name of a merchant. Others have names of women and children—12 to 14 years of age. Most, according to Fukagawa, were created by the members of the highly educated samurai class. A few were probably done by farmers; Fukagawa recalls how about 10 years ago he visited the former cottage of mathematician Sen Sakuma (1819–1896), who taught wasan (native Japanese mathematics) to the farmers in nearby villages in Fukushima Prefecture. Sakuma had about 2,000 students…. The best answer, then, to the question of who created temple geometry seems to be: everybody. On learning of the sangaku, Fukagawa came to understand that, in those days, many of the Japanese loved and enjoyed math, as well as poetry and other art forms.”

Sorry Peter for the off-topic posting. Please remove if inappropriate.

Gowers, using your terminology from “Two cultures”, the work of Novikov and Adian is much closer in spirit to problem solving than to theory building. This is why there is, apparently, no way to describe it in two words. There are other examples of this sort, although less extreme, such as the Feit-Thompson theorem.

(But probably there are reasons to call this work a new theory, albeit an odd one, and not a huge collection of tricks. It may be systematic in its own way.)

My point is that when you are focused on a single extremely difficult problem, then no matter where you started, you have an (unfortunate) chance of producing something as incomprehensible as the Novikov-Adian proof.

@Tim Gowers

I think Mochizuki *has* been telling the story of how he came to think of his ideas, and how one should think of his work as approaching the solution: by analogy with other theorems and so on. The problem is that this other, existing, body of work requires something that doesn’t exist in the arithmo-geometric world, and this is what his theory is designed to give, at the expense of catapulting out of the usual techniques and objects.

The problem is, there’s too much analogy (“alien arithmetic structures” and so on) and less middle-ground explanation before one gets to pages and pages of definitions. I think Lieven le Bruyn did a great job of working through what a Frobenioid is, in a simple and known case. Such unpacking is something Mochizuki didn’t do; clearly somebody or some collection of somebodies needs to go back to the precursor papers and fill in all the worked examples that are absent. This is for me the clearest way forward, and how to approach the massive wall of definitions with something like a climbing strategy.

@Tim Gowers

I think Mochizuki has attempted to tell the story of how he came to develop his ideas in the slightly more expository piece, A Panoramic Overview of Inter-universal Teichmuller Theory, available on his website:

http://www.kurims.kyoto-u.ac.jp/~motizuki/Panoramic%20Overview%20of%20Inter-universal%20Teichmuller%20Theory.pdf

As far as I can tell, and this is with the caveat that I could be very wrong in such a brief space, this grew from Mochizuki’s proof of a conjecture of Grothendieck in anabelian geometry. One of the first things he seemed to have done, after proving Grothendieck’s conjecture, was to build an analogue of Hodge theory for Arakelov geometry, which he called Hodge-Arakelov theory and wrote about in papers in 1999 and 2002.

In the introduction to Panorama, he characterised the current theory as the result of trying to overcome the difficulties of applying scheme-theoretic Hodge-Arakelov theory to diophantine geometry. The resulting theory appears to be (in part) a theory of non-scheme-theoretic deformations, i.e. a theory that presumably involves geometric structures that go beyond schemes.

(Rh L) “[he built] an analogue of Hodge theory for Arakelov geometry, which he called Hodge-Arakelov theory and wrote about in papers in 1999 and 2002. ”

That’s strange, because it would have been more important than proving ABC if he had succeeded. Building a working machinery that unifies Hodge theory with its number-theoretic analogue (étale cohomology and sites, l-adic Galois representations) has been a central goal in algebraic geometry for the past 50 years. It would have been the news of the millennium had somebody done this.

The more limited goal of building a more adelic Arakelov geometry has been around since the late 1970s, and is also considered very important. It is a fearsomely technical subject in ways that run in a different and much less algebraic direction than anything Mochizuki is known to have published or studied. If he had surmounted the difficulties (1) at all, and even better, (2) using his methods that don’t rely on heavy doses of modern differential geometry and analysis, that would be considered a titanic achievement. It would have been noticed and recognized in the past 15 years.

That brings me to one of the weird points about the Mochizuki ABC papers: where in those documents is there any of the hardcore analysis that one would expect, relating the very general algebra to the analytic number theory problem that is ABC? One would expect at least a page or two (or fifty) of grungy estimates and hard analysis, at least for getting a weak form of ABC that might be boosted to full ABC by more algebraic arguments. Mochizuki does seem to use the latter, but there isn’t much sign of hard analysis in his papers, and one should be able to find it just by skimming if it’s there. This is a question that must have come up in various forms at the Oxford conference — where is the hard work being done? — and it would be a big confidence builder if the believers would just point to the locations in Mochizuki’s papers where the analytic-number-theory part of the action takes place. The idea that it can be black-boxed into a few lines of analysis and 500 pages of algebra doesn’t sound right.

Having said that, I think the sociological concerns about not leaving Japan to explain the proof, the papers not having been refereed, etc, have been overplayed. The papers contain plenty of motivation and exposition that is illuminating apart from the stated goal of proving ABC. They are quite discursive compared to anything else in the field of arithmetic geometry, which has more than its share of long dense papers, and are not in the impenetrable dense theorem-proof style.

(can’t get the formatting to work when posting with firefox, sorry about that.)

@David Roberts, thanks for the nice words. I only “checked” one paper as a non-specialist, got stuck and then discovered the wealth hidden in the Arakelov bit. Just the same, if a student were to hand in Frobenioids 1 as a paper, she’d have to rewrite it seriously.

Sad to see that some refer to my blog as criticising Mochizuki (or even calling his work ‘nonsense’), most recently at “Todeszone der Mathematik”. I’m just getting tired of his lack of interest in reaching out.

Also sad to see that Minhyong Kim did not (yet) put his report on last week’s Mochizuki-Fest in Oxford back online. I learned a lot from it. Anyway, luckily there’s always the mysterious @math_jin Twitter account to repost images of ‘lost’ files.

As my own ‘lost’ blog is getting more hits recently I’ve put a little story online about the Log Lady and the Frobenioid of Z. It’s about the Arakelov bit, but probably only digestible if you did see Twin Peaks, the log lady, and Norma at the Double R Diner, way back then…

@random reader: The “hard analysis” you’re looking for is contained entirely in the known equivalence (from several decades back) between Szpiro’s Conjecture for elliptic curves and the ABC Conjecture when each is formulated over general number fields (proof going via consideration of Frey curves associated to an ABC triple and a robust variation of the ground field).

Everything Mochizuki is doing is focused on proving Szpiro’s Conjecture for all elliptic curves over all number fields. Moreover, it is in the nature of the method that his main work is in the case of elliptic curves satisfying certain auxiliary local and global properties that necessitate working over a somewhat large number field (and the general case is then deduced by a very short argument); in particular, his method does not work directly over Q in the main parts.
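For orientation, a standard formulation of the target statement over Q (the general case replaces Q by a number field, using norms of the discriminant and conductor) can be written as:

```latex
% Szpiro's Conjecture (standard formulation over Q): for every
% \varepsilon > 0 there is a constant C(\varepsilon) such that every
% elliptic curve E over \mathbf{Q}, with minimal discriminant \Delta_E
% and conductor N_E, satisfies
\[
  |\Delta_E| \le C(\varepsilon)\, N_E^{\,6+\varepsilon}.
\]
```

Roughly, the equivalence with ABC goes through the Frey curve attached to an abc triple, whose minimal discriminant is essentially (abc)² and whose conductor is essentially rad(abc).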

For the same reason, one cannot get some insight into Mochizuki’s methods by trying to unravel what they are saying in the context of some of the other known concrete consequences, since his entire proof takes place in the setting of Szpiro’s Conjecture (whose link to the known consequences via ABC goes through long-known arguments which treat Szpiro’s Conjecture as a black box).

It is somewhat akin to the fact that Wiles’ proof of Fermat’s Last Theorem works not with the Fermat equation, nor even with elliptic curves over Q (for which general modularity in the semistable case was sufficient to apply to hypothetical Frey curves), but rather with Galois representations and modular forms, which in turn admit powerful operations having no interpretation in terms of elliptic curves (let alone the Fermat equation).

That is, one cannot get insight into Wiles’ method by thinking solely about the more concrete framework of elliptic curves (because spaces of weight-2 modular forms with a given level generally have Hecke eigenvalues that are not rational, so not all eigenforms in the space are related to elliptic curves).

Brian Conrad,

thanks very much for the comment. Good to see the big guns weighing in here.

I don’t see how Szpiro and ABC differ here, or how the lack of visible hard analysis is comparable to Wiles’ proof of Fermat’s Last Theorem.

In Wiles’ work on FLT, the analytic objects he was showing to exist were known to have a rigid algebraic description with rich properties, and conjectured to satisfy a relatively precise (Langlands) equivalence between the algebraic and analytic sides of the coin. Wiles made a breakthrough on the Galois (algebraic) side and consequences on the automorphic (analytic) one flowed, but this transfer of results was not itself the novelty of his work. The relations to analysis and the automorphic side of the Langlands philosophy were, if vague memory serves, encapsulated in the use of some results from Tunnell’s work on icosahedral(?) representations. Correct me if that’s wrong, it is surely in your line of expertise and not mine. But the point is that a sufficiently precise translation to the non-analytic setting was already known and Wiles used it, maybe with some new twists, but the main action was in the algebraic theory, deformation of Galois representations, the commutative algebra of Hecke rings, and the patching argument with auxiliary primes.

It is a reasonable and important question to understand what Wiles’ proof does at the level of concrete objects such as coefficients of modular forms, since he is ultimately proving an existence theorem for concrete objects and it is a bit outre if there is no way to describe in principle how the machinery unpacks to some sort of complicated manipulation of those objects. Asking the equivalent about Mochizuki’s work does not strike me as a form of confusion or category error, but a basic conceptual point that (if answered) would clarify what is happening in his papers.

Returning to Mochizuki’s proof and the absence of visible hard analysis there:

Both the Szpiro conjecture and ABC are analytic conjectures, one about numerical invariants of elliptic curves and the other about invariants of pairs of integers (or infinite families of either type of object, and the extension to number fields). In both cases one would expect a proof to include some sort of nontrivial estimation process involving inequalities, real/complex/harmonic analysis, L-functions, differential equations, etc., in order to obtain conclusions. Mochizuki works with some version of theta functions, which (maybe in a different setting) were known for a long time to have a more algebraic description, so to an extent there is an algebraization of the things to be proved, but I am not aware of any purely algebraic statement that is known to imply the ABC, Szpiro, or Vojta conjectures. His papers do involve some estimates, but rather short ones that make up very little of the content of the papers. This is not the distribution of labor one (or this one commenter) would expect in a paper that purports to accomplish an amazing feat in what is ultimately analytic number theory.
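For readers who want the statements in front of them, here are the usual formulations of the two conjectures under discussion (these are the standard textbook versions, not taken from Mochizuki’s papers):

```latex
% ABC conjecture (Masser–Oesterlé): for every \epsilon > 0 there is a
% constant C_\epsilon such that every triple of coprime positive
% integers with a + b = c satisfies
c \;\le\; C_\epsilon \cdot \operatorname{rad}(abc)^{1+\epsilon},
% where \operatorname{rad}(n) denotes the product of the distinct
% primes dividing n.

% Szpiro's conjecture: for every \epsilon > 0 there is a constant
% C_\epsilon such that every elliptic curve E over \mathbf{Q}, with
% minimal discriminant \Delta_E and conductor N_E, satisfies
|\Delta_E| \;\le\; C_\epsilon \cdot N_E^{\,6+\epsilon}.
```

Both are inequalities about integer invariants, which is why one would a priori expect some serious estimation to appear in any proof.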

In brief, even taking Szpiro/ABC equivalence as a black box, which might or might not contain a lot of hard analysis, it’s hard to see how the Szpiro part can then be proved without lots of additional hard analysis. Perhaps Mochizuki shows that there is so much uniformity in the way the elliptic curve invariants vary, that easy estimates will do. But even this would require some strong results interconnecting the algebra and the analysis and at some point estimates seem likely to intervene.

@random reader:

Perhaps I was unclear in the analogy I was trying to make. The aspect of the proof of FLT I was alluding to was just that even though the statement of most primary interest is about showing that a specific equation has no Q-solution or that a specific q-series with Q-coefficients is in fact a modular form, the actual argument must take place (as you are aware) with Hecke operators acting on spaces with eigenvalues outside of Q and with Galois deformation rings, neither of which can be “interpreted” entirely in terms of an initial more concrete structure of interest (such as a Diophantine equation or a specific q-series or a specific elliptic curve over Q).

That is, it was just meant as an illustration of the well-known fact that to prove a theorem of interest about a concrete thing we may need to enlarge the scope of the problem and then could lose the ability to “interpret” the core ideas of the proof in terms of operations involving just the original concrete thing. An expert in analytic number theory asked me recently how to unravel Mochizuki’s arguments in the context of some other concrete consequences, and I had given a related explanation for why that couldn’t be done and why this isn’t a danger sign at all, and so I imported the same explanation into my original reading of your question: I had mistakenly thought you were specifically asking about trying to see where in his arguments he is getting his hands dirty with estimates involving ABC-triples. You won’t find it in that form because he never works with ABC-triples.

In the context of Szpiro’s conjecture, he also doesn’t apply IUT to any old elliptic curve, but has to assume several local and global properties which are always attained after a finite extension of the ground field with controlled degree. So it is an essential feature of his technique that he is permitting rather general ground fields, and in fact the conditions he needs can’t ever be fulfilled over Q.

In the end he is going to aim to prove Szpiro (for a given epsilon, with elliptic curves satisfying some specific local and global properties) with a constant depending on the ground field only through its Q-degree, so it is kosher for him to make extensions of controlled degree in the middle of the argument. There is a separate “short” argument using Belyi maps that reduces the general case of Szpiro to the ones he actually handles in the IUT machinery, but this latter argument is a clever proof by contradiction that makes things ineffective, roughly as in Roth’s theorem.

To tell you where the “analysis” yielding an inequality should be found, I need to say something about what is going on in his method (for which I only have an impressionistic awareness based on some lectures at the Oxford workshop last week). He uses serious algebro-geometric constructions with p-adic theta functions to make cohomological constructions that encode some local numerical invariants arising in Szpiro’s conjecture (for an E satisfying the local and global hypotheses alluded to above) in terms of a special kind of fibered category (arising from E – {0}) called a Frobenioid. This encoding involves a controlled ambiguity after accounting for variation of choices made in the construction (i.e., what is intrinsic is not a specific cohomology class, but rather a certain coset by a controlled subgroup of an ambient cohomology group on a certain fundamental group). The full force of Mochizuki’s work on the anabelian properties of hyperbolic curves is used to show that everything which just happened can be expressed in terms entirely intrinsic to a Frobenioid without any reference to the original elliptic curve.

The purpose of encoding number-theoretic data (with controlled error) in terms of Frobenioids appears to be that Frobenioids admit additional intrinsic operations (such as a weak version of “Frobenius maps”) which one can’t express in terms of the original geometric objects (such as punctured elliptic curves). By analyzing how the cohomological constructions interact with those operations (this involves introducing yet more abstract notions, such as “Hodge theaters”), eventually after a lot of work he arrives at two bounded domains in an R-vector space, one domain inside the other, and comparing their volumes (which have to be computed!) gives the desired logarithmic form of the inequality with an “error term” (arising from various ambiguities in the constructions) that is the desired uniform constant if one can exert sufficiently precise control on it uniformly in the original elliptic curve. So it is in this final step of computing volumes with an “error term” that your sought-after “hard analysis” should be found.

But the constant obtained in that way isn’t the one for Szpiro’s Conjecture (for a given epsilon) for all elliptic curves over number fields of controlled degree! It is only suitable for elliptic curves satisfying a specific list of local and global properties. To bootstrap this back to general elliptic curves over an original number field of interest, one needs to go through a proof by contradiction as mentioned above, so in the end the final constant is ineffective (but depends on just epsilon and the Q-degree of the original ground field). So one recovers Mordell but not effective Mordell.

Wow, I didn’t expect my comment to have sparked off such an excellent discussion.

@Brian Conrad: Thank you for the illuminating comments and for the excellent notes on the Oxford IUT Workshop you’ve posted here (via the “mysterious” @math_jin):

http://mathbabe.org/2015/12/15/notes-on-the-oxford-iut-workshop-by-brian-conrad/

@Lieven le Bruyn: I’ve also very much appreciated your expository work on Frobenioids. Thank you for pointing out the @math_jin account, it would seem to be very useful at this stage.

@random reader: I was going to chime in on your comment to my remark on Hodge-Arakelov theory, but I think Brian Conrad has addressed that in his notes, i.e. that what was produced in the earlier papers would not have lived up to your expectation of what a “Hodge-Arakelov theory” would be. I think Mochizuki had said as much in the introduction that I was paraphrasing.

For those following comments here, but not updates, you should be reading Brian Conrad’s report on the workshop here

http://mathbabe.org/2015/12/15/notes-on-the-oxford-iut-workshop-by-brian-conrad/

Commenting because I can, and because it’s not true that people don’t read old comment threads[1]: they are an important source of a) historical information b) inside information. The number of times I’ve searched for mathematical topics and found excellent blog discussions of them…

I also wanted to comment on the mathbabe thread after it was closed, to provide up-to-date links for people trawling the interwebs for the history of this interesting episode.

[1] Sure it was a generalisation: I had assumed though that old comment threads were closed to prevent spam. The suggestion of starting a new blog to continue the discussion baffles me.

David Roberts,

The main reason is spam, but more specifically, after a certain period the ratio of non-spam to spam comments becomes quite small, and the great majority of traffic to old postings is spambots trying to break in.