A special seminar has been scheduled for tomorrow (Monday) at 3pm at Harvard, where Yitang Zhang will present new results on “Bounded gaps between primes”. Evidently he has a proof that there exist infinitely many different pairs of primes p,q with p-q less than ~~17,000,000~~ 70,000,000.

Whether this proof is valid should become clear soon, but there still seems to be nothing happening in terms of others understanding Mochizuki’s claimed proof of the abc conjecture. For an excellent article describing the situation, see here.

**Update**: The “bounded gaps” talk is now on the Harvard seminar listing with abstract

The speaker proves that there are infinite number of pairs of primes whose difference is bounded by 70 million.

For more on the significance of this, see this Google+ posting by David Roberts.

I haven’t seen a paper, but rumor is that one exists and two referees at a major journal have found it to be correct.

**Update**: The most recent version of Mochizuki’s lecture notes for a general talk about his work is here. As mentioned in the Caroline Chen article, Go Yamashita has been talking to Mochizuki. Yamashita has now posted a short document FAQ on “Inter-Universality” and promises “For the details of the theory, please wait for the survey I will write in the near future.” He also notes:

I refuse all of the interviews from the mass media until the situation around the papers will be stabilised.

**Update**: In a weird coincidence, another major analytic number theory result is out today, a proof by Harald Helfgott of the ternary Goldbach conjecture. This says that every odd integer greater than 5 is the sum of three primes. The result had previously been known for all integers above e^{3100}, and Helfgott’s proof reduces that bound to 10^{30}, which is small enough that all smaller values can be checked by computer.
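To make the statement concrete, here is a quick brute-force check of the ternary Goldbach claim for small odd numbers. This is purely my own illustrative sketch and has nothing to do with Helfgott's methods; the function names `primes_up_to` and `three_prime_sum` are invented for the example.

```python
# Illustrative sketch: verify that small odd numbers > 5 decompose
# into a sum of three primes (the ternary Goldbach statement).

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def three_prime_sum(n, primes, prime_set):
    """Return a triple of primes summing to odd n > 5, or None."""
    for p in primes:
        if p > n:
            break
        for q in primes:
            if q > n - p:
                break
            if (n - p - q) in prime_set:
                return (p, q, n - p - q)
    return None

primes = primes_up_to(10_000)
prime_set = set(primes)

# Every odd n with 5 < n < 10000 should decompose.
assert all(three_prime_sum(n, primes, prime_set) for n in range(7, 10_000, 2))
print(three_prime_sum(21, primes, prime_set))  # → (2, 2, 17)
```

Of course the interesting content of the theorem is the range no computer can reach; the computation only disposes of the finitely many cases below the analytic bound.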

**Update**: *Nature* has a story up about the Zhang result, including details of one of the *Annals* referee reports (I gather the paper will be published there).

**Update**: For some background to the methods being used by Zhang, see here. For Terry Tao on Zhang, see here, on Helfgott, here.

**Update**: New Scientist has a story about the Zhang result here, with quotes from Iwaniec, who has reviewed the paper, finding no error.

**Update**: A report from the talk at Harvard is here.

**Update**: There’s more about the Zhang proof at Emmanuel Kowalski’s blog, including a link to the Zhang paper.

**Update**: Nice piece about this in Slate from Jordan Ellenberg.


So I’m tempted to believe this simply because it’s 17 million, yellow pigs and all that…

hmm, 17 million, they’re not exactly twin primes, Hardy and Littlewood would be only mildly impressed (but impressed nonetheless)

There is nothing on the Harvard Math. Dept. website about this seminar at the moment.

http://www.math.harvard.edu/seminars/index.html

The only mathematician by that name I could find is a lecturer at UNH who goes by Tom Zhang. Is that him?

http://www.math.unh.edu/faculty

Peter, could you give the source of your information?

If this result holds, it’s a big deal, JG’s comment notwithstanding. Chen Jingrun’s old result was a big deal and this is a lot more. Twin primes is a hard, hard problem.

Yes, this is the Yitang Zhang at UNH. The source of the information is an e-mail circulated by Yau to announce the seminar. There’s been a correction to the 17 million, now it’s 70 million.

need to correct the correction (UN-strikeout 70,000,000)

Awesome news link on the ABC, Peter, thanks. I was a little perplexed by this quote, though:

“For centuries, mathematicians have strived towards a single goal: to understand how the universe works, and describe it.”

I thought that was physicists?

I have been disagreeing a bit here https://plus.google.com/108081058828040288656/posts/aTMDLugKbHR with that article by Chen, concerning the claim that what Mochizuki writes is plain “gibberish”.

@S, I had the exact same reaction to that line. “Science journalism,” sigh. Still, pretty good summary of the situation … especially with Mochizuki maintaining radio silence to the consternation of the entire math community.

I’m curious, did anyone send the author money? Ten years from now there will either be no content providers, or we will have figured out how to compensate the class of professional freelance content providers.

@Steve L: I tried (for $1) but my (good) card was not accepted by Paypal. So much for the future…

Schecky R,

Thanks, fixed.

Mochizuki’s claimed proof is not even wrong!

Pingback: The Paradox of the Proof | oh tempo le tue piramidi

Pingback: Primes gotta stick together | The Aperiodical

I sent the author $4. I thought the article was exceptionally well-written and clear, overgeneralizations about mathematics aside. In fact, I was planning to ask Peter for an update on the status of the ABC conjecture sometime soon. It seems like the status is not very promising.

Nothing on the arXiv?

arx,

No paper publicly available on the arXiv or elsewhere as far as I know. But rumor is that there is a paper that has now been checked by referees at a top journal, so there’s a lot of optimism that Zhang has a correct proof.

from the abstract of the talk : “The speaker proves that there are infinite number of pairs of primes whose difference is bounded by 70 million”

And so what?

What is the significance of this result?

@Thomas,

Not that I’m a number theorist, but I believe the best known result in this direction is that there are an infinite number of pairs of primes with p_{n+1}-p_n less than c log(n) (due to Erdos I think) and recent results show that you can let c -> 0. This of course is a significant improvement on that since there is no dependence on n. Someone please correct me if I’m wrong, I was too lazy to check on Wikipedia, and I haven’t really followed number theory since I took a class with Peter Sarnak in grad school…

Thomas,

I’ll add something to the posting about the significance, but basically this is progress towards the “twin prime conjecture”. Naively one might think that primes just become uniformly sparser and sparser as you go to larger numbers, but this says that you’re always going to find pairs that are relatively close (with “close” here meaning within 70 million) no matter how far out you go.
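The contrast can be seen numerically. The sketch below (my own illustration, not anyone's proof technique) sieves the primes up to 10^6 and compares the average gap near the top of the range, which grows like log N, against the persistence of very small gaps; `primes_up_to` is a name invented for the example.

```python
# Illustrative sketch: the average gap between primes near N grows
# like log N, yet very small gaps (twin pairs, gap 2) keep recurring.
# Zhang's result says gaps below 70,000,000 recur forever.

import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(1_000_000)
gaps = [q - p for p, q in zip(primes, primes[1:])]

# Average gap near 10^6 is close to log(10^6) ≈ 13.8 ...
print(sum(gaps[-1000:]) / 1000, math.log(1_000_000))

# ... yet twin pairs (gap exactly 2) remain plentiful all the way up.
twin_count = sum(1 for g in gaps if g == 2)
print(twin_count)  # thousands of twin pairs below 10^6
```

The twin prime conjecture says the second count never stops growing as the sieve limit increases; Zhang's theorem establishes the analogous statement with 2 replaced by some bound below 70 million.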

Jeff & Peter, thank you for taking the time, this helps.

“…rumor is that there is a paper that has now been checked by referees at a top journal…”

Come on, this blog is all about objecting to hype. You shouldn’t be doing this.

Michael,

I’ve seen no evidence so far that this is hype. Spreading rumors is part of the mission statement of this blog, when they’re rumors I have good reason to believe…

Meanwhile, over at Matt Strassler’s scrupulously peer-reviewed and rumour-free blog, not a breath about prime gaps! Now that’s what I call professionalism. If you do feel you need to learn more than scurrilous hype surrounding this 70 megascalar “result-like” scale-independent gap exclusion, you ought to respect the scientific process and wait until it has been properly vetted and the publisher has printed the journal and mailed it to your institution’s library.

Pingback: Posible avance en el estudio de los primos gemelos - Gaussianos | Gaussianos

If two journal referees back it up and a Harvard seminar is planned, then that’s a rumor worth spreading!

And I’m quite curious how this will pan out: that would be a major result, and Zhang seems to be a 50+ lecturer with few previous papers, so that’s potentially a nice story too.

If two journal referees back it up and a Harvard seminar is planned, then it makes no sense not to already circulate a preprint. Someone should ask him to post something on the arXiv and/or personal website.

I wish there was a like-button for some of the very funny comments above. I think it is nice to post about these rumors, doing so adds a little excitement to our field. It is not as if we all stop working and hold our breath until we see the proof.

I would like to know what you think about this “proof as a social construct” idea:

http://mathbabe.org/2012/11/14/the-abc-conjecture-has-not-been-proved/

http://mathbabe.org/2012/08/06/what-is-a-proof/

Richard,

I pretty much agree with mathbabe on this, as most things (by the way, she’s a good friend). For a related take on the role of proof, see Bill Thurston’s fascinating essay “On proof and progress in mathematics”, available here

http://arxiv.org/abs/math.HO/9404236

Something that struck me when I moved from the physics to the math community was the extent to which mathematics really is a living, oral tradition. Several times when I asked mathematicians where I could go read about something, their answer was “there’s nothing easily readable available, but call up or go see X and he/she will explain it to you.” (this was before the internet or e-mail or mathoverflow…).

One thing that Cathy doesn’t emphasize is that it’s not necessary for someone trying to convince the math community that they have a proof to travel around, talk to people, give lectures etc. It’s fine if they don’t like doing this, all they have to do is to follow the standard tradition of writing up their work in a form others can understand and sending the manuscript to a high-quality journal for refereeing. If it’s an important result, the best experts in the field will generally be willing to work very hard as referees to check it. As far as I know Mochizuki hasn’t done this, and I don’t know what his reasons are. His papers are so difficult to follow that the refereeing process would likely be a hard one to carry through, but this is not that unusual. The initial reaction from expert referees might be “this is not comprehensible for reasons X, Y, and Z, the author needs to rewrite the thing before it can be checked.”

So, there’s lots of interesting things to discuss and argue about concerning how proof really works in the math community, but in this case, there’s one very simple thing to say: if Mochizuki wants people to acknowledge that he has a proof, he needs to submit one or more papers with complete details to a good journal for refereeing. I hope what is going on is that now that he thinks he has a finished proof, he is writing up all the details in as clear a form as possible, and planning on sending the result to a journal.

I would have to say, I think that (with all due respect to a very interesting blog) mathbabe is pretty out there when she suggests that, if the proof is right, but this is not discovered for many years, and it is then realized to be right through the work of some fictitious additional “M”, then “M” should receive equal credit with Mochizuki. I can’t really see this happening, given the primacy that is rightly granted to coming up with the big ideas; nor should it.

We have a recent example of something not entirely dissimilar. Perelman’s proof presumably would have remained incomprehensible for a long time if not for the serious and dedicated work by a few small teams in translating and expanding on it. Several of those involved were rewarded with improved stature and even better jobs, which is only fair; but I don’t think anybody makes the mistake of thinking that they, and not Perelman, proved the Poincare conjecture.

(I suppose there is a sliding scale, though. Various people who “rediscovered” something that CF Gauss had not bothered to publish do typically get dual credit for discovery; I suppose one can imagine a spectrum where, if a paper were released that was so genuinely incomprehensible that even much work would not be rewarded in understanding it, then somebody who, perhaps inspired by the general approach, managed to reinvent the techniques and elucidate the paper would probably also get and deserve dual credit.

Sorry for the double post)

S,

I think she’s exaggerating a bit to make a point. For instance, no one believes that referees should share significant credit for a result they check, no matter how hard they have to work (unless they have to fix serious errors…). You can make up all sorts of hypothetical situations to argue about how to balance credit for an author’s work and what needs to be done so that others understand it, but sticking to actual examples, the Perelman story was very different. He did not provide details of his proof, but did provide sufficient explanation of the new ideas needed for a proof so that experts could fill in the details (with a sizable amount of work..). He travelled around to meet with experts and give lectures explaining his ideas (I was at one here at Columbia, as it happens, sitting next to Richard Hamilton), and he patiently answered questions from experts who wrote to him to ask him for details of specific arguments. From what I remember, it fairly quickly was clear to experts what his new ideas were and what his argument was, even if checking in detail that the argument worked was a fairly long process.

So, there are lots of ways of doing this, of communicating the ideas that make up a proof to the rest of the community. At first I thought that the Mochizuki story would be somewhat like the Perelman one: experts would hold seminars, go through his papers, and puzzle out how his argument worked, even if they needed to reconstruct a lot for themselves since his writeup was hard to follow. So far, we have a failure to communicate here….

I made 70 millions in my prime…per year!

Tiger, the goal here is to make less, not more!

There is definitely a paper by Zhang. I was at the Harvard talk and a paper was available in the room. It has not been made publicly available, but some well-known experts in this area have read the paper and think the proof is correct.

How convenient for Helfgott that he is 35, the International Congress is one year away, and there are no obvious front-runners!

In fact, one could regard both breakthroughs as the revenge of the middle-aged and mathematically unproductive. Zhang’s first paper dates from 1985, yet he has only 2 papers on Mathscinet and holds the rank of lecturer at UNH. And Helfgott’s work was done “in close coordination with” that of David Platt, who received a computer science BA in 1983, but only got his mathematics doctorate 2 years ago and is pictured on his home page with his grandson. Who says mathematics is a young man’s game?

Just a note, the New Scientist piece doesn’t say anything about Iwaniec refereeing the paper, only that he read it. Goldston has also read it, so it must have made the rounds at least somewhat. It would be unusual for someone to publicly acknowledge refereeing something (though I have seen it, at a 60th birthday conference for Scott Wolpert, where someone admitted refereeing two papers of Wolpert for the Annals because “he was the only person who could understand them” 🙂)

Jeff M,

You’re right, that was an unjustified inference from the New Scientist story, I’ve edited the text.

It’s strange to say that H. Helfgott proved the ternary Goldbach conjecture (the abstract of the paper says “The ternary Goldbach conjecture, or three-primes problem, asserts that every odd integer N greater than 5 is the sum of three primes. The present paper proves this conjecture.”) This is an unusual way of putting things because it has long been known that the conjecture is true for every sufficiently large number: first conditionally, due to Hardy and Littlewood, and then unconditionally due to Vinogradov. Not to take away from Helfgott’s work, but it would have been better to present things as extending the range of validity of Goldbach by a finite amount, or in terms of complementing Vinogradov’s proof, but not to appear to claim credit for the proof, which confuses non-experts.

Mathematician: I think the point is that one can check by computer the range not covered by Helfgott’s theorem (which one could not do with the earlier results), so now the ternary Goldbach conjecture is known to be true. Of course it would be more satisfying if the proof didn’t involve the number-crunching-by-computer aspect, but at least the conjecture has now been verified.

Michael, surely it takes a lot of optimization work to lower the constant from gazillion or whatever it was down to a range checkable by a computer, but it is still an arbitrary finite range… Of course someone had to do it, but in light of Vinogradov’s theorem it’s more accurate to say that the proof of the 3-prime problem has been completed, or that the finite number of possible exceptions in Vinogradov’s theorem has been ruled out. I guess I might be missing something, especially since I don’t know the culture surrounding Vinogradov’s theorem well. On the other hand, I consider hypotheticals like, what if Wiles’ proof of FLT had worked only for n > n0, and years later mathematician Y were to prove it via brute-force, case by case, for n = 3, …, n0, would it be reasonable to say that Y proved FLT? Or, say, the Riemann hypothesis was proved for t > gazillion, and then it was a matter of further optimization and brute-force (so inevitable, requiring no breakthroughs or the breaking of a barrier) to verify it in the remaining range…etc. Of course, ruling out a finite number of exceptions can require turning ineffective constants into effective ones, or a wholly new proof strategy, or eliminating exceptions from an infinite range, or…etc, which can be very difficult.

P.S. Regarding using number-crunching in proving mathematical theorems, I have no problem with that provided the number-crunching produces a certificate that can be quickly checked (possibly via computer) by anyone wishing to do so. If not, then I might still be convinced if the computation is carried out by multiple independent sources and their results agree to the expected precision. Of course, number-crunching has a lot to offer in gaining insight or to obtain support for conjectures, but perhaps is not suited for proving theorems since it usually doesn’t meet the accepted standards of a mathematical proof.

@Mathematician

the number crunching was done by interval arithmetic with ‘safe’ margins of error. Helfgott has said that the number at which his pen-and-paper proof takes over in his article is purposefully chosen to have a large overlap with what has been done by the computer, and that he’s actually checked it further down than what the article says. He also said that he believes the bounds he gets are far from optimal, since that isn’t his forte, and people experienced in arriving at tight bounds will be able to improve his result, lessening the reliance on the computer.
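For readers unfamiliar with the technique, here is a toy sketch of the interval-arithmetic idea. This is purely my own illustration and bears no relation to the actual code used in the Helfgott/Platt computation: each quantity is carried as a pair of endpoints that provably bracket the true value, and every operation widens the endpoints outward so rounding error can never invalidate the bracket.

```python
# Toy interval arithmetic (illustration only): a value is stored as
# [lo, hi], and each operation rounds the endpoints outward by one
# ulp via math.nextafter, so the true result always stays bracketed.

import math

class Interval:
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# 0.1 + 0.2 is the classic floating-point trap; the interval endpoints
# still bracket 0.3 despite the rounding of the individual operations.
x = Interval(0.1) + Interval(0.2)
assert x.lo <= 0.3 <= x.hi
print(x)
```

A rigorous library would also widen the endpoints at construction (since decimal literals like 0.1 are themselves rounded) and handle many more operations, but the outward-rounding principle is the same.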

Pingback: Liquidity Trap and Number Theory News | Pink Iguana

David, the issue is not solely whether high enough precision was used by the computer, which is just one possible source of error. There are other obvious sources such as code bugs and arcane hardware/software details that were overlooked. For me, a sufficient standard is that multiple independent sources get the same result to within expected precision, which is far more convincing of the correctness (even if it doesn’t meet the usual standards of a mathematical proof!). In any case, I still prefer not to use number-crunching in proofs unless the computations produce easily verifiable certificates or the code is readable enough that it can be verified by an independent human outsider (this is very hard to achieve in practice except in the simplest of cases).

Mathematician,

“…if Wiles’ proof of FLT had worked only for n > n0, and years later mathematican Y were to prove it via brute-force, case by case, for n = 3,.,n0, would it be reasonable to say that Y proved FLT?”

By the standards of the profession, I do think it would be considered that Y proved FLT. Many results today rely on the cumulative efforts of many, and the person(s) who finishes the proof of the theorem is considered the person who officially proved the result, even when it’s the end of a long string of theorems. Standing on the shoulders of geniuses and all that.

“… I still prefer not to use number-crunching in proofs unless the computations produce easily verifiable certificates or the code is readable-enough that it can be verified by an independent human outsider (this is very hard to achieve in practice except in the simplest of cases)…”

I would expect that the programming aspect of the proof would be held to the same standards as the mathematical aspect, meaning that the theorem would not be declared proven unless the code was vetted as thoroughly as the proof. It may be complicated, but so is the mathematical argument, so I think this vetting will occur.

Michael, I respectfully disagree with your assessment of the FLT hypothetical. Consider another hypothetical: suppose the Riemann hypothesis is proved for t > gazillion, and then a person Y (a mathematician or not, but who happens to have access to enough computer power) brute-forces their way through the remaining range, is it reasonable to say that Y is the one who proved the RH…? Also, I think it’s a legitimate point of view for a mathematician not to accept computer number-crunching as part of a proof no matter how much vetting the code goes through, simply because there are hardware/chip/compiler issues that won’t/can’t be checked. Or maybe because the requested standard is impractical, such as every block of the code should come with a mathematical proof, or the code should be completely written in assembly, both of which are actually reasonable requests if the goal is to meet the usual standards of mathematical rigor. In particular, if tomorrow mathematician Y (or a group of hardworking grad students) puts in enough effort to tighten the bounds in Helfgott’s work so as to reduce to a range checkable by hand and by other mathematical techniques, then according to this point of view it would be Y rather than Helfgott who proved the weak Goldbach conjecture… In any case, one certainly shouldn’t take away from the new result, as we can now say the weak Goldbach conj. applies for each odd n > 10^30 (or n > 5 if you’re willing to believe the number-crunching), rather than n > 10^10000, or whatever it was, which is a nice thing.

“Or the code should be completely written in assembly, both of which are actually reasonable requests if the goal is to meet the usual standards of mathematical rigor”

Well that’s just completely wrong and betrays a sad misunderstanding of what programs are about. One might as well demand full brainscans and chemical analysis of the mathematicians’ brains.

Even in engineering, you will rather have recourse to a well-written compiler of a subset of Ada, lots of contracts and assertions and three devices operating on a voting principle than to go down to manual checking of assembly language.

“Also, I think it’s a legitimate point of view for a mathematician not to accept computer number-crunching as part of a proof no matter how much vetting the code goes through, simply because there are hardware/chip/compiler issues that won’t/can’t be checked.”

No it isn’t.

Please, enough about the reliability of computer calculations. This has now been beaten to death, and has little to do with the interesting news from Helfgott.

Yatima, I guess that we can agree to disagree, but I’m wondering, when you say “Well that’s just completely wrong and betrays a sad misunderstanding of what programs are about.”, what are programs about?