I ordinarily keep a short list on my desk of things I’ve seen recently that I’d like to write about here. The last few days this list has gotten way too long, so I’ll try to deal with it by putting as many of these topics as I can in this posting.

The June/July issue of the AMS Notices is out, with many things worth reading. The two long articles are one by Ken Ono about Ramanujan and one by Arthur Jaffe telling the story of the founding of the Clay Mathematics Institute and the million dollar prizes associated with seven mathematical problems. There’s also a book review of Roger Penrose’s The Road to Reality, news about the proposed US FY 2007 budget for mathematical sciences research, and an account of a public talk by Michael Atiyah, who evidently closed by explaining some of his very speculative ideas about how to modify quantum mechanics, then said:

*This is for young people. Go away and explore it. If it works, don’t forget I suggested it. If it doesn’t, don’t hold me responsible.*

The June issue of Physics Today is also out. In its news pages it reports that Robert Laughlin is out as the president of the Korea Advanced Institute of Science and Technology (KAIST), and will return to Stanford in July “where he plans to teach, research, and write ‘anything that brings income.'” The report gives conflicting reasons for why things didn’t work out for him at KAIST, but notes that “90% of KAIST professors gave him a vote of no-confidence and nearly all deans and department chairs quit their administrative posts to protest his continuing in the job.”

There’s an extremely positive review of Leonard Susskind’s The Cosmic Landscape by Paul Langacker, which ends:

*The Cosmic Landscape is a fascinating introduction to the new great debate, which will most likely be argued with passion in the years to come and may once again greatly alter our perception of the universe and humanity’s place in it.*

Why any particle theorist would want to encourage physicists outside the field to read this book, and give them the idea that it represents something theorists think highly of, is very unclear to me.

Finally there’s an article by Jim Gates entitled Is string theory phenomenologically viable? Gates aligns himself with the currently popular idea that string theory doesn’t give a unique description of physics:

*The belief in a unique vacuum is, to me, a Ptolemaic view – akin to that ancient belief in a unique place for Earth. As I wrote in 1989, a Copernican view, in which our universe is only one of an infinity of possibilities, is my preference, but there were very few Copernicans in the 1980s.*

He seems to promote the idea that one should not use 10d critical string theory and thus extra dimensions, but instead look for 4d string theories, and that perhaps the problem is the lack of a “completely successful construction of covariant string theory.” For more about this point of view, see Warren Siegel’s website.

There are quite a few idiosyncratic things about Gates’s article, including the fact that he refers to non-abelian gauge degrees of freedom as “Kenmer angles”, after Nicholas Kemmer (not Kenmer) who was involved in the discovery of isospin.

Some of his comments about string theory are surprising and I don’t know what to make of them. He claims that “some aspects of string theory seem relevant to quantum information theory”, and the one supposed observational test of string theory he discusses is one I hadn’t heard of before and am skeptical about (observing string-theory-predicted higher curvature terms in Einstein’s equations through gravitational wave birefringence). His discussion of supersymmetry seems to assume that observation of superpartners is unlikely, since for reasons he doesn’t explain he expects their mass to be from 1 to 30 TeV. Finally, he worries that people will not investigate things like covariant string field theory since we are about to enter an “era that promises an explosion of data”. I certainly hope he’s right about the forthcoming availability of large amounts of interesting new data.

The Harvard Crimson has an interesting article about Ken Wilson.

John Baez is getting ever closer to having a blog in its modern form, now he has a diary.

Read about the tough summer life of theoretical physicists in Paul Cook’s report from Cargese (which reminded me of when I went there as a grad student), and JoAnne Hewett’s report from Hawaii (which reminded me of a very pleasant vacation I spent on the Big Island).

Science magazine has an article about progress on increasing luminosity at the Tevatron, hopes for getting enough events there to see the Higgs before the LHC, and the debate that is beginning about whether to run the machine in 2009.

Slides are available from the Fermilab Users’ Meeting.

There’s a news story out from China (and picked up by Slashdot) about the new paper by Huai-Dong Cao and Xi-Ping Zhu soon to appear in the Asian Journal of Mathematics. This paper is more than 300 pages and is supposed to contain a proof of the Poincare conjecture and the full geometrization conjecture, filling in an outline of a proof due to Perelman, who used methods developed by my Columbia colleague Richard Hamilton. Other groups have also been working on this in recent years including my other Columbia colleague John Morgan together with Gang Tian; for another example, see the notes on Perelman’s papers recently put on the arXiv by Bruce Kleiner and John Lott. Cao and Zhu have evidently been explaining their proof in a seminar at Harvard run by Yau during the past academic year, and Yau will talk about this at Strings 2006 in Beijing later this month. When the paper appears it will be interesting to see what some of the other experts in the field think of it and whether there’s a consensus that the proof of Poincare and geometrization is finally in completely rigorous form.

**Update**: According to a blog entry from the Guardian, “Perelman seems to be active in string theory.”

MathPhys,

if I read implicitly between the lines of your blog and take an optimistic (or pessimistic, depending on one’s point of view) interpretation, then you are saying that the off-shell BPHZ (Piguet-Sorella) approach applied to the model (which Grigore-Scharf take to make their point) would be free of the anomaly which they find in the on-shell Epstein-Glaser approach. This is of course a possibility, although I maintain that statements from Grigore-Scharf merit high respect (they are assured of that respect not only from me, but also from Raymond Stora), notwithstanding the obvious fact that we are all fallible (except jesters at the string court like Lubos Motl).

MathPhys,

Standard literature works in the BRST approach, i.e. one studies some classical field theory with Grassmann fields and then constructs the generating functional for the Green functions. Everybody supposes that this process will lead to a good theory in some Hilbert space. One could take the Grigore-Scharf paper as an indication that this road is not so easy. So, you guys who are working in this direction should really construct the quantum theory in all details, at least up to second order of perturbation theory, starting from the functional approach. Then we will see if there is a way to circumvent the no-go result of G-S, or if susy is in trouble.

MathPhys:

“You see, I know supersymmetry well enough to believe that, while perfectly consistent, susy is the most obvious stumbling block in the development of theoretical high energy physics today.

There is something very wrong with the idea of supersymmetrizing everything until you can compute something, then breaking the supersymmetry by hand and pushing everything you don’t want all the way up till you can’t see it (only to find that you recover the problems that made you supersymmetrize in the first place).

It is just too easy, and too contrived at the same time. And it doesn’t work.”

My feeling is that what you recognize here is, probably, a sharper indictment of SUSY than the whole of Grigore-Scharf’s argument.

You are also not the first person, among those who actually worked in SUSY whom I have heard express those suspicions…

Most probably susy could emerge not for everything but in part, or as an approximate symmetry. The fact is that something must control the divergences of the scalar particles, and this something should be a kind of cancellation between diagrams. Such a cancellation mechanism, when found, should not be far from the one of supersymmetry.

More interesting stuff from Sean Carroll on CV, which might deserve its own post here.

Alejandro Rivero,

We all agree about the perturbative aim: to cut a breach into the universal infinite-dimensional space which incorporates all particles with all coupling strengths, i.e. to find self-closing finite-parametric islands under the action of a finite-parametric RG (Petermann-Stueckelberg, Wilson, any way you want).

But the standard way you want to cut these breaches is much too narrow: start with pointlike free fields and couple them, paying attention to the power counting. You will get stuck precisely at the place where we are now, and supersymmetry does not seem to help in this; it just pushes the problems to a higher order but seems to be incapable of generating new islands.

Imagine you start with massive vector fields coupled to themselves and to other scalar and spin-1/2 fields. Any coupling you contemplate writing down, I repeat any coupling, will carry you to dimension 5 for the interaction (a massive free vector field has scaling dimension 2 and not 1), which is way beyond the limit of renormalizability. We all know what we can do in such a situation: we modify the one-vectormeson space by non-Hilbert BRST stuff and get it down to scale dimension one (paying attention to consistency problems, which may require the enlargement by additional physical degrees of freedom). Then we descend thanks to the cohomological properties of BRST and finally obtain a physical theory (back to Hilbert space) in which, lo and behold, the vectormeson really has the physical dimension 2. But we don’t know the physical reason why we are doing this.
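The power counting behind this can be sketched in textbook form (a standard 4d estimate, not specific to any of the particular models under discussion): a renormalizable interaction term must have mass dimension at most 4.

```latex
% Power counting in d=4: an interaction term is (power-counting)
% renormalizable only if its mass dimension does not exceed 4.
%
% A massive free vector field W^\mu has short-distance scaling dimension 2,
% and a Dirac field \psi has dimension 3/2, so a minimal cubic coupling has
\dim\left(\bar\psi \gamma_\mu \psi \, W^\mu\right)
  = \tfrac{3}{2} + \tfrac{3}{2} + 2 = 5 > 4,
% i.e. it is nonrenormalizable.  The BRST device trades W^\mu for a
% dimension-1 potential A^\mu (at the cost of leaving Hilbert space), giving
\dim\left(\bar\psi \gamma_\mu \psi \, A^\mu\right)
  = \tfrac{3}{2} + \tfrac{3}{2} + 1 = 4,
% which sits exactly at the renormalizability bound.
```

This is the arithmetic sense in which the BRST “catalyzer” lowers the dimension from 2 to 1 so that power counting can go through.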

Alejandro, as long as you are not perturbed by this magic BRST “catalyzer”, you will never make any progress, and your proposal to look into the neighbourhood of supersymmetry will only be a loss of time.

The secret to maintaining dimension 1 even without that catalyzer is to permit the free fields to have a spacelike semi-infinite stringlike extension (already mentioned before on several occasions) and to stay all the time in Hilbert space. I am very confident that the next step, how to implement interactions for such stringlike fields, will be understood in the near future; the theory is expected to have a pointlike subalgebra (which must be identical to the old gauge invariants, otherwise the attempt must go into the dustbin). Whereas the BRST magic is essentially limited to spin one, there is no such limitation for the idea of working with stringlike localized free fields instead of the usual Lagrangian pointlike fields. Stringlike localized fields fluctuate in x and in e (a unit spacelike direction, i.e. de Sitter space), and by having part of the fluctuation strength in e you can improve the short-distance properties in x. I predict it will be this Feynmanian spacetime viewpoint (and not those momentum space manipulations) which will lead to those above islands. The closeness of supersymmetry to such a spacetime view is a Fata Morgana.

Lubos,

When you say that one can put susy on a lattice “using deconstruction”, what do you mean?

MathPhys: see, e.g., http://arxiv.org/abs/hep-lat/0503039. As far as I am aware no one knows how to do N=1 SUSY on a lattice, but (for instance) N=4 works.

Try hep-lat/0602007.

Thanks.

Bert, ah, but I do not propose to look in the neighbourhood of supersymmetry. What I say is that we will first find the mechanism, and later someone will find a means to reformulate it as an approximate supersymmetry, because there are tools and coincidences (say, climatic preconditions, to follow your last metaphor) enough to be able to do it.

As for the BRST “catalyzer”, I had never thought about it in the way you describe; but it seems worthwhile to spend some time reflecting on this.

Alejandro,

All free pointlike fields with short-distance scale dimension larger than 3/2 (equivalent to spin larger than 1/2) generate a barrier against renormalizability, because you cannot find interactions with dimension bounded by 4. The first case beyond this barrier is a free vectormeson (taken massive in order not to confront infrared and ultraviolet problems together), which has dimension 2. Historically we are inclined to follow ’t Hooft and invoke the classical gauge principle, although it forces us to violate important principles of QT (positivity) in intermediate steps, which classically is not an issue at all. The modern form of this intermediate violation (the catalyzer) of QT is BRST. Although we should be grateful for this metaphoric trick, which permitted us to start with a free vectormeson of dim=2 and obtain an interacting vectormeson of dim=2 plus logarithmic corrections (i.e. to get around that standard 3/2 barrier), we should on the other hand uphold our Heisenberg heritage, which requires finding a formulation that avoids any (even intermediary) use of objects which are not observables. The prerequisites of such a step have already been accomplished: use string-localized potentials for those pointlike field strengths, as explained in previous comments. The suggestion can already be found in an old paper of Mandelstam, but the new concept of modular localization provides a conceptual and mathematical backup for its implementation. The logic of ’t Hooft

Lagrangian quantization + classical gauge principle –> renormalization consistent with unitarity

was always undercut by the observation that there is only one renormalizable selfinteracting vectormeson theory, namely the one we know (whereas in classical field theory you can write down many Lorentz-invariant interactions involving a vectorpotential; that’s why you need the selective gauge principle), so we really do not need any principle in addition to renormalizability. In addition, if you abandon the Mexican hat setting and start from the very beginning with massive vectormesons, renormalization theory will convey the very interesting message to you that it wants an additional physical degree of freedom (whose simplest realization is a scalar field, the alias Higgs, but now with a vanishing vacuum expectation value):

http://br.arxiv.org/abs/hep-th/9906089

Actually this idea is implicit in previous work of Scharf. Remember Günter Scharf from that discussion yesterday? The one who, together with his use of the Epstein-Glaser renormalization approach, was vilified by a Harvard professor (Lubos with the Lubotomy).

The advantage of the use of string-localized free fields is that (after understanding how to use them in the presence of interactions), unlike the gauge idea, there is no limitation to spin=1.

It seems to me that S.T. Yau is a fame seeker: he appears wherever there is room for him to expand his reputation. Someone told me that in some Chinese news, Yau claimed that 50% of the credit for “resolving” the Poincare Conjecture goes to Richard Hamilton, who developed the Ricci flow AT *HIS* (Yau’s) SUGGESTION; while Perelman’s work is worth only 20%, and his loyal student Cao and loyal friend Zhu’s work takes up 30% of the total credit. What a shame!

The Chinese media is somewhat ignorant about what is actually going on in the mathematical community and relies totally on the words of this, as they call him, “world-famous mathematician, Harvard professor, and Fields Medalist Shing-Tung Yau”. This is creating some sensational moments in China right now. And the report is misleading: it makes people think that the two Chinese mathematicians Zhu XiPing and Cao HuaiDong completely resolved the Millennium Million-Dollar Problem, the Poincare Conjecture. S.T. Yau described it as Zhu and Cao having made the final step in resolving the Poincare Conjecture.

It seems that the ultimate winner is Yau. Why? Because of his vision in Ricci flow, playing 50% of the role in the Poincare proof: he suggested and encouraged Richard Hamilton to investigate it. Then his student Cao and his friend Zhu made the final step into the gate of the PROOF, 30%. He seems to imply that he is the MAN behind the Poincare Conjecture! Truly beautiful! No wonder! In several of his speeches, he told people that he had two goals: one is to make himself an everlasting mathematical icon; the other is to help China build world-class math institutes like the IAS. The first one is for real, while the second has only a reinforcing kind of effect.

The glory of Fermat’s Last Theorem introduced some sensational moments. Andrew Wiles and other mathematicians became heroes. Although they might have competed to prove Fermat’s, they were not malicious towards any of the competitors. On the contrary, they appreciated and admired their fellow mathematicians’ efforts in the triumph of human intellect. Tian Gang and John Morgan, as well as many other mathematicians, are also examining Perelman’s work and have produced some results on the Poincare Conjecture. Rumor has it that Yau is one of the main editors of the journal; he would not like Tian to receive such glory, and he realized that his loyal supporters’ work could beat Tian and Morgan’s (whichever comes out first wins)… Everyone would like to have a moment like the one when Andrew Wiles finished the proof of Fermat’s. But to me this seems like a deceptive stealing.

I do admire Perelman. He’s such a humble mathematician. Having done such revolutionary work, he still works as hard as usual and totally doesn’t care about fame or prizes. His contribution to the Poincare Conjecture is obvious. He’s just a modern Grothendieck, or, if he doesn’t like being called that, he is just Perelman! If one solves a great problem, one knows that one has already triumphed over oneself and the problem; one does not need any complimentary prizes whatsoever. Because he just knows he can do it and has done it. While those who always dreamt of making themselves more famous and influential, but are not able to actually accomplish it, can only be like that MAN behind the Poincare Conjecture.

Yau once claimed that one of his students received the Veblen Prize thanks to his influence on the committee. But his student’s work does not actually belong to first-class mathematics, and he regrets helping him. Oh, good lord, how powerful he is!

Please, no more attacks on Yau or anyone involved in the Poincare Conjecture proof. There’s too much contention on this blog already.

For the record, I don’t think it makes any sense to assign numerical values to people’s various contributions to this.

Woit is right to hold back attacks on Yau. “Milnor the Giant” is just wrongly attributing to Yau something other people said. Actually you can find some English blogs citing one Xinhua news item in Chinese, in which a Chinese mathematician, Yang Le, estimates a rough total of 105% (Hamilton 50%, Perelman 25%, Zhu-Cao 30%). This cannot be taken seriously. I doubt “Milnor the Giant” was informed wrongly on purpose.

In the original Xinhua news this mathematician was quoted as saying that Perelman’s work is 70 pages while Zhu-Cao’s is now 300 pages, hinting that they filled the gaps left by Perelman.

http://news.xinhuanet.com/english/2006-06/04/content_4644754.htm

Let’s wait and see what happens at the ICM in Madrid.

The proof of the Poincare conjecture has eluded quite a few mathematicians for many years. It would be surprising if the Chinese mathematicians had been able, in two years, to solve the conjecture without the help of Perelman, Hamilton and others. So I believe that their contribution amounts to at most 10%, if it is correct, as filling in gaps and important steps; that will have to be studied and checked.

The purported proof by Cao-Zhu (I haven’t talked to anyone who has seen it) is based on the arguments developed by Hamilton and Perelman.