The Columbia Math department has been doing extremely well in recent years, with some wonderful mathematicians joining the department. A couple of items first involving some of them:

- Kevin Hartnett at Quanta Magazine has a great article about developments concerning technical issues in the foundations of symplectic topology. It explains work by my colleague Dusa McDuff, who together with Katrin Wehrheim has been working on such issues, trying to resolve questions raised by fundamental work of Kenji Fukaya and collaborators. For technical details, two places to start looking are here and here.
The Hartnett story does an excellent job of showing one aspect of how research mathematics is done. Due to the complexity of the arguments needed, it's not unusual for early papers in a new field not to be completely convincing to everyone, with unresolved questions about whether proofs really are airtight. The way things are supposed to work, and how they worked here, is that as researchers better understand the subject, proofs are improved, details are better understood, and problems are fixed. Along the way there may be disagreements about whether the original arguments were incomplete, but almost always people end up agreeing on the final result.

Also featured in the article is another of my Columbia colleagues, Mohammed Abouzaid, who provides characteristically wise and well thought out remarks on the story.

- Via Chandan Dalawat, I learned of an interesting CIRM video interview with another colleague, Michael Harris. The same site has this interview with Dusa McDuff, as well as a variety of other interviews in English and French.

For some other non-Columbia related links:

- The 70th birthday of Alain Connes is coming up soon, and will be celebrated with a series of public lectures and conferences on noncommutative geometry in Shanghai.

This year will be the last series of lectures by Connes at the College de France. They're appearing online here, and I highly recommend them. He's taking the opportunity to start the series with a general overview of the point of view about the relationship of geometry and quantum theory that he has been developing for many years.

- For employment trends in theoretical particle physics, there are some updated graphs of data gleaned from the particle theory jobs rumor mill, created by Erich Poppitz and available here. In terms of total number of jobs, there has been some recovery in the past couple of years, with about 15 jobs/year, above the 10 or so common since the 2008 financial crisis (before 2008 the numbers were higher, 20-25). As always, an important thing to keep in mind about this field is that this number of permanent jobs/year is a small fraction of the number of Ph.D.s in the subject produced each year at US universities.
The numbers for the distribution of subfields separate out "string theory" and lattice gauge theory. There have always been few jobs in lattice gauge theory, and there appear to have been no hires in that subject for the past two years. I'm putting "string theory" in quotes because it's very hard these days to figure out what counts as "string theory". With Poppitz's choice of what to count, hiring in string theory has recovered a bit, now around 25% of the total for the past two years, up from the 15% or so typical since 2006 (earlier on, the numbers in some years were around 50%).

- As pointed out here by commenter Shantanu, on Wednesday John Ellis gave a talk on Where is particle physics going? at Perimeter. I'd characterize Ellis's answer to the question as "farther down the blind alley of supersymmetry". He spins the failure to find SUSY so far at the LHC as some sort of positive argument for SUSY. The question session was dominated by questions about SUSY, with Ellis taking the attitude that there's no reason to worry about the failure so far of the fine-tuning argument for SUSY, all you need to do is "ratchet up your pain threshold". I fear that's apt general advice about where this line of research is going.
About the failure to find any evidence for SUSY WIMPs that were supposed to explain dark matter, Ellis explained that he had been working on this idea for 34 years, having first written about it in 1983, so with that much invested in it, he's not about to give up now.

**Update**: Davide Castelvecchi points me to another new mathematics story at Nature.

**Update**: One more. A profile of Roger Penrose by Philip Ball. Penrose explains that his main problems with string theory come from two sources. One is the instability problem of extra dimensions, the other is his aesthetic conviction that sticking to four space-time dimensions is a good idea since it is only in four dimensions that you get the beautiful geometry of twistors. Ball raises the interesting question of whether Penrose could have a successful scientific career if he were starting out today:

Worst of all, the career structures and pressures facing young researchers make it increasingly hard to find the time simply to think. According to several early-career scientists interviewed by Nature, the constant need to bring in grant money, to produce papers and administer groups, leaves little time to do any research, still less indulge anything so abstract and risky as an idea.

Hasn’t Joyce been working on this subject for years now? I was surprised not to see a mention.

Peter, one thing I was surprised to see Ellis mention is that so far no one has found a convincing connection between neutrino oscillation phenomenology and TeV-scale physics (and that's why he skipped neutrino physics). Isn't this the most important question that particle physicists and string theorists should try to answer? Why is there no progress (or even concern about the lack of progress), in spite of a Nobel Prize in this area last year?

It is very difficult to imagine that the proof mentioned in Nature will help geophysicists in any practical way. The algorithms surely are in place and independent of detailed mathematical proofs.

I also wonder whether it applies to areas of extreme velocity that are avoided by “raypaths”. Perhaps finite bandwidth helps. Dunno.

Not that I want to give remarks here which would go into any detail, but the problems in symplectic topology seem to go FAR beyond Fukaya's foundational papers. Just as an arbitrarily chosen question: did anyone here understand 'Eliashberg's existence proof in symplectic topology'? It is not 120 pages long (it is merely 10 pages or so), and it doesn't contain too many technicalities, but it contains several crucial statements from singularity theory that are nowhere proven and come with no proper references or explanation, which gives the paper a highly folkloristic character. Just to say this. Things developed, with some reason, the way they did.

Many approaches in 'modern' symplectic geometry were somehow 'adopted' from algebraic geometry, but without its firm algebraic or systematic foundation; a firm dictionary between certain algebraic concepts and their 'analytic' counterparts was never set up. Floer theory, while easy in its idea and conception, proved to be an analytic nightmare, and over the years symplectic geometry mutated into a sort of, as I would call it, Sobolev-space monster, with, in compensation for the high degree of technicality, highly esoteric concepts whose existence was assumed rather than understood.

But even worse: the rise of Floer theory and Gromov-Witten theory obscured the number-theoretic origins and character of quantum mechanics and mechanics; it obscured many traditional developments in quantization and physics which were far better understood than the 'new methods'. These were deemed by everyone working in the field to be 'hard methods', while all traditional methods were suddenly depreciated as (the somewhat ill-defined category of) 'soft methods'. The common deformation-theoretic background of the hard and 'soft' methods was never understood, and classical (real and complex) singularity theory, a tremendously rich and well-understood field, was never fully incorporated or acknowledged by the 'new' methods. To understand this, a little knowledge of Horkheimer's and Adorno's writings on the dialectical way knowledge and science progress would have been necessary, with every new theory tending to destroy the old systems of reference, but such meta-research was never done.

I deeply disagree with the view that Fukaya's papers are the main source of problems in symplectic topology; that is a scapegoating approach to the many deep-running problems in this field.

Peter,

when listening to the lectures of Alain Connes – thank you for the link! – I get a strange impression. It seems that this has very little to do with physics, and I am not really sure that it has a lot to do with math. It seems to me – but I may well be wrong – that he is playing around with abstract concepts in some abstract world of thoughts. There is no real result. Less famous people who do such things are not taken seriously by the community. Another way to put my impression is the following: Connes' world is akin to that of string theorists: it is complex, interesting, but has no relation to nature.

But let me return to the first lecture, which is his motivational and advertising one. It is really bizarre. For example, he states that the Higgs has a mass value that makes the standard model wrong at high energy – whereas plenty of people say just the opposite, namely that it has a mass value that makes the standard model valid up to the Planck mass.

The citations of the mathematicians in the first lecture are pretty. The stories about the definition of the metre show his enthusiasm. His explanation of the reason for non-commutativity is pretty as well – but it is not physics.

I like his naive enthusiasm – but is this physics? I would be interested to hear other people's opinions about this – including yours, Peter. In any case, thank you for pointing this out!

To see the "results" for this part of my talks you need to reach the sixth hour, where the standard model (extended to Pati-Salam) coupled to gravity appears as the spectral action on all spin 4-manifolds from irreducible representations of the higher Heisenberg relations. This is both a difficult mathematical result (using in particular the theory of immersions) and a physics result explaining "why nature is as it is". It would be invalidated by the discovery of SUSY, and would have been invalidated by the discovery of the 750 GeV diphoton. So it is physics. When one runs the scattering parameter (H^4 coupling) of the SM at the Planck mass, it becomes negative if the Higgs mass is too low; this is what was referred to in the talk.

Dear Alain,

thank you for your kind answer. I remember how enthusiastic I was when I first heard about your model about 25 years ago. I have followed it ever since. Please allow me to add two points that explain what I wrote above and that dampened my enthusiasm.

Somewhere I read that your non-commutative geometry model does not lead to U(1)*SU(2)*SU(3) UNIQUELY, but that several other groups could also arise. Is this true?

A good friend of mine here in Munich showed me a text by Niels Bohr where he speaks of quantum theory stemming from h-bar as the smallest action in nature. This simple definition is a strong contrast to the very complex idea of quantization presented in your series of talks. As a physicist, I have a tendency to choose the simpler solution.

Best regards

Friedrich

Dear AC,

when you say that the discovery of the 750 GeV diphoton would have invalidated the model, are you referring to the analysis of Aydemir et al. (https://arxiv.org/abs/1603.01756), according to which (I quote their paper):

“even though the 750 GeV diphoton resonance can be accommodated within the NCG motivated unified Pati-Salam models, the price one has to pay is a certain amount of fine tuning in the sector involving the necessary colored scalars” .

Or are there other reasons?

Friedrich,

I don’t think saying “h bar is the smallest action in nature” gets you much of even the pre-1925 “old quantum theory”, much less quantum theory itself. Quantum theory as we understand it today is based on some very deep mathematics, not some simple physical intuition, and getting an even deeper understanding I think is going to require even deeper mathematics. Connes is one of very few first-class mathematicians (or physicists…) trying to do this, in a very original way. The kinds of questions he is asking and trying to find answers to are the most fundamental ones, ones most physicists seem now to have given up on, in favor of a pseudo-scientific excuse (“the multiverse did it”).

I’ve made a few attempts to follow his ideas, each time impressed by the originality of what he is trying to do, and finding some of it quite compelling, as well as providing some new insight into things at the boundary of math and physics that had always fascinated me. I’m looking forward to watching all of the latest series of lectures and trying to follow in detail; so far I haven’t had the time to get very far. As for whether he’s got a predictive model that convincingly explains things the SM doesn’t, I haven’t understood what he is doing well enough to know. I confess to being more interested in putting time into understanding the underlying ideas and seeing what new insights I can get from them.

Dear Martibal,

The answer to your question is « yes ». In general, the road that we followed with my collaborators on this issue of using the new paradigm of « spectral triples » (which could also be called « spectral geometry ») to understand what the SM coupled to gravity is telling us about the nature of space-time is the one proposed by Riemann in his inaugural lecture. There Riemann explicitly suggests that for very small distances, since the notions of light ray and of solid body lose their meaning, one should be ready to accept that the structure is more involved than the continuum. He makes two key points:

1) “Es muss also entweder das dem Raume zu Grunde liegende Wirkliche eine discrete Mannigfaltigkeit bilden, oder der Grund der Massverhaltnisse ausserhalb, in darauf wirkenden bindenen Kraften, gesucht werden », which is (badly) translated as “Either therefore the reality which underlies space must form a discrete manifold, or we must seek the origin of its metric relations outside it, in the binding forces which act upon it ».

This point is fully taken up by the NCG approach, where the inverse line-element (which encodes the metric relations) exactly encodes all the forces (gravitational and gauge bosons).

2) He continues by saying (let me skip the German): “The answer to these questions can only be obtained by starting from the conception of phenomena which has hitherto been justified by experience, and which Newton assumed as a foundation, and by making in this conception the successive changes required by facts which it cannot explain. »

So here the key phrase is « successive changes », and in our experience this has been a long struggle, but each time it has been rewarding. There were quite long periods where we were abandoning the model, saying that after all it could have been a coincidence of some sort that it seemed to fit. But for instance it survived the neutrino mixing (after a silent period from 1998 to 2006), and more recently the wrong m_H > 170 GeV, and in both cases we learned. In the first case we understood the KO-dimension 6 of the finite space (giving the fine structure), but we also rediscovered the see-saw mechanism from the calculation (not artificially put in by hand). For the wrong m_H > 170 GeV, we learned of our stupidity in having neglected the effect of a scalar field (not the Higgs) which was there and which we had ignored in the RG calculation.

What I have learned myself is that it is never a good idea in this stuff to try to force a result, and the 750 GeV diphoton would have been such a case. That is, for instance, why I avoided the subject when I taught my penultimate class at the College last year, in the winter of 2016!

For a long time we proceeded in the « bottom-up » manner, but the recent work with Chamseddine and Mukhanov, which is the subject of lectures #5 and #6 this year, gives a potential explanation, finally, for the slight amount of non-commutativity present, and that was the main point of the first half of my class this year!

Dear Alain, thanks for the detailed answer !

AC wrote in small part:

The term “spectral geometry” might have more potential than “spectral triple” to communicate to the bulk of the high energy physics community that this is a concept not alien to what they are long familiar with. Indeed the idea to encode effective target spacetime geometry in the worldvolume quantum mechanics — hence in the operator spectrum — of a quantum object that roams in it is familiar from perturbative string theory, where the only difference is that instead of a 1-dimensional worldline for a quantum particle, leading to a “spectral triple”, one considers a 1+1-dimensional worldsheet, leading to what could be called a “2-spectral triple” if it were not already called a “2d SCFT”.

This close relation between the Connes-Lott-Chamseddine-Barrett approach to modelling particle physics and that of perturbative string theory has long been pointed out by people like Jürg Fröhlich and Maxim Kontsevich, with details worked out by people like Katrin Wendland and Yan Soibelman (here), but it seems to remain underappreciated by members of both communities.

There is a review with further pointers to the literature at PhysicsForums Insights: Spectral Standard Model and String Compactifications.

This general relation of the underlying theory gets all the more interesting with the developments starting with Barrett's insight into the reality condition of realistic spectral triples, and the result that the KO-dimension of the compact space in the Connes-Lott-Chamseddine model has to be 6, thus leading to a total KO-dimension of spacetime of d = 4 + 6 in these models (albeit seen only mod 8 by KO-theory). This of course coincides with the result for the critical dimension found earlier for "2-spectral triples", namely in perturbative superstrings.

This interesting match seems to make it well motivated to ask whether under the point-particle reduction from 2d CFTs to spectral triples due to Fröhlich-Gawędzki, Roggenkamp-Wendland and Soibelman the Connes-Lott-Chamseddine spectral triple is indeed the point particle limit of a 2d SCFT. If so, that 2d SCFT would be a natural candidate for the UV-completion of the model, and hence potentially of realistic particle physics.

An interesting prospect, whose further examination seems to be stalled by a detachment and mutual misunderstanding of the communities on both sides of the relation.

I think one important piece of information is missing (or not stressed enough by anyone here) for the readers of Peter's blog interested in watching Connes's last mathematical lectures at the College de France.

To appreciate the potential physics interest of noncommutative spectral geometry, one has to mention the already established connection (in arxiv.org/abs/1409.2471) between the quantization of volume in 4D Riemannian manifolds, mathematically formalized by Connes, and the non-dynamical scalar fields introduced by the theoretical physicist Chamseddine and the cosmologist Mukhanov to account simultaneously for both dark matter and dark energy (not to mention the more implicit link with another connected idea, about a limiting curvature of spacetime, in 1612.05860 and 1612.05861).

To answer Friedrich's question: I do not know what the reception was of Mukhanov's lecture "Non-commutative Geometry and Mimetic Dark Matter" at the fest for Pierre Fayet (the French physicist well known for his contributions to supersymmetry) last December at ENS Paris (moriond.in2p3.fr/Fayet/program.php), but LHC and LUX data, as well as numerical tests of some minimal non-supersymmetric gauge unified models with Pati-Salam structure (1412.4776 and 1612.07973), seem to confirm the message provided by spectral noncommutative geometry: the phenomenology of our unique observable universe at the current temperature of about 230 μeV requires, for the time being, just the standard model particle content up to an extrapolated 10^12 GeV leptogenesis scale, where three right-handed Majorana neutrinos and some new Higgs brother(s) and other gauge boson(s) need to appear on stage.

So maybe it's worth focusing on measuring the Higgs couplings and the top mass with the best possible precision before dreaming of the construction of a 100 TeV accelerator. Spending more time on the tedious computational exploration of vacuum solutions of the spectral action and their astrophysical connections might not be a very inspiring task at first look, but who knows, maybe the solution to the hierarchy problem lies there…

At his webpage, Aleksey Zinger posts further evidence of a dysfunctional foundational situation in symplectic topology. Papers there document what happened when he tried to carefully read, and then repair, the literature on the "formula" for Gromov-Witten invariants of symplectic sums. A major paper in the Annals was eventually retracted, and many other fireworks ensued.

Just a quick comment on HET hiring, concerning the lattice breakout. Most hiring of those doing lattice field theory in the last decade has been in Nuclear Theory, and the numbers there are a bit more encouraging.

Thanks Steve,

That’s good to hear.

Hi Peter,

I see that there are no tenure-track faculty in Columbia's math department. What's the difference between a Ritt Assistant Professor and a postdoc? The teaching responsibility for a Ritt position seems a little heavy. Do math departments directly hire into a tenured position after an extended postdoc these days?

colorado,

In math in the US (unlike physics), there are few purely non-teaching postdocs, with the standard research career path after grad school being a few years in a non-tenure-track job like our Ritts, then a tenure-track job. Even NSF postdocs are often combined with some teaching (making it a longer position). The current teaching load for a Ritt is 2 and 1 (it was 2 and 2 way back when, when I had one). This is a lot more teaching than physicists do; on the other hand, job prospects for those coming out of these jobs are much better than for theoretical physicists.

One reason for having few if any tenure-track positions is that the market for the best young people is quite competitive. If you do a search for a tenure-track position, you often find that the top candidates have competing offers from institutions that decide to offer them tenure to attract them, and you need to match that.

Since my last lectures at the College de France are in French, I am giving the link to a paper which I am writing (and which is still evolving):

https://www.dropbox.com/s/8jz865ezxjwrr91/J-Kouneiher.pdf?dl=0

In particular, I explain in section 3.4 the deep mathematical roots of the notion of spectral geometry (spectral triple), coming from the work of Sullivan on KO-orientations and the origin of the KO-cycles in index theory, starting with papers of Atiyah and Singer in the 1960s. The above paper is far from final, and critical comments, missing references, etc. are very useful at this point.

Discussion in comments to that Quanta Magazine article seems to have gone off the rails…

anon,

Yes, and an excellent illustration of a couple of important general principles that people are sometimes not aware of until it’s too late:

1. The world is full of people who know nothing about the topic being discussed but want to use your comment section for their own ax-grinding. Unchecked, they will destroy any intelligent discussion.

2. Attributed to John Baez: it’s not easy to ignore Lubos, but it’s always worth the effort.

Dear AC

thank you for putting your draft online! It is very illuminating, and useful for those whose grasp of spoken French is insufficient, or whose internet bandwidth is not up to the task of downloading the videos.

Dear David, thanks. Here is a much improved version as far as the second half of the paper goes, i.e. the discussion of the "quanta of geometry" in section 4. The new link is

https://www.dropbox.com/s/gzrqzjvilvgwfk9/J-Kouneiher.pdf?dl=0