Pretty much everybody in the math community seems to be getting a blog. Many of the new bloggers are quite good research mathematicians (including even some Fields Medalists). Two very new ones are:
Secret Blogging Seminar: Named after a “Secret Russian Seminar” at Berkeley, a group blog of several ex- and current Berkeley math graduate students (Ben Webster, A.J. Tolland, Scott Morrison, Noah Snyder and David Speyer)
Math Life: The blog of UT Austin’s number theorist Fernando Rodriguez Villegas
A few things I learned from the Secret Blogging Seminar postings and following associated links:
The Microsoft Research group at UCSB working on “topological quantum computation” is now known as Station Q, and has a web-site.
Googling “Secret Russian Seminar” led to the web-site of Scott Carnahan, a student of Richard Borcherds who will be a postdoc at MIT this fall. Carnahan has some interesting sets of notes there, including notes from Borcherds’ 2004 course on QFT. Back in 2001, Borcherds had taught an earlier version of this course, and notes taken by Alex Barnard are available. According to Carnahan, Borcherds began the 2004 course with the comment:
Some of you might remember I gave a class a few years ago on the standard model. It ran into a few technical problems, the main one being the fact that I didn’t know what I was talking about. I’ve learned a thing or two since then, and I’m going to try again.
I’ve finally been making some progress in understanding some of the mathematics associated to BRST; if this keeps making sense I hope to get something written about it this summer. I recently noticed that the pretty much incoherent Wikipedia entry on the BRST formalism has been joined by another incoherent one on BRST quantization. Both entries carry the warning at the top “This article or section may be confusing or unclear for some readers”, which is an understatement.
A new issue of Symmetry Magazine is out, and it contains a report on the recent string theory debate in Washington between Brian Greene and Lawrence Krauss. An editorial noted how amazing it is that this sort of thing drew 600 people willing to pay $25. There’s lots of interest out there in fundamental physics in general, and this controversy in particular.
Can’t remember where I saw this earlier today, but there’s a famous quotation I hadn’t heard before from economist John Kenneth Galbraith which seems to apply well to the current situation in string theory:
Faced with the choice between changing one’s mind and proving there is no need to do so, almost everyone gets busy on the proof.
Update: One more. Adrian Cho at Science Magazine has an article about the debate going on over how long to run the Tevatron. The chair of the P5 panel is saying that they will recommend running through 2009, but that “It would take some unusual circumstances to justify running beyond 2009.” But, if the LHC takes longer to get working correctly than planned (there’s a history of this with new accelerators), and Tommaso Dorigo’s rumors of sightings of a Higgs at the Tevatron ever start to firm up, it’s going to be hard to justify starting to tear the machine down…
Update: Yet one more about math blogging. Lieven le Bruyn has changed his blog from NeverEndingBooks to Moonshine Math (also known as NeverEndingBooks, v. 2). He begins with a wonderful blog posting about the j-function which explains one of my favorite remarkable facts about numbers:
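(The fact in question is presumably the moonshine numerology 196884 = 196883 + 1: the first nontrivial coefficient of the j-function is one more than the dimension of the smallest faithful representation of the Monster group. For anyone who wants to check the coefficients themselves, here is a short sketch of my own, not from le Bruyn’s post, computing the q-expansion of j from the Eisenstein series via j = E4³/Δ; the helper names are mine.)

```python
N = 6  # work with power series modulo q^N

def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    # product of two power series (coefficient lists), truncated to N terms
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# Eisenstein series E4 = 1 + 240 sum sigma_3(n) q^n, E6 = 1 - 504 sum sigma_5(n) q^n
E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

E4cubed = mul(mul(E4, E4), E4)
Delta = [(a - b) // 1728 for a, b in zip(E4cubed, mul(E6, E6))]  # exact integer division

# Delta = q(1 - 24q + ...): strip the leading q, then invert the series
D = Delta[1:] + [0]
inv = [0] * N
inv[0] = 1
for n in range(1, N):
    inv[n] = -sum(D[k] * inv[n - k] for k in range(1, n + 1))

jq = mul(E4cubed, inv)  # this is q * j(q)
# jq = [1, 744, 196884, ...]: the pole 1/q, the constant 744, then 196884 = 196883 + 1
```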
One thing that all this time on the internet has made clear to me is that physicists have done a pretty poor job of communicating that our understanding of QFT has changed. Renormalization is still communicated as this form of black magic that we do only because it gets the right answer. I would say that we now have a good understanding of how it works. We even have a reasonable conjecture (see the previous discussion with Peter Orland) of how one might go about defining a quantum field theory via path integrals, and how the process of renormalization relates to that definition. Maybe someone should write a popular science book about effective field theory…
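To give a flavor of the effective-field-theory point of view: the renormalization group turns the “black magic” into a statement that couplings are scale-dependent quantities. A toy sketch of my own (the function name is mine, and only the electron loop is included, so the numbers are illustrative rather than the full Standard Model running):

```python
import math

def alpha_qed(mu, mu0=0.000511, alpha0=1 / 137.035999, nf=1):
    """One-loop running of the QED coupling with nf unit-charge Dirac fermions:
    d(alpha)/d(ln mu) = (2 nf / 3 pi) alpha^2, integrated exactly."""
    return alpha0 / (1 - (2 * nf * alpha0 / (3 * math.pi)) * math.log(mu / mu0))

# The "fine structure constant" is an effective, scale-dependent coupling:
print(1 / alpha_qed(0.000511))  # 137.036 at the electron mass (scales in GeV)
print(1 / alpha_qed(91.19))     # ~134.5 at the Z mass: the coupling grows with energy
```

(The measured value of 1/α at the Z mass is closer to 128, because the heavier charged fermions also contribute to the running; the single-electron loop above just shows the mechanism.)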
On another note (again from reading the notes on Borcherds’s lectures), as I understand it, the idea of using the nonisomorphism of Pin(1,3) and Pin(3,1) to distinguish the signature of spacetime is wrong. There are eight ways (IIRC) to extend the Spin group to deal with the full orthogonal group, of which two are the Pin groups. I don’t know of any physical reason to distinguish the various choices.
Regarding BRST: recently I was trying to understand the old paper by Thierry-Mieg that interprets the ghosts as vertical components of the connection on the principal bundle. The antighosts are still ad hoc. After this the literature appears to bifurcate in several directions, and I can’t tell which if any of them give a clear explanation of the full BRST structure with both ghosts and antighosts. Can you point me in the right direction? Is there one authoritative reference?
I would love to be able to regard renormalization as something other than black magic, but it really does not help when someone of Borcherds’ standing comes out with a statement like this:
(See the previously-linked lecture notes).
I nominate you to write this book on effective field theory. I will look forward to reviewing it on Amazon.
I think Michel Le Bellac’s “Des phénomènes critiques aux champs de jauge” (I don’t know the English name), part II (from chapter V on), is a great introduction to path integrals, renormalization and effective field theories, and could be studied in a QFT course (obviously part I should also be studied to understand the statistical mechanical basis, but part II is pretty much independent).
The canonical reference is Henneaux and Teitelboim.
Aaron Bergman says: “…Maybe someone should write a popular science book about effective field theory… ”
Chris Oakley says: “…I nominate you to write this book on effective field theory. I will look forward to reviewing it on Amazon.”
I second that. There is more than one book on quantum mechanics out there at an “intermediate level”, i.e. in between the puerile pop science books and graduate level textbooks.
But as for QFT? Feynman wrote a nice book on QED for the math-disabled, and Schumm did his best to explain the Standard Model [at least better than Peter did 😉 ] to the lay person without using too many silly analogies (as in, say, Randall’s “Warped Passages”). Some of us laypersons have asked McMahon to write a “QFT Demystified” book (he wrote fairly good ones on QM and GR), but he seems more interested in doing a “String Theory Demystified” version. :-))
Thomas is right that Henneaux-Teitelboim is about the best there is. But they don’t have much to say about the geometrical interpretation of the formalism they are working with; that’s what has always remained obscure to me.
please fix the link to Adrian Cho’s article (need to remove one extra http). Curiously, he did not cite me in his piece although he interviewed me on the issue, while you do cite me in the same paragraph when you cite him. Your psychic powers at work ? :-°
Thanks Tommaso, fixed. I guess I do have psychic powers…
This whole discussion about the Tevatron seems a bit odd. From Cho’s article, the cost of running the thing is 4% of the US HEP budget. It’s the only thing in the US producing HEP data at the high energy frontier, and it’s not like anyone has an obviously better idea for an HEP experiment that is not getting funded. I’m not seeing why the plan is not to just keep running it, only starting to shut it down after the LHC has definitively started producing the kind of results that would make the Tevatron data of little to no interest.
Thomas, Peter: Thanks, I have heard of that book but I haven’t obtained a copy yet. I am hoping to find something explicitly geometric, though….
Peter, I think the money issue is only part of the deal. By keeping the Tevatron alive, the US HEP program maintains that it is a good idea to invest in an alternative to the LHC, while a lot of effort has been made and is currently ongoing to try and make the US participation in ATLAS and CMS as visible and clear as possible.
There is thus a contradiction between keeping the Tevatron on the scene indefinitely and putting money and effort into the CERN endeavour. That is only made worse by the fact that scientists who could be useful at CERN are enticed into continuing to work at the Tevatron.
From the outside, the choice of spending some 30-50M$ a year for a few more years of Tevatron running beyond 2009 is a no brainer – there are even things that the Tevatron is better at doing than the LHC at full power. But from the point of view of dictating the strategy of US HEP, it looks like the decision is a bit tougher.
Dear Peter and Thomas, concerning BRST, a few remarks;-
The classical (geometric) version of BRST is mathematically in good shape (cf. e.g. Stasheff arXiv:hep-th/9112002v1) and even connects with quantum theory through geometric quantization, cf. Duval e.a. in Commun. Math. Phys. 126, 535-557 (1990).
However a purely quantum BRST constraint reduction procedure cannot rely on the classical theory (due to the problems with quantization maps – check out Gotay e.a. on Groenewold-Van Hove obstructions). For quantum BRST the theory is in much worse shape, with only a few sporadic rigorous papers, e.g. by Horuzhy.
There is not one BRST method in heuristic physics, but several. Broadly speaking, there is the Hamiltonian version (as in the book by Henneaux and Teitelboim or in the Russian BFV school), and the Lagrangian version (as in the papers by Kugo and Ojima), and these come in several different flavours. The Hamiltonian version starts from a set of quantum constraints and gives an algorithm for constructing the BRST charge Q, whereas the Lagrangian version starts from a set of gauge transformations and “replaces” the gauge parameters by anticommuting ghost fields to obtain the BRST graded derivation d.
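In formulas, for the simplest (rank one) case with constant structure constants, the textbook Hamiltonian charge is

```latex
[G_a, G_b] = f_{ab}{}^{c}\, G_c, \qquad
\{\eta^a, \mathcal{P}_b\} = \delta^a_b, \qquad
Q = \eta^a G_a - \tfrac{1}{2}\, f_{ab}{}^{c}\, \eta^a \eta^b \mathcal{P}_c ,
```

where the $\eta^a$ are the ghosts and the $\mathcal{P}_a$ their conjugate momenta; nilpotency $Q^2 = 0$ then follows from the Jacobi identity. (For structure *functions* rather than constants, higher-order ghost terms must be added, which is where the real work in the algorithm lies.)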
Whilst on the surface these two methods seem quite similar (with an obvious connection in d = graded commutator with Q), there are subtle differences in the actual constructions involved. The Hamiltonian version is the more problematic one of the two;- it is quite easy to produce quantum constraint systems where Hamiltonian BRST produces different results from the usual Dirac method, or is inconsistent, e.g. by producing a nonpositive physical space, or the wrong physical observables. This has been noted in a number of places in the heuristic literature, but usually an ad hoc “fudge” is invoked to get out of such tight spots. In the rigorous literature the failure of BRST has also been noticed, cf. e.g. Landsman and Linden, Nuclear Physics B Vol 371, 415-433 (1992) or McMullan, Commun. Math. Phys. Vol 149(1), 161-174 (1992). I also have an (unpublished) preprint on the breakdown of Hamiltonian BRST which I can email to you if you are interested.
On the other hand, Lagrangian BRST is applied almost exclusively to gauge theories (i.e. where there is a gauge potential, not just a nonphysical degree of freedom), and closely follows Kugo and Ojima. A good example is the book by Scharf, “Quantum Gauge Theories: A True Ghost Story”, where the treatment of QED can be made rigorous, and it produces the correct results. Whether Lagrangian BRST can be made into a general quantum constraint reduction procedure still remains to be seen (but I have not followed the recent literature on BRST, so I might have missed it).
I am aware that the real appeal for BRST comes from the path integral approach, but from the operator point of view I still cannot see how quantum BRST is an improvement over a simple Dirac constraint approach. Unnecessary and problematic formalism.
Dear anon, I think that Bonora and Cotta-Ramusino in Commun Math Phys, 87(4), 589-603 claim to have an improved version of Thierry-Mieg’s paper on BRST. This is fully geometric (hence classical).
It’s a bit surprising to see that those QFT notes got publicity – I had mostly intended them for fellow grad students who didn’t feel like taking notes. Most of the comments that I took down (e.g. Allen Knutson’s remark about Pin groups) were basically off-the-cuff, so they should be taken with a grain of salt.
I should mention that I don’t think Borcherds’ ending comment about renormalization is completely representative of his viewpoint, and he may have been just tired and cynical. He is aware of effective field theories (and they are mentioned in the 2001 notes), and he has said several times that all of the ingredients for making perturbative quantum field theory (and in particular renormalization) mathematically rigorous already exist in the published literature, but that writing them down together is highly nontrivial.
Tao’s blog entry on the Fields medalist lectures has a shorter but more recent representation of his work.
For anyone who wants to have a look at the “Quantization of Gauge Systems” book, a djvu copy can be downloaded here.
Borcherds’ comment about renormalization may have been unguarded, but that does not mean that it did not accurately reflect what he thought. The argument about rigour is one that no-one ought to have. The rigour of a scientific theory one ought to be able to take for granted. Certain assumptions lead to certain consequences. The consequences may not be in accordance with experiments, but there should never be an issue about whether they connect with the underlying assumptions or not. Can you imagine a distinction being made between axiomatic and non-axiomatic classical mechanics? I once got an e-mail from an “axiomatic” field theory post-doc saying that I was trying to be “holier than the pope” in requiring rigour in quantum field theory, but why am I being unreasonable? Is there any mathematical cheating going on in going from Newton’s universal law of gravitation and the derivation of elliptical planetary orbits obeying Kepler’s laws, or between Maxwell’s equations and the calculation of the force between two current-carrying wires? Why should there be two sets of standards, one for QFT and one for everything else?
Hendrik, I would be interested in your preprint. You can find contact information on the arxiv.
When you talk about problems, is it the BRST method that breaks down, or do people make mistakes when applying it? One needs to honor certain regularity conditions, but it is nevertheless possible to make subtle errors. E.g., thm 17.2 of H&T is flawed already for the harmonic oscillator, because it only works as stated if you say that the oscillator has a gauge symmetry. The reason is that there are “Noether identities” of the form (17.8) due to the fact that the equations of motion have solutions. The difference is that the index α runs over a (d-1)-D manifold for solutions, and over a d-D manifold for genuine gauge symmetries.
Dear Thomas, expect my email on Monday. Regarding:
“When you talk about problems, is it the BRST method that breaks down, or do people make mistakes when applying it?”
I do mean the stronger statement;- if you take Hamiltonian BRST (as expressed in the book by H&T) seriously as a constraint algorithm, then it fails when applied to some simple constraint systems. To get the right answer additional assumptions are necessary (which of course means that you know the right answer by other means).
The input of a quantum constraint system is simple;- it is a *-algebra A of operators (on Hilbert or Krein space with some common dense domain), together with a set C of distinguished elements which you want to set to zero (constraints). It is easy to generate such pairs (A, C) for which Hamiltonian BRST fails.
Dynamics (i.e. a one-parameter automorphism group of A) is independent from this;- just assume a dynamics which preserves the constraints. In any case Hamiltonian BRST is a kinematic procedure, dynamics adjustment is not part of it.
It is not even clear to me that if you start from equivalent constraints (i.e. they produce the same physical state space) that Hamiltonian BRST will produce the same physical algebra.
I have to emphasize again that Lagrangian BRST (which is closer to the classical geometric version) is not a general constraint algorithm, it is just for gauge theories. For these (at least the ones which can be analyzed rigorously) it seems to produce the right result.
For those who thought I was kidding … McGraw-Hill is releasing David McMahon’s “String Theory Demystified” on December 24, 2007. (Search for the title at McGraw-Hill’s website if you like.)
Apparently: “Using the proven Demystified format, this book elucidates the highly mathematical and complex topic of string theory, covering key topics such as particle physics, quantum field theory, D-brane physics, and different types of strings. You will learn the mathematical tools necessary to truly decipher string theory.”
How mainstream can you get ? Eagerly awaiting the “ST for Dummies book” 🙂
Brilliant. I’m writing Santa for a copy of the book as part of my next Christmas present, plus a sexy supersymmetric partner and hopefully also an extra dimension (just so the new supersymmetric partner and I achieve perfect unification at high energy). 😉
Achieving perfect unification with your supersymmetric partner on an uncomfortable Planck scale must be very hard.
“Is there any mathematical cheating going on in going […] between Maxwell’s equations and the calculation of the force between two current-carrying wires?”
As a matter of fact, there is. It’s a bit hazy in my mind, but I do know there’s all sorts of problems in defining various “fundamental” things in classical E&M. For example, the radiation-reaction problem. Fuzzy things like pre-acceleration.
All the same sorts of problems that show up in QFT as a matter of fact. It’s just that, as Aaron pointed out, we know a lot more about their origin and how to solve them.
It’s a bit hazy in my mind, but I do know there’s all sorts of problems in defining various “fundamental” things in classical E&M.
Put current into a transmission line (pair of wires) by connecting them to a battery, and you get a continuous flat-topped logic pulse propagating along the transmission line at light speed for the insulator.
This violates Ampère’s law of circuits, because the current pulse doesn’t know in advance whether there is a complete circuit at the far end of the line or an open circuit.
Maxwell’s whole genius was adding an ‘extra current’ to Ampere’s law which can flow across space between the two wires (even across a vacuum), completing the circuit while a transient flows into an open circuit!
What happens when you do the experiment with sampling oscilloscopes is you find that the energy reflects back from the far end of the transmission line. If it’s an open circuit at the far end, the reflected current adds to the energy flowing in, so the transmission line charges up, a little like a capacitor.
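The open-circuit behaviour just described is the Γ = +1 case of the standard voltage reflection coefficient Γ = (Z_L − Z₀)/(Z_L + Z₀), a textbook result stated here just to pin the claim down (the helper function is my own illustration):

```python
def reflection_coefficient(Z_load, Z0=50.0):
    """Voltage reflection coefficient at a transmission-line termination,
    Gamma = (Z_L - Z0) / (Z_L + Z0), for a line of characteristic impedance Z0."""
    if Z_load == float('inf'):
        return 1.0  # limit Z_L -> infinity
    return (Z_load - Z0) / (Z_load + Z0)

print(reflection_coefficient(float('inf')))  # open circuit: +1, pulse reflects in phase
print(reflection_coefficient(0.0))           # short circuit: -1, inverted reflection
print(reflection_coefficient(50.0))          # matched load: 0, no reflection
```

With Γ = +1 the reflected voltage adds to the incident voltage, which is exactly the “line charges up, a little like a capacitor” behaviour seen on the oscilloscope; a short gives Γ = −1 and an inverted reflection instead.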
All the same sorts of problems that show up in QFT as a matter of fact.
Maxwell’s extra current was supposed to be due to the displacement of virtual fermions in the vacuum, which polarize in an electric field. The vacuum ‘displacement current’ consequently flows in direct proportion to the rate of change of the electric field, dE/dt.
Nice theory, and it predicts light. Problem is, QFT involves a vacuum polarization due to pair production of virtual fermions, only at high energy (above Schwinger’s electric field strength threshold for pair production, or the IR cutoff energy for particle scatter). So below the IR cutoff, Maxwell’s displacement current mechanism is in difficulty. However, the correction is easy to see: electrons are accelerated by electric fields in the conductors, so they radiate transversely. Each conductor behaves as an antenna radiating an inverted version of the radio signal from the other one. At large distances from the power line, the superimposed radio signals cancel out perfectly. The conductors are therefore just swapping this radio energy, and the resulting effect of the swap is equivalent to having a ‘displacement current’. So you still justify the Maxwell equations when you dig deeply, though his original theory is wrong.
Enough about E and M, this is completely off topic.
I think mathematical rigor in a physical theory is an admirable thing, but history seems to indicate that it is not strictly necessary for progress. In particular, your example of celestial mechanics illustrates that people like Newton could make useful physical predictions using the theory of fluxions without the precision of epsilons and deltas that appeared two hundred years later. For another example, much of early quantum mechanics used objects like the Dirac delta, which was not made mathematically precise until Schwartz formulated the theory of distributions in the late forties. Quantum field theory is currently in an intermediate stage, like Kepler’s laws in the early 1600s: empirically successful, but awaiting a rigorous derivation.
I don’t think this sort of work should be rejected for a lack of rigor, because in the end, physicists are communicating ideas and calculational heuristics to other humans, not a proof-verifying computer. If an idea allows us to build a simpler mental picture of a process, or a heuristic consistently yields answers that agree with experiment, I think it increases our understanding of the universe, and some sloppiness should be tolerated. It is also occasionally useful for mathematicians who require rigor to have some bold theoretical leaps charted by people who aren’t burdened by such restrictions.
If I understand your comments correctly, you seem to suggest that mathematicians are “proof-verifying computers,” and that they should occasionally make some “bold theoretical leaps.” But mathematical proofs require imagination and creativity as well as rigor in thought, and mathematicians are far from the automatons you seem to depict them as. They are more like poets in this regard. And in order to do creative research, they do have to take bold theoretical leaps sometimes.
Arghh, I guess even some physicists (assuming you are one) misunderstand the nature of the mathematician’s work. Well, at least you didn’t think – as many members of the public do – that their work involves crunching numbers all day long. 🙂
The important thing is that, when you have a botch-up, you recognise it as such. The problem is that most QFT books & lecture courses sell renormalization like dishonest salesmen, harping on the miraculous achievements and hardly drawing attention to the caveats (such as my comment #461 here).
Peter, following on from your quote:
“Faced with the choice between changing one’s mind and proving there is no need to do so, almost everyone gets busy on the proof.”
There is a recent article in “Reason” supporting this exact point: http://www.reason.com/news/show/120455.html
Thanks for pointing to the “Reason” article.
Also, thanks a lot for the comments on BRST, quite helpful and interesting. I think you’re right that Hamiltonian BRST has a lot of problems with it, and in its present state is a very unsatisfactory formalism.
It’s closely related to Lie algebra cohomology, and there’s the same philosophical issue there: why not just look at the invariant part of a representation? What’s the point of looking at a complex whose 0-cohomology is the invariant piece you want? Anyway, Lie algebra cohomology has had some real uses in representation theory. I’ve been trying to understand those for quite a while, and only recently have some promising ideas that seem to connect to BRST; we’ll see what that leads to.
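To make the analogy concrete (these are just the standard Chevalley-Eilenberg definitions): for a Lie algebra $\mathfrak{g}$ acting on a vector space $V$, the complex is

```latex
C^n(\mathfrak{g}, V) = \mathrm{Hom}\bigl(\Lambda^n \mathfrak{g},\, V\bigr), \qquad
(dv)(X) = X \cdot v \quad \text{for } v \in C^0 = V ,
```

so $H^0(\mathfrak{g}, V) = \ker\, d|_{C^0} = V^{\mathfrak{g}}$, the invariants, while the higher cohomology measures the failure of the functor of taking invariants to be exact. BRST puts the same kind of complex (ghosts playing the role of $\Lambda^\bullet \mathfrak{g}^*$) in place of naively imposing the constraints.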
Mathematicians have some really interesting theorems under the slogan of “quantization commutes with reduction”, but the connection of this to BRST remains unclear.
>“quantization commutes with reduction”
but that is not true, as far as I know. At least, not in gauge theories.
Not sure what you have in mind. I was being too telegraphic perhaps, here’s a survey article about exactly what I meant:
thanks for the reference, looks quite interesting. What I had in mind were some examples quite similar to the one given in that paper after the statement of the central “theorem”, as they call it. Regards.