An interesting paper appeared on the arXiv yesterday, by Hermann Nicolai and Kasper Peeters, entitled Loop and spin foam quantum gravity: a brief guide for beginners. It includes some of the same material as an earlier paper Loop quantum gravity: an outside view that they wrote with Marija Zamaklar.

Nicolai, Peeters, and Zamaklar are string theorists, and given the extremely heated controversy of the last few years between the LQG and string theory communities over who has the most promising approach to quantum gravity, one wonders how even-handed their discussion is likely to be. They identify various technical problems with the different approaches to finding a non-perturbative theory of quantum gravity that are often referred to as “LQG”. I’m not at all an expert in this subject, so I have no idea whether they have got these right, or whether the problems they identify are as serious as they seem to claim. Their main point, which they make repeatedly, is that

*… the need to fix infinitely many couplings in the perturbative approach, and the appearance of infinitely many ambiguities in non-perturbative approaches are really just different sides of the same coin. In other words, non-perturbative approaches, even if they do not ‘see’ any UV divergences, cannot be relieved of the duty to explain **in detail** how the above divergences ‘disappear’, be it through cancellations or some other mechanism.*

What they are claiming seems to be that LQG still has not dealt with the problems raised by the non-renormalizability of quantum GR. They don’t explicitly make the claim that string theory has dealt with these problems, but the structure of their argument is such as to imply that this is the case, or that at least string theory is a more promising way of doing so. Their one explicit reference to string theory doesn’t really inspire confidence in me that they are being even-handed:

*The abundance of ‘consistent’ Hamiltonians and spin foam models … is sometimes compared to the vacuum degeneracy problem of string theory, but the latter concerns **different solutions** of the **same** theory, as there is no dispute as to what (perturbative) string theory **is**. However, the concomitant lack of predictivity is obviously a problem for both approaches.*

While they are being very hard on LQG for difficulties coming from not being able to show that certain specific constructions have certain specific properties, they are happy to state as incontrovertible fact something about string theory which is not exactly mathematically rigorous (the formulation of string theory requires picking a background, causing problems with the idea that all backgrounds come from the “same” theory, and let’s not even get into the problems at more than two loops).

The article is listed as a contribution to “An assessment of current paradigms in theoretical physics”, and I’m curious what that is. Does it contain an equally tough-minded evaluation of the problems of string theory?

It should be emphasized again that I’m no expert on this. I’m curious to hear from experts what they think of this article. Well-informed comments about this are welcome, anti-string or anti-LQG rants will be deleted.

**Update:**

There’s a new expository article about spin-foams by Perez out this evening.

“… either accept it or propose an alternative background independent quantization and do the work to show it is consistent.”

Such a proposal was discussed at the end of this thread, in particular in this post. For details, see e.g. this forthcoming book. The manuscript is not available online for copyright reasons, but similar material can be found in hep-th/0411028, hep-th/0501043, and hep-th/0504020.

Today I was rereading gr-qc/9903045, since the more philosophical papers by Rovelli belong to my favorite literature. The key message I got out of it is that one should believe in both QM and GR, or rather in the key ideas of both theories. To me, the key property of QM is encoded in lowest-energy representations. Consequently, anomaly freedom must be given up, but we knew that from 2D gravity anyway.

Hi Lee

Lubos has posted a lengthy critique of your comments on his site. There’s a lot of blogging there, but I’d be interested in your response to this point (which I think is the key point in the Nicolai-Peeters paper):

You: The freedom to specify spin foam amplitudes does not map onto the freedom to specify parameters of a perturbatively non-renormalizable theory.

Lubos: Of course it does. Take all spin foam Feynman rules that lead to long-range physics resembling smooth space and assume that the space is not empty. This may be a codimension infinity set but its dimension will still be infinity. The parameters of higher-derivative terms at low energies will be functions of the parameters defining the spin foam Feynman rules. There is a one-to-one correspondence between them.

“Zorq, the distinction between a finite theory and a uv fixed point is the following: a lattice QFT or condensed matter physics model with a fixed lattice spacing is a finite theory.”

No wonder you have trouble making yourself understood, when you use standard terms in nonstandard ways.

A lattice QFT or condensed matter physics model with a fixed lattice spacing is a cutoff theory. All physical quantities are finite (that’s what having a cutoff means), but explicitly cutoff-dependent.

A finite theory is one where physical quantities have no explicit cutoff-dependence. This is much more restrictive than saying that the theory has a cutoff.

At least, that’s what everyone else in the world means by “a finite QFT.”

“The point in LQG is that AFTER carrying out the regularization procedure and defining the diffeo invariant states and operators from the limit of the regulator removed, there remains a theory with a fixed, but spatially diffeo invariant cutoff.”

And I am asking, precisely, about what happens when you vary this latter cutoff. How do you need to change the couplings in the LQG Hamiltonian in order to maintain invariance of physical quantities? That’s the RG, applied to LQG.
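For readers following the exchange, the requirement being invoked here is the standard Wilsonian one; the schematic statement below is generic textbook material, not an LQG-specific result:

```latex
% Physical quantities must not depend on where the cutoff \Lambda is placed,
% so the couplings g_i(\Lambda) must flow to compensate:
\Lambda \frac{d}{d\Lambda}\,
  \mathcal{O}_{\mathrm{phys}}\bigl(g_i(\Lambda);\Lambda\bigr) = 0
\quad\Longrightarrow\quad
\Lambda \frac{d g_i}{d\Lambda} = \beta_i(g_1, g_2, \dots).
```

In this language, the question being asked is simply: what are the beta-functions of the couplings in the LQG Hamiltonian?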

Go back and read my previous comments, now that (I hope) we have the definitions straight.

“And I am asking, precisely, about what happens when you vary this latter cutoff.”

You do realize that this “cutoff” does not arise from the regulator; that is, it is not put in by hand. Physics shouldn’t be invariant under its variation any more than under the variation of, say, the speed of light.

The problem I have observed with the LQG terminology is that it seems to take the point of view that many theorists had in the relatively early days of QED renormalization. That viewpoint was that all the infinities of the theory were somehow “real,” meaning for example, that the bare electron charge was really infinite and the bare mass was, in some real sense, zero. The regularization procedures were simply mathematical necessities that allowed us to parameterize all the fundamentally infinite quantities.

Nowadays, thanks in large measure to the work of Wilson on the RG and the lattice regularization of gauge theories, we tend to have a different view. While it is possible that the infinite constants that arise in renormalization are really infinite, we tend to think of them as simply large but finite (or in some cases, not even large). We expect that new physics at some high scale will change the structure of the theory. This new physics might or might not be a renormalizable QFT, but we still expect that the low-energy effective theory will be renormalizable (although one must be careful about applying this RG rule of thumb too generally).

In the first viewpoint, a theory with a fundamental physical cutoff is “finite,” because, if we know that cutoff, all the amplitudes of the theory are finite numbers. If the operators are defined appropriately, no renormalization is required. However, the modern viewpoint recognizes that whether the cutoff used corresponds to something physical at the Planck scale or is merely a mathematical device is pretty much irrelevant. In either case, one can look at the low-energy effective theory, using essentially the same kinds of tools.

One can always deal with the problem of having a nonrenormalizable theory by setting a finite cutoff and describing the theory at that cutoff scale. There are then unambiguous predictions for all processes. This is a valid deductive procedure; however, it is no help in the inductive process of working backward from observation to get the fundamental theory, simply because there are an infinite number of parameters in the original high-energy theory.
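To make “an infinite number of parameters” concrete: the generic effective Lagrangian of a non-renormalizable theory with cutoff Λ carries an undetermined coefficient for every operator allowed by the symmetries. The gravitational operators listed below are the standard curvature invariants; nothing here is specific to LQG:

```latex
% Schematic effective Lagrangian below the cutoff \Lambda,
% with one free dimensionless coefficient c_n per allowed operator:
\mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{ren}}
  + \sum_{n} \frac{c_n}{\Lambda^{\Delta_n - 4}}\,\mathcal{O}_n .
% For gravity the \mathcal{O}_n include
% R^2,\; R_{\mu\nu}R^{\mu\nu},\; R^3,\; (\nabla R)^2,\;\dots
% and every c_n is a free parameter unless some principle fixes it.
```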

There is no such thing as an “infinite constant”: ∞ + exp(−x²/2) is just as infinite as ∞.

Lee wrote:

“Lubos, the finiteness is achieved kinematically; once the procedure just described is done, the theory is uv finite WHATEVER the dynamics, again just as in a lattice QFT with fixed lattice spacing.”

Dear Lee, the comparison with lattice QCD is very apt, because the finiteness of lattice QCD is obviously completely fake. You can put any badly behaved theory on a lattice and assume that this makes everything finite. The problems of the original theory, if there are any, immediately re-appear when one tries to take the continuum limit, as a dependence on all the details of the lattice theory – i.e. cutoff-dependence – which is a manifestation of the same problem in a regulated description.

Putting a UV-sick theory on a lattice does not solve the problems; it is either a translation of the problems into different variables, or – if you want to claim that the problems are gone – it is burying one’s head in the sand because one does not want to *see* the problems. But the same problems are still there. You can’t solve such basic physics problems in such cheap ways – like saying that you make a cutoff.

…

I am confused how Prof. Kowalski-Glikman and Prof. Smolin and others may disagree about the basic question whether DSR predicts an energy-dependent speed of light. Is it just a terminological disagreement or a physical one? My understanding is that DSR violates the constancy of speed of light and it simultaneously violates locality in a brutal way, by any distance, because locality requires a linear representation of the energy-momentum vector transforming under the Lorentz group.

(DSR is deformed special relativity whose symmetry group is the contraction of the q-deformation of the (anti) de Sitter group, which introduces nonlinearities to the [J,P] commutators.)

But do you agree that GLAST should detect something that will look like energy-dependent speed of light? DSR is all about the speed of light. It’s almost like string theorists disagreeing whether there are any strings in string theory.

Anonymous asked “… how do you expect the matter degrees of freedom to emerge? …”.

With respect to exceptional spin foams based on structures like those of N=8 supergravity, here is a quote from Peter G. O. Freund’s book “Introduction to Supersymmetry” (Cambridge 1986, at pages 117-119) about N=8 supergravity structure:

“… Einstein … repeatedly emphasized the conceptual imbalance between the two sides of his gravitational equations. On the left-hand side sits the Einstein tensor … a genuinely geometrical construct, whereas on the right-hand side we find the energy-momentum tensor totally unspecified by the theory. …

The situation in N=8 supergravity … is radically different. … All basic forces AND all basic forms of matter now appear in the SAME supermultiplet. … Einstein’s ‘complaint’ is answered. … No previous physical theory has exhibited anything like this degree of self-containedness and completeness. …”.

Maybe this is one of Lee Smolin’s “reasons” that he “… always discuss[es] dynamics in the spin foam picture …”,

and

maybe it is also a reason that Lee Smolin said in hep-th/0104050 “… Supersymmetry appears to be related to triality of the representations of Spin(8) …”.

Tony Smith

http://www.valdostamuseum.org/hamsmith/

“The inner product of Fock space depends on the background metric, hence this quantization will never arise in a background independent formalism. So we do not avoid it because we are stupid, it is simply not an option.”

What I don’t understand is that I can write down the GR + Standard Model Lagrangian, and it looks completely diff-invariant and (other than the need for a spin structure) background independent to me. What you seem to be telling me is that, when I want to quantize this thing, I should picture that R over on the left as completely different from all the other terms and quantize it in a completely different manner than everything else. First of all, it’s not clear to me that we can do this, because everything interacts with gravity. But I also don’t see why I shouldn’t apply your ‘background independent quantization’ to the whole shebang and, thus, never see the beta-function in QCD, for example.

I don’t see why you focus on Fock space, either. Fock space is a way to quantize freeish theories that have a particle interpretation, but there are plenty of other types of QFTs — certain CFTs, for example.

TS – I solved the energy problem without SS. It’s not necessary. The sources are derivatives of the gauge field along the (not assumed small) extra dimensions.

-drl

Dear Brett,

I completely agree with your observation that our LQG friends often speak about the infinities in QFT in the obsolete, pre-RG, pre-1970s way. If they applied their ideals to all disciplines of physics, they would also reject the Standard Model, because it has “infinities”, but accept Fermi’s four-fermion interaction, because they don’t distinguish renormalizable from non-renormalizable theories as long as both have infinities – and they are proud of it.

Of course, we have known for quite some time that this understanding is flawed, despite the fact that this ignorance did not prevent people from calculating QED loops before the 1970s. Infinities that are regulated or parameterized with a cutoff are just a technicality, and one must do a lot of other work afterwards to figure out whether these infinities are lethal, and if they are not, what are the physical predictions. We can make physical predictions for renormalizable theories where the low-energy physics is independent of the details of the short-distance physics.

My understanding is that the LQG friends just feel like throwing up when they see a divergent integral, and it stops them – much like others who feel anxious when they’re told that space and time can mix. Many things in physics, however, can’t be done with such a phobia. There is a lot of thinking required after we see the first infinity.

More generally, I feel that our LQG friends prefer form over content. Formalism over formalism-invariant physical insights and predictions. Technicalities over profound principles. Lee has unfortunately said it too many times for me to think that I just misunderstood something. He likes the idea that all LQG people are using the same formalism(s), the same methods etc. I think it is a bad sign that says something about the intellectual breadth of the community, and this observation of limited technical resources has nothing to do with the physical coherence of the actual theory or theories.

Formalism, much like the Greek alphabet, is not yet physics, even though some people may find it difficult enough to believe that it must be. Carlo Rovelli has informed us that he has not yet digested quantum mechanics – well, yes, quantum gravity and the renormalization group are slightly more complicated than quantum mechanics, and maybe one should first digest quantum mechanics before he or she starts to solve more difficult matters. Of course, my feeling is that people tend to over-emphasize formalism if they learn it and feel that it is about the maximum they can learn.

In real physics, people work with many formalisms (often subconsciously) whose equivalence they often understand well, and they parameterize UV divergences using many methods – dim reg, brute cutoffs, Pauli-Villars, lattice – whose equivalence for the actual physical predictions should be comprehensible for a physicist; this equivalence is based on the logic of the RG. Making an “infinity” finite by one of these methods is not yet a solution of a problem. It’s just a translation of the same problems into different variables.

In LQG, one randomly picks one of these regulators – something like a lattice – and promotes it to a principle. That’s of course not very deep and it does not solve any problems. Lee has even explicitly said that he solves all UV problems independently of dynamics. If he does so, then we can be sure that his solution is independent of physics, because the nature of UV problems, much like 99% of other things in physics, DOES depend on dynamics. If an argument does not depend on dynamics at all, then it’s surely wrong.

I feel that the focus of LQG is not on the things that can actually be extracted from a theory but on limitations on what we are allowed to think, believe, or use, instead of on physical predictions. We are not allowed to use integrals involving UV divergences because they’re bad. We are not allowed to do perturbative expansions or other expansions because they’re bad. Instead, we should like spin networks, which have never been connected with anything remotely similar to observable physics, because they are a nice formalism that a community uses.

We are not allowed to consider theories where geometry is just an approximate concept arising from a deeper or broader structure because any additions to GR are bad; GR is essentially a holy scripture and the real goal is to show that we don’t have to modify it at all because of “unimportant” novelties such as quantum mechanics. All these things that we are “banned” from doing may be bad for our LQG friends, but they are absolutely necessary for doing theoretical physics at a decent level. Sorry, Lee, if I am the first one who says it to you.

All the best

Luboš

“You do realize that this “cutoff” does not arise from the regulator … Physics shouldn’t be invariant under it’s variation …”

To return to the *point*: there are an infinite number of arbitrary coupling constants in the LQG Hamiltonian. These are not fixed by diffeomorphism invariance or background independence.

Without a principle to fix them, LQG would be utterly unpredictive. I offered an RG fixed point as one mechanism for fixing them.

You don’t like that? Fine. What principle *does* fix them?

A few examples of the focus of LQG on the form instead of the content:

Preference over the way to regulate infinities – lattice instead of dim. reg. or Pauli-Villars or other regulators.

Preference over the way how calculations are done – dislike for perturbative methods.

Preferences over Hamiltonian vs. Lagrangian descriptions – it is very important for the LQG people which one should one choose.

Needless to say, none of these questions is an important question in physics these days. They’re mere superficial technicalities. College kids may argue about which regularization they prefer, but when they grow up, they will eventually understand why it does not matter. The same holds for Lagrangians and Hamiltonians – in any working theory with both of these approaches, they’re simply equivalent. Of course, LQG is probably inconsistent, so it can’t even be shown that the two approaches are equivalent, but that’s another reason to abandon such a theory, not a reason to start medieval debates about which formalism is “better”.

Another example of the preference of form over content are the statements about “background independence”. Physics of string theory is manifestly background-independent and the third chapters of all major textbooks explain why. A modification of the background we start with is equivalent to a condensation of physical particles in the original background. That’s why we know that there is just one theory and not many theories that depend on a background.

But what our LQG friends find very important is how we actually write down the calculations of the physics predictions. The best observable in quantum gravity is the S-matrix – no one, not even our LQG colleagues, has invented a better one. So we evaluate the S-matrix. The S-matrix makes it necessary to choose a background around which we expand, in order to know what the Hilbert space is – so that we can calculate the scattering amplitudes. This is just a calculational method, a superficial feature of our strategy to approach physical questions. It does not change anything about the fact that the dynamics of string theory is background-independent – and that there is just one theory.

But Lee seems to prefer the form so he would tell us that we are not allowed to expand, and just because we expand things in this way, string theory *itself* fails to be background-independent, which is of course nonsense. I subscribe wholeheartedly that we would be happier if we had a language for string theory that “sees” all the backgrounds simultaneously as solutions of some universal rules. But this dream is not a real necessity. It is just about the mathematical methods that Nature and the world of mathematics can offer us. It is not really a physical question. Nevertheless, it seems to be more important for the LQG friends than physics.

In string theory, we have many contexts in which various descriptions are easier and “default”. In AdS/CFT, we really find the physical states of the graviton only. For example, we can go to the light-cone gauge in the pp-wave limit. The unphysical components of the metric are not there. Still, we know that the physics is exactly equivalent to physics that we can also write down covariantly, modulo diff invariance. I am afraid that it is fair to say that any technical step or detail of this kind would just distract our LQG friends far too much so that they could not concentrate on physics – which is what is left from the calculations after we divide by all unphysical details associated with individual formalisms.

Physics is non-trivial and it rules while formalisms are superficial and trivial and they suck. Sorry, Lee.

“We expect that new physics at some high scale will change the structure of the theory.”

That is precisely the case in LQG-style QFT, which is why people who try to understand LQG in terms of effective field theories like the QFTs that make up the standard model and their perturbative expansion, or in terms of analogies to lattice QFT, which are often misleading and never more than analogies (like Prof. Motl), have such a hard time.

LQG introduces a genuinely new class of QFTs (background independent), with genuinely new structures at high energies and so on. Whether these are physical or not is an open question, but the observation that they are different from ordinary QFT is, well, obvious. Most of the tools and intuitions from background-dependent QFT do not carry over.

Freidel showed, however, that QFTs in the ordinary sense CAN arise in a certain limit from this new class of QFTs. This is crucial: if that were not the case, then the objection that these new QFTs can’t have anything to do with reality, since they do not incorporate the old QFTs, would be valid and severe. (Freidel said in his presentation at Loops05 that he personally thought of this as a clear branching point for LQG; had this failed, he would have given up on LQG.) But this is not the case, and it is hence worthwhile and ever more promising to study this new class of QFTs.

I should note that I am just a beginner in this field too. But the above is basically why I chose to study it.

Thanks everyone for the many comments. I’ll try to answer as many as I can:

To Boreds: Take the case of 2+1 QG coupled to some matter QFT, with arbitrary couplings, treated as a spin foam model. This was solved in the recent work of Freidel and Livine. The answer is the same matter QFT on a non-commutative geometry which is kappa-Poincare. None of these are in the class of perturbatively non-renormalizable QFT’s on smooth Minkowski spacetime. So when you quote Lubos as saying, “Take all spin foam Feynman rules that lead to long-range physics resembling smooth space…” the answer is that there are no theories in this category because all such spin foam theories correspond to QFT’s on kappa-Minkowski spacetime which is not smooth.

So there is no correspondence, because the two classes of theories have different low energy symmetry algebras.

To Zorq: I’m sorry if I “use standard terms in non-standard ways.” Part of the problem is that the properties of LQG with regard to finiteness are different from previous examples, and it’s hard to discuss this with people who have not looked into the actual calculations and proofs. The point is that diffeo invariant QFT’s are a new type of QFT and have features not shared by either perturbatively renormalizable QFT’s or lattice QFT’s. Now, you ask, what is the relationship between the microscopic Planck length that serves as the cutoff of the diffeo invariant theory and the macroscopic Newton’s constant. To answer this you need to refer to a computable macroscopic quantity. One we have control of is the black hole entropy. By using this we see that the ratio of the macroscopic Newton’s constant G_{macro} defined by the black hole entropy formula (Area/4 G_{macro} hbar) to the bare L_{Pl}^2 that comes into the microscopic theory is a parameter of order 1, proportional to the Immirzi parameter. This is how we know there is no infinite renormalization. As I said, there are other calculations that lead to the same conclusion.

To Brett: because of what I just said, measuring Newton’s constant is, up to a computable dimensionless constant of order unity, a measurement of the microscopic Planck scale. If you like, from an RG point of view: because the only coefficient of an invariant term that can contribute to microscopic physics – besides the cosmological constant – is an irrelevant operator, and because you can compute the black hole entropy in closed form, a measurement of a macroscopic quantity determines a microscopic cutoff. This is a very different case from that of the marginal couplings that we are used to in perturbatively renormalizable theories. The point of view in LQG is then not like the early days of QED; it is nothing but the modern Wilsonian RG point of view applied to a case where there is no marginal coupling, so the low energy limit is dominated by an irrelevant coupling.
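The entropy argument above can be summarized in two formulas. This is a schematic sketch: the numerical constant γ₀ from the LQG state counting depends on conventions, and whether this matching really amounts to a statement about renormalization is exactly what is being disputed in this thread:

```latex
% Bekenstein-Hawking entropy vs. the LQG state-counting result (schematic):
S_{BH} = \frac{A}{4\, G_{\mathrm{macro}}\, \hbar}
\qquad
S_{LQG} = \frac{\gamma_0}{\gamma}\,\frac{A}{4\, \ell_{Pl}^2}
% Matching the two gives
%   G_{macro} \hbar = (\gamma / \gamma_0)\, \ell_{Pl}^2 ,
% i.e. the macroscopic and bare Planck areas differ by an order-one,
% Immirzi-parameter-dependent factor rather than an infinite renormalization.
```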

To Lubos: the reason there can be disagreement about a prediction of DSR is that there can be different identifications of elements of the kappa-Poincare algebra with the observed energy and momenta. In 2+1 this ambiguity is fixed by the coupling to gravity. Jurek has one expectation as to how this will work in 3+1; I and others have another. Mine comes from a semiclassical calculation that shows directly an energy dependent speed of light (hep-th/0501091).
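For context, predictions of this kind are usually parameterized by a modified dispersion relation. The ansatz below is the generic leading-order form; the coefficient α, and even its sign, is model-dependent, which is part of why different identifications of the physical energy give different answers:

```latex
% Generic leading-order modified dispersion relation for a massless particle:
E^2 = p^2 c^2 + \alpha\, \frac{E^3}{E_{Pl}} + \dots
% The group velocity then acquires an energy dependence:
%   v(E) = dE/dp \approx c \left(1 + \alpha\, E / E_{Pl}\right),
% so a photon of energy E from a source at distance L arrives shifted by
%   |\Delta t| \approx \alpha\, (L/c)\, (E / E_{Pl}),
% the time-of-flight effect GLAST-era gamma-ray-burst timing could probe.
```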

To Aaron: yes, when the standard model is coupled to gravity, the whole thing must be quantized using LQG technology. You cannot quantize a theory half by background independent methods and half by background dependent methods. This has been developed in detail, for example in Carlo’s and Thomas’s books and reviews, for all the standard model fields. The result is not surprising, and the limit in which gravity is turned off can be checked and is reasonable. In the Hamiltonian picture, if you fix a gravitational spin network (setting to zero the terms in the Hamiltonian constraint coming from gravity), the result looks like a lattice theory for the matter on that fixed graph. Or, in the spin foam picture, if you set the gravitational constant to zero, the spin foam for the remaining matter degrees of freedom is nothing but the perturbative expansion of the matter QFT (see Freidel et al again for this).

Lubos, I am afraid I can’t understand your comments, they do not reflect any detailed understanding of how the actual calculations and proofs are done. Let me just start with the first line of one of your lengthy comments: “Preference over the way to regulate infinities – lattice instead of dim. reg. or Pauli-Villars or other regulators.”

For the millionth time, LQG does NOT USE LATTICE methods. It is not a matter of “preference”: the methods of background dependent theories such as dim reg or Pauli-Villars CANNOT BE USED, because if the whole metric is an operator there is no background metric, which is needed to define them. Nor can lattice regularization be used, as that also depends on a background metric. We had to invent NEW METHODS of regularization, suitable for gauge theories whose dynamics is defined on a manifold with no metric, and we did so (this is why it took some years of careful work to develop LQG). They are roughly like point splitting regularization but somewhat more complicated, because one has to ensure that in the limit in which the regulator is removed the resulting states, inner products and operators are spatially diffeo invariant. This is more intricate and constraining than the regularizations defined by background metrics.

I could continue to correct each comment you make line by line, but as almost nothing written there corresponds to what is actually done, it would take a book. I would ask you once again to please study the details and understand them. There are valid criticisms and limitations of LQG which are worth discussing, but it’s hard to argue with someone who is unwilling to understand the actual results and claims.

Zorq: What principle does fix the couplings? First, we don’t know that there are an infinite number of consistent couplings, either at the Hamiltonian level or the spin foam level. What we know is that there is one ordering of the Hamiltonian constraint that is consistent with diffeo invariance, which has the standard two parameters. It is not known how many others there are. But as I said, none of the regularizations of the Hamiltonian constraint I know of lead to the exchange moves, and so none are acceptable.

While there is some uncertainty, I believe this is because in the Hamiltonian approach one regulates by point splitting in space and not time. So to get a good evolution amplitude we regulate with spin foam methods. At the spin foam level we know of one evolution amplitude that has been proved to have uv finite sums over labels; we do not know how large the space of amplitudes with that property is. It would be well worth knowing, but it just hasn’t been done.

Second, I personally did not reject your proposal of using the RG to find one. A theory with a cutoff can still have a non-trivial topology of RG flow, the different universality classes of which define the possible low energy behaviors. After all, this is what happens in the standard theory of 2nd order phase transitions in 3d condensed matter systems. I have for years strongly encouraged the application of RG ideas to the problem of selecting the good spin foam theories. It is more complicated – see the papers of Markopoulou on the RG for spin foams to understand why (gr-qc/0203036, hep-th/0006199) – but some progress was made, and I believe much more progress could be made on this.

To fh: sorry to repeat some of your remarks,

Thanks, Lee

“The result is not surprising and the limit in which gravity is turned off can be checked and is reasonable.”

Now I’m confused. Let’s do the harmonic oscillator coupled to gravity. I thought we had established that quantizing the harmonic oscillator (or QCD or whatever) alone by LQG techniques gives an experimentally incorrect answer. Now are you claiming that, instead, if we do the coupling to gravity, quantize the entire system, and then take G->0, we get a different result than if we had never turned on gravity in the first place, and that that result is the correct one?

“[W]e see that the ratio of the macroscopic Newton’s constant G_{macro} defined by the black hole entropy formula (Area/4 G_{macro} hbar ) to the bare L_{Pl}^2 that comes into the microscopic theory is a parameter of order 1, proportional to the Immirzi parameter. This is how we know there is no infinite renormalization.”

As I explained above, you learn nothing of the kind. Stop using the words “infinite” and “infinities” and start using the language of modern renormalization theory.

Once you do, you will see that the relation between the low-energy Newton’s constant and the bare coupling in the LQG Hamiltonian is exactly what one expects of any theory with a cutoff at L_p. It is not some deep breakthrough, but a trivial observation.

“Zorq, what principle does fix the couplings? First, we don’t know that there are an infinite number of consistent couplings, either at the Hamiltonian level or the spin foam level.”

Yes, we do.

There are an infinite number of polynomials in the Riemann curvature and its covariant derivatives that can be added to the spin-foam action. A similar infinite set of couplings exists, also, in the LQG Hamiltonian.

Neither diffeomorphism invariance, nor background independence fixes those couplings.

What does?

One other thing. You refer to a ‘uniqueness theorem’ a number of times. How does this reconcile with the well-known fact that there are a number of inequivalent quantizations of gravity in 2+1 dimensions?

Lee,

Here’s my attempt to make some progress in this conversation while keeping the discussion civil. It appears to me that you and some of the others in this debate are speaking two different languages. And while I can understand that you want people to look at the details of LQG, I think you would do well to formulate a response in a language others can understand *without* looking at these details. Otherwise, we will naturally tend to be skeptical. I really think that if you want to be taken seriously you have to be able to answer this question at a rough, hand-waving level, without appealing to any theorems. Otherwise your claims seem to run counter to basic intuition.

So let me try to formulate a question in a clear way, and hopefully you can give at least a heuristic response that does not require knowing the technical details.

In ordinary QFT we know that there are numerous irrelevant operators involving gravity that one can add to the Lagrangian. Their infinitely many undetermined coefficients at a given scale account for the nonrenormalizability of gravity. Furthermore, none of them in any way violate basic principles of background independence or diffeo invariance.
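To put the question in symbols (a schematic form of the standard gravitational effective action, written here for orientation; the coefficient names c_i and cutoff scale M are my notation, not anyone else's):

```latex
S_{\rm eff} \;=\; \int d^4x\,\sqrt{-g}\,\Big[\,
   \frac{1}{16\pi G}\,(R - 2\Lambda)
   \;+\; c_1\,R^2 \;+\; c_2\,R_{\mu\nu}R^{\mu\nu}
   \;+\; \frac{c_3}{M^2}\,R^3 \;+\; \cdots \Big]
```

Each higher-curvature term is diffeo invariant and background independent on its own, and the dimensionless c_i are the infinitely many coefficients whose values are being asked about.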

So, a field theorist wanting to understand your work naturally will ask, are you saying there is a UV fixed point? If not, what principle relates the various couplings?

As far as I can tell, your answer to the first question is no, there is a cutoff. So the next question becomes important: why are there not infinitely many choices of theory in your cutoff theory?

At this point you seem to appeal to the details, but a good heuristic answer would be a great help to an outsider wondering if it’s worth looking at those details. If some deeper principle fixes the whole set of infinitely many coefficients (or reduces them to a smaller finite-dimensional set), what is that deeper principle like? It can’t just be background independence, because these terms don’t inherently violate background independence.

You stress that these theories reduce not to usual QFTs but to some kappa-deformed theories, but it doesn’t seem that that resolves the problem either. Such a deformation doesn’t make gravity renormalizable.

Whatever happens in your theory, there must be some way to give us at least a rough explanation of how it fits with these facts. Just saying “it’s not an ordinary QFT” is a deeply unsatisfying answer. If it reduces to an ordinary QFT at some point below the Planck scale (up to whatever kappa-deformation you’re claiming there is, which must be a small effect to fit with observed data), then somehow it has to determine the usual coefficients for the Wilsonian RG to take over. And so there has to be something very special going on, beyond just background independence. For instance, in string theory the sort of heuristic answer I am looking for is that the infinitely many fields of arbitrarily high spin constrain the interactions.

(I’ve been saying things like “there has to be” a lot, but if you can explain why *that* is wrong, that would be interesting too.) Thanks!

Oops, I seem to be duplicating Zorq’s question in a much more long-winded way. I guess we were writing at the same time.

fh

**(Freidel said in his presentation at Loops05 that he personally thought of this as a clear branching point of LQG, had this failed then he would have given up on LQG),…**

http://www.math.columbia.edu/~woit/wordpress/?p=330#comment-7811

I remember that. I have the video (whereas you, I expect, were present).

It was impressive. I have not yet heard people in some other lines of research name, even in retrospect, an empirical or mathematical result that would make them abandon their theory and change fields. The way he said it convinced me he was absolutely serious about it. Then, jokingly:

“But the fact that I am here talking to you shows that it worked out…”

or words to that effect.

I believe that Freidel has a new paper (co-authored with Baratin?) in preparation which looks into the 4D case. Again the question is, if I understand correctly, does the gravity theory specialize to give matter QFT if you let G go to zero. Do you know anything about this, fh?

Hi everyone, thanks; again, I’ll do my best to answer clearly.

First, to Zorq: You assert: “There are an infinite number of polynomials in the Riemann curvature and its covariant derivatives that can be added to the spin-foam action.” No, we don’t know this, because the spin foam amplitude is not expressed in terms of polynomials of the Riemann curvature. It is expressed in terms of invariants of quantum groups, such as q-15j symbols. I would not be surprised if there are an infinite number of such possible amplitudes, but I know of no result that shows this.

Further, as I thought I emphasized, there are conditions to be imposed. One, for sure, is uv finiteness (in the sense that sums and integrals converge) of the sums over labels in a spin foam amplitude. This is quite restrictive. Only a few solutions are known (see, as usual, hep-th/0408048 for exact references to the finiteness proofs for spin foams). We just do not know how large the space of spin foam amplitudes with this property is. So I am making no claim here, either way. But you asked for restrictive conditions and this is an important one. OK?

You also say: “A similar infinite set of couplings exists, also, in the LQG Hamiltonian”, to which similar remarks apply. The LQG quantum Hamiltonian constraint is not expressed in terms of classical quantities, and it has to satisfy some quite non-trivial consistency conditions coming from the operator constraint algebra. We do not know how large the space of solutions to these is. As I mentioned, we know of only a few.

(Also, I thought I was using the language of the Wilsonian RG, and I agree I was using it to make a trivial point.)

Now, to Anonymous. Thanks for your helpful attitude. You ask, “In ordinary QFT we know that there are numerous irrelevant operators involving gravity that one can add to the Lagrangian… so why are there not infinitely many choices of theory in your cutoff theory?” I hope the above answers you clearly. The point is that the spin foam action is NOT made by writing classical spacetime diffeo invariant expressions in the continuum and then subjecting them to a regularization. Once you are in the spin foam or group field theory language, the amplitudes are expressed in a different framework, that of quantum group invariants. This is consistent with the idea that there is no continuum below the Planck scale. And there are conditions imposed. The condition of finiteness of sums over labels of spin foam amplitudes is pretty restrictive, so there could easily be only a finite space of solutions to it. But this has not been shown, so I am not claiming it is true.

Now, about the kappa-deformed theories: “You stress that these theories reduce not to usual QFTs but to some kappa-deformed theories, but it doesn’t seem that that resolves the problem either. Such a deformation doesn’t make gravity renormalizable.” First, the point of the argument was to show directly that an argument given by someone was wrong because, again, none of the DSR theories are in the class of perturbative QFTs on Minkowski spacetime. Second, DSR theories can be uv finite because there are maximum energies in sums over momenta.
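For readers who have not seen a DSR theory, the standard illustration (the kappa-Poincare mass-shell condition in the bicrossproduct basis; my example for orientation, not taken from the thread, so conventions should be checked against the DSR literature) is

```latex
\left(2\kappa\,\sinh\frac{E}{2\kappa}\right)^{2} \;-\; \vec{p}^{\;2}\,e^{E/\kappa} \;=\; m^{2}
```

which reduces to the ordinary relation \(E^2 - \vec{p}^{\,2} = m^2\) for \(E \ll \kappa\); in this basis the spatial momentum is bounded above by the deformation scale \(\kappa\) (of order the Planck energy), which is the sense in which sums over momenta can terminate.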

You want a heuristic explanation for why a diffeo invariant regularization procedure results in finite operator products. OK, here is one that was very helpful to us originally. (again see 0408048). In the absence of a background metric all operators are distributions and all distributions are densities. So when defining an operator product in the absence of a background metric you have to carefully keep track of density weights.

We regulate by point splitting, which means we introduce an auxiliary background metric q_0 just for the purpose of defining the distance between points. We have to define the product of two operator-valued distributions to be one operator-valued distribution. In general the result has inverse powers of the distance measured in units of q_0, times determinants of q_0 to soak up the density weights.

If the resulting operator is to be diffeo invariant (under actions on the quantum fields) it cannot depend on q_0, because q_0 is not acted on by the operator that generates diffeos. It turns out this means it cannot depend on the distance measured in units of q_0 either (this is seen by an argument in which q_0 is scaled). Thus, a diffeo invariant operator extracted from the limit in which the regulator is removed cannot depend on q_0. Hence, because all divergences are measured in units of q_0, there can be no divergences. This is borne out by the detailed calculations.

But what kind of operator can appear? Only one that is a natural integral over an operator of density weight one, as in the limit the density weight can only come from the operators. Since local field operators have density weight one, this can come from an n-th root of the product of n operators, each of density weight one. This is exactly how the area and volume operators are defined.
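The simplest instance of this construction is the area operator (standard LQG expressions, quoted here for orientation rather than from this thread): the square root of a product of two density-weight-one triad operators, with a discrete spectrum on spin network states,

```latex
\hat{A}(S) \;=\; \int_S d^2\sigma\,\sqrt{\,n_a n_b\,\hat{\tilde E}^a_i\,\hat{\tilde E}^b_i\,}\,,
\qquad
\hat{A}(S)\,|s\rangle \;=\; 8\pi\gamma\,\ell_P^2 \sum_{p}\sqrt{j_p(j_p+1)}\;|s\rangle
```

where the sum runs over the punctures p at which the spin network s crosses the surface S, and \(\gamma\) is the Immirzi parameter. The eigenvalues are finite precisely because no power of the auxiliary metric q_0 survives the limit.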

There is another trick which is to represent the inverse of the determinant of the operator metric as a commutator of operators that can be defined in the regulated theory. This allows us to construct further finite operators including the Hamiltonian constraint.

To Aaron, about the limit G to zero in 2+1 see Freidel and Livine for how it works in detail. Remember I am talking here about a spin foam calculation, so the result comes from a path integral and not the Hamiltonian theory. You can ask, could this have been seen in the Hilbert space and how would the limit G to zero look there? I don’t know (I hope that is ok as these are new results.) It seems like a good research problem, perhaps you would like to work on it.

Also, what inequivalent quantizations of 2+1 gravity are you referring to?

> what inequivalent quantizations of 2+1 gravity are you referring to?

I guess the ones described e.g. by Carlip?

While there are different ways to quantize 2+1 (and Ponzano-Regge is among them), they do not give the same results.

Lee,

let me use this opportunity to ask how the exact solution of Martin Pilati for strong-coupling gravity fits into your picture ?

http://prola.aps.org/abstract/PRD/v26/i10/p2645_1

> Also, what inequivalent quantizations of 2+1 gravity are you referring to?

See Carlip, here.

> You want a heuristic explanation for why a diffeo invariant regularization procedure results in finite operator products.

That was not the question anonymous (or zorq) was asking. “Finiteness” is not the issue here. I think you’re conflating renormalization with the removal of infinities, and they’re not the same thing. I assume we all believe in effective field theories. I can write down a cutoff EFT for gravity, and there are an infinite number of free couplings in the Lagrangian. At a given value of the cutoff, does LQG tell me what the ‘correct’ values for those couplings are?

Dear Lee,

please accept my apologies in advance, but I often find your comments to be a good reason to wisely smile rather than learn. It does not matter whether you call the spin network a “lattice” or a “non-lattice”. What’s important is that it is a discrete regulator and it shares the same features as the lattice. More concretely, it translates the irrelevant couplings into cutoff dependence and infinitely many undetermined couplings at the cutoff scale, namely the Planck scale.

You argue that the regular methods are bad, and your (…) methods that you (…) had to invent are better ones, exactly confirming my last comment about the focus on form instead of content. Let me remind you that the standard ways to quantize general relativity have led to actual physical insights of Nobel prize caliber, such as black hole thermodynamics, unlike yours (…). Expanding quantum fields around a classical background is actually what gives control over the physics of quantum gravity, at least to the leading and subleading orders. Denying the partial success of this approach and the relative failure of these discrete approaches seems far too crazy an attitude for me to discuss at length.

You also give this answer to Zorq:

“Zorq, what principle does fix the couplings? First, we don’t know that there are an infinite number of consistent couplings, either at the Hamiltonian level or the spin foam level…”

Maybe you don’t know it, but those of us who have studied and who actually know LQG at the technical level, not just the popular one, DO know that there are infinitely many couplings that are equally consistent as long as any continuum limit is possible and as long as you can write at least a single term in the Hamiltonian, whatever definition of consistency we pick. For example, if you allow us to introduce “moves”, we can include diff and gauge invariant terms with multiple (N) moves. (If you don’t allow moves, you will likely end up with an ultralocal theory where different points are decoupled forever and where the speed of light is zero.)

These couplings are nothing else than the spin-networkization (this obscure word is to preempt your vacuous criticism of my using the usual word “latticization” – and be sure that it is my terminology and not yours that is standard) of the low-energy effective couplings. The infinite degeneracy of these couplings is an obvious fact, and we can write infinite families of these couplings for you in case you really don’t know them. You will find this answer in all expert papers about the question whether the number of possible LQG Hamiltonian couplings is finite.

If you think that you have found the unique Hamiltonian for LQG, everyone will be happy to read a paper about it.

Your sleight of hand with the “non-smooth” kappa-Minkowski spacetime cannot circumvent the actual theorems showing that what you say is not possible and that our effective field theory arguments apply to LQG much like any other theory of physics. Kappa Minkowski spacetime is moreover smooth – what it lacks is locality, not smoothness (by which I mean the continuous character and unboundedness of momenta).

Any physical theory of the type we look at must be approximated by effective field theories (fine, not necessarily Lorentz invariant, but almost exactly Lorentz invariant) at low energies. There is no way out and trying to generate obscure terms containing Greek letters with a “special status” just in order to avoid conclusions that can be made does not lead anywhere – because you don’t actually address our questions, you just avoid them by pompous terminology.

Whenever someone says that XY cannot work because 2+3=5, you invent an explanation why rational thinking and 2+3=5 cannot be used in your context because your context is surely “better” and “above” these dirty theorems that mortal human beings actually use.

The theorems we know have been designed exactly to deal with possible theories like LQG and LQG is in no way a counterexample of these theorems.

Another method that unfortunately resembles fast commercials from cheap TV stations is your format of a sentence “this has been fully solved / answered … in a recent paper by Livine / Rovelli / Dreyer / …”. Whenever one actually looks at these papers, one either finds incoherent noise, or wild speculations full of wishful thinking, or a discussion about a simple and different topic that does not imply anything whatsoever for the question that was discussed. This may be a good method to fool the badly informed laymen for five minutes but I am afraid it is not a good starting point for a serious discussion among physicists.

This has been strikingly the case of the statements about the black hole entropy in LQG. As we know, the initial papers about it were just wrong because they incorrectly neglected the higher spin punctures that do contribute, as we know today. This has started a whole industry attempting to show that the prediction of LQG is actually correct and the Immirzi parameter should be proportional to log(3) or log(2) or something like that. That would be great, people thought, because having computed the entropy, LQG could compete with string theory on this important front.

Newer papers have revealed that the entropy predicted by LQG is proportional neither to log(3) nor to log(2). Moreover, it has been shown that the result from quasinormal modes of a generic black hole is not proportional to log(3) or log(2) either, but instead to yet another number, or more precisely to a function of the BH parameters. In summary, all conjectures about the correctness of the LQG BH entropy or even its relations to quasinormal modes – much like the general conjectures about the quasinormal modes themselves – have been shown to be obviously false (note that it was not just one side that was wrong; it was both sides, as well as the conjectured link), and everyone who has worked on these things has known this conclusion at least since 2003. The magnitude of black hole entropy is completely obscure from the LQG viewpoint (even if you accept the strange assumption that the black hole interior does not contribute) and all known calculations lead to contradictions.

Still, I’ve seen a much newer comment of yours claiming that the coefficient of the LQG BH entropy has been verified or something like that which I frankly find rather incredible. It is even hard to decide whether you actually believe what you’re saying because it seems really difficult after 100 papers or so that show that the statement is false.

Best

Luboš

Hi, I hope the following is useful.

First, with respect to the strong coupling limit of quantum gravity in the context of LQG, see “The G(Newton) → infinity limit of quantum gravity,” Viqar Husain, Class. Quant. Grav. 5:575, 1988. This was early days; probably the results Viqar got on the strong coupling limit could be much improved with what we have learned since.

I am very happy to talk about effective field theories. “I assume we all believe in effective field theories…” Yes, BUT we must be careful to remember that the classes of effective field theories are labeled by the symmetries and gauge symmetries of the ground state. There are separate classes of effective field theories for Poincare invariant, kappa-Poincare invariant and broken Poincare invariant theories. This is an elementary fact, but it is the key point I have been trying to make. If you have two theories, and one describes perturbations around a ground state that has symmetry P while the other has a ground state with symmetry Q, and P is not equal to Q, then they are not described by the same class of effective field theories, because each term in the effective action should be separately invariant under either P or Q, and they are not the same. Is that clear?

Having said this, I don’t understand what we are arguing about. I agree there is some set of possible amplitudes for spin foam models. I mention that it is likely that these will be restricted by the condition of finiteness of sums over labels. I mention that we do not know how large the space of such good spin foam models is. Some of you would have bet there were no such theories, so I’m surprised if you are now cavalierly insisting there are infinite numbers of them.

I do not deny it is possible it may be infinite, but I also will not be surprised if the condition of finiteness is restrictive and there are only a few parameters. But the bottom line is we don’t know. I also agree that these will map to effective field theories. I only insist that if Poincare invariance is q-deformed, or the geometry is non-commutative as is the case in 2+1, this is NOT the same class of effective field theories that is constructed by perturbation theory around flat Minkowski spacetime. I agree it would be very interesting to know how the parameters of the finite spin foam models map to the parameters of the effective field theory with the appropriate ground state symmetry. I don’t say more because, as I’ve said already, we don’t know the answer for 3+1. It is only very recently that we know the answer for 2+1 with matter. What is not clear about this?

If someone thinks it is known or obvious that the space of spin foam amplitudes which lead to convergent sums or integrals over labels for any spin foam diagram is infinite dimensional, please provide details as this would constitute an important new result. Otherwise, do not presume to know the answer to an open question. By the way, let me stress sincerely it would be very important to characterize the space of finite spin foam amplitudes, and I hope someone will take this on.

As for black hole entropy, the only thing I need for the argument I made is that the ratio of area and entropy is a finite number. So the technical issue of the right way to count the states is irrelevant because all proposals lead to finite ratios.
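In symbols, the relation being appealed to (in its standard form, independent of which state-counting turns out to be correct; \(\gamma_0\) denotes whatever pure number the counting produces):

```latex
S_{\rm LQG} \;=\; \frac{\gamma_0}{\gamma}\,\frac{A}{4\,\ell_P^2}
```

where \(\gamma\) is the Immirzi parameter. Matching to the macroscopic formula \(S = A/4G\hbar\) then fixes \(\gamma = \gamma_0\): a finite, order-one renormalization between bare and macroscopic Newton's constant, rather than an infinite one, whatever the correct value of \(\gamma_0\) is.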

“If you have two theories and one describes perturbations around a ground state that has symmetry P and the other has a ground state with symmetry Q and P is not equal to Q then they are not described by the same class of effective field theories. Becauuse each term in the effective action should be separately invariant under either P or Q and they are not the same.”

I’m not at all clear what you mean here. We are talking about pure gravity theories, right? I thought your mantra was that General Relativity (and, the infinite generalization thereof, involving higher powers of the curvature and its covariant derivatives) is background-independent.

So, you claim (or hope) that spin-foam theories will not have an infinite number of independently adjustable couplings. (Why? There are an infinite number of quantum-group invariants that you could, in principle, write down. Why stop at 15j-symbols?)

Let’s assume you’re correct.

That means that, when passing to the effective continuum field theory, it picks out unique (or nearly unique) values for all the infinite number of couplings.

Computing the coefficients of the R^2 and R^4 terms in the supergravity effective action has been an illuminating activity in String Theory. What are the R^2 corrections to the Einstein-Hilbert action, predicted by spin-foam theories?

Zorq asks “There are an infinite number of quantum-group invariants that you could, in principle, write down. Why stop at 15j-symbols?”

Because, to repeat myself, not all of them are likely to have the following property: that the sum over labels on a fixed spin foam converges. This good property implies uv finiteness of a spin foam model. Some models have it (gr-qc/0104057,gr-qc/0508088, gr-qc/0512004) and I hope you agree that it is a reasonable selection principle. We don’t, to my knowledge, yet have a classification of which spin foam models have this property.

He asks also: “I thought your mantra was that General Relativity (and, the infinite generalization thereof, involving higher powers of the curvature and its covariant derivatives) is background-independent.”

The point is that once we are in the quantum theory, there is no metric or manifold. Hence background independence cannot be implemented by choosing a diffeo invariant classical action, for there is no manifold or metric that a classical action could be a function of. The classical manifold, metric and curvature themselves can only emerge in the low energy limit.

So the effective field theory analysis will not be background independent; it will only describe excitations of a particular ground state. Hence it must take into account the symmetry of the ground state one is studying excitations of.

Finally, we cannot assume that any term that appears in the effective action of a poincare invariant graviton theory will appear in the effective action of a kappa-poincare invariant theory. We just don’t yet know what the combination of diffeo invariance and kappa-Poincare invariance will allow for possible terms in the effective action.

As for whether there are R^2 terms in the effective action and what their coefficients are, this information should, I agree, be extractable for each particular spin foam model. For the Barrett-Crane model, this may soon be possible by extracting it from the calculation of the propagator done by Rovelli et al. (gr-qc/0508007) in that model. They are working steadily towards this goal.

Thanks, Lee

“Because, to repeat myself, not all of them are likely to have the following property: that the sum over labels on a fixed spin foam converges. This good property implies uv finiteness of a spin foam model.”

And I will repeat, for the unwary, that this usage of the phrase “uv finiteness” has nothing to do with the notion of uv finiteness found in the textbooks.

“So the effective field theory analysis will not be background independent, it will only describe excitations of a particular ground state.”

Will the effective field theory, at least, be writable as a generally-covariant local functional of the metric? (In other words, an Einstein-Hilbert action + higher corrections.)

If so, then all that can happen is that the coefficients of the various terms in the action can change.

So you’re claiming that the coefficients one extracts for the (generalized) Einstein-Hilbert action will be different in different backgrounds?

Interesting …

“Finally, we cannot assume that any term that appears in the effective action of a poincare invariant graviton theory will appear in the effective action of a kappa-poincare invariant theory. We just don’t yet know what the combination of diffeo invariance and kappa-Poincare invariance will allow for possible terms in the effective action.”

What do those words mean? What *other* constraints are there on the (generalized) Einstein-Hilbert action above and beyond locality and general coordinate invariance?

Surely Mr. Motl is joking when he promotes Nicolai et al. to quantum gravity experts, with no offense intended to them.

Sure, it is fun to do physics, but that doesn’t mean this is not a serious activity, where it takes much more to become an expert in a field than writing an incomplete review of the field as it was a few years ago; writing a research paper that addresses and solves a problem is, for instance, an example of what it at least takes. I hope the next paper of Nicolai will be a research paper addressing some of the issues he cares about, and I am sure, knowing his capability, that it will be interesting.

I will try to answer Peter’s initial request, hoping, though not feeling totally sure, that it might help the debate at this point.

The latest review of Nicolai et al. is much more satisfactory than the previous one, which essentially described the field as it was circa 1998, ignoring most of the work done since, namely on spin foams, much of it motivated exactly by the desire to address some of the issues mentioned there.

He tries in the new review to include some of the more recent material, and some of the problems they point out have been recognised in the community for some time, some of them already being addressed in the literature. I don’t think it was their intention (Nicolai is a genuine skeptic, I think, and we need skepticism in science; it’s healthy), but sometimes the presentation makes it look as if they are discovering the issues they talk about, and verges on giving the impression that the people working on this are unaware of or unconcerned by them. Yes, making a deeper relationship between spin foams and LQG is important (see the recent work by Perez on this and on ambiguity in LQG, the recent work of Thiemann on the master constraint, and some older and important work by Livine and Alexandrov, who made key progress in this direction), and yes, addressing the semi-classical limit is a necessary and key step (more remarks on that later).

There is, however, a certain number of imprecisions, omissions and misunderstandings in their review. I will talk only about the spin foam section.

For instance, when they present the Riemannian spin foams, they confuse what is done in the literature, namely a quantisation of Riemannian quantum gravity, with some hypothetical, yet-to-be-defined Hawking-like Wick-rotated version of Lorentzian gravity.

The purpose of spin foams is to construct the physical scalar product, and this means that we sum over histories with exp(iS). No direct relation is therefore a priori expected between the Lorentzian and Euclidean theories. That’s why both Lorentzian and Euclidean models are studied; many of the techniques are similar, the Lorentzian case, involving non-compact groups, being technically more challenging.

When they discuss the Barrett-Crane weight, they confuse the 15j symbol prescription (which describes a topological field theory) with the 10j symbol prescription (which deals with gravity), and present this as an ambiguity.

They also make the wrong statement that the spin foam approach is plagued with the same amount of ambiguity as LQG. This is not correct: the ambiguity in LQG amounts to ambiguities in the choice of the vertex amplitude (like different spin regularisations), whereas there is a large consensus on the form of the 10j symbol (in fact the intertwiners that need to be chosen are shown to be unique).

There is an ambiguity in the choice of edge amplitudes, but this amounts to a different choice of normalisation of spin network vertices.

If one chooses the canonical normalisation that comes from LQG, this edge amplitude is fixed uniquely. The possibility of a less natural normalisation was introduced later, as an exploration of these models, especially in order to have finite spin foam models when loop corrections (bubbles) are included (gr-qc/0006107). This attractive possibility was later dismissed: it was argued that if one insists on preserving spacetime diffeomorphism invariance at the fundamental level, the spurious divergences that arise in these higher loop amplitudes are the signature of residual diffeomorphisms (gr-qc/0212001).

They forget to mention that the Hilbert spaces of LQG and the Barrett-Crane model are isomorphic in the Riemannian case, and that there are many different and independent derivations of this weight from the dynamics of GR.

They present as another ambiguity the restriction to the tetrahedral weight. This restriction is perfectly consistent with the fact that 4-valent spin networks are enough to construct states with non-zero volume, and that any LQG dynamics acts within this subspace, which should be thought of as a superselection sector of the theory.

So this means that the line of thought that starts from a classical action and constructs a quantum gravity weight has singled out one preferred possibility, once one adopts the canonical normalisation.

This doesn’t mean that this model is definitely the right one, and having the correct semi-classical dynamics is the key issue, but it shows that by addressing the problem of the dynamics in a covariant way, and focusing on the implementation of spacetime diffeomorphisms, one proposal stands out from a microscopic derivation and addresses some of the LQG issues in a satisfactory way, which is by itself an important result.

I don’t want to give the wrong impression and claim that everything is settled for this model; there is still study and questioning going on about this proposal, which is not totally free of potential problems, though these are not really the ones presented in NP (except that we are still lacking a strong physical argument for the canonical choice of edge amplitude). But clearly the presentation of NP is very far from fair and accurate.

They mention some problems with 2D Regge triangulations having to do with spiky configurations, without mentioning or knowing (even if they cite the relevant paper in a footnote) that this problem is now fully understood in the more relevant and interesting three-d case (these spiky configurations are just an overcounting due to redundant gauge degrees of freedom, an issue that was overlooked in the first works).

Also they completely miss the point about how we reached triangulation independence, and the fact that there is an auxiliary field theory, called group field theory, which naturally gives the prescription allowing one to compute triangulation-independent spin foam amplitudes. This is one of the main lines of development of spin foam models; it started a while ago now (hep-th/9907154), and not mentioning this line of work, which explicitly addresses and solves one of the issues they worry about, is not a small omission.

For instance they say `A third proposal is to take a fixed spin foam and sum over all spins’ and cite (hep-th/0505016), which shows exactly the contrary: how to consistently sum over spin foams in order to get a finite, triangulation-independent, positive semi-definite physical scalar product, allowing for the first time the computation of dynamical amplitudes. This proposal is uniquely fixed by the microscopic model, and if one sticks to Barrett-Crane with the canonical normalisation this gives a uniquely defined scalar product (so far for Riemannian gravity).

This is very far away from the picture they draw. The best I can do to understand how they came up with this false understanding is to suppose that they haven’t read the paper, because it is pretty clear that what is done there is not what they describe.

Concerning the semiclassical issue, they don’t mention at all the new line of development which consists of coupling quantum gravity to matter and integrating out the quantum gravity field, in order to read off the effective dynamics of matter in the presence of quantum gravity.

This allowed this issue to be solved in a completely unambiguous way in 3D.

They don’t mention all the other work in this direction involving coupling to matter fields, which was presented at Loop2005 (work of Lee, Starodubtsev, Baratin …).

I could continue but I think I can stop here for this detailed criticism of their work. If I were a referee of this paper I would at least suggest that they go back to the drawing board before submitting it.

Hi Lee and Zorq

“To Boreds: Take the case of 2+1 QG coupled to some matter QFT, with arbitrary couplings, treated as a spin foam model. This was solved in the recent work of Freidel and Livine. The answer is the same matter QFT on a non-commutative geometry which is kappa-Poincare. None of these are in the class of perturbatively non-renormalizable QFT’s on smooth Minkowski spacetime. So when you quote Lubos as saying, ‘Take all spin foam Feynman rules that lead to long-range physics resembling smooth space…’ the answer is that there are no theories in this category because all such spin foam theories correspond to QFT’s on kappa-Minkowski spacetime, which is not smooth.”

I don’t know much about QFTs on kappa-Minkowski spacetime. But I presume that if you try to perturbatively quantize GR around this background that it will be non-renormalizable, and that there will be an infinite set of counter-terms. Is Lubos correct in saying that this infinite number of parameters corresponds to ambiguities in the spinfoam amplitudes? (Even if he was wrong in saying Minkowski instead of kappa-Minkowski?)

I think you are saying that it does, *but* that it is not 1-1 because of restrictions on the spinfoam side. Is it known what restrictions `finiteness’ in LQG terms places on the low energy couplings?
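For those of us without much intuition for kappa-Minkowski: in one common convention (the bicrossproduct basis), the coordinates satisfy a Lie-algebra type noncommutativity, with the deformation scale kappa usually taken of order the Planck mass, and the mass-shell relation is deformed. Schematically:

```latex
[x^0, x^i] = \frac{i}{\kappa}\, x^i, \qquad [x^i, x^j] = 0, \qquad
\left(2\kappa \sinh\frac{p_0}{2\kappa}\right)^{2} - \vec{p}^{\,2}\, e^{p_0/\kappa} \;=\; m^2 .
```

The last relation is the deformed Casimir, which reduces to the usual $p_0^2 - \vec{p}^{\,2} = m^2$ as $\kappa \to \infty$.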

Zorq, I’m not completely certain what you’re referring to when you talk about corrections to supergravity, but the R^4 corrections etc. you’d normally talk about in sugra are not the same beast as the higher order terms in a Wilsonian effective action. They are really giving you equations of motion for the vevs of the graviton and other fields, so in QFT terms are more like a quantum effective action.

You’re right, string theory does (also) constrain the infinite number of parameters in the Wilsonian action, but my (imperfect) understanding of this is that the calculation is done in SFT, is hard, and is not equivalent to calculating sigma model beta-functions (or whatever).

Sorry, but I continue to be skeptical about LQG’s capabilities.

The only good thing about LQG I can say is that its believers are often less arrogant and more honest than stringers, and we may admire them after decades of hard work. For example, I never saw an LQG researcher begin a talk on quantum gravity with the (in)famous “there is only one approach to quantum gravity” so characteristic of arrogant stringers’ talks.

I (as others) consider that string theory is pure nonsense in many ways (and have expressed several points), but I would say (as others also) that LQG is not consistent. No new paper or talk can convince me, since the only way to convince me that LQG is a good approach would be to replace LQG by another NEW approach. I am sure there will still be quantum gravity research for a few years…

—

Juan R.

Center for CANONICAL |SCIENCE)

Concerning renormalisation group issues, I am not sure that all the experts who covered this subject always have in mind that we are talking about quantum gravity, and that some of the major results in this field need qualification when applied to this subject (the notions of scale and scaling in a background independent theory are much more subtle).

It doesn’t mean that this cannot apply, of course, and there are beautiful new results and research going along this line recently, namely in the work of Reuter et al. and Percacci et al. and Niedermaier et al., who have by the way revived (not proven, there is a difference) the asymptotic safety scenario, which is another way to get around non-renormalisability.

This is not free of difficulty either (being sure that the statements made are really diffeo invariant being one of the major ones).

Another key and special point about the renormalisation group in GR is the fact that G_{N} is at the same time a coupling constant and a wave function normalisation which can be used to fix your set of units. The natural units we work with in quantum gravity are Planck units; in this setting the notion of fixed point is not well defined because the renormalisation group flow vector field (the beta function) depends on the cut-off. If you try to work in cut-off units (which is the usual scheme but much more delicate concerning diff invariance), a very important subtlety arises: the change of units is not really invertible. As an introduction see for instance hep-th/0401071.

By the way, an interesting coincidence is that if asymptotic safety is realised, then in Planck units the cut-off parameter has a finite value, and the effective anomalous dimension at the non-Gaussian fixed point is two-dimensional. These renormalisation facts resonate strikingly with the picture arising from background independent approaches, whether it’s LQG, dynamical triangulations or spin foams.
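To spell out the fixed-point statement in standard asymptotic safety conventions (which may differ from those of individual papers): the dimensionless Newton coupling is $g(k) = G(k)\,k^{d-2}$, and its flow has the form

```latex
\beta_g \;=\; k\,\partial_k\, g \;=\; \bigl(d - 2 + \eta_N(g)\bigr)\, g ,
```

so a non-Gaussian fixed point $g_* \neq 0$ requires $\eta_N(g_*) = 2 - d = -2$ in four dimensions. An anomalous dimension $\eta_N = -2$ turns the high-momentum graviton propagator into $\sim 1/p^4$, whose position-space transform grows logarithmically, the behaviour characteristic of a massless field in two dimensions; this is the sense in which the fixed point is effectively two-dimensional.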

I don’t know if this is just accidental or something deeper is going on; one should be careful, but it is interesting. I just wanted to make sure that our local experts on renormalisation are really tuned up to apply it to gravity.

Also, 2+1 gravity, when treated as a perturbation theory around flat space, is non-renormalisable yet finite and unambiguously defined. Of course I don’t want to imply that what happens there applies to the 4D case, but this example should be checked against most of the statements made here, to make sure it doesn’t provide a counterexample.

Concerning the very nice work of Carlip and the potential ambiguities in 2+1 (did you know that the world revolved since then?), let’s recall that the ambiguities described in the work of Carlip refer to the quantisation of pure three-d gravity on the torus, a case that we can now do on the back of an envelope.

This case is far too simple, and especially singular (it is a non-stable Riemann surface), to be generic. In order to choose the right quantisation you have to show that it is possible to consistently quantise the theory on all types of background while respecting the symmetry. This means that you have to give the prescription for gluing amplitudes, and extend the quantisation to higher genus surfaces, including topology changes, while respecting the diffeomorphism symmetry of the theory.

The Ponzano-Regge model, properly understood, does exactly this job and picks one particular candidate available on the torus (the Maass operator of weight 1/2, if I remember correctly) as being consistent and anomaly free. I don’t know of any proof or evidence that another, inequivalent but consistent quantisation scheme exists.

I don’t have a proof either that the other possibilities are necessarily inconsistent; that is an interesting but difficult open problem. The bottom line is that there is only one full quantisation of three-d gravity known today where everything can be computed: the spin foam quantisation, which has also been shown to reduce to the ’t Hooft quantisation of the theory when the latter applies, and to the Hamiltonian Chern-Simons quantisation when the latter applies, namely if you restrict to the cylinder.
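For reference, the Ponzano-Regge partition function being discussed is, schematically (ignoring sign factors and the regularisation/gauge-fixing of the divergent sums, which is where the redundant gauge degrees of freedom mentioned earlier enter):

```latex
Z_{\mathrm{PR}}(\Delta) \;=\; \sum_{\{j_e\}} \;\prod_{e} (2j_e + 1) \;\prod_{t}
\begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \end{Bmatrix}
```

with one SU(2) spin $j_e$ per edge of the triangulation $\Delta$ and one 6j symbol per tetrahedron $t$, the six spins in each 6j symbol being those on the edges of that tetrahedron.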

Concerning those kappa-Minkowski spacetimes: the deformed symmetry group is the global symmetry group of these spacetimes, right? These gadgets are still manifolds whose tangent spaces carry a Minkowski metric, right?

Boreds:

No, I *am* asking about the Wilsonian effective action. You are right that, at 1-loop, this differs from the 1PI generating functional. The latter includes loops of light particles, whereas the former includes only loops of heavy particles (the stringy modes we are integrating out).

So, to extract the Wilsonian effective action, one needs to subtract off the contribution of loops of light particles (cut off, as necessary, at the String scale).

At tree-level, the Wilsonian and 1PI effective actions coincide. And, even there, one has nontrivial R^2 and R^4 corrections, due to the exchange of massive string states.

Note that the Wilsonian effective action determines, not just the vacuum, but also the *dynamics* of the light fields.

Lee claims (apparently) that, even in pure-gravity, the higher-order corrections to GR, encoded in the Wilsonian effective action, are different in different LQG vacua. (But not, for some reason, the values of G_N and the cosmological constant? Why don’t those differ from vacuum to vacuum, as well?)

“You’re right, string theory does (also) constrain the infinite number of parameters in the wilsonian action …”

OK, let me address that, so that, at least we can have two paradigms for how the infinite number of a-priori independent couplings get fixed (the first is a UV fixed point).

As just discussed, the low-energy theory has an infinite number of coupling constants (as any effective theory must). They do *not* approach a fixed point as we go to the UV. Instead, at the String scale, the effective field theory breaks down, and is replaced by the full String Theory.

For present purposes, the best way to describe that theory is to use Zwiebach’s covariant closed string field theory. That is also a theory with an infinite number of coupling constants. But they are not independently adjustable. In fact, they are fixed by a new principle. The BV Master Equation recursively determines all of the couplings.

Now, before Lee starts complaining, Zwiebach’s covariant closed string field theory is not manifestly background-independent. One needs to choose a nilpotent derivation, “Q”, of the *-algebra, and a choice of Q singles out a background. Moreover, we only know how to solve the BV Master Equation in perturbation theory, so that, too, cannot be done in a background-independent fashion.

But, since Lee says that the effective action extracted from spin-foam theories isn’t background-independent, either, I would say, “people in glass houses …”

To Boreds,

You “presume that if you try to perturbatively quantize GR around this background (kappa-Minkowski) that it will be non-renormalizable, and that there will be an infinite set of counter-terms.” To my knowledge it is not known whether this is true; perhaps someone else knows. Otherwise it would make a good research project. The reason it may be false is that some DSR theories incorporate maximum energies and momenta.

To Zorq, “Will the effective field theory, at least, be writable as a generally-covariant local functional of the metric?” Again, not known, but a good research project. One conjecture is it will be writable as a function of an energy dependent metric as in our work on rainbow gravity with Magueijo.

By the way, by UV finiteness we mean that you sum over the labels on intermediate states and, rather than diverging, as is the case in perturbative QFT where the labels are momenta and the sums are unbounded, the sums are convergent. The fact that this doesn’t coincide with textbook meanings in perturbative background dependent QFT is obvious, but again, our point is you have to learn a new kind of QFT.

If I can make a remark, the questions being raised are good ones; what is confusing is the adversarial tone. The discovery by Freidel and Livine that in 2+1 quantum gravity with matter there is an effective field theory on kappa-Minkowski is very recent. We don’t know that this is the case in 3+1, although we know how to try to show it and it is in progress. We don’t know whether there will be any version of effective field theory besides this one, which describes the excitations of a ground state with deformed Poincare invariance. Since the fundamental theory is not formulated in spacetime and spacetime geometry is emergent, it is not obvious or known whether there will be a diffeo invariant effective field theory of the kind which is assumed to exist in formal treatments of the path integral. Why isn’t it good, and exciting, when research indicates a new way of thinking about things that might succeed in a case where the older approaches have been unproductive?

To Juan: Thanks, indeed many of us see these as models which allow certain ideas and hypotheses to be precisely explored. We are open minded, and if you have a new approach, that is good, not bad. But at the same time, rather than rejecting a whole research program, you might try to take the attitude of learning from its successes. You can even contribute to it while keeping an open mind as to its ultimate success. This is, after all, science.

Thanks,

Lee

No, I am not joking that I consider Nicolai et al. experts, and probably leading experts, in LQG and all aspects that have and have not been achieved by LQG. And if you ask me whether they understand it better than some older colleagues of them in the field, my answer is Probably yes.

Zorq—yes, it is clear what you mean now.

I just wanted to check you weren’t equating the 1PI corrections with the wilsonian action.

Sorry Lee, I didn’t intend for that to sound adversarial. I would be interested if someone can discuss (as you suggest) effective field theories on kappa-Minkowski; I don’t have much intuition about what the differences would be.

L, congratulations on your results of the past year and best wishes for what is in progress.

I did not expect to see you posting here at Peter’s. Thanks for these helpful comments!

Be well.

Who

“Concerning the very nice work of Carlip and the potential ambiguities in 2+1 (did you know that the world revolved since then?)”

You’ll have to take that up with Carlip, then; I’m certainly not an expert on the subject. From p. 42 of this from ’04:

Lee said:

“The fact that this doesn’t coincide with textbook meanings in perturbative background dependent QFT…”

Aside from my last comment to Boreds, *nothing* I have said in any of my comments in this thread have relied on perturbation theory. The Wilsonian Renormalization Group is a nonperturbative concept.

“‘Will the effective field theory, at least, be writable as a generally-covariant local functional of the metric?’ Again, not known, but a good research project.”

I’m surprised at this response. If the answer were “no,” then it’s clear that one ought to dismiss the whole approach for having violated general coordinate invariance (whatever protestations to the contrary).

“The discovery by Freidel and Livine that in 2+1 quantum gravity with matter there is an effective field theory on kappa-Minkowski is very recent. We don’t know that this is the case in 3+1, although we know how to try to show it and it is in progress.”

Integrating out gravity, to obtain an “effective theory” of just matter makes sense (barely) in 2+1 dimensions. In 3+1 dimensions, where gravity has local massless degrees of freedom, attempting to integrate out gravity (or any other massless degrees of freedom) is daft. The result will be a nonlocal mess.

Why would you expect anything nearly as “simple” as kappa-Minkowski?

Lee and L.,

it is encouraging that the quantization of gravity in 3D seems possible, but as far as I know there are enough open questions (e.g. whether length is quantized), and it is not clear if different approaches give the same result (but the earth keeps revolving).

We also know some results in 4D for the strong-coupling limit and it would be nice to compare e.g. LQG with earlier results to see if different approaches give the same result.

Wolfgang

Since you speculate about future results, the next development in 4D that I am expecting is reference [17] in hep-th/0512113

where they say

—quote Freidel-Livine—

We have shown how to write the 3d Feynman evaluations as expectation values of certain observables of a topological (abelian) theory. This (abelian) theory was then identified as a particular limit of the quantum gravity theory. This result suggests that

the Feynman evaluations of 4d QFT could be reformulated as expectation values of a 4d topological model. This is supported by the fact that 4d gravity becomes topological in the G → 0 limit [16]. This would be interpreted as the zeroth order of the spinfoam model for 4d quantum gravity [17].…—endquote—

Reference [17] is listed as “in preparation”. It seems to me that a lot depends on how that work goes. I am hopeful.

Aaron,

as far as I recall from last time we discussed this on Jacques’ blog, those LQG uniqueness theorems pertain to the kinematical Hilbert space only.

Who,

I am looking forward to this paper. Just one more time I would like to make myself clear: I agree with Lubos that different formalisms should give the same results for the same physics; 3D gravity and strong-coupling gravity are cases where this can be checked (by the way I would be interested to also see how string theory compares!).

So far different formalisms seem to give different results and some basic questions are un-answered.

This is a follow-up to Peter’s update; see the last section of gr-qc/0601095:

1.3.1 The UV problem in the background independent context (p. 19-21).

Lee Smolin said:

“The inner product of Fock space depends on the background metric, hence this quantization will never arise in a background independent formalism.”

This is a serious objection, of course. The absence of an invariant inner product was listed as one of the four most important open problems in the conclusion of hep-th/0504020. It now seems that this problem is solved as a by-product of eliminating the overcounting of states in the classical harmonic oscillator, reported in hep-th/0411028. However, this is work in progress and success is not guaranteed.

Note however that a diffeomorphism-invariant inner product can sometimes be defined without reference to a metric: for half-densities and, if the number of spacetime dimensions n is even, for n/2-forms. But this is not very relevant, of course.

Does kappa-Poincare instead of Poincare imply something for the Coleman-Mandula theorem?