An interesting paper appeared on the arXiv yesterday, by Hermann Nicolai and Kasper Peeters, entitled Loop and spin foam quantum gravity: a brief guide for beginners. It includes some of the same material as an earlier paper Loop quantum gravity: an outside view that they wrote with Marija Zamaklar.

Nicolai and Peeters (as well as Zamaklar) are string theorists, and given the extremely heated controversy of the last few years between the LQG and string theory communities over who has the most promising approach to quantum gravity, one wonders how even-handed their discussion is likely to be. They identify various technical problems with the different approaches to finding a non-perturbative theory of quantum gravity that are often referred to as “LQG”. I’m not at all an expert in this subject, so I have no idea whether they have got these right, or whether the problems they identify are as serious as they claim. Their main point, which they make repeatedly, is that

*… the need to fix infinitely many couplings in the perturbative approach, and the appearance of infinitely many ambiguities in non-perturbative approaches are really just different sides of the same coin. In other words, non-perturbative approaches, even if they do not ‘see’ any UV divergences, cannot be relieved of the duty to explain in detail how the above divergences ‘disappear’, be it through cancellations or some other mechanism.*

What they are claiming seems to be that LQG still has not dealt with the problems raised by the non-renormalizability of quantum GR. They don’t explicitly make the claim that string theory has dealt with these problems, but the structure of their argument is such as to imply that this is the case, or that at least string theory is a more promising way of doing so. Their one explicit reference to string theory doesn’t really inspire confidence in me that they are being even-handed:

*The abundance of ‘consistent’ Hamiltonians and spin foam models … is sometimes compared to the vacuum degeneracy problem of string theory, but the latter concerns different solutions of the same theory, as there is no dispute as to what (perturbative) string theory is. However, the concomitant lack of predictivity is obviously a problem for both approaches.*

While they are being very hard on LQG for difficulties coming from not being able to show that certain specific constructions have certain specific properties, they are happy to state as incontrovertible fact something about string theory which is not exactly mathematically rigorous (the formulation of string theory requires picking a background, causing problems with the idea that all backgrounds come from the “same” theory, and let’s not even get into the problems at more than two loops).

The article is listed as a contribution to “An assessment of current paradigms in theoretical physics”, and I’m curious what that is. Does it contain an equally tough-minded evaluation of the problems of string theory?

It should be emphasized again that I’m no expert on this. I’m curious to hear from experts what they think of this article. Well-informed comments about this are welcome, anti-string or anti-LQG rants will be deleted.

**Update:**

There’s a new expository article about spin-foams by Perez out this evening.


Just to repeat it in plain English: when Peter says that string theory is not predictive, he refers to the fact that there is likely a very large number of possible low energy effective theories (say in terms of particles and couplings). This is in contrast to LQG, which according to the experts can be coupled to _any_ particle content (possibly even an anomalous one) with any set of couplings. But this is not the problem that Nicolai and Peeters address.

Robert,

indeed. The remarkable thing is that the landscape problem is a problem other theories currently don’t see, not because they somehow solve it, but because they are not understood well enough to even formulate this problem.

For some reason the existence of many solutions to string theory is sometimes referred to as a disadvantage relative to nonperturbative approaches. But, as I tried to say elsewhere on this blog, there is no indication that these non-perturbative approaches, once they work and are understood, will turn out to have a smaller number of configurations. Why should they? We don’t expect that of any theory.

In LQG, many different Hamiltonians/spin foam models presumably lead to the same low energy gravity – isn’t this something we know about from Wilsonian renormalization ideas? – while in string theory the same high energy theory has 10^500 low energy worlds.
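The Wilsonian picture being invoked here can be sketched schematically (this is textbook scaling near a fixed point, not a result specific to LQG or string theory):

```latex
% Schematic Wilsonian flow of the coupling g_i of an operator with
% scaling dimension \Delta_i in d spacetime dimensions, cutoff \Lambda:
g_i(\mu) \;\sim\; g_i(\Lambda)\left(\frac{\mu}{\Lambda}\right)^{\Delta_i - d},
\qquad \Delta_i > d \quad (\text{irrelevant operator}).
```

Irrelevant couplings die off as μ/Λ → 0, so microscopic theories that differ only in irrelevant couplings flow to the same low energy physics, which is why many different Hamiltonians could in principle share one low energy limit.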

Robert,

I wasn’t the one writing here about the lack of predictivity of string theory; that was a quote from Nicolai and Peeters. But while LQG lacks predictivity about particle physics (which is why I’ve never tried to become an expert in it), the claim has always been that it can be consistently coupled to the standard model, producing a unified theory of gravity and particle physics. Such a theory would be predictive about quantum gravitational effects, although it is unclear that one has any hope of ever measuring these. Nicolai and Peeters are suggesting that this claim is not true, that LQG inherently contains ambiguities even in the purely gravitational sector. About that it would be interesting to hear from experts who don’t have a pro-string axe to grind.

Urs,

For the 10^500’th time, the problem with the landscape is not the number of configurations, it’s that in such a scenario you can’t predict anything at all. If that’s really the way the world is, and any unified theory will ultimately run into this problem, thinking about unified theories is completely pointless and the part of theoretical physics that studies these should just be shut down. Some of us would like to see some actual evidence for this before agreeing to it.

Arun,

you are confusing theories and their solutions.

That is: the same low energy gravity theory.

These are solutions to such a theory.

Peter said “… Nicolai and Peeters [ the authors of hep-th/0601129 ] … are string theorists … it would be interesting to hear from experts who don’t have a pro-string axe to grind. …”.

hep-th/0601129 states that it is a “Contributed article to “An assessment of current paradigms in theoretical physics”.”.

What is “An assessment of current paradigms in theoretical physics” ?

Is it a conference or some sort of program that might eventually lead to redistribution of funding and jobs for theoretical physics ?

Are there other “Contributed article”s that might present other points of view, including those of “experts who don’t have a pro-string axe to grind” ?

Tony Smith

http://www.valdostamuseum.org/hamsmith/

Hi,

I am not a LQG theorist either, but I am quite close to these guys, attending their meetings etc. So while I could not really comment on detailed technical questions, I am rather well aware of the general situation.

The first paper of Nicolai, Peeters, and Zamaklar was widely considered by experts to be rather unfair; this one I think is much better in this respect. What they present is an outside view, with obvious problems spelled out. As such it is much more important for the LQG community than for others: it shows what should be urgently addressed/explained.

The most important one is of course that in both LQG and spin foams the “right” model is not known. I do not think this is a matter of ambiguities. It is not that there are many consistent models to choose between; we are just not able to perform the most basic consistency checks (like closure of the constraint algebra in LQG) on any of them (if I understand correctly).

But I do not see what this may possibly have to do with the non-renormalizability of perturbative QG, which is what N&P claim.

For me the major problem with LQG is whether the quantization of area and volume is the real thing, or just an artifact of the formalism. This is the main real result of LQG, and it is used explicitly, for example, in cosmological and black hole applications.

I would generally agree with N&P that LQG (by which I mean the loop canonical quantization of gravity, with its particular Hilbert space, etc.) has many more open problems than solved ones and real solid predictions.

I think however that something extremely interesting happened last year with the Freidel and Starodubtsev paper “Quantum gravity in terms of topological observables,” arXiv:hep-th/0501191 (built on some earlier ideas of Lee Smolin). What they managed to show was that gravity in 4d has the structure of a TFT + perturbations, with the TFT part very similar to 3d gravity and the coupling constant dimensionless and very small. Since gravity in 3d is reasonably well understood, their formalism raises the hope that gravity can be perturbatively quantized, with a manifestly diff invariant perturbative expansion. (Of course gravitons would be extremely hard to get in this formalism, but who – apart from string theorists – cares about gravitons!)

Urs,

Yes, I was confusing two different things. Nevertheless, the nonuniqueness of the LQG Hamiltonians that are consistent with the low energy theory is to be expected on general grounds, and should not impact the ability to predict (if any!!!) in the experimentally accessible regime – it is just a lot of irrelevant operators in the effective low energy theory.

-Arun

JKG, I’m similarly studying in (very) close proximity to LQG, and I concur with most of what you say. However, at Loops05 Perez presented first work towards a deeper study of the ambiguities; I think he said there was a relation to non-renormalizability.

Furthermore, and even more excitingly, Freidel has just gotten a further result: the no gravity limit of 2+1 gravity (not with G -> 0 but with the gravitational degrees of freedom integrated out) is a non commutative field theory.

Also, closure of the constraint algebra has been achieved (as a mathematically rigorous result) in a nonstandard way through Thiemann’s Master Constraint Program.

Giesel, one of Thiemann’s students, also reported on a consistency check on the area operator regularization.

So the list of problems there reads to me rather like “what people are currently working on and have been worried about for a while”. These problems are all being systematically attacked.

this will sound like nit-picking

overall it’s a GREAT paper, very glad to see a broad spectrum of active non-string lines of QG research being discussed, including CDT and Reuter’s QEG, and of course spinfoams (omitted from Nicolai’s paper of a year ago). BUT

look at this omission: Nicolai and Peeters give three citations to work of Laurent Freidel (their references [47], [71] and [73]), so they came within an inch of mentioning this other one by Freidel:

http://arxiv.org/abs/hep-th/0502106

Ponzano-Regge model revisited III: Feynman diagrams and Effective field theory, by Laurent Freidel and Etera R. Livine

“We study the no gravity limit G_N -> 0 of the Ponzano-Regge amplitudes with massive particles and show that we recover in this limit Feynman graph amplitudes (with Hadamard propagator) expressed as an abelian spin foam model. We show how the G_N expansion of the Ponzano-Regge amplitudes can be resummed. This leads to the conclusion that the dynamics of quantum particles coupled to quantum 3d gravity can be expressed in terms of an effective new non commutative field theory which respects the principles of doubly special relativity. We discuss the construction of Lorentzian spin foam models including Feynman propagators.”

BTW the fact that they found the model must use DSR connects with the work of Jerzy Kowalski-Glikman, who just posted here.

Arun,

see http://golem.ph.utexas.edu/~distler/blog/archives/000639.html for a reply to your argument.

Who says:

“so they came within an inch of mentioning this other one by Freidel” [and Livine].

This is an exciting paper, since it shows that the semiclassical limit of 3d quantum gravity is DSR, and not Special Relativity. This makes me even more excited about the Freidel and Starodubtsev story, because the proof that DSR is a limit of quantum gravity now seems to be within reach.

In my comment above I forgot to say that it seems to me that nobody has even the slightest clue as to how to get the semiclassical limit from “canonical” LQG (this was long stressed by Lee). So another consistency check of possible Hamiltonians, provided by the requirement that they lead to a consistent classical limit, seems not to be available either.

Hi Jerzy,

I like your work on DSR.

Maybe someone should email Nicolai and suggest he add a reference to

http://arxiv.org/abs/hep-th/0502106

before his article goes to publication.

the trouble is when you do a review paper there will ALWAYS be someone who claims something important has been omitted, but

perhaps in this case the work is important enough to warrant

a message.

Freidel just co-authored a paper with Shahn Majid. I don’t understand the significance of that paper (way too technical for me). Can you please give a clue as to its bearing on QG issues?

Urs, the existence of a non-Gaussian UV fixed point might or might not hold, but what is not clear to me is why the infinite constants of the perturbative expansion should necessarily reappear in the nonperturbative framework.

We don’t know if the perturbatively treated effective field theory of the Einstein-Hilbert Lagrangian plus higher order terms is a valid perturbative expansion of the nonperturbative theory; this is not certain by far. In particular, no field theory in the LQG kinematical Hilbert space has infinities. Wilsonian renormalisation logic suggests that it will look like a renormalizable theory at low energies, but we don’t have a good implementation of Wilsonian renormalization in the context of LQG yet. The old tools do NOT carry over flawlessly.

In particular, if you consider the strong gravity regime, the causal structure of the theory changes; this is not captured in the graviton expansion around a fixed background, AFAIK.

In a similar vein, we also don’t know whether the couplings/ambiguities are fixed for nonperturbative reasons (e.g. by the state space of the full theory, see the recent work by Perez). The ambiguities appear in a rather nicely ordered way in the regularisation of the Hamiltonian constraint; it’s not unreasonable to assume that for many choices you get a trivial phase space.

> In particular, if you consider the strong gravity regime, the causal structure of the theory changes; this is not captured in the graviton expansion around a fixed background, AFAIK.

Many are not aware that there is an exact solution of quantum gravity in the strong-coupling limit. Maybe this ref. helps to clear up some of the issues discussed here:

http://prola.aps.org/abstract/PRD/v26/i10/p2645_1

Who said

“Freidel just co-authored a paper with Shahn Majid. I don’t understand the significance of that paper—-way too technical for me. Can you please give a clue as to its bearing on QG issues? ”

I do not know whether Peter Woit would mind us going slightly away from the main theme, so just a couple of sentences.

I have not had time to study this paper yet. As far as I understand, it just refines the rather vague mathematics of Freidel & Livine. On the other hand, I suspect that given the knowledge and insight of both Laurent and Shahn, there might be something really exciting there.

Nicolai and Peeters, in hep-th/0601129, seem to me to be doing a hatchet job on alternatives to conventional superstring theory.

After dismissing conventional LQG and Regge and dynamical triangulation approaches on the grounds of Hamiltonian difficulties, they proceed to discuss “a fixed spin foam”.

Since (as Nicolai and Peeters admit) “a fixed spin foam … differs considerably from both the Regge and dynamical triangulation approaches”,

they do not attack it on the basis of Hamiltonian difficulties.

However,

they do attack “a fixed spin foam” by saying that it is afflicted by

“… the true problem of quantum gravity, which lies in the ambiguities associated with an infinite number of non-renormalizable UV divergences …”

and

that “a fixed spin foam … cannot be relieved of the duty to explain IN DETAIL how the above divergences ‘disappear’, be it through cancellations or some other mechanism …”.

However, as Peter discussed in his blog entry “Is N=8 Supergravity Finite?” at

http://www.math.columbia.edu/~woit/wordpress/?p=268 ,

“… If Bern is right, N=8 supergravity may be renormalizable because of a combination of supersymmetry and twistor geometry …

Yes, it’s hard to get the standard model out of N=8 supergravity, which is one reason people gave up on it. But this may be a much more fruitful starting point than string theory … working with a well-defined theory instead of a vague hope that a theory exists might also be a good idea. …”.

Even if N=8 Supergravity itself might be flawed with respect to getting the Standard Model,

it might be that some other structure with similar symmetries related to UV-finiteness could be used for nodes of a spin foam that could give the Standard Model.

It might even be that such a spin foam should be constructed in a dimension higher than 4, producing the Standard Model through a Kaluza-Klein process.

I wonder whether Nicolai and Peeters would admit that such a possibility is a reasonable alternative to both conventional superstring theory and the approaches that they dissed in hep-th/0601129, and therefore should be given funding and jobs,

or

would they point to their emphatic “duty to explain IN DETAIL how the above divergences ‘disappear’, be it through cancellations or some other mechanism”

and

thus put anyone proposing to work on such a possibility in the Catch-22 position of having to do the cancellation calculations in order to get funds and jobs to work on the cancellation calculations.

As to the severity of their “IN DETAIL” requirement, consider Peter Woit’s remark about N=8 Supergravity:

“… divergences in N=8 supergravity don’t occur until at least 5 loops. Ever tried to do a 5-loop calculation in N=8 supergravity? …”.

Tony Smith

http://www.valdostamuseum.org/hamsmith/

Hi Peter, thanks for mentioning this. After a quick look I can say that there are some statements they make that I agree with (such as about the difficulties of relating the spin foam to the Hamiltonian constraint theory) and others with which I disagree, such as their statements about uv finiteness.

In particular, the point you quote, “the need to fix infinitely many couplings in the perturbative approach….”, seems to disagree with old, well understood results. (Not to mention that they don’t seem to make a detailed argument for their claim.) In fact, there is a well understood and detailed explanation for how the theory is cut off. I won’t repeat the argument here (see hep-th/0408048) but the key points are that, as a result of the finiteness of area and volume, one can show that the Planck length cannot suffer an infinite renormalization if the theory is to reproduce the finiteness of black hole entropy, and the appearance of gravitons in perturbations around weave states. Thus, the theory is uv cut off, and the divergences do not have to cancel, as they were never there in the first place. The key issue is then how this finite cutoff length is compatible with the symmetry of the ground state, which leads to the expectation that the symmetry is DSR.

But honest criticism based on detailed study is always welcome, and I will read the paper carefully before seeing if I have any more substantial response to make.

In case anyone is interested, I am teaching a course on LQG, and videos of the lectures will be available as they are given at http://streamer.perimeterinstitute.ca:81/mediasite/viewer/FrontEnd/Front.aspx?&shouldResize=False

Thanks,

Lee

This is such an absurd comment, Peter. For example, your ideas about “experts”. Nicolai himself is a much bigger expert in loop quantum gravity than anyone whom you call an “expert”. Their previous paper is, for example, by far the most cited loop quantum gravity paper of 2005.

Lubos I see is still arguing by counting footnotes.

“find author einstein”

“read author einstein”

-drl

The fact that you’ve cut off the theory doesn’t obviate the need to understand the infinite series of couplings compatible with diffeomorphism invariance (and background independence, for that matter). Even if you tune them all to zero at some scale, they’ll show up when you flow. Nicolai and Peeters’ claim, it seems to me, is that it is exactly this ambiguity that is at least a part of the infinite ambiguity in the choice of an LQG Hamiltonian.
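The infinite series of couplings referred to here is, schematically, the one appearing in the gravitational effective action (this is standard effective field theory counting, not a statement about any particular LQG Hamiltonian):

```latex
% Derivative expansion of the diffeomorphism-invariant effective action:
% every term allowed by the symmetry appears, with dimensionless
% coefficients c_i that the underlying theory must fix or render irrelevant
S_{\rm eff} \;=\; \frac{1}{16\pi G_N}\int d^4x\,\sqrt{-g}\,
\Big( R - 2\Lambda_{\rm cc}
 + c_1\,\ell_P^2\, R^2
 + c_2\,\ell_P^2\, R_{\mu\nu}R^{\mu\nu}
 + \mathcal{O}(\ell_P^4) \Big),
```

with an infinite tower of coefficients c_i; tuning them all to zero at one scale does not keep them zero along the RG flow.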

Myself, I still am rather skeptical about the LQG-like quantization procedure given how far it is from how we quantize, well, everything else, but that’s a different story.

It’s also troubling, because the second you play with your matter content, all those infinite couplings that you’ve cut off would affect whether or not you can even *find* the correct and fundamental master interacting field theory.

It’s just another statement about the RG flow; I just don’t see how you can escape from the statement that LQG needs to find a nontrivial fixed point at some stage to make sense.

On reading NP I am grateful for the hard work that they put in, but I end up feeling that they still miss the point, because they have prejudices about what a quantum theory of gravity should do, coming from old expectations. They appear to evaluate LQG and spin foam models as if they were proposed as a unique theory, a proposal for a final theory of everything. This is in my view a misunderstanding. One should understand these as a large set of models for studying background independent and diffeo invariant QFT’s. These are based on quantization of a set of classical field theories which are constrained topological field theories. There are three key claims: 1) these theories exist, rigorously, i.e. there are uv finite diffeo invariant QFT’s based on quantization of constrained TQFT’s, 2) there is a common mathematical and conceptual language and some calculational tools which are useful to study such models, and 3) there are some common generic consequences of these models, which are relevant for physics.

Nothing NP say questions these key claims. Unfortunately, they do not mention key papers which support them, such as the uniqueness theorems (gr-qc/0504147, math-ph/0407006) which show the necessity of the quantization LQG uses. And while they mention the non-separability of the kinematical Hilbert space, they fail to mention the separability of the diffeomorphism invariant Hilbert space (gr-qc/0403047). It is unfortunate that they omit reference to such key results, which resolve issues they mention.

A second misunderstanding concerns uv divergences. NP do not discuss the results on black hole entropy, so they miss the point that the finiteness of the black hole entropy fixes the ratio of the bare and low energy Planck lengths to be a finite number of order one. Calculations on a class of semiclassical states they do not discuss (the weave states) lead to the same conclusion (A. Ashtekar, C. Rovelli, L. Smolin, “Weaving a classical metric with quantum threads,” Phys. Rev. Lett. 69 (1992) 237). So there can be no infinite refinement of spin foams and no infinite renormalization. These theories are uv finite, period. This is one of the generic features I mentioned.

Thus, their main claim, that the existence of many LQG or spin foam models is the same problem as that of uv divergences, is just manifestly untrue. The freedom to specify spin foam amplitudes does not map onto the freedom to specify parameters of a perturbatively non-renormalizable theory. For one thing, few if any spin foam models are likely to have a low energy limit which is Poincare invariant, a property shared by all perturbative QFT’s, renormalizable or not, defined in Minkowski spacetime. In fact, we know from recent results that in 2+1 none do: the low energy limit of 2+1 gravity coupled to arbitrary matter is DSR. So their argument is false.

They do get a number of things right. The following are open issues, much discussed in the literature: 1) whether there is any regularization of the Hamiltonian constraint that leads to exchange moves, 2) whether, thus, there are any links between the spin foam amplitudes and Hamiltonian evolution, 3) whether the sum over spin foam diagrams is convergent or, more likely, Borel resummable (although they miss that this has been proven for 2+1 models, hep-th/0211026). I don’t agree with all the details of their discussion of these issues, but these certainly are open issues.

NP seem to argue as if one has to prove a QFT rigorously exists in order to do physics with it, by which standard we would believe no prediction of the standard model. They mention that there are no rigorously constructed semiclassical states which are exact solutions to the dynamics, but this is the case in most QFT’s. This does not prevent us from writing down and deriving predictions from heuristic semiclassical states (hep-th/0501091), or from constructing reduced models to describe black holes or cosmologies and likewise deriving predictions (astro-ph/0411124). Nor does it prevent Rovelli et al from computing the graviton propagator and getting the right answer, showing there are gravitons and Newtonian gravity in the theory (gr-qc/0502036).

But, someone may ask, if LQG is the right general direction, shouldn’t there be a unique theory that is claimed to be the theory of nature? Certainly, but should the program be dismissed because no claim has yet been made that this theory has been found? To narrow in on the right theory there are further considerations, all under study:

-Not every spin foam model is ir finite.

-Not every spin foam model is likely to have a good low energy limit.

-The right theory should have the standard model of particle physics in it.

In addition, it must be stressed that there can in physics be generic consequences of classes of theories, leading to experimental predictions. Here are some historical examples: light bending, weak vector bosons, confinement, the principle of inertia, the existence of black holes. All of these observable features of nature are predicted by large classes of theories, which can as a whole be confirmed or falsified, even in the absence of knowing which precise theory describes nature, and prior to proving the mathematical consistency of the theory. LQG predicts a number of such generic features: discreteness of quantum geometry, horizon entropy, removal of all spacelike singularities, and I believe it will soon predict more, including DSR and the emergence of matter degrees of freedom.

One reason for this is of course that most of the parameters in such classes of theories are irrelevant in the RG sense, and do not influence large scale predictions. Since we know the theory is uv finite, this does not affect existence. The lack of a unique uv theory does not prevent us from testing predictions of QFT in detail, and it is likely to be the same for quantum gravity. The old idea that consistency would lead to a unique uv theory that would give unique low energy predictions was seductive, but given the landscape, it is an idea that is unsupported by the actual results.

Having said all this, I hope that NP will put their hard won expertise to work, and perhaps get their hands dirty and do some research in the area.

Sorry to go on so long,

Thanks, Lee

ps to Haelfix, sure, why not work on RG flow in LQG?

Dear Lunsford,

the database does not claim to cover early 20th century physics. Nevertheless, you will still see that Einstein has 33 papers in it – which may in fact be close to the total – and they have over 1150 citations, which is more than some of us have.

I am certainly using different criteria, but sorry to say, having at least some well-known papers is a necessary condition for someone to be expected to have something relevant to say about science.

All criteria I can imagine imply that Nicolai and Peeters are LQG experts.

Best wishes

Lubos

Dear Lee,

How are you? Sorry to say, but I don’t quite understand how Haelfix can work on something that violates the laws of physics. If you had said “write an upbeat paper that combines the buzzwords from LQG and RG”, then it would be a realistic task. And the paper would make no sense, much like when one combines LQG and quantum computing or noiseless information theory or any two decoupled pieces of jargon. There are already many papers of this type around.

In physics, it is impossible to make progress just by combining two random buzzwords.

Haelfix has, on the contrary, explained why he already knows that certain things cannot work and why. LQG is not a local field theory, and by its very construction and its discrete philosophy, it does not have a UV fixed point (because distances below 0.1 Planck lengths do not exist). Consequently, the extreme UV physics is not determined (in a theory with a UV fixed point, it could be determined by conformal symmetry). The physics starts at the Planck scale, if ever, and because there is no organizing principle other than the existence of a fundamental metric tensor (a wrong assumption for quantum gravity, by the way), it is clear that all higher-derivative and other terms must be considered. That’s not surprising, because they are all of the same order at the Planck scale.

Because there are no “more fundamental” degrees of freedom and no other organizing principle, the continuous coefficients of all these couplings can be anything, rendering LQG infinitely unpredictive – exactly as unpredictive as the perturbative nonrenormalizable GR written as effective field theory.

Maybe you meant that LQG should try to obtain a long-distance limit. Hundreds of people have tried, have they not? I find it manifest today that no one will ever find one, because it does not exist. Gravity is not lattice QCD, and the failure to choose the correct UV starting point not only destroys the “correctness” of physics, it destroys the very existence of low-energy physics. In string theory, we have a toy model of this possible problem, because in the UV one must choose correctly (or even fine-tune by discrete choices) the vacuum energy for a large space to exist in the first place.

In LQG, you also have the cosmological constant problem, but you additionally have infinitely many similar problems associated with ever higher-derivative terms that are expected to crumple the space altogether, unless you fine-tune infinitely many terms in the Hamiltonian constraint. The task you are trying to give to Haelfix is not well-defined, and even if the definition were completed, it would be guaranteed to fail.

“One reason for this is of course that most of the parameters in such classes of such theories are irrelevant in the RG sense, and do not influence large scale predictions. Since we know the theory is uv finite this does not affect existence. The lack of a uv unique theory does not…”

If you read these lines of yours rationally, they’re equivalent to saying that we know the classical limit of general relativity (given by the Einstein-Hilbert term). But the whole point of *quantum* gravity is that we can also say something about higher energies and/or precision experiments at low energies that get loop corrections from quantum phenomena. You can’t do it. Moreover, it is not true that you can even derive the leading term of classical general relativity.

What you’re saying, Lee, is completely equivalent to saying that nonrenormalizable field theories exist as quantum theories, and it’s just manifestly wrong. Whether or not the numbers “look” finite is completely irrelevant. If we choose a cutoff of any sort we like in quantum field theory, we will also get finite numbers. Finiteness is not the problem. The problem is the presence of infinitely many undetermined parameters.

Another approach that can never lead to realistic science is the permanent promotion of complete rubbish papers, being satisfied by the fact that they were written. Rovelli’s “graviton propagator” is an example. He incorrectly assumes that physics is dominated by nearly flat space, and then he “derives” that physics is dominated by nearly flat space. The reasoning is completely circular and has nothing to do with LQG whatsoever. Even if there were some truth in LQG, one could never start to make progress in revealing it before the papers are read rationally and critically and before patently false papers start to be neglected.

All the best

Lubos

I love the circularity of Lee’s argument that LQG must be a finite theory because otherwise it gets the BH entropy wrong.

But, even accepting the statement that the low-energy G_N and the lattice G_N are related by a finite multiplicative factor does not demonstrate what he thinks it demonstrates. He has imposed a cutoff at the Planck length. G_N is itself of order l_P^2. So an ordinary quadratically-divergent correction to G_N just looks like a finite multiplicative renormalization. Only if he attempted to take the cutoff to zero (which he doesn’t) would he be able to distinguish the two.
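A toy numerical version of this point (a sketch only; the schematic one-loop form and the O(1) coefficient `c` are invented for illustration, not taken from any LQG calculation):

```python
# With the cutoff pinned at the Planck scale Lambda = 1/l_P, and G ~ l_P^2,
# a quadratically divergent correction delta(1/G) ~ c*Lambda^2 is
# indistinguishable from a finite multiplicative renormalization.
# Only by varying the cutoff does the divergence show up.

c = 0.3  # hypothetical scheme-dependent O(1) coefficient

def G_low_energy(G_bare, Lambda):
    # schematic one-loop form: 1/G_eff = 1/G_bare + c * Lambda^2
    return 1.0 / (1.0 / G_bare + c * Lambda**2)

l_P = 1.0
G_bare = l_P**2

# cutoff fixed at the Planck scale: just a finite rescaling
print(G_low_energy(G_bare, 1.0 / l_P) / G_bare)   # -> 1/(1+c), finite

# raising the cutoff: the "finite factor" runs away quadratically
for Lam in [1.0, 10.0, 100.0]:
    print(Lam, G_low_energy(G_bare, Lam) / G_bare)
```

With the cutoff held fixed, the correction is just the finite factor 1/(1+c); the divergence is only visible as Lambda is varied, which is exactly the distinction being made above.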

Besides, there’s also the infinite number of other diffeomorphism-invariant, background-independent couplings to deal with. Saying that almost all values of those couplings will not lead to sensible low-energy physics (one of his arguments) is not the same as saying they are IR-irrelevant couplings, so that low-energy physics is independent of what values you choose (another one of his arguments).

Lee and Lubos:

I clearly see your discussions forming a Loop of Quantum Loops, since they are not going anywhere. Each of you accuses the other camp of failing to make predictions. Let’s set the score fair and square: both have failed to make any meaningful prediction so far. So let’s start a fair and equal debate, granting that neither one is better than the other so far in predictability.

Lee claims that LQG has made an experimentally verifiable prediction by predicting dispersion of the speed of light (the speed of light is not exactly constant, but has a small variation at high energies, depending on the energy). I disagree. Predictions need to be definite, quantitative and precise. Tiny dispersions may be observed, or maybe not, one or the other; it’s a 50/50 chance if you happen to pick the right one of the two. Unless you have predicted the exact amount of dispersion, quantitatively, and shown that the experimental result matches your numerical calculation precisely, it really can’t be counted as a credible verifiable prediction just because there happens to be a small dispersion. Especially considering that the space of the universe is not an exact vacuum, but contains some intervening material at extremely low concentration, so dispersion of some sort due to condensed-matter physics MAY somehow occur.

The superstring theory camp is equally guilty of making such ambiguous “predictions” that amount to nothing more than chasing a shadow. One case being the CSL-1 business. Lubos made more than a few enthusiastic cosmic-string hypes about those two fuzzy little dots in a telescope image, until Hubble showed they were nothing more than two tiny little galaxies far away. Had Hubble been slightly less precise as an instrument, they would all now be celebrating with champagne that superstring theory had made a prediction verified by observation. Be a little bit more specific, precise and exact when you make predictions, OK?

There is a fundamental philosophical difference between Lee and Lubos: Lee believes the universe is inherently discrete, but Lubos believes it is inherently continuous, beyond the discrete phenomena that we observe in QM. Maybe we can have more discussions between the discrete picture and the continuous one. That would be a more interesting discussion.

Quantoken

I liked Zorq’s points.

The Immirzi parameter can indeed be considered as a multiplicative renormalization of Newton’s constant between the Planck scale and low energies. By construction, it is finite as long as G_{Newton} stops running at very long distances – which is the case if classical GR is reproduced. The finiteness is no consistency check; the finiteness was put in by construction. The precise value is unknown, and there are contradictory results for it, which is why Lee will never tell us a trustworthy value of the Immirzi parameter.

However, there are still constants at low energies – starting from Newton’s constant – that do depend on the exact higher-derivative terms (or their equivalents, in whatever language we use). If something is IR-irrelevant, it does not mean that the numerical values in the IR won’t be affected by these irrelevant terms. They will be affected because the IR-relevant and marginal parameters will run as a function of the irrelevant couplings.

More generally, the obsession with finiteness obscures the fact that the finiteness of some intermediate results is not really difficult to achieve. Any regularization or any scheme of cutoffs will do. The problem is to make a theory whose predictions are independent of the cutoff – which is a part of renormalization governed by the rules of renormalization group (which is NOT the same thing as regularization).

If the predictions depend on the cutoff, then we have just parameterized our ignorance about the singular physics in terms of the details of the cutoff.

In renormalizable QFTs, we obtain physics independent of the cutoff because we can organize the terms into a small number of IR-relevant and IR-marginal operators and argue that the irrelevant ones are suppressed by a higher energy scale. In perturbative string theory, we may obtain the same or better control because the fundamental theory is really living on the worldsheet and it must be conformal which is a strong constraint (and somewhat analogous to the constraint of renormalizability of QFTs mentioned previously).
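The bookkeeping behind “relevant/marginal/irrelevant” is plain dimensional analysis and can be sketched in a few lines (scalar operators in d = 4 only; this is the textbook classification, not anything specific to the theories debated above):

```python
# In d spacetime dimensions, a coupling multiplying an operator of mass
# dimension Delta itself has mass dimension d - Delta.  Positive coupling
# dimension -> relevant (grows in the IR), zero -> marginal, negative ->
# irrelevant (suppressed by powers of the high scale).

def classify(delta, d=4):
    coupling_dim = d - delta
    if coupling_dim > 0:
        return "relevant"
    if coupling_dim == 0:
        return "marginal"
    return "irrelevant"

# scalar-field examples in d = 4
operators = {
    "phi^2 (mass term)": 2,
    "(d phi)^2 (kinetic term)": 4,
    "phi^4": 4,
    "phi^6": 6,
}

for name, delta in operators.items():
    print(f"{name}: Delta = {delta}, coupling dim = {4 - delta}, {classify(delta)}")
```

The “small number of IR-relevant and IR-marginal operators” in the paragraph above is the finite list with coupling dimension >= 0; everything else is suppressed by the cutoff scale.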

In AdS/CFT, there are other constraints of the possible form of the physical laws, and all of us would like to know the most general type of constraint that can give us any consistent background of string theory – something that generalizes the conformal symmetry above (and that probably does not allow us to write the full theory in terms of any local quantum field theory or any worldsheet or boundary). But in LQG, we know for sure that these organizing principles are missing.

To Zorq, I apologise that the one argument I gave is far from the whole story. I agree that to define a theory you have first to define a regulated theory, then study the limit as the regulator is removed, and show that the resulting expressions for observables have the symmetries and gauge invariances of the classical theory. The physical parameters are parameters of the resulting theory. But this is exactly what was done in the Hamiltonian construction of LQG: the observables such as area and volume are constructed through a limit of regulated operators, and then the limit is taken. The result is finite, diffeomorphism-invariant observables. The parameters of the finite, diffeo-invariant theory include the bare Newton’s constant and cosmological constant, not as coefficients in the dynamics but as coefficients in the algebra of observables. Diffeo invariance is proven to be restored in the limit. And don’t be confused: the bare Planck scale was not the regulator; there was another regulator, which has already been taken to zero. Please study the details of the construction.

So what you ask for has been done: the logic is not circular and the theory is finite. I was addressing the proposal to still take a limit in which the parameters vanish, even after the limit of the regulator going to zero is taken. This would be non-standard, and it leads to results in disagreement with the semiclassical theory.

To quantoken: obviously it would be better to have precise predictions than generic ones. But generic ones can still distinguish between classes of theories. If GLAST sees an energy-dependent, polarization-independent speed of light, thus confirming that the symmetry of the vacuum is DSR, this obviously kills theories that predicted either broken or naive Poincare invariance, and supports the plausibility of theories that predict, even generically, DSR.

I can’t agree with your equivalence of LQG and string theory, simply because of the vast imbalance in how much work has been put into each. For each point about string theory there are dozens of papers, so the ins and outs of each issue have often been thoroughly explored. Key facts about LQG still rest, in many cases, on one or a few papers. There are many obvious things to do that have not been tried for lack of people. So there is a lot still to do, for example to turn generic predictions into precise predictions.

And I don’t “believe” nature is discrete, we showed this is a generic property of a large class of diffeo invariant QFT’s.

I’ll reply to the rest later,

Lee

Lubos has a long list of publications about speculation on unobservables. So I guess he’s well qualified to make vacuous assertions. What I’d like to see debated is the fact that the spin foam vacuum is modelling physical processes KNOWN to exist, as even the string-theorist authors of http://arxiv.org/abs/hep-th/0601129 admit, p14:

‘… it is thus perhaps best to view spin foam models … as a novel way of defining a (regularised) path integral in quantum gravity. Even without a clear-cut link to the canonical spin network quantisation programme, it is conceivable that spin foam models can be constructed which possess a proper semi-classical limit in which the relation to classical gravitational physics becomes clear. For this reason, it has even been suggested that spin foam models may provide a possible ‘way out’ if the difficulties with the conventional Hamiltonian approach should really prove insurmountable.’

Strangely, the ‘critics’ are ignoring the consensus on where LQG is a useful approach, and just trying to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals better with the situation than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!).

When asked, Motl responds by saying Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

What a pity Motl can’t understand the distinction and its implications.

Lee,

You seem to make a strange (and nonstandard) distinction between a “finite theory” and an “RG fixed-point” theory. What is the distinction?

N=4 SYM is a finite QFT. It is also a fixed-point of the Renormalization Group. Moreover, it is a fixed point for any value of the gauge coupling.

You claim that LQG, with all (the infinite number of) couplings except for the cosmological constant and Newton’s constant set equal to zero is a finite theory (an RG fixed point). Do you want to claim that this is true for ANY values of Newton’s constant and the cosmological constant, or just for some special values (as claimed by Reuter et al)?

Dear Lee,

what you write about the regulators is internally inconsistent. If the limit depends on the regulator, and if the regulator contains as many parameters as a generic effective field theory or more (which is easy to see in LQG), then you cannot guarantee that the 4D results will be diffeomorphism invariant, simply because you can see that 4D non-diff-invariant results will be generated.

A subtlety may be that you don’t quite distinguish the 3D diffeomorphisms from the 4D diffeomorphisms. The latter include the Hamiltonian constraint. In the Hamiltonian treatment, the failure to obtain diffeomorphism-invariant physics will be manifested as the failure of the constraint algebra to close on the kinematical Hilbert space. This failure of the LQG algebra, including the Hamiltonian constraint, to close was discussed in the previous paper by Nicolai et al.

(This statement has its known counterpart in string theory: in the light-cone gauge, the critical dimension arises from the correct commutators of the Lorentz group.)

When you say that you obtain diff invariant results, you surely mean the 3D diffeomorphisms only, but that’s just not enough for gravity. The difficulties to extend some 3D results to the full 4D spacetime is of course related to the general problems with dynamics in LQG and its failure to reproduce any symmetry between space and time, especially not the Lorentz symmetry.

Finally, this statement is really cute:

“And I don’t “believe” nature is discrete, we showed this is a generic property of a large class of diffeo invariant QFT’s.”

What you have shown is that a class of discrete theories (that have no continuous limit or any other relation with physics) has the property that its elements are discrete theories. It is a completely vacuous and circular statement, and it in no way suggests that these theories are interesting as theories that physicists should study.

Best

Lubos

Dear Zorq,

surely no one wants to claim that gravity with a nonzero Newton’s constant is a conformal field theory. It has a scale because Newton’s constant is dimensionful. The same thing applies to the cosmological constant.

Your equivalence between finiteness and UV conformal symmetry is only valid for QFTs in continuous spaces. LQG is not a QFT in this sense. It can be in some sense finite as any other discrete, regulated, or latticized theory. But of course, its continuum limit must behave as an effective field theory for which having a UV fixed point is equivalent to finiteness.

Strictly speaking, we know that LQG can’t have a UV fixed point exactly because the spectrum of its areas and eigenvalues of other dimensionful observables is discrete, and therefore it is not scale-invariant. This pretty much kills the hopes of a UV fixed point that are plausible in other, more general candidate theories of quantum gravity.
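The discreteness referred to here is concrete: in LQG the area operator has eigenvalues proportional to sqrt(j(j+1)) over half-integer spins. A small sketch of that spectrum (Planck units, single puncture; the formula is quoted from the standard LQG literature rather than derived here, with the Immirzi parameter gamma left as a free prefactor):

```python
import math

def area_eigenvalue(j, gamma=1.0):
    # Standard single-puncture area eigenvalue in Planck units (l_P = 1):
    # A_j = 8*pi*gamma * sqrt(j*(j+1))
    return 8.0 * math.pi * gamma * math.sqrt(j * (j + 1.0))

spins = [0.5 * n for n in range(1, 6)]   # j = 1/2, 1, 3/2, 2, 5/2
for j in spins:
    print(j, area_eigenvalue(j))

# The spacing between successive eigenvalues is non-uniform and there is a
# smallest nonzero area: no rescaling maps the spectrum onto itself, which
# is the sense in which the spectrum is incompatible with scale invariance.
gaps = [area_eigenvalue(b) - area_eigenvalue(a) for a, b in zip(spins, spins[1:])]
print(gaps)
```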

Because of these reasons, the discrete character of spacetime in LQG is on the same level as with any other generic regulator that someone may invent. The Hamiltonian formalism can guarantee that the regulator is invariant under 3D diffeomorphisms, but it has been shown that it is not invariant under 4D diffeomorphisms. In the Hamiltonian treatment, this fact is manifested as a non-closure of the constraint algebra if it acts on the original full kinematical Hilbert space (see the Nicolai et al. paper 1 year ago).

All the best

Lubos

“Your equivalence between finiteness and UV conformal symmetry is only valid for QFTs in continuous spaces.”

I didn’t say anything about (UV) conformal invariance.

A finite theory gives cutoff-independent answers. An RG fixed-point is one where the coupling constants don’t vary as you vary the cutoff. With the normal meaning of these terms, these are the same concept.

As you say, LQG is abnormal in many ways. But if these terms mean the same thing to Lee that they usually mean, then they are still synonymous in LQG. If they mean something different to him, I’d like to know what that is.

Dear Zorq, while I agree with nearly everything you write, your usage of the word “cutoff” confuses me. Cutoff independence is something different from conformality. The Standard Model – or better, a GUT theory – is cutoff-independent but it is not a conformal field theory. It has a conformal UV fixed point (at least in the case of asymptotically free Yang-Mills), but still, one should distinguish the dependence on the cutoff from the dependence on the chosen RG scale.

As a non-expert, could anybody clarify these doubts for me?

1. If I understand well, LQG is the quantization program applied to GR (so I don’t even know why it has a different name). Right?

(this seems to me the most natural thing to do in order to quantize gravity)

2. why can a theory with an infinite number of constants not be the correct theory?

thanks

In reference to

http://arxiv.org/abs/hep-th/0502106

Ponzano-Regge model revisited III: Feynman diagrams and Effective field theory

Jerzy K-G says (20 jan 1:57P)

http://www.math.columbia.edu/~woit/wordpress/?p=330#comment-7742

——-quote from Jerzy in this thread——

“so they came within an inch of mentioning this other one by Freidel” [and Livine].

This is an exciting paper, since it shows that the semiclassical limit of 3d quantum gravity is DSR, and not Special Relativity. This makes me even more excited about the Freidel and Starodubtsev story, because it seems that a proof that DSR is a limit of quantum gravity is within reach.

————end quote——————–

This illustrates Smolin’s point about classes of theories making TESTABLE generic PREDICTIONS

It’s an important point and to recognize and emphasize it would help clarify this discussion

As Smolin uses the term, LQG is a CLASS of theories including spin foam models. There is current uncertainty about whether this class generically requires DSR. If it does, then the GLAST satellite’s failure to see DSR, once it is put in orbit next year or so, would tend to refute LQG.

It seems to me that to some extent LQG critics waste their time unless they address LQG as a class of theories (including spin foam models), because when blanket statements are made one often has the impression that the critics lack a clear idea of what they are talking about. They should also be talking about upcoming possibilities for tests; that is how to dispose of a scientific theory, not by quibbling.

Anyway, LQG is a class of theories in which a major focus of research in recent years, I would guess the main one, has been spin foam models. One goal the researchers have is to NARROW DOWN the class and extract further generic predictions so that one can TEST. Here is what Smolin says along these lines:

http://www.math.columbia.edu/~woit/wordpress/?p=330#comment-7757

—-Smolin 21 jan 9:51A—

But, someone may ask, if LQG is the right general direction, shouldn’t there be a unique theory that is claimed to be the theory of nature? Certainly, but should the program be dismissed because no claim has yet been made that this theory has been found? To narrow in on the right theory there are further considerations, all under study:

-Not every spin foam model is ir finite.

-Not every spin foam model is likely to have a good low energy limit.

-The right theory should have the standard model of particle physics in it.

In addition it must be stressed that there can in physics be generic consequences of classes of theories, leading to experimental predictions. Here are some historical examples: light bending, weak vector bosons, confinement, principle of inertia, existence of black holes. All of these observable features of nature are predicted by large classes of theories, which can be as a whole confirmed or falsified, even in the absence of knowing which precise theory describes nature, and prior to proving the mathematical consistency of the theory. LQG predicts a number of such generic features: discreteness of quantum geometry, horizon entropy, removal of all spacelike singularities, and I believe will soon predict more, including DSR and emergence of matter degrees of freedom.

—-end quote—

In sum, I think LQG critics’ effort would be better spent asking what progress the LQG community is making in narrowing down the class of theories and making it testable. It is a waste of time for a critic to pick one theory which he imagines to be representative, to quibble over it, and to try to give the impression that “it” (whatever he thinks it is) will never work.

Incidentally I don’t think Nicolai and Peeters make this mistake. They are making a constructive effort to shed light on the LQG field of research.

arnold,

1. In general, one doesn’t expect what it means to “quantize” a classical theory to be unambiguous. LQG chooses different variables to work with and tries to find a non-perturbative quantum theory whose classical limit is GR. The whole discussion here comes about because this is not the standard quantization one uses in perturbing around flat space, using the metric components as dynamical variables. Because this is a new set-up, there is still a lot that is not well-understood.

2. Unless the infinite number of constants have special properties, one would expect their existence to ruin the predictivity of the theory, since no matter how many constants you fixed by experimental measurements, you still wouldn’t be able to predict the results of new experiments, since they would still depend on unknown constants.
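The loss of predictivity described here has a familiar curve-fitting analogue: a model with at least as many free parameters as measurements reproduces every measurement yet constrains nothing new. A small sketch (the data and the model are invented purely for illustration):

```python
import numpy as np

# Six "measurements" of some quantity at six settings.
rng = np.random.default_rng(0)
x = np.arange(6, dtype=float)
y = np.sin(x) + 0.1 * rng.standard_normal(6)

# A degree-5 polynomial has 6 free parameters: it fits 6 points exactly.
coeffs = np.polyfit(x, y, 5)
fit = np.polyval(coeffs, x)
print(np.max(np.abs(fit - y)))   # essentially zero: every datum reproduced

# But the "theory" predicts nothing: extrapolating to a new experiment
# (x = 10) gives a wild answer, far from the underlying sin(x).
print(np.polyval(coeffs, 10.0), np.sin(10.0))
```

With finitely many parameters this is merely overfitting; with infinitely many undetermined couplings, every new experiment is like the point at x = 10.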

Thanks Peter,

1. I agree with you that there might be ambiguities. It would be interesting to hear something about that from LQG people.

2. I am not sure I agree on that for several reasons:

2.1 how can we know that the correct theory also has to be predictive? Maybe nature knows the values of the parameters, but maybe we humans will never know them all…

2.2 are we sure that a theory with an infinite # of parameters is not predicting anything?

For example, by making assumptions on these parameters one could be able to predict something (and then check whether the assumptions are correct by experiments…).

2.3 In some sense the Standard Model has the same problem… but one assumes that all the higher-order operators are negligible (or zero)… and then checks the assumption in the lab (and for now the assumption is confirmed).

I never used the word “conformality.” You did.

All I did was repeat the following standard argument. Consider some physical observable, f. When you compute it in the cutoff theory, it depends both explicitly on the cutoff and on the bare couplings, g_i, of the cutoff theory.

f= f(a,g_i)

The RG tells you how to change the g_i when you vary the cutoff, in order to assure that f does not change.

0 = a df/da = a \partial f/\partial a + \sum_i b_i \partial f/\partial g_i

where

b_i(g) = a \partial g_i/\partial a

are the beta-functions of the theory.

Finiteness is

\partial f/\partial a = 0

An RG fixed-point is b_i(g_*)=0 for all i. These are the same.

Only in a global (non-gravitational) field theory are they related to dilatation (conformal) invariance.
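Zorq’s condition can be checked numerically in a one-coupling toy model. Here the beta-function coefficient and the form of the observable are invented for illustration: the bare coupling g(a) is chosen so that the “physical” observable f(a, g(a)) stays fixed as the cutoff a is lowered, exactly the statement a df/da = 0 above:

```python
import math

b = 0.1        # toy one-loop beta-function coefficient (made up)
mu = 1.0       # fixed physical reference scale
g_phys = 0.5   # the cutoff-independent value we want to hold fixed

def observable(a, g):
    # toy physical observable: a one-loop resummed coupling at scale mu
    L = math.log(1.0 / (a * mu))
    return g / (1.0 + b * g * L)

def bare_coupling(a):
    # invert observable(a, g) = g_phys for the bare coupling g(a);
    # this is the RG trajectory that keeps physics fixed
    L = math.log(1.0 / (a * mu))
    return g_phys / (1.0 - b * g_phys * L)

for a in [1e-2, 1e-4, 1e-6]:
    print(a, observable(a, bare_coupling(a)))   # each is 0.5, cutoff-independent
```

The bare coupling itself changes with the cutoff (that change is the beta function), while f does not; finiteness in Zorq’s sense and a consistent RG trajectory are two descriptions of the same fact.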

Dear Arnold,

you ask:

“1.If I understand well, LQG is the quantization program applied to GR (so I don’t even know why it has a different name). Right?”

It is because pure GR in spacetime with dimensions above 3 cannot be quantized. GR is a non-renormalizable effective field theory. A different word is needed for GR and for this direct attempt to quantize it simply because classical GR is serious physics while the attempts called LQG are not serious physics and all serious theoretical physicists understand why it cannot work.

“(this seems to me the most natural thing to do in order to quantize gravity)”

It may seem natural to the laymen but it is impossible in science once the “details” are actually investigated.

“2. why can a theory with an infinite number of constants not be the correct theory?”

It cannot be a correct theory because it cannot be a theory at all. A theory means a finite system of ideas and assumptions that can in principle be determined, written down and used to make predictions, at least approximate ones. A spiritual system with infinitely many important unknowns does not satisfy this definition of a theory. A spiritual system with infinitely many continuous unknowns is just a way to parameterize (complete) ignorance, not a way to gain knowledge.

Best

Lubos

Dear Zorq,

sorry for being slow, but I am afraid that you have not yet explained what you mean by an RG flow that is unrelated to a dilatation (regardless of whether the theory has gravity or not), and consequently, how a fixed point can fail to be scale-invariant.

Best

Lubos

Peter said, about LQG / Spin Foam / etc models:

“… Unless the infinite number of constants have special properties, one would expect their existence to ruin the predictivity of the theory …”.

Isn’t an example of such “special properties” the “combination of supersymmetry and twistor geometry” mentioned by Peter in his blog entry at

http://www.math.columbia.edu/~woit/wordpress/?p=268 about Zvi Bern’s work on finiteness of N=8 Supergravity ?

It seems to me that it would be useful to direct substantial funds and jobs to work on models using exceptional structures related to the exceptional structures of N=8 Supergravity in the hope that:

1 – such models might be finite;

2 – at least one such model might include gravity plus the standard model; and

3 – due to the exceptional nature of the building blocks of such models, such a realistic model might be seen to be unique.

An example of a specific proposal for one such model would be a J3(O) exceptional Jordan algebra high-dimensional spin foam. It is just one example so that readers can see that this proposal is neither vacuous nor limited to supergravity models. Of course, I would advocate pursuing many types of models that use such exceptional structures in various ways.

Tony Smith

http://www.valdostamuseum.org/hamsmith/

PS – As I stated earlier, I remain (to use Peter’s words) “… curious what … “An assessment of current paradigms in theoretical physics” … is …”.

Does anyone reading this blog know? Would they care to share their knowledge here?

Just to clarify one thing.

Lee Smolin said:

“If GLAST sees an energy dependent, polarization independent speed of light, thus confirming that the symmetry of the vacuum is DSR, this obviously kills theories that predicted either broken or naive Poincare invariance and supports the plausibility of theories that predict-even generically DSR.”

Well, there is a long lasting discussion in the DSR community as to whether DSR predicts energy dependent speed of light. I personally do not think so.

Basically the reason being that there are two effects which should be taken into account when calculating the speed: the modified dispersion relation and the modified phase space structure. In the DSR models based on non-commutative space-time both these effects cancel neatly, so that the speed of massless particles is exactly 1, though the expression for velocities as functions of energies for massive particles differ from the standard one. (more details can be found in hep-th/0405273.)

I must stress however that some experts like Lee, Joao Magueijo, and I think also Giovanni Amelino-Camelia do not agree with this.

Before deciding whether to wade into the rest of this discussion, I just wanted to address this:

In general, one doesn’t expect what it means to “quantize” a classical theory to be unambiguous. LQG chooses different variables to work with and tries to find a non-perturbative quantum theory whose classical limit is GR. The whole discussion here comes about because this is not the standard quantization one uses in perturbing around flat space, using the metric components as dynamical variables. Because this is a new set-up, there is still a lot that is not well-understood.

The situation is much “worse” than this in my opinion. LQG uses a quantization technique which is fundamentally different from the approach we use to quantize all the field theories we understand. If we apply the LQG-type quantization to theories like QCD, we get an experimentally wrong answer. Now, Lee will say that because he wants things to be “background independent”, the only choice is to use this new quantization technique, and that experiment is the only way to resolve the question. The latter part is certainly correct, but that doesn’t change the fact that LQG is not the same as quantization-as-we-know-it.

“The latter part is certainly correct, but that doesn’t change the fact that LQG is not the same as quantization-as-we-know-it.”

You don’t need to be a string theorist to be worried about this. As I see it, one must cope with two key lessons:

1. The key lesson from GR may be manifest background independence, which I interpret as an action of the full space-time diffeomorphism group on the dynamical objects alone.

2. The key lesson from QM is that, well, it is QM in the conventional sense. In particular, the Hilbert space carries a representation of the one-parameter time evolution group, and this representation must be of lowest-weight type, cf. the harmonic oscillator.

How these lessons should be combined is left as an exercise for the reader.

Lee, how do you expect the matter degrees of freedom to emerge?

TL – See, you really can get everything on the back of an envelope!

-drl

BTW, it’s easier to use the front of the envelope, especially with a fountain pen.

-drl

Zorq, the distinction between a finite theory and a uv fixed point is the following: a lattice QFT or condensed-matter model with a fixed lattice spacing is a finite theory. You can still apply the RG to study its infrared behavior. The point in LQG is that AFTER carrying out the regularization procedure and defining the diffeo-invariant states and operators from the limit in which the regulator is removed, there remains a theory with a fixed, but spatially diffeo-invariant, cutoff. So it is like a lattice theory in being intrinsically finite, except that all the states are spatially diffeo invariant. So the physical theory has a fixed cutoff. One place this is explained in detail is Rovelli’s book, pages 280-282; another is my review hep-th/0408048. To see how this is done rigorously, see Thiemann’s gr-qc/0110034 or Ashtekar-Lewandowski gr-qc/0404018.
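The lattice analogy invoked here (a theory finite by construction at a fixed spacing, on which one can still run the RG toward the IR) has a standard textbook example: exact decimation of the 1D Ising model. This is of course an illustration of the general logic, not of LQG itself:

```python
import math

# Exact real-space RG for the 1D Ising model: summing over every other
# spin maps the nearest-neighbour coupling K to K' = (1/2) ln cosh(2K).
# The lattice theory is UV-finite at every step; the RG only describes
# its long-distance (IR) behavior.

def decimate(K):
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5  # bare coupling at the lattice scale (arbitrary starting value)
for step in range(8):
    print(step, K)
    K = decimate(K)

# K flows to the trivial fixed point K* = 0: the 1D model is disordered
# at long distances for any finite bare coupling.
```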

Lubos, the finiteness is achieved kinematically: once the procedure just described is done, the theory is uv finite WHATEVER the dynamics, again just as in a lattice QFT with fixed lattice spacing. But to continue with my example of 2+1, which you misunderstood: the symmetry of the ground state is not put in, it is determined dynamically. In 2+1 it turns out to be kappa-Poincare. This cannot agree with perturbation theory carried out as an expansion around Minkowski spacetime, which order by order assumes ordinary Poincare invariance.

I agree there is an issue in the Hamiltonian theory with full spacetime diffeo invariance, which is one of several reasons I always discuss dynamics in the spin foam picture.

Thomas and Aaron, this has been discussed before. For QFT, the harmonic-oscillator kind of quantization leads to Fock space. The inner product of Fock space depends on the background metric, hence this quantization will never arise in a background-independent formalism. So we do not avoid it because we are stupid; it is simply not an option. The question is then to find an inner product for states which are functionals of a connection mod spatial diffeos. The only known way to do this is to first construct a kinematical Hilbert space that carries an exact non-anomalous rep of the spatial diffeos and then use that unitary rep to mod out by the diffeos. The uniqueness theorems tell us the result is unique. If you don’t like this, please at least acknowledge the argument just described and either accept it or propose an alternative background-independent quantization and do the work to show it is consistent.

Anonymous: See my talk at the Loops 05 conference; details are in a paper under preparation.