Is the Standard Model Just an Effective Field Theory?

An article by Steven Weinberg entitled On the Development of Effective Field Theory appeared on the arXiv last night. It's based on a talk he gave in September; it surveys the history of effective field theories and argues for what I'd call the "SM is just a low energy approximation" point of view on fundamental physics. I've always found this point of view quite problematic, and think that it's at the root of the sad state of particle theory these days. That Weinberg gives a clear and detailed version of the argument makes this a good opportunity to look at it carefully.

A lot of Weinberg's article is devoted to history, especially the history of the late-60s/early-70s current algebra and phenomenological Lagrangian theory of pions. We now understand this subject as a low energy effective theory for the true theory (QCD), in which the basic fields are quarks and gluons, not the pion fields of the effective theory. The effective theory is largely determined by the approximate SU(2) x SU(2) chiral flavor symmetry of QCD. It's a non-linear sigma model, and therefore non-renormalizable. The non-renormalizability does not make the theory useless; it just means that as you go to higher and higher energies, more possible terms in the effective Lagrangian need to be taken into account, introducing more and more undetermined parameters into the theory. Weinberg interprets this as indicating that the right way to understand the non-renormalizability problem of quantum gravity is that the GR Lagrangian is just an effective theory.
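
Schematically (a standard chiral-Lagrangian form, my notation rather than anything in Weinberg's article), the effective theory is an expansion in derivatives, i.e. in powers of energy over the symmetry-breaking scale:

$$\mathcal{L}_{\rm eff} = \frac{f_\pi^2}{4}\,\mathrm{tr}\!\left(\partial_\mu U^\dagger\,\partial^\mu U\right) + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i + \cdots, \qquad U = e^{i\pi^a\tau^a/f_\pi},$$

where the $\mathcal{O}_i$ are four-derivative operators; each further order in $E^2/\Lambda^2$ brings in new operators with new undetermined coefficients $c_i$.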

So far I’m with him, but where I part ways is his extrapolation to the idea that all QFTs, in particular the SM, are just effective field theories:

The Standard Model, we now see – we being, let me say, me and a lot of other people – as a low-energy approximation to a fundamental theory about which we know very little. And low energy means energies much less than some extremely high energy scale $10^{15}$–$10^{18}$ GeV.

Weinberg goes on to give an interesting discussion of his general view of QFT, which evolved during the pre-SM period of the 1960s, when the conventional wisdom was that QFTs could not be fundamental theories (since they did not seem capable of describing strong interactions).

I was a student in one of Weinberg's graduate classes at Harvard on gauge theory (roughly, volume II of his three-volume textbook). For me though, the most formative experience of my student years was working on lattice gauge theory calculations. On the lattice one fixes the theory at the lattice cut-off scale, and what is difficult is extrapolating to large distance behavior. The large distance behavior is completely insensitive to putting in more terms in the cut-off scale Lagrangian. This is the exact opposite of the non-renormalizable theory problem: as you go to short distances you don't get more terms and more parameters; instead, all but one term gets killed off. Because of this, pure QCD has essentially no free parameters: the single remaining one just corresponds to a choice of distance units (Sidney Coleman liked to call this dimensional transmutation).
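
Concretely, in pure gauge theory the single parameter arises by dimensional transmutation: at one loop (in one common convention)

$$\mu\,\frac{dg}{d\mu} = -b_0\,g^3, \qquad b_0 = \frac{11 N_c}{3}\,\frac{1}{16\pi^2}, \qquad \Lambda_{\rm QCD} = \mu\, e^{-1/(2 b_0 g^2(\mu))},$$

so fixing the coupling at the cutoff scale is equivalent to fixing $\Lambda_{\rm QCD}$, i.e. to a choice of distance units.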

The deep lesson I came out of graduate school with is that the asymptotically free part of the SM (yes, the Higgs sector and the U(1) are a different issue) is exactly what you want a fundamental theory to look like at short distances. I’ve thus never been able to understand the argument that Weinberg makes that at short distances a fundamental theory should be something very different. An additional big problem with Weinberg’s argument is its practical implications: with no experiments at these short distances, if you throw away the class of theories that you know work at those distances you have nothing to go on. Now fundamental physics is all just a big unresolvable mystery. The “SM is just a low-energy approximation” point of view fit very well with string theory unification, but we’re now living with how that turned out: a pseudo-scientific ideology that short distance physics is unknowable, random and anthropically determined.

In Weinberg’s article he does give arguments for why the “SM just a low-energy approximation” point of view makes predictions and can be checked. They are:

  • There should be baryon number violating terms of order $(E/M)^2$. The problem with this of course is that no one has ever observed baryon number violation.
  • There should be lepton number violating terms of order $E/M$, "and they apparently have been discovered, in the form of neutrino masses." The problem with this is that it's not really true. One can easily get neutrino masses by extending the SM to include right-handed neutrinos and Dirac masses, with no lepton number violation. You only get non-renormalizable terms and lepton number violation when you try to get masses using just left-handed neutrinos (see the operators sketched below).
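
For reference, the leading operators in question, in standard EFT notation (my sketch, not Weinberg's text), are

$$\mathcal{L}_{\rm eff} \supset \frac{c_5}{M}\,(LH)(LH) + \frac{c_6}{M^2}\,qqq\ell + \cdots$$

The dimension-5 (Weinberg) operator gives Majorana neutrino masses $m_\nu \sim c_5 v^2/M$ after electroweak symmetry breaking, while the dimension-6 operators mediate proton decay; the point above is that the dimension-5 route is optional, since Dirac masses from right-handed neutrinos are renormalizable and conserve lepton number.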

He does acknowledge that there's a problem with the "SM just a low-energy approximation to a theory with energy scale $M = 10^{15}$–$10^{18}$ GeV" point of view: it implies the well-known "naturalness" or "fine-tuning" problems. The cosmological constant and Higgs mass scale should be up at the energy scale M, not the values we observe. This is why people are upset at the failure of "naturalness": it indicates the failure not just of specific models, but of the point of view that Weinberg is advocating, which has now dominated the subject for decades.
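
Quantitatively, the fine-tuning is the statement that in such an EFT the Higgs mass-squared is quadratically sensitive to the scale $M$; schematically

$$m_H^2\big|_{\rm obs} = m_H^2\big|_{\rm bare} + \#\,\frac{g^2}{16\pi^2}\,M^2,$$

so for $M \sim 10^{15}$ GeV the terms on the right must cancel to roughly one part in $10^{24}$ to reproduce $m_H \approx 125$ GeV, and the corresponding estimate for the cosmological constant is off by many more orders of magnitude.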

As a parenthetical remark, I’ve today seen news stories here and here about the failure to find supersymmetry at the LHC. At least one influential theorist still thinks SUSY is our best hope:

Arkani-Hamed views split supersymmetry as the most promising theory given current data.

Most theorists though think split supersymmetry is unpromising since it doesn’t solve the problem created by the point of view Weinberg advocates. For instance:

“My number-one priority is to solve the Higgs problem, and I don’t see that split supersymmetry solves that problem,” Peskin says.

On the issue of quantum gravity, my formative years left me with a different interpretation of the story Weinberg tells about the non-renormalizable effective low-energy theory of pions. This got solved not by giving up on QFT, but by finding a QFT valid at arbitrarily short distances, based on different fundamental variables and different short distance dynamics. By analogy, one needs a standard QFT to quantize gravity, just with different fundamental variables and different short distance dynamics. Yes, I know that no one has yet figured out a convincing way to do this, but that doesn’t imply it can’t be done.

Update: I just noticed that Cliff Burgess's new book Introduction to Effective Field Theory is available online at Cambridge University Press. Chapter 9 gives a more detailed version of the same kind of arguments that Weinberg is making, as well as explaining how the Higgs and CC are in conflict with the effective field theory view. His overall assessment, "Much about the model carries the whiff of a low energy limit", isn't very compelling when you start comparing this smell to that of the proposals (SUSY/string theory) for what the SM is supposed to be a low energy limit of.


45 Responses to Is the Standard Model Just an Effective Field Theory?

  1. I am just here to give my usual speech: The “naturalness” problems in the standard model are not scientific problems. They are aesthetic problems. They come about because physicists claim an unobservable number that temporarily appears in the math is “unlikely”.

    There are two problems with this. First, the debate about the supposed singularity at black hole horizons should have taught physicists that fretting about non-observable issues in mathematical calculations is a waste of time. Second, one can’t speak about probabilities without probability distributions, and we will never be able to obtain an empirically supported probability distribution over unobservable parameters*.

    Add to this that “naturalness” arguments haven’t worked with the axion (the original one), haven’t worked with the cosmological constant, haven’t worked with supersymmetry.

    (The charm quark prediction btw wasn’t a naturalness argument, it was a good old-fashioned argument from Occam’s razor. It’s just that people at the time used the word “natural” in their argument.)

    The bottom line is, naturalness should go out of the window.

    As far as the SM is concerned, well, it doesn't contain gravity, so of course its short-distance physics isn't fundamentally the right one.


    * Same problem with multiverse arguments.

  2. Mitchell Porter says:

    I’m trying to understand what the central thesis of this post is.

    On the one hand, we seem to have the suggestion that the fundamental theory really could be just a QFT consisting of the standard model coupled to a new form of gravity. Well, it's a striking idea, and one always has the 2009 prediction of the Higgs boson mass by Shaposhnikov and Wetterich, which was premised on exactly such an assumption.

    On the other hand, there is the criticism of naturalness, on the grounds that naturalness predicts new weak-scale particles and they haven’t turned up.

    I guess I don’t understand whether this constitutes grounds for criticizing the very idea of treating the SM as an effective field theory. Yes, if it’s SM all the way to the Planck scale, then the SM is not just an EFT. But that’s an extremely bold hypothesis that may or may not be true.

    Meanwhile, for now, it’s surely legitimate to still be interested in the possibility of new particles or forces. Is it claimed that EFT is a wrong way to model this or a wrong way to be systematic about it?

    If naturalness is a bad guide, to me that implies, not that the use of EFT is overall misguided, but that you need to be ready for the coefficients to be larger or smaller than expected.

  3. Lattice QCD is the exact opposite of what you describe: It’s exactly because of the ambiguity in the action that you have “non-universality”. This shows up when trying to take the continuum, or small-coupling, limit. This is a result of the running of the coupling (asymptotic freedom), & shows up in arguments for analyticity in the complex coupling-constant plane (e.g., see ‘t Hooft). It’s also related to the inability of constructive quantum field theorists to prove the existence of theories that aren’t @ least superrenormalizable (besides the instanton problem, also a difficulty on the lattice). These problems also show up in (resummation of) perturbation theory, as renormalons, which require (an infinite number of) new couplings as energies rise (like nonrenormalizable theories), appearing as vacuum values of (color-singlet) composite operators. (Thus, contrary to the belief of loop quantum gravity people, lattices don’t eliminate problems seen in perturbation theory.)

    The only known solution is (perturbatively) finite theories, which require something you hate — supersymmetry.

  4. Peter Orland says:

    Hi Warren,

    If I understand you (and perhaps I don't), lattice QCD doesn't suffer from any of the problems you describe (at least according to conventional wisdom, not theorems). The ambiguities you mention concern semiclassical methods (perturbative, with resummation, summing over saddle points).

    It is expected (but not proved) that the only real couplings where physical quantities are not analytic are 1. zero (at $\theta=0$) and 2. some nonzero value (at $\theta=\pi$).

    QCD is (presumably) a completely finite theory, with very few parameters (one if there are no quarks and $\theta=0$).

  5. Peter Orland says:

    Just to be technically precise, there can be other nonanalyticities, but these are not universal and depend on irrelevant terms in the lattice action.

  6. lun says:

    Sabine, actually this post explains clearly why naturalness is not an aesthetic argument but a mathematical one.
    QFT is formulated in terms of functional integrals, which are in general divergent and not directly solvable. However,
    (i) they can be expanded in various ways: in the coupling constant (perturbation theory), in scale separations (EFTs), around saddle points, and so on.
    (ii) The divergences can also be regulated in such a way that they do not affect physical quantities, which are generally correlators measured at a certain scale, provided we use some experimental data as input; the bare minimum (which, as Peter explains, is sufficient for QCD) is the scale.

    The most-used expansions combining (i) and (ii) will have a few renormalizable and super-renormalizable terms (with no factors dependent on the scale separation) and infinitely many non-renormalizable and trivial ones that go away if the scale separation is large enough (this last point is where people most often use naturalness arguments).

    Saying that axions, the CC, the Higgs mass etc. are not natural is shorthand for saying that if you take this theory and try to do the functional integral with (i) and (ii) in mind, (i) and (ii) do not quite work for these observables. That is a rigorous mathematical statement about a physical theory, which might indicate a problem with the theory, or problems with the approximation (see the discussion after your comment). But it is not just aesthetic. It would be aesthetic if we realized Feynman's dream (https://arxiv.org/abs/2006.08594: "This makes me dream, or speculate, that maybe there is some way, and we are just missing it, of evaluating the path integral directly") and STILL had to use naturalness. But we have not yet, so it's a legitimate mathematical question.

  7. Peter Woit says:

    Cliff Burgess here
    https://twitter.com/CburgesCliff/status/1349579929462198273
    characterizes this by
    “Very early 70s take on things in that blog post, it seems”

    Not quite right, since until 1973 and asymptotic freedom the consensus was that QFT was no good at short distances, or for describing strong interactions. From about 1974–1984, it's true that the point of view of this posting may have been the dominant one. Post-1984 things went back to "QFT no good at short distances (need string theory)" and that's been the prejudice for the past 30 years. I think the failure of the field to make any progress during this period argues for going back to before the wrong turn.

    One aspect of the history of the subject is that it was only for a short ten-year period (1974–1984) that grad students entering the field were being told that maybe QFT works at all distance scales. At every other point during the nearly 100-year history of QFT they've been told it fails at high energy.

  8. Peter Woit says:

    Mitchell Porter,
    Nothing wrong with EFT. Lots of QFTs are EFTs. All QFTs used in condensed matter are EFTs. I don't think though that the idea that the SM is not just an EFT is an "extremely bold hypothesis". None of the expected failures of the theory, predicted on the basis that it's just an EFT, have materialized. It agrees with all experimental results and (modulo the Higgs + U(1) problems) seems to make perfectly good sense at all distance scales. It should be the baseline conjecture that it can describe all distance scales, until someone comes up with something better. No one has.

  9. Peter Woit says:

    Warren Siegel,
    I won't try to argue that the issues of the perturbative and semi-classical approximations to a putative rigorous non-perturbative lattice-regularized QCD are well understood. But all evidence I'm aware of is that (keeping things simple) lattice pure gauge theory is a well-defined non-perturbative theory, with the expected infrared and ultraviolet behavior if you take the continuum limit appropriately. And when you do this, the limit is insensitive to the definition at the cutoff scale, as I stated.

    In any case, even if you can find a problem with this, the larger point is that this is as close as there is to a well-defined 4d theory that makes sense at short distance scales and gives behavior we observe in nature. For no well-defined X is there any evidence that “we should be doing X instead of QFT at short distances”.

  10. @ Peter & Peter:

    ‘t Hooft’s arguments are based entirely on the running of the coupling, & are not tied to perturbation theory. But the same problems are seen in perturbation theory, & in constructive quantum field theory, which is rigorous & entirely nonperturbative.

    Lattice QCD is an alternative approach to perturbation theory that provides results for different observables. But it does not avoid any of the fundamental problems of the theory. The conclusion is that QCD is a low-energy effective theory. Of course, the problem is worse with the Standard Model because of the U(1) gauge group, which is seen to be a problem even @ finite coupling on the lattice. (Problems with asymptotically free couplings show up only near vanishing coupling, as proven by Tomboulis, but in agreement with ‘t Hooft’s argument.)

  11. Peter Orland says:

    Hi Warren,

    But 't Hooft's arguments are based on renormalons/instantons. Although these don't emerge from perturbation theory per se, they are not pure running-coupling arguments.

    He also had a program for constructing large-N theories in four dimensions, but I don’t know what came of this (maybe this is what you mean by constructive FT).

    By the way, there is a long paper by Magnen, Rivasseau and Sénéor, written almost 30 years ago, claiming a construction of SU(2) Yang-Mills in 4 Euclidean dimensions, in finite volume (not the infinite volume limit). If correct, this work goes a long way towards overcoming the problems you raise; in a finite volume, all the same features are present.

  12. Peter Orland says:

    … and Peter Woit will tell us soon to shut up and conduct this technical discussion elsewhere.

  13. Peter Woit says:

    Peter Orland,
    No, technical discussions that are relevant to the topic are encouraged! The question of whether QCD is just an effective theory or not is highly relevant.

    Warren Siegel,
    My understanding is that there is good evidence (much of it numerical) for the conjecture of the existence (at all distance scales) of a well-defined non-perturbative version of QCD, as specified precisely in the Millennium Prize problem document
    https://www.claymath.org/sites/default/files/yangmills.pdf
    If Tomboulis or anyone else has a solid argument for non-existence (can you give a reference?), they should be putting in their claim for the $1 million. My suspicion is that what you're discussing is a different problem involving the subtleties of the perturbation expansion or the semi-classical approximation for QCD, for which I'm willing to believe there are all sorts of issues.

  14. Thomas says:

    I think the misunderstanding in Warren Siegel’s comment is this:
    “Lattice QCD is an alternative approach to perturbation theory that provides results for different observables.”

    The lattice can of course be used as a regulator in perturbation theory, and this theory has the same issues as weak-coupling perturbation theory using other regulators. (The lattice has a convergent strong coupling expansion, albeit with a finite radius of convergence.) However, the main point is that the lattice provides a fully non-perturbative definition of the theory. There is indeed no proof, but we have strong physical arguments, and plenty of numerical evidence, that the continuum and infinite volume limits exist, and that they define the theory we observe in nature.

    This means that QCD is a perfect theory, one that can be extended to arbitrarily short distances. But in practice this does not help, because QCD is embedded into electroweak theory, and we have an equally strong expectation (and, again, numerical evidence) that the $U(1)$ and scalar sectors cannot be extended to arbitrarily short distances. This means that the SM, even without gravity, must be viewed as an EFT.

    What is maybe somewhat unusual is that the only estimate of the breakdown scale that we have right now is from RG running of the scalar sector. It would be nice to directly observe a higher dimension operator. It is possible that neutrino mass comes from a higher dimension operator, but until we observe a Majorana mass we don’t know (which is why double beta decay experiments are so important).

  15. Peter Woit says:

    Thomas,

    To be clear, since the SU(3) and SU(2) gauge theories are asymptotically free, the remaining problem is the U(1) and Higgs. The U(1) “Landau pole” problem is at scales way above the Planck scale. The Higgs problem is intriguing, indicating borderline instability up near the Planck scale.

    I agree that the SM cannot just stand on its own, ultimately one wants to unify it with a quantum theory of gravity. To me, the simplest scenario would be a unification with a gravity QFT, that would resolve the issues of the high energy limit of the U(1) and the Higgs. Then, yes, the SM decoupled from gravity would not be fully consistent, but on the other hand characterizing it as just a low energy effective approximation would be misleading (since to a large degree the theory would work at all distance scales).

  16. Peter Orland says:

    Lun,

    Although it is true that renormalizable “unnatural” theories are not fully consistent at high energies, those energies are EXTREMELY high.

    For example, QED is unnatural, but its cut-off cannot be predicted from the theory itself. If we didn't know about SU(2)$\times$U(1), we would not know where (below $10^{19}\;\mathrm{eV}$) QED breaks down.

    In this sense, Sabine is completely correct. There is no ability to predict where the theory breaks down, and naturalness is an attempt to do the impossible (to predict what cannot be predicted). We only know that such theories must break down, below some very large momentum.

  17. That a QFT has a Landau pole is a problem only for approaches that regulate the theory with a noncovariant cutoff. This includes lattice approximation.

    However, a Landau pole is not a problem for covariant regularizations.

    In particular, in causal perturbation theory, the renormalization is done covariantly at an arbitrarily chosen renormalization energy parameter. The theory is well-defined for any choice far away from the Landau pole and, by the Petermann–Stückelberg renormalization group, is independent of this choice; only the quality of the approximation depends on the choice.
    The Landau pole only says that one cannot choose the renormalization energy parameter close to the pole without getting meaninglessly inaccurate results.
    (This is unlike renormalization through cutoffs, where the covariant theory is only obtained in a limit, a process that suffers from a UV induced Landau pole.)

    Relevant in this context is also the not widely known fact that QCD, like QED, has a Landau pole. But while for QED the UV-induced Landau pole is at physically inaccessible energies (far larger than the Planck energy), the IR-induced QCD Landau pole is at physically realized energies! Nevertheless the pole does not invalidate predictions at these energies. Thus arguing for inconsistency based on Landau poles is a relic from ancient times when QFT was not yet well enough understood.
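
    (For reference, the standard one-loop QED running behind such a pole is

    $$\alpha(\mu) = \frac{\alpha(\mu_0)}{1 - \frac{2\alpha(\mu_0)}{3\pi}\,\ln(\mu/\mu_0)},$$

    which formally diverges at $\mu = \mu_0\,e^{3\pi/2\alpha(\mu_0)}$; whether that formal divergence signals a real inconsistency is exactly what is at issue here.)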

    For further details see my article on Causal perturbation theory at https://www.physicsforums.com/insights/causal-perturbation-theory/
    and the discussion at https://www.physicsoverflow.org/32752/ and
    https://www.physicsoverflow.org/21328/

  18. Peter Orland says:

    The Landau pole is an artifact of the one-loop (or finite loop) approximation. The pole is present in a region of momentum space where this approximation can’t be used.

    I don’t think it is helpful to think of triviality/unnaturalness in connection with Landau poles. In Wilson’s discussion (see Sections 12 and 13 of Wilson and Kogut’s Physics Reports article on the renormalization group) concerning (non)triviality, the Landau pole is not mentioned at all. Corrections to Landau’s mean field theory (an entirely different development) and Ginzburg’s criterion are, however, relevant.

  19. lun says:

    Peter Orland, for sure "naturalness" is not a very useful tool for understanding what exactly is wrong (the last decades of theoretical physics have conclusively shown this). Rather, lack of naturalness is a symptom of a problem, and it's wrong to say the problem is aesthetic; it is one of mathematical consistency rather than aesthetics.

  20. JE says:

    I cannot add much to the technicalities some of you have quite accurately exposed regarding the shortcomings of QFTs in general, the SM in particular, and most notably QG. To me, wondering whether or not the SM is just an EFT pertains more to the psychological aspects of the scientific process than to the purely physical or mathematical ones. The SM very successfully predicts almost all (if not all) results we can currently measure. We know that there are some mathematical inconsistencies and that the U(1) and scalar sectors fail at very short distances, but we can still safely use the SM, Higgs sector included, because it matches our experimental results; these shortcomings need not be an everyday worry, except that we still lack a valid theory of QG.

    I have raised the psychosocial factor because Weinberg himself is appealing to herd mentality by saying that, "The Standard Model, we now see – we being, let me say, me and a lot of other people – as a low-energy approximation to a fundamental theory about which we know very little. And low energy means energies much less than some extremely high energy scale $10^{15}$–$10^{18}$ GeV."

    Saying “Me and a lot of other people” (true or not) seems to me like a clear attempt at concocting herd mentality based on his recognized authority.

    Let's be frank. The SM cannot be the final theory, because it lacks something. But it's probably much closer to the final theory than e.g. string theory, because it has many more pros than cons (unlike ST). So the farther you depart from it, the less likely you are to find a final theory. And ST really goes very far away from it. Herd mentality has been a problem for HEP for decades. Had we devoted to searching for an entirely new theory that can replace the SM even 1% of the effort we have invested in trying to square the circle of treating it as a correct but only effective field theory and fixing its shortcomings, we would probably be there by now.

  21. Just curious: I’m assuming the discussions going on here could easily have occurred 5 years ago. How about 10? Or 20? How far into the past could one go and these debates would have still been by-and-large possible? Not that there’s anything wrong with that … just curious.

  22. Peter Woit says:

    Geoffrey Dixon,
    The point of view Weinberg is arguing for was worked out in detail by him and others over 40 years ago (some of the basic papers about this are his from 1979-80). The discussion here is much the same as one could have had back then. The only difference is that lots of evidence against this point of view has accumulated over the last 40 years: no violations of the SM (+Dirac neutrino mass terms), existence of the Higgs with its hierarchy problem, CC with its hierarchy problem, failure of all attempts to come up with a plausible theory of different physics at the GUT/Planck scale.

    Back in 1980, the EFT point of view was not dominant, with lots of people (for example see Hawking on N=8 supergravity) taking the point of view that physics at high energy was likely to be some QFT extending the SM (so the SM is a piece of the final theory, not an approximation to a different kind of theory). The EFT point of view on the SM has now been dominant for decades, with no sign of influential physicists like Weinberg re-evaluating things based on what has been learned in the last 40 years. There seemed to be some hope post LHC results, but people mostly seem intent on trying to not draw conclusions based on those.

  23. @ Peter & Peter,

    QCD has a running coupling because it has a divergence. If Yang-Mills theory were finite, it would be conformal; its coupling wouldn’t run. So ‘t Hooft’s argument holds outside of perturbation theory.

    Tomboulis showed only that the problem could occur near vanishing coupling, not that it did occur.

    Constructive quantum field theory, which is a nonperturbative approach, also has problems with instantons & renormalons.

    To my knowledge, lattices have never solved any problem of principle that had been discovered in perturbation theory.

    Perturbative finiteness is the only known solution to renormalons. & the only such theories known are supersymmetric.

  24. André says:

    Peter,

    can you explain how your point of view and that of Weinberg differ in practice?
    Weinberg says that the standard model is valid (maybe) up to 10^18 GeV. Up to which energy do you think that the standard model is valid? 10^19 GeV? More?

  25. Peter Orland says:

    Hi Warren,

    Finiteness of QCD means that the renormalized theory exists and is finite. It is not conformally invariant.

    The massless limit of the regularized theory, after removing the regulator, is Weyl- and conformally invariant. That's not what Peter and I are writing about.

    Again, instantons and renormalons appear in trying to resum perturbation theory (in the case of instantons, summing over saddle points). There was a lot of optimism that some sort of resummation of perturbation theory over saddle points would yield an analytic solution of QCD, and people are still working on this. It doesn't mean the theory does not exist apart from these methods.

  26. Peter Orland says:

    Just to make it clear what is meant by finiteness:

    The lattice theory is ultraviolet regulated and well defined. Calculate physical quantities (cross sections, string tension, maybe some Green’s functions multiplied by anomalous powers of the lattice spacing). Now, fixing one of these quantities, take the lattice spacing to zero. This is what Peter W. and I are arguing exists (there is no theorem to this effect. It is a conjecture).

    At no point are there any divergences. Nor does Weyl or conformal invariance appear (there are scaling violations).

  27. Peter Orland says:

    “To my knowledge, lattices have never solved any problem of principle that had been discovered in perturbation theory.”

    There are examples where such problems of principle are solved on the lattice, but they are asymptotically-free models in lower dimensions. These models can also be solved by other methods and the solutions agree.

  28. Warren Siegel said: ”Perturbative finiteness is the only known solution to renormalons.”

    This was true only long ago. Another solution to renormalons, known since 2015, is resurgent transseries. For recent results, see, e.g., https://arxiv.org/pdf/2007.01270.pdf

  29. More Anonymous says:

    Peter

    I was pondering something which led, hopefully, to a relevant line of thought. I am under the impression that the continuum limit of QFT is equivalent to microcausality. The Lieb-Robinson bound in lattice QFT does become the usual (micro-)causality property in the continuum limit (see arxiv.org/abs/2006.10062). That paper stops short of establishing this, because they don't take a strict continuum limit. They show that the exponentially small tails are small enough to be unimportant on scales much larger than the lattice spacing, with some technical definition of "unimportant," but they leave the strict continuum limit as an exercise for the reader.

    Thus, from this point of view, microcausality is a derived property of the lattice spacing. Now, in a theory of gravity one would expect the light cone (of microcausality) to be deformed. But since the propagation of information depends on the lattice spacing, the large-scale behaviour affects the small scale and vice versa.

    P.S: I am far from an expert.

  30. Peter Woit says:

    Warren Siegel,
    It still seems to me you’re just pointing to problems with the perturbation series and the semi-classical approximation, not the non-perturbative theory, defined as Peter Orland states.

    In particular, as far as problems with instantons go, how this works in lattice QCD is something I spent a lot of time working on long ago. Instantons are classical solutions, and their role in the full theory is an issue about the semi-classical expansion. They’re irrelevant to the question of whether the lattice sums have the expected continuum limit. I haven’t followed recent developments, but when I was involved, the problem was to identify not instantons on the lattice, but a sensible notion of net topological charge of a lattice configuration. One wants something that agrees with the continuum formula for configurations near classical solutions, and such that ambiguities due to the lattice regularization become unimportant in the continuum limit. Problems doing this might in principle cause trouble for non-zero theta parameter, but for QCD at theta=0, issues about assigning topological charges to lattice configurations shouldn’t be relevant.
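
    (For reference, the continuum formula in question, in one standard normalization, is

    $$Q = \frac{1}{32\pi^2}\int d^4x\;\epsilon^{\mu\nu\rho\sigma}\,\mathrm{tr}\!\left(F_{\mu\nu}F_{\rho\sigma}\right),$$

    which is an integer on smooth configurations; the lattice difficulty is defining a discretization of this whose ambiguities become unimportant in the continuum limit.)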

  31. Peter Woit says:

    All,
    I don’t want to try and have a discussion here of general issues about proposals for making sense of QFTs at short distances, so only issues directly about the SM are on topic.

    André,
    Neither I nor Weinberg have any idea where the SM breaks down or how. I am arguing that some sectors of the SM show no signs of breakdown at any scale, but that doesn’t mean I know what happens at inaccessibly high energy scales. Much of what I object to here is making pure speculation about what’s happening at scales we know nothing about sound like it has some solid evidence.

    Weinberg and others pushing this point of view don't claim to know what the higher scales are; they are just setting up a framework in which there can be higher energy scales with different physics, with the SM a low energy approximation. For instance, in some models of neutrino mass of this kind, one explains the low values of neutrino masses by the existence of some high scale, still below the GUT or Planck scales. But that's just one kind of model. One can avoid the higher scales by just assuming neutrino masses are Dirac, with Yukawas small for some reason we don't understand (we don't in any case actually understand anything about why Yukawas take the values they do).

  32. (a different) André says:

    I have a different perspective, but perhaps the same conclusion as Weinberg (and many others) on the SM being an EFT of whatever is a more complete theory. My background, just to provide some perspective for my comments, is that I utilize lattice QCD and EFT to determine various properties of nucleons and their interactions which are required with various levels of precision, in order to interpret current bounds, and hopeful signals, of beyond the SM signatures in low energy experiments (such as an EDM in a nucleus or the observation of neutrinoless double beta decay or signatures of non V-A decays of neutrons etc.). Interpreting the current limits as bounds on new physics requires some quantitative understanding of BSM matrix elements in nucleons and nuclei, some of which need to be propagated to more complex systems with EFTs of nuclear physics.

    Now, about the SM being an EFT of some more complete theory. My perspective could be summarized by the question: what right do we have to expect that the SM is correct to arbitrarily short distance/high energy? It seems far more probable that we are ignorant of some very short-distance physics than that the SM is The theory that can take us to the Planck scale. Assuming there is something we are missing, and assuming that this new physics is short distance, describing the SM as an EFT of a more complete theory is the most general framework to have this discussion with our current understanding of how to describe physics (using QFT). It also provides a framework to combine constraints from low-energy precision tests of the SM and signatures in colliders like the LHC, and thus helps the community significantly reduce the parameter space of possibilities.

    If the presumed BSM physics is light (dark photons etc.), then this EFT description ranges from less useful to useless, depending upon the details of the new physics. The community of people using this EFT language to derive these constraints is generally quite aware of this limitation.

  33. Peter Woit says:

    (a different) André,
    There’s nothing wrong with conjecturing that there’s new physics at a high energy scale and using EFT as the appropriate framework for deriving the implications of such a conjecture. But you need to then acknowledge that
    1. Despite decades of intensive effort, we haven’t seen any of the expected implications of such new physics.
    2. What we have seen of the Higgs and CC is in strong contradiction with the conjecture of such new physics.

    Given these facts, “SM all the way up to Planck” is also a perfectly valid conjecture.

    The problem with "It seems far more probable that we are ignorant of some very short-distance physics than that the SM is the theory that can take us to the Planck scale" is that it's really just a historical prejudice (in the past we kept finding new things at higher energy scales). We don't at all know one way or the other what the truth of the matter is, so one should look at all possibilities. The "SM just a low energy approximation" possibility has gotten a huge amount of attention, become a bit of a fixed ideology, and led to finding nothing. I'm just arguing that the alternate possibility is every bit as worthy of being taken seriously and having its implications explored.

  34. (a different) André says:

    Hi Peter,

    Sure, it should be considered. But, in order to be viable, there are a few significant challenges that such a proposal would have to address. For example, as we understand things now, the CP violation in the SM is orders of magnitude too small to generate the observed abundance of matter over anti-matter in the universe. So one would have to come up with some new explanation of how the SM alone could circumvent this failure to explain the observation. Or, we would have to decide this is not a question worth trying to answer. Assuming we want to find some explanation, and without clear ideas of how to generate the excess with just the SM, I stand by my statement that "It seems far more probable that we are ignorant of some very short-distance physics than that the SM is the theory that can take us to the Planck scale."

  35. Peter Woit says:

    (a different) André,
    Sure, although the matter/anti-matter problem brings in the question of one’s model of the early universe, which adds another source of complexity to the issue.

  36. WTW says:

    Peter,
    No one has mentioned the Muon g-2 experiment investigating the apparent anomaly in the anomalous magnetic moment of the muon, and the very detailed theoretical work going on in parallel, which includes extending calculations to extremely fine precision by the Muon g-2 Theory Initiative. (See, for example, https://news.fnal.gov/2020/06/physicists-publish-worldwide-consensus-of-muon-magnetic-moment-calculation/, and https://arxiv.org/abs/2006.04822) That precision is required in order to differentiate between SM prediction and experiment, and it includes non-perturbative lattice QCD calculations as well as dispersive methods.

    As some of those folks have said, this is a very highly technical and specialized field, and I have no insight into what hoops they are having to jump through to get low enough error bounds. Perhaps someone can comment on how relevant such work is to this discussion. (But it does illustrate that state-of-the-art investigations into the validity/applicability of the SM at extremely short distance scales do not necessarily require Planck-scale experimental energies, and that such work is active and ongoing, even if relatively rare.)

  37. Peter Woit says:

    WTW,
    The Fermilab muon g-2 experiment is supposed to be reporting results soon, which will be interesting. The problem though with this is that if they do find a deviation from the SM, it will be a frustrating situation. This just gives you one number, and all sorts of new possible non-SM degrees of freedom could contribute to that number, as part of some complicated higher loop calculation. So, that number, if non-zero, will tell us there’s something going on we don’t understand, but give very little to go on about what this might be.

  38. WTW says:

    Peter,
    “So, that number, if non-zero, will tell us there’s something going on we don’t understand, but give very little to go on about what this might be.”

    Yes, but — to your point, above — it could give us an indication of a limit (if there is one) on just how “effective” an EFT the SM actually is. Something we don’t now have.
    And the lattice QCD and other techniques being developed there could potentially/hopefully be useful in other contexts as well, in helping to identify anomalies in other experimental data that could give further clues.

  39. @ Peter & Peter,

    Lattice QCD is exactly as finite as renormalized perturbation theory.

    I'm not claiming QCD doesn't exist, only that it's nonperturbatively nonrenormalizable, due to renormalons. These are a consequence of nothing but the running of the coupling, which can be seen from its divergences; those divergences appear even in lattice quantization if one tries to take the continuum limit w/o renormalizing the bare couplings by giving them a singular dependence on the lattice spacing.

    Instantons are a separate problem from renormalons. Various solutions have been proposed, such as the 1/N expansion.

    Perhaps the lower-dimensional theories with other solutions to which you refer are the superrenormalizable ones treated by constructive QFT. Those people never had success in 4 dimensions.

    @ Arnold,

    I’m not familiar with your solution of “long ago” 2015, but thanks for the reference. My experience with claimed UV fixed points away from the origin is that their position & even their existence is strongly prescription dependent. Also, it would be nice to see an example of a gauge theory, since theories that are not asymptotically free are known to have difficulties even in lattice field theory. Furthermore, such treatments of improved resummation in the literature tend to focus on instantons rather than renormalons, & may even fail to distinguish the two.

  40. Peter Orland says:

    Hi Warren,

    Yes, renormalons appear as a class of diagrams which are clearly not Borel summable, even if the regulator used is a lattice. Maybe resurgence, or another scheme can cure this, or maybe not.

    If you could solve this problem via rigorous methods, this might give a proof that the lattice theory is well-defined. Or maybe a proof would have nothing to do with summing graphs (including saddle points).

    So renormalons may mean zip, zero, nada for the existence of the theory.

    As an illustration, renormalons exist in field theoretic models in lower dimensions. I am thinking of the principal chiral model (which I have some experience with). Resummation methods are not fully successful for this model, but there is no doubt it exists. There are even exact results for the S-matrix and (at large N) correlation functions.

  41. Peter Orland says:

    PS I mentioned above the constructive field theory paper about SU(2) Yang-Mills in a finite volume, by Magnen, Rivasseau and Sénéor. They claim that the continuum theory exists, for a finite volume. They can't take the thermodynamic limit, but (assuming their paper is correct; I guess I'll have to slog through it) this means NO fundamental ultraviolet problems with renormalons.

  42. @ Orland,

    As I said, constructive QFT can prove existence if they are SUPERrenormalizable, not just renormalizable, hence <4 dimensions.

    Also as I said, the examples I’ve seen for defining resummation for (very) low dimensions have been for "instantons", not renormalons, i.e., corresponding to finite-action solutions to the classical field equations.

  43. Peter Orland says:

    Hi Warren,

    “As I said, constructive QFT can prove existence if they are SUPERrenormalizable, not just renormalizable, hence <4 dimensions."

    Yes, but that is NOT proving the renormalizable asymptotically-free theory does not exist. You are only saying that constructive field theory methods have not been successful (unless Magnen et al. is correct) for non-superrenormalizable theories. There is no no-go theorem.

    Please read my other remarks above. I think I addressed your assertions quite adequately.

  44. Erickson says:

    Hi Peter,

    If gravity QFT is one of the simplest options, wouldn't asymptotically safe gravity be the most promising track to go with? There are even attempts to make just the non-gravitational forces asymptotically safe (removing Landau poles). I believe this is along the lines of work by Eichhorn et al., and Donoghue seems to have a fair number of things to say along this direction.

  45. Dieter Van den Bleeken says:

    Dear Warren, Peter & Peter,

    recently we found a very simple model with renormalons where perturbation theory can be explicitly re-summed using a particular contour prescription in the Borel plane. The model is simple enough that it is non-perturbatively defined and can also be solved non-perturbatively and exactly. The S-matrix obtained that way matches perfectly the one obtained by resummed and renormalized perturbation theory. The model has no supersymmetry.

    Now, this might sound too beautiful to be true and you are already wondering "where is the catch"? The catch is that this model is not a relativistic QFT but rather a simple one-particle quantum mechanics model. The advantage this offers is that we have a well-defined Hilbert space with a self-adjoint Hamiltonian and everything is under perfect mathematical control. Although it is a very poor version of QCD, it has a logarithmically running coupling that induces exactly the same renormalon ambiguities (at least the UV ones) as its QFT cousins.

    The actual details can be found in our paper arXiv:1906.07198.
