There’s a new paper out on the Formalization of QFT, purporting to formalize a QFT using Lean/Mathlib. I’ve been trying to avoid spending time on the hot topics of AI and theorem proving, but in this case there’s a lot of overlap with what I have been thinking about recently (and have written about here; see this blog post).
The authors have been working at this since last July, and you can consult section 5 of their paper to see the methods they used. The theorem they prove is that if you start with the usual two-point Schwinger function of a free scalar field
$$K(x,y)=\int_{\mathbf R^4}e^{ip\cdot(x-y)}\frac{1}{p^2+m^2}\,d^4p$$
then you get a Euclidean quantum field theory that satisfies a certain set of axioms.
A basic problem with any claim to have “formalized QFT” is that we don’t actually have an accepted definition of what a QFT is. The authors take a definition from the book by Glimm and Jaffe: a probability measure on distributions satisfying five axioms labeled OS0-OS4. They need to do a lot of work to formalize the proof that you can construct a measure with these properties from $K(x,y)$. Such a proof, though, is quite standard and has been well understood for a long time. This is not a topic where there’s any concern that it might harbor some sort of subtle problem.
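For readers without the book at hand, the Glimm/Jaffe version of the axioms (my paraphrase; consult their *Quantum Physics* or the paper itself for the precise statements) requires the measure $d\mu$ on distributions to satisfy: OS0 (analyticity of the generating functional), OS1 (a regularity bound), OS2 (Euclidean invariance), OS3 (reflection positivity) and OS4 (ergodicity). The crucial one is OS3:

$$\int \overline{(\Theta F)(\phi)}\,F(\phi)\,d\mu(\phi)\ \ge\ 0$$

for all $F$ in the algebra generated by fields smeared with test functions supported at positive Euclidean time, with $\Theta$ implementing time reflection. This is what makes it possible to reconstruct, after Wick rotation, a Hilbert space with positive inner product.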
An obvious question is, why use this Glimm/Jaffe definition of what a QFT is? The use of “OS” refers to the original work on characterizing Euclidean QFTs of Osterwalder-Schrader. If you look at their paper, they characterize a Euclidean QFT as a set of Schwinger functions satisfying a set of axioms E0-E4 (they intended to show that these imply, after Wick rotation, the usual Wightman axioms). The OS paper doesn’t have anything to say about measures and while their axioms correspond to the Glimm/Jaffe ones, they are different.
Why not formalize the free scalar QFT using the Osterwalder-Schrader definition of a Euclidean QFT? The reason, I think, is that this would be too easy: the only axiom that the free Schwinger functions don’t obviously satisfy is the positivity one (recognizing the significance of this was the great contribution of OS). The argument for OS positivity of the free theory is straightforward, but shows that something quite interesting is going on (I spent quite a bit of time trying to understand this in terms of hyperfunctions).
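To sketch the standard argument (conventions and constants here are mine and should be checked against OS or Glimm/Jaffe): doing the $p^0$ integral in $K$ by contours gives

$$K(x,y)=\int_{\mathbf R^3}\frac{\pi}{\omega(\mathbf p)}\,e^{-\omega(\mathbf p)|x^0-y^0|}\,e^{i\mathbf p\cdot(\mathbf x-\mathbf y)}\,d^3p,\qquad \omega(\mathbf p)=\sqrt{|\mathbf p|^2+m^2}.$$

For a test function $f$ supported at positive times and $\theta$ the time reflection $x=(x^0,\mathbf x)\mapsto(-x^0,\mathbf x)$, both $x^0$ and $y^0$ are positive, so $|-x^0-y^0|=x^0+y^0$ and the kernel factorizes:

$$\int \overline{f(x)}\,K(\theta x,y)\,f(y)\,d^4x\,d^4y=\int_{\mathbf R^3}\frac{\pi}{\omega(\mathbf p)}\left|\int e^{-\omega(\mathbf p)y^0-i\mathbf p\cdot\mathbf y}\,f(y)\,d^4y\right|^2 d^3p\ \ge\ 0.$$

Positivity is manifest because the exponential decay $e^{-\omega y^0}$ turns the reflected two-point function into a perfect square.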
Just about all of the work in this particular formalization goes into putting the free field QFT into a specific measure-theoretic straitjacket. This straitjacket only fits the real scalar field QFT, possibly with self-interactions; it doesn’t apply to the QFTs of relevance to fundamental physics (gauge fields, fermionic spinor fields, gauge interactions). The interesting thing about this class of QFTs is that they are basically statistical mechanical systems and can be studied with the techniques of probability theory. The problem is not only that they don’t correspond to any fundamental real-world QFT, but that the only non-trivial examples are in two and three spacetime dimensions. We now have proofs that if you try to do this in four or more spacetime dimensions, you can only get free field theory.
The authors are aware of this, and claim formalizing Yang-Mills theory as one motivation. By now there’s a 50-year history of people trying to follow this program of constructing a measure of some sort on gauge fields, showing a mass gap and getting a million dollars, but I don’t see any evidence this is going anywhere. New, very different ideas are surely needed. They also claim as motivation producing a measure on metrics that would give quantum gravity, but again, fifty years of people trying to do this indicates that this is not at all a good starting point.
What’s most likely needed for further progress is the same thing that almost always ends up being needed for a big breakthrough when a field has been stuck for decades: a change in the question. In this case, one needs a different definition of QFT than as a measure on a space of distributions. Finding that new definition is the interesting problem, not reproving well-known implications of the definition that you know is not the right one.
The AI and formalization program that is being sold to us as the future of fundamental physics seems to me to come with a big danger. It may turn this field into nothing but an endless investigation of the blind alleys that are what we know about now and have been stuck at for decades. This particular “formalization of QFT” is an expedition headed down such a blind alley.


Peter,
“The problem is […] that the only non-trivial examples are in two and three spacetime dimensions. We now have proofs that if you try and do this in four or more spacetime dimensions, you can only get free field theory.”
I don’t think this is true. The only non-trivial examples proved to satisfy the Wightman axioms are in two and three spacetime dimensions. Nothing is known in 4 dimensions. Triviality arguments make very strong assumptions about constructability from a lattice.
See the discussion at https://physicsoverflow.org/32752/what-the-evidence-against-the-triviality-pure-qed-and-pure-phi?show=32752
If you know more, you should give precise references that “if you try and do this in four or more spacetime dimensions, you can only get free field theory”!
Arnold Neumaier,
See references 4 and 5 of the preprint. I don’t want to discuss this technical point here.
The important point is that these probability-based constructions of real scalar QFT are now over 50 years old, and the question of their relevance to 4d fundamental physics seems to me to have been settled very long ago. A big problem with the OS stuff is that the papers are very hard to read: it’s very hard to understand what they are doing and the limitations of their methods. What would be helpful would be some new expository material, aimed at humans, making more accessible what was understood 40-50 years ago about this subject. What the subject doesn’t need is formalization: there’s no serious question about the proofs, and the formalization does nothing to clarify to a human being what the important ideas are.
I think “changing the question” has happened several times (and has been useful). Say the most fundamental one is the Wightman axioms for QFT (up to extending them to gauge theories) on the Minkowski space. It was (and is) difficult to construct such QFT directly. Then OS did their axioms, did the general reconstruction and people studied statistical mechanics, from which massive Euclidean interacting QFTs in d=2,3 have been constructed. Something similar happened with 2d CFT, people studied VOA. More recently with SPDE.
But if the goal is to understand realistic relativistic QFT, one has to explore the path towards Wightman as well, because Wightman axioms are almost minimal. I don’t think it’s ok to give up any of locality, Poincaré covariance, stability, vacuum or the continuum Minkowski space or Hilbert space.
The problem is specific to 4d. It’s the problem of taming the UV divergences, which gets harder in higher dimensions (I know the Millennium problem asks to prove a mass gap, but really, taking the continuum limit is already extremely hard. I can tell you because I’m reading Balaban right now). It’s a technical problem. We don’t have the Standard Model as Wightman fields not because the Wightman axioms are wrong, but because we haven’t understood the Standard Model well enough.
“See references 4 and 5 of the preprint.”
– Which say that the attempted construction from a lattice by the methods in Glimm and Jaffe must fail in dimension $d\ge4$ – nothing more. There is not a single argument that would show that a QFT has to come from a lattice in the way assumed.
“the formalization does nothing to clarify to a human being what the important ideas are.”
– here I fully agree.
Yoh Tanimoto,
I think it’s important to have some historical perspective here. The Balaban papers are over forty years old. It’s not plausible that anyone is going to get anywhere by pushing his program harder. The question has to change.
For Yang-Mills in particular, the problem isn’t really the UV behavior, which we think we understand, it’s the running coupling constant getting large. An immense amount of effort has gone into dealing with this for more than fifty years, still unsuccessfully. Formalizing the huge literature of failed attempts would be a terrible idea; what would be a good use for AI would be if it could produce human-understandable summaries of this literature, explaining everything that has been tried and why it doesn’t work. A huge defect in the literature is that it is full of optimistic claims: rarely does anyone write a paper explaining why the idea they promoted heavily doesn’t work.
Arguably, instead of formalizing 1970s constructive field theory, Michael Douglas could more usefully be writing papers explaining why his program to use the string theory landscape to make predictions about physics didn’t (and can’t) work.
I’m not sure what you mean by “it’s the running coupling constant getting large”. That’s exactly a part of the UV problem Balaban attacks…?
https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-109/issue-2/Renormalization-group-approach-to-lattice-gauge-field-theories-I-Generation/cmp/1104116842.full
I would imagine the bulk of the work here was done by the younger participants, who may sensibly be educating themselves about modern developments (with the not-so-modern topic being one they had a desire to learn about).
Regarding the current mathematical interest in the Clay problem (despite the lack of particularly promising approaches), well, this is what dangling a big prize is going to do.
To this outsider, who is not motivated primarily by fundamental physics in 4d, fermionic spinor fields etc. don’t seem like the real issue — a (the?) central difficulty is already present in the classical Heisenberg model in two dimensions. If there is compelling evidence that the more complicated 4d gauge theory setting is in fact simpler, references would be appreciated.
Yoh Tanimoto,
My comment was just that “UV problem” seems to me the wrong way to describe it. Unlike phi^4, I’d say an asymptotically free theory like Yang-Mills has no UV problem, it has an IR problem.
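For what it’s worth, the standard one-loop picture behind this distinction (textbook material, stated schematically and in conventions of my choosing): for pure $SU(N)$ Yang-Mills

$$\mu\frac{dg}{d\mu}=-\frac{b_0}{16\pi^2}\,g^3+\cdots,\qquad b_0=\frac{11N}{3},$$

whose one-loop solution

$$g^2(\mu)=\frac{g^2(\mu_0)}{1+\frac{b_0}{8\pi^2}\,g^2(\mu_0)\ln(\mu/\mu_0)}$$

goes to zero logarithmically as $\mu\to\infty$ (the UV is under perturbative control), while running toward the infrared the denominator shrinks and the coupling blows up at a dynamically generated scale $\Lambda$, exactly where the perturbative formula stops being trustworthy.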
Sorry, this has gotten off topic, the paper I was posting about has zero to do with issues about Yang-Mills.
Jon,
Yes, this kind of work is being given to graduate students, and that makes the whole thing much worse. Grad students are at a crucial formative part of their career; what they learn at this time will critically determine what they can (and can’t) do later on.
What the students working on this are becoming experts in is not “modern developments”, but uninteresting technical issues from the early days (possibly before their parents were born…) of a research program which stalled forty years ago. Doing this to the grad student population is an effective way to destroy the future of the field.
This really has nothing significant to do with the Clay problem, which is very interesting (and yes, related to issues in 2d stat mech models). But that’s a very different topic.
“The Balaban papers are over forty years old. It’s not plausible that anyone is going to get anywhere by pushing his program harder. The question has to change.”
This is not a good argument; hard problems need their time.
Fermat’s problem and the Kepler conjecture had been open for several hundred years before they were successfully solved – without changing the question!
Also, nobody considers changing the meaning of the Riemann hypothesis, just because it resisted solution for well over 100 years.
Wightman field theories in 4 dimensions are a mathematically very natural class of objects, well deserving (in the case of Yang-Mills) a Millennium prize. The difficulty of existence proofs for them is typical of good and hard problems in mathematics.
Perhaps I was too terse — I mean they are learning/experimenting with the use of proof assistants / formalization / AI tools. I think it’s only fair to respect that they have some agency in their choice to do this along with other work. Opinions on whether this is a good or bad idea may of course vary.
I agree this work has nothing to do with the Clay problem. My last comment was regarding your commentary, which seemed close to suggesting that ‘statistical mechanics’ minded approaches were somehow irrelevant for the Clay problem.
Apologies for jumbling together multiple (unrelated) comments into one post without distinguishing them.
Hi Peter,
“Yes, this kind of work is being given to graduate students…Doing this to the grad student population is an effective way to destroy the future of the field.”
This is off topic and probably won’t be posted, but a curious observation I can’t help but point out. On the one hand you are noted for pointing out serious failures of the establishment, failures in string theory and high energy physics. Yet in another respect, you are a staunch defender of the establishment: believing in the elitism of the big institutions, believing young people should work on trendy and popular topics (in doublespeak this is usually framed as “promising approaches”), believing in the construct of “important questions,” and in this particular case denigrating old hard problems as “technical and of no wide interest.”
I see the above as contradictory, because they fundamentally come from the same source. To survive in a highly competitive environment, young people flock to “quick and easy” problems that are simultaneously promoted as “important” by important people, and whose approaches are lauded as “promising.” I would argue this process is in itself harmful to research progress in the long term. Old and hard problems are forgotten, denigrated as “unimportant” when the generation that remembers them dies off, and nobody works on them. Hard technical constructions are no longer pushed forward, because you can’t write a paper about them in 1 or 2 years. Everyone then moves on to the next “new big thing” and works on it superficially until all the low-hanging fruit is gone too. It’s like a locust swarm, really. And we are encouraging and training young people in these kinds of behaviours, and in my opinion this is why progress in my own field has stalled. I see a loose analogy with a common problem faced by entrepreneurs and companies: after the initial founding and establishment, the engineers are eventually outcompeted and replaced by sales people, and after a while the company reaches the end of its life cycle and dies.
Sorry for this quasicoherent post, just wanted to get this off my chest and have at least one person read it.
clueless_postdoc,
I understand and sympathize with what you are saying. Yes, the current training of graduate students often involves too much emphasis on some trendy topic, too little on really fundamental ideas and central, difficult problems.
My comments here were aimed at a very specific topic (constructive field theory of scalar fields) and a very specific proposed research program (formalize the Jaffe/Glimm approach to the subject). I’m all in favor of people working on long-standing fundamental problems. But yes, I am an elitist, in the sense that I think if you’re going to approach something that difficult, you need to understand what the history of the problem is and have a new idea. “I’m going to go back and go over exactly the same ground as was done in the 1970s, but I’m going to use proof assistants and formalize things” is an excellent idea for getting cash and attention, but a terrible idea for how to make real progress.
This specific topic has been something I’ve been very interested in since my undergraduate days. In 1979 I took a graduate class on the topic from Jaffe. A few years later I was working in lattice gauge theory, doing both computer calculations and trying to understand this and all other approaches to pure Yang-Mills. Recently I’ve spent a lot of time going over the OS papers as well as the Glimm/Jaffe book; one thing I’ve been trying to do, with only partial success, is find a new approach to these problems using hyperfunctions. When I say I don’t believe you can make serious progress by just following the path laid out in the 70s, but need something new, I’m not saying this off the top of my head.
One thing that strikes me about the AI/formalization program now taking up all jobs, funds and attention is that these new tools clearly work well for some kinds of problems, but not for others. They are going to lead to a lot of real progress, while at the same time generating a lot of worthless hype. I hope someday to see such progress on ideas I care about, but this particular field has enough problems already without having AI hype layered on top.
Hi Peter,
Thanks for a thoughtful reply! I am certainly not an expert on the relevant physics, but here are some thoughts I have from reading some very old (maybe 30 years old) math papers and old techniques. I don’t know if you had the same feeling re-visiting old problems.
In any case, I have been reading some old papers on some very powerful but difficult techniques, which in their heyday produced spectacular results. However, the techniques are very difficult and the papers are so long (they exceed what you can expect a graduate student to read in a year or two, which is how much time a grad student realistically has) that fewer and fewer people are well versed in them, and these days very few are fluent in them anymore. I want to understand a modification of these techniques to do something interesting, but there is no one left I can ask. The field has moved on, sadly.
When it comes to hard problems and hard papers, I stopped believing in the “you must have a revolutionary new idea before you tackle them” philosophy. My take is that having people looking at these things, being trained in these things, *at all* is the crux of the matter. People pondering them, people who understand them, people putting these questions on the back-burners of their minds, for decades at a time. I’ve rarely seen anyone immediately get a good idea that changes the world; good ideas come about by wrestling with concrete things, poking around, kicking the problem from different angles. But the issue these days seems to be that when the competition for jobs and prestige is very high and time is short, many things are simply not followed up: the classic “oh, one day someone should do this and see if it leads anywhere,” and then nobody does it, despite the fact that even for hard problems/techniques there are usually incremental steps to go forward and explore. People stop paying attention and move on to the new and more prestigious things of the day.
Peter,
of course you have a point in stressing that the Glimm-Jaffe axioms are much more restrictive than the Wightman or even the Osterwalder-Schrader axioms. The assumption of a probability measure for the Euclidean theory, which in those days was called Nelson-Symanzik positivity, is not motivated by physics the way OS positivity is. It excludes, for instance, finite chemical potential as well as theories with a topological term.
I should add that fermions as such are not the killer, because in many cases the fermionic determinant is positive as long as the chemical potential vanishes.
Let me add another point, which is usually overlooked in the (mostly worthless) attempts at proving the Clay millennium problem about Yang-Mills with the help of AI, which keep pouring into my inbox. Even the formulation of the problem by Jaffe and Witten forgets to mention that neither the Wightman nor the OS axioms are fully appropriate for gauge theories. They might be appropriate for gauge-invariant composite fields like the energy-momentum tensor, but it is not clear that this is enough to capture the full physical content of, say, QCD. To deal with this problem, Wightman himself (with Strocchi) was advocating the use of indefinite metric in gauge theories.
Another approach would use nonlocal observables such as Wilson loops and fermions connected by gauge strings, and try to formulate appropriate axioms for these. An attempt to do this, based on discussions with Fröhlich and Osterwalder, is contained in my 1982 monograph. The “axioms” look somewhat ugly and the approach was not pursued further; in particular, no OS-like reconstruction theorem was proven.
A final remark for Jon: I fully agree with you that somebody who intends to make progress on the Yang-Mills millennium problem should first focus on the two dimensional classical Heisenberg model, aka the O(3) nonlinear sigma model. This model shows the same subtle interplay between infrared and ultraviolet behavior as Yang-Mills in 4 dimensions: perturbatively asymptotic freedom in the UV but growing coupling strength in the IR (it remains to be proven that these properties really hold beyond perturbation theory). This model has the advantage of not having the extra complications related to gauge invariance, so the OS axioms are perfectly appropriate for this model.
@clueless_postdoc:
I think there is another, quite important, reason for this reluctance to give graduate students “old and hard” problems rather than “quick and easy” ones. If you’re a tenured professor, you can spend seven years working on Fermat’s Last Theorem, and not suffer any truly debilitating consequences if you fail. Whereas if you’re a graduate student and you work on the Riemann Hypothesis, you will likely fail to solve it, not have enough material to write a good thesis, and have to drop out of graduate school and take a job as a supermarket clerk.
In math, tenured professors (and occasionally even non-tenured ones) are nevertheless working on old and difficult problems, and are often succeeding in finding solutions. And they’re the right people to work on these problems, because they have a lot of knowledge and experience that might help solve them.
But this isn’t happening in physics. It may be something about the culture of the field, or there may be different reasons. But you really shouldn’t blame it on the graduate students.
clueless_postdoc,
I very much identify with the problems you point to. Yes, you don’t need a revolutionary idea to make progress on old problems, but you do need something new. It may very well be that 50 years has given us a different enough point of view that returning to these problems could be fruitful (though I don’t think the new “let’s just formalize” point of view is such). Yes, not having anyone around who understands the old papers is a huge problem.
An interesting question is whether AI can solve this. Will there someday be an AI agent that can absorb the old literature and be able to write expositions of it or answer questions about it the way an expert would?
Erhard Seiler,
Thanks so much for the enlightening comments!
Hi Peter,
The Millennium prize aside, I am failing to understand the overall importance of the whole topic of axiomatic foundations of QFT. Be it Wightman, or Osterwalder-Schrader, or Glimm-Jaffe, or whatever — QFT in Minkowski or Euclidean spacetime is simply not a fundamental theory. It takes just a single look into, say, the Birrell-Davies textbook to note that all these axioms just fall apart in curved spacetime, and are therefore incompatible with gravity. No global symmetries, no invariant vacuum, etc.
So why all the fuss about the rigorous formulation of QFT? Today, the Standard Model is generally perceived as an effective theory, not fundamental, and the QFT paradigm just drops out (or at the very least requires a major rethinking of the axioms) as soon as one couples gravity to SM.
Of course, I do understand that YM theories can be studied in their own right and that, from a mathematical perspective, one may wish to put their consistency on a firm mathematical ground. And that’s fine. But this seems to be a question for mathematics, not physics, at least not fundamental physics anymore. One can argue that (as far as fundamental physics is concerned) we need to move past/beyond QFT in flat spacetime. Hence my question — why should this topic be considered important, for a physicist?
Best, 🙂
Marko
Marko,
Yes, the flat spacetime, no gauge theory, Wightman axioms are clearly inadequate for capturing even the SM in flat spacetime, much less curved spacetime. They were intended to capture the simplest examples of relativistic QFT, to see if these were well-defined (as opposed to ill-defined formal objects, or objects that only made sense as effective theories at low energy). In the end they showed that some examples in low spacetime dimensions were well-defined.
The biggest problem with the whole program is that it never has been able to handle Yang-Mills theory, which is NOT an effective theory, but as far as we can tell, is well-defined at arbitrarily short distances. Personally I’ve been interested in it because of spinors, where it runs into interesting problems even for free field theory in flat spacetime. That the standard story of how QFT works falls apart if you look closely at free matter fields in flat spacetime is worth paying attention to.
The problems with the general attitude “QFT doesn’t make sense, it’s just an effective low energy approximation to X, which is well-defined” are that:
1. No one has a really viable proposal for what $X$ is.
2. Some subsectors of the SM (e.g. pure Yang-Mills) are well-defined at all scales. Maybe one just needs to work a bit harder to extend this good behavior to a full theory.
@Peter S.
Absolutely certainly no one should blame the graduate students, on this front we are in full agreement. Even though many examples I have in mind are postdocs, being a postdoc myself when it really comes down to it I don’t blame the other postdocs either. For more senior people, I have the great fortune that most of the senior people I know and talk to are genuinely caring and do their best to mitigate the evils of academia. It really seems like there is an invisible hand moving things around.
But I would like to point out that something interesting happens in the transition from postdoc to tenure track: if you find your own success doing a certain style of work or chasing certain kinds of problems, you are more likely to promote younger people working in similar styles. There is nothing sinister about this, because if you did X, you are in a much better position to appreciate work done in direction X. However, when the communities are small and hiring committees are small, this will start to make a difference. I remember reading Jesper’s book “The Ant Mill” and his pointing out that the human composition of academic communities matters. The “who” matters. (If you are a fancy schmancy sociologist, you can say whether this counts as “institutionalism.”)
In some weak sense the math community has developed some ability to talk about this (the US community is better than their European counterparts in this regard). In career panels at conferences geared towards younger people, more senior folks give concrete tips about how to play the game and what kind of things will help you in your career (or has helped them in their career). However, this is only the voice of the people who actually made it. The frustration, struggle and stories of people who did good work but never made it, we may never know.
@Peter W.
Indeed AI might help. My experience doing research these days is that there is so much work being produced that nobody can keep up with what’s happening, and everybody is an island. It’s not so much “the field has read your work and judged it to be of so-and-so quality,” but more like “nobody has read your work and they can’t be bothered to care, so see ya.” Maybe AI will help build bridges between these islands, and not just islands that exist now, but forgotten islands that existed in the past. I’ve heard of mathematicians getting stuck on a problem and asking AI, only to find it has been solved in forgotten works. But on a large scale, probably not any time soon; it would seem that without proper care AI will just reproduce and exacerbate the existing inequalities in the field. It can only learn initially from what’s currently out there, and I wouldn’t say what’s currently out there is the best sight to behold.
Hi Peter,
“The problems with the general attitude “QFT doesn’t make sense, it’s just an effective low energy approximation to X, which is well-defined” are that:
1. No one has a really viable proposal for what X is.
2. Some subsectors of the SM (e.g. pure Yang-Mills) are well-defined at all scales. Maybe one just needs to work a bit harder to extend this good behavior to a full theory.”
Ok, regarding point 1, I’d say that there are quite a few proposals for what X is, but there is no viable consensus in the community about any of them. In other words, everyone prefers their own idea of what X is, and ignores proposals of others (of course, I am guilty of this as much as anyone else 🙂 ). And in practice this is almost the same as having no viable proposals at all, so I tend to agree with you.
Regarding point 2, sure, models like QCD with its asymptotic freedom and similar are indeed well defined at all scales — but only in flat spacetime. If you add curvature, they also break down, as far as I know (no invariant vacuum state). I think no QFT model is viable once gravity is added in.
Best, 🙂
Marko
Hi Peter,
I’ve really enjoyed your recent series of posts on QFT foundations. They’ve inspired me to go back to read about this topic, which I hadn’t touched since grad school. Please keep going!
Quick question: how does Haag’s theorem (the absence of a single consistent Hilbert space when going from non-interacting to interacting field theory) play into such programs of combining spacetime symmetries with internal symmetries?
In connection with Marko and Peter’s discussion on curved spacetime, I thought I’d quote Steven Weinberg from p. 263 of the book edited by T. Y. Cao (Conceptual foundations of quantum field theory):
“a distinction that we have to make between the symmetries of laws and the symmetries of things. […] You can still talk about Lorentz invariance as a fundamental law of nature and live in a Lorentz non-invariant universe […] Lorentz invariance *is* incorporated in general relativity, as the holonomy group, or in other words, the symmetry group in locally inertial frames.”
After this, please try to stick to the topic of the posting, this is getting too far away. Also, after tomorrow I’ll be traveling for three weeks, so little time for blogging or moderating comments.
cy/Alex Gezerlis,
Haag’s theorem, and the axiomatic QFT program in general, have the huge problem that they don’t say much about symmetries, other than the global Poincaré symmetry of flat spacetime. The structure of our most fundamental QFTs is deeply determined by symmetry principles, especially gauge symmetry, and this needs to be built into a successful formalism.
About gravity, you can formulate it so that Lorentz invariance is a local gauge symmetry. Personally I believe that it’s very clear that the fundamental geometry of the real world is spinor geometry (and possibly twistor geometry). The formalism of a fundamental theory should have these built into it, not added on as an awkward layer of complexity.
Enough of this though for now…
Marko wrote:
“The Millennium prize aside, I am failing to understand the overall importance of the whole topic of axiomatic foundations of QFT. Be it Wightman, or Osterwalder-Schrader, or Glimm-Jaffe, or whatever — QFT in Minkowski or Euclidean spacetime is simply not a fundamental theory. It takes just a single look into, say, the Birrell-Davies textbook, to note that all these axioms just fall apart in curved spacetime, and are therefore incompatible with gravity.”
When talking about “importance” it’s good to specify importance to what, or to whom. You’re arguing here that clarifying the foundations of QFT on Minkowski spacetime is not important for understanding quantum theories where gravity matters. That could well be true.
It might not be true: sometimes understanding something better turns up insights that helps us with other questions in surprising ways.
But never mind that. More importantly, there’s a lot to life besides understanding quantum theories where gravity matters.
Indeed, for a whole lot of particle physics in the lab, it seems gravitational effects are quite small. We approximate spacetime by Minkowski spacetime. Then it becomes frustrating that we still don’t really know what our quantum field theories predict.
Yes, we can do some computations perturbatively. We get series that most experts agree don’t converge. If we sum up a bunch of terms we get answers that match experiments quite well in some situations. We are pretty sure that if we sum up more and more terms the agreement will get worse, because the series diverge. But we don’t know how to get precise answers from our theories, not even in principle.
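To make the “summing more terms makes it worse” point concrete in the simplest toy setting — a zero-dimensional “path integral,” nothing to do with any of the actual theories discussed here, and with function names of my own invention — one can compare partial sums of the perturbative expansion of $\int e^{-x^2/2-gx^4}\,dx/\sqrt{2\pi}\sim\sum_n(-g)^n(4n-1)!!/n!$ against direct numerical integration:

```python
import math

def exact_Z(g, xmax=10.0, n=20000):
    # Trapezoid rule for (1/sqrt(2*pi)) * integral of exp(-x^2/2 - g*x^4)
    # over [-xmax, xmax]; the integrand is negligible outside this range.
    h = 2 * xmax / n
    s = 0.0
    for i in range(n + 1):
        x = -xmax + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-x * x / 2 - g * x ** 4)
    return s * h / math.sqrt(2 * math.pi)

def series_partial_sum(g, order):
    # Partial sum through n = order of the asymptotic expansion
    #   Z(g) ~ sum_n (-g)^n (4n-1)!! / n!
    # obtained by expanding exp(-g*x^4) term by term inside the Gaussian integral.
    total = 0.0
    for n in range(order + 1):
        dfact = 1.0
        for k in range(1, 4 * n, 2):  # (4n-1)!! = 1*3*...*(4n-1); equals 1 for n = 0
            dfact *= k
        total += (-g) ** n * dfact / math.factorial(n)
    return total

if __name__ == "__main__":
    g = 0.01
    Z = exact_Z(g)
    for N in (2, 6, 16):
        print(N, abs(series_partial_sum(g, N) - Z))
```

At $g=0.01$ the partial sums track the integral to a few parts in $10^4$ around order 6-7 (roughly the size of the smallest term), after which adding more terms steadily degrades the answer — the characteristic behavior of an asymptotic series.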
Worse, there are plenty of computations where perturbation theory doesn’t work at all. For example, computing low-energy quantities in QCD. If you look at how people compute the neutron/proton mass ratio, you’ll see they not only do lattice gauge theory on a supercomputer, they also use lots of clever tricks that are based on rules of thumb and intuition rather than a general theory. Another example is computing the mass of the Roper resonance, the first excited state of the proton. This has bothered people ever since this resonance was discovered. If you look on the arXiv you’ll find a paper titled “Roper Resonance: Solution to the Fifty Year Puzzle”, and a more recent one titled “The Roper Resonance: Still Controversial 60 Years Later”. Part of the problem – I’m certainly not saying it’s all the problem – is that we still don’t have a rigorous formulation of QCD. We don’t know the theory really exists nonperturbatively, so we can never be sure when an approximation or clever trick is good.
There’s also the viewpoint of the mathematician. For a mathematician, having a rigorous understanding of QFT is important because QFT is connected to a lot of very interesting math questions.