I just heard today that mathematical physicist Arthur Wightman passed away earlier this month, at the age of 90. Wightman was one of the leading figures in the field of rigorous quantum field theory, the effort to try and make precise sense of the often heuristic methods used by physicists when they deal with quantum fields. He was a well-liked and very respected professor at Princeton during the years 1979-84 that I was a graduate student there, but unfortunately I don’t think I ever made an effort to talk to him, to my loss. The university has something about him here, the department here.

Wightman is best known for the “Wightman Axioms”, which are an attempt to formalize the fundamental assumptions of locality and transformation under space-time symmetries that any sensible quantum field theory should satisfy. His 1964 book with Raymond Streater, PCT, Spin and Statistics, and All That, explains these axioms and shows how they lead to some well-known properties of quantum field theories such as PCT invariance and the Spin-Statistics relation. When this work was being done during the 1950s and early 60s, quantum field theory was considered something that couldn’t possibly be fundamental. All sorts of discoveries about strong interaction physics were being made, and it seemed clear that these did not fit into the quantum field theory framework (this only changed in 1973 with asymptotic freedom and QCD). In any case, problems with infinities of various sorts plagued any attempt to come up with a completely consistent way of discussing interacting quantum fields, providing yet another reason for skepticism.

Wightman was one of a small group of mathematical physicists who reacted to this situation by trying to come to grips with the question of exactly what a quantum field theory was, in an attempt to find both the implications of the concept and its limitations. After the early 60s, attention moved from the axioms and their implications to the question of “constructive quantum field theory”: could one explicitly construct something that satisfied the axioms? Examples were found in 2 and 3 space-time dimensions, but unless I’m missing something, to this day there is no rigorous construction of an interacting QFT in 4 space-time dimensions. There is every reason to believe that Yang-Mills theory, constructed with a lattice cut-off, has a sensible continuum limit that would provide such an example, but this remains to be shown (and there’s a one million dollar prize if you can do this).

Thinking back to the early 1980s and my days as a graduate student, it’s clear what some of the reasons were why I didn’t spend time going to talk to Wightman. With the triumph of the Standard Model, attention had turned to questions about quantum gravity, as well as questions about the non-perturbative behavior of QCD. I certainly spent some time trying to read and understand the Streater-Wightman volume, but its emphasis on the role of the Poincaré group meant it had little to say about QFT in curved space-time, much less how to think about quantized general relativity. Gauge theories in general did not seem to fit into the Streater-Wightman framework, with the tricky issue of how to handle gauge symmetry something their methods could not address. For non-perturbative QCD, we had new semi-classical computation methods, and I was happily programming computers to do numerical simulations of Yang-Mills theory. Why pay attention to the difficult analysis needed to say anything rigorous about quantum fields, when the path integral method seemed to indicate one could just put them on a computer and have the computer tell you the answer?
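The spirit of those simulations is easy to convey. Below is a toy sketch (my own illustration with arbitrary parameters, not any production code of the era): a Metropolis simulation of two-dimensional compact U(1) lattice gauge theory with the Wilson action, measuring the average plaquette.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 16, 2.0, 200
theta = np.zeros((L, L, 2))  # link angles theta[x, y, mu]; mu = 0: x-dir, 1: y-dir

def plaquette(th, x, y):
    # Oriented sum of link angles around the plaquette at (x, y)
    return (th[x, y, 0] + th[(x + 1) % L, y, 1]
            - th[x, (y + 1) % L, 0] - th[x, y, 1])

def local_action(th, x, y, mu):
    # Part of the Wilson action S = beta * sum(1 - cos(plaq)) touching one link
    if mu == 0:
        plaqs = (plaquette(th, x, y), plaquette(th, x, (y - 1) % L))
    else:
        plaqs = (plaquette(th, x, y), plaquette(th, (x - 1) % L, y))
    return -beta * sum(np.cos(p) for p in plaqs)

for _ in range(sweeps):  # Metropolis sweeps over all links
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = theta[x, y, mu]
                s_old = local_action(theta, x, y, mu)
                theta[x, y, mu] = old + rng.uniform(-1.0, 1.0)
                if rng.random() >= np.exp(min(0.0, s_old - local_action(theta, x, y, mu))):
                    theta[x, y, mu] = old  # reject the proposed update

avg = np.mean([np.cos(plaquette(theta, x, y)) for x in range(L) for y in range(L)])
print(avg)
```

In 2d this model is exactly solvable, so the computer can actually be checked: the infinite-volume average plaquette is I1(β)/I0(β), roughly 0.70 at β = 2, and the number printed should land nearby.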

In later years I became much more sensitive to the fact that quantum fields can’t just be understood by a Monte-Carlo calculation, as well as the importance of some of the questions that Streater and Wightman were addressing. As particle theory continues to suffer deeply from the fact that the SM QFT is just too good, anything that can be done to better understand the subtleties of QFT may be worthwhile. It remains true that gauge theories require new methods way beyond what is in Streater-Wightman, but looking back at the book I see it as largely devoted to understanding the role of space-time symmetries in the structure of the theory. The importance of such understanding of how symmetries govern QFT may be a lesson still not completely absorbed, with gauge symmetries and diffeomorphism symmetries part of a story extending Streater and Wightman to the Standard Model, in a way that we have yet to understand.

Hi Peter,

I’m not sure what precise definitions of “rigorous” would make mathematicians happy, but in discussions over the last few years it seems they don’t have as much of an issue with the recent work on supersymmetric localization, and this includes work in d=4. The reason, if I understand correctly, is that they usually take issue with the measure in the path integral, arguing that the space of all possible field configurations isn’t well defined, but that in some cases supersymmetry will localize the set of contributing field configurations to a nice, well-defined space.

Of course, one could take issue with the fact that the theories are rather formal, but at least there is some progress in some examples.

Any thoughts on how localization fits into your / other definitions of rigorous?

Cheers,

P

Peter,

This is sad news. Another giant I’ll never meet.

P,

The problem isn’t that the space of fields isn’t well-defined. Finding the space of fields for a given model is the easy bit; it basically has to be the space of linear functionals on the space of sources which the classical fields couple to. Most of these linear functionals are distributions rather than continuous fields, but the space is perfectly well-defined. And in fact, the distributions are necessary: without them, you won’t get the right short-distance singularities in the OPE.
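A numerical illustration of that last point (a toy sketch, not part of any rigorous construction): for a free scalar field on a 2d lattice, the two-point function has the logarithmic short-distance growth that an ordinary continuous field could not produce.

```python
import numpy as np

def lattice_propagator(L, m2):
    """Free-field two-point function G(x) on an L x L periodic lattice,
    i.e. the Fourier transform of 1 / (p_hat^2 + m^2)."""
    k = 2 * np.pi * np.fft.fftfreq(L)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    phat2 = 4 * np.sin(kx / 2) ** 2 + 4 * np.sin(ky / 2) ** 2
    return np.real(np.fft.ifft2(1.0 / (phat2 + m2)))

G = lattice_propagator(256, 1e-6)  # tiny mass as an infrared regulator
# G(0) - G(r) grows like (1/2 pi) log r, so doubling r adds about log(2)/(2 pi):
for r in (2, 4, 8, 16):
    print(r, G[0, 0] - G[r, 0])
```

The printed differences should climb by roughly log(2)/(2π) ≈ 0.11 per doubling of r: the short-distance singularity of the correlator is exactly what forces the field to be distribution-valued in the continuum limit.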

The problem is, first, that it’s not easy to show that the path integral measure exists on this space of distributions, and, second, that it’s hard to show that the dressed observables are really integrable with respect to the path integral measure. (The latter question is really what the Yang-Mills millennium prize is about: Confinement means that many of the observables from the classical theory _aren’t_ integrable; the ones that remain generate a Hilbert space with a mass gap in the Hamiltonian.)

Supersymmetric localization says that — for some (but not most!) observables — the path integral over the big space of fields reduces to an integral over a finite dimensional subspace. This will someday be a wonderful theorem, but it’s not true for enough observables to count as a definition of the measure on the big space.
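A zero-dimensional caricature of localization may be helpful here (a toy integral of my own choosing, not the infinite-dimensional statement): after integrating out the fermions of the simplest supersymmetric model, one is left with Z = (2π)^(-1/2) ∫ W″(x) exp(−W′(x)²/2) dx, which depends only on the signs of W″ at the zeros of W′ and so is unchanged as the coupling is deformed.

```python
import numpy as np
from scipy.integrate import quad

def Z(dW, d2W):
    # (1/sqrt(2 pi)) * integral of W''(x) exp(-W'(x)^2 / 2) dx
    val, _ = quad(lambda x: d2W(x) * np.exp(-dW(x) ** 2 / 2), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

# W'(x) = g x^3 - x vanishes at -1/sqrt(g), 0, +1/sqrt(g); localization predicts
# Z = sum of sign(W'') over those zeros = +1 - 1 + 1 = 1 for every g > 0.
for g in (0.5, 1.0, 4.0):
    print(g, Z(lambda x: g * x ** 3 - x, lambda x: 3 * g * x ** 2 - 1))
```

The full integral collapses to counting critical points with signs, independent of g: a finite-dimensional answer extracted from what was nominally an integral over a big space.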

P,

A.J. does a good job of explaining the problem. If you just want a rigorous construction not of QCD, but of something like a TQFT, where in principle we have an independent characterization of what all the observables are, in a much more tractable framework, you can get a rigorous 4d theory by just making that framework your definition.

I think the formal application of localization to path integrals is a fantastic idea. Perhaps the best hope for getting rigorous 4d QFTs is to evade the problems A. J. explains by not trying to get a rigorous version of an arbitrary QFT, but reformulating the definition of a QFT to be something for which localization works automatically. Right now though, as far as I know this won’t buy one any physically interesting QFTs. One speculation I’m fond of is that if you better understood the representation theory of certain infinite-dim groups, you could reformulate certain QFTs in terms of such representation theory, and get a rigorous definition of an interesting theory that way. Well, it works in quantum mechanics, where you can do the harmonic oscillator that way….
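The quantum-mechanics remark refers to the standard ladder-operator construction, where the spectrum follows from the Heisenberg-algebra representation alone. A minimal numerical sketch (truncation dimension is arbitrary):

```python
import numpy as np

N = 12  # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
adag = a.T                                  # creation operator
H = adag @ a + 0.5 * np.eye(N)              # H = a†a + 1/2  (hbar = omega = 1)

# The algebra [a, a†] = 1 (exact away from the truncation edge) fixes the spectrum:
evals = np.sort(np.linalg.eigvalsh(H))
print(evals[:5])  # 0.5, 1.5, 2.5, 3.5, 4.5
```

No differential equation is solved anywhere: the energies n + 1/2 come entirely from the representation theory of the algebra of a and a†.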

Though I am not really a pessimist, I am a little wary of focussing exclusively on axiomatic methods to study field theories. These methods seem to be most successful when we understand aspects of theories by other means. I’d be happy to be proved wrong about this.

I would be surprised if localization works for theories with asymptotic freedom. Most of these theories seem rather special. I’m not an expert though.

Most of the theories where the Wightman axioms are proved (using constructive field theory) are super-renormalizable. That means a simple subtraction makes it possible to remove the cut-off. So in a way, constructive methods are most powerful for theories where we can already control renormalization. There is a claim that planar phi^3 in six dimensions is “constructible”, but I have not tried to check this. I think there is good reason to be optimistic in 1+1 dimensions where we have a lot of exact information; getting to more dimensions is a long way off.

I also don’t expect that the mass gap/confinement problem will first be understood via axiomatic or constructive field theory (though methods borrowed from these subjects, say lattice constructive field theory methods, might be useful).

But don’t get me wrong… I am a big fan of Streater and Wightman and many of Wightman’s papers.

Maybe I can offer some opinions (and even some facts):

A.J.: as there are no complex measures, YM doesn’t have a Euclidean measure. That does not, of course, mean that the Standard Model can’t be constructed mathematically.

Peter Orland: The people working on making the SM – as used to calculate predictions for experiments – mathematical are a set of measure zero. The problem is not that there are some people looking at SM physics “non-rigorously”. The problem is that there are no people doing it rigorously. It will certainly be hard to learn anything “non-perturbatively” if we haven’t even constructed the theory we want to know something about.

All: Constructive QFT, imho, is the effort to construct the SM – again, as used for predictions – in a mathematically sound manner. And it shouldn’t matter which methods will in the end lead to success (if that occurs). But at the moment mathematical physics doesn’t seem to be set up to achieve this goal, as it is divided into analysis people doing Schrödinger and maybe Dirac operators and algebraists having fun with TQFT. From my point of view it seems unlikely that either tradition alone will be able to achieve the above-mentioned goal, which is of course an extremely hard one.

Peter: I am curious about that idea you mentioned regarding “representation theory of certain infinite-dim groups”. Can you point me to any sources? Sorry if this bores other people in here.

Anyway, RIP Arthur Wightman and thanks for setting the goal.

deconstructed: I’m sorry, but I don’t quite understand what you mean by “as there are no complex measures, YM doesn’t have a Euclidean measure”. Can you clarify?

A.J.: The generating functional for YM is not real-valued, which is a condition of Minlos’ theorem, which in turn gives necessary and sufficient conditions for the existence of measures over locally convex spaces.

(Minlos’ theorem is the analog of Bochner’s theorem for probability measures. Check out Gelfand’s Generalized Functions, Volume IV, I think. Or Bourbaki’s Integration)

There are also no complex-valued measures on the real numbers.

Sorry that I was a little cryptic there, but I can already hear Grandmaster P booming: “There are plenty a places to discuss measures. Enough.”

deconstructed,

I don’t know much about technical issues with measures, but in the case of pure Yang-Mills I believe you should be able to understand the problem as one of showing that an appropriately constructed limit of the theory, regularized as some version of lattice gauge theory, exists and has the desired properties. I think the point Peter Orland is making is one that I agree with: if you don’t have a good physical understanding of the infrared behavior of these theories (and we don’t…), no analytical technique is going to solve the problem.

As an example of what I meant by using rep theory of infinite dim groups, consider the Wess-Zumino-Witten model, a 1+1 dim QFT. If you try and naively discretize that and control its continuum limit you will encounter all sorts of problems. On the other hand, the behavior of the theory is largely determined by knowing about the representation theory of a loop group.

deconstructed:

I have some sympathy with your views, but I am not convinced that your statement, “It certainly will be hard to learn anything “non-perturbatively”, if we don’t even have constructed the theory we want to know something about,” is entirely true.

I suppose it could be so, but precedent suggests not. There are some physical phenomena (in classical mechanics or statistical mechanics) where proving theorems really settled physical questions, so you could be right. The field theories we have solved, however, were first solved by less rigorous methods more familiar to theoretical physicists.

There is a long list of field-theory models (the Lee model, integrable theories, QED with monopoles) which were well understood (by which I mean some observables were calculated) without rigorous methods. Rigorous tools were applied, only afterwards, to a small subset of these models. The only field-theoretic model I can think of which was solved first by constructive methods was Nelson’s. Nelson introduced it at the start of the constructive field theory program (it’s a model of non-relativistic quantum particles, interacting with a relativistic scalar field).

I am a fan of constructive FT, because it gives a lot of insight (as in phi^4), but its successes seem to be for models we already had some control over.

First of all, I don’t think that we have to, or even should, start with lattice gauge theory. Don’t get me wrong: if you are able to continue the work of Bałaban and prove results, that would be a gigantic success. From a conceptual point of view, however, lattice theories have the distinct disadvantage that you have to redo the proof for every observable you investigate. I digress.

My point is that, from the outset, it should not matter where we start, as long as the starting point is mathematically sound.

The term “good physical understanding” is key here, imho. If it’s meant to be “knowledge gleaned from calculating Feynman diagrams”, I agree – we don’t have a lot of that.

But we do have Bałaban’s work, which all but proves the existence of the YM partition function in the lattice framework. And as such it does have a lot of information not only on the ultraviolet behaviour but also on the infrared behaviour of YM.

Anyway, we have to find an approximation of the SM where we can control the infrared behaviour. So we should try a few. Except for the ones we already know don’t work, obviously. Your WZW example is just one of many indicators showing that there are a very large number of bad approximations.

Also, there are a lot of interacting theories in 2d where groups don’t help you at all. Hence what I said before: we should throw everything we have into the ring, everything from algebra to analysis, groups to spaces.

Regarding the “rep theory of infinite dim groups” thing: were you being ironic when you said it worked for the harmonic oscillator in QM? If not, can you point me to a document I can read to understand this? I would like to understand the very basic idea you have.

Greetings!

To add to what Peter W. said about Wess-Zumino-Witten models…

Much the same can be said about other 1+1-dimensional theories. We know the S matrix and some off-shell information exactly. I think that most people working on these expect that these results will eventually lead to a rigorous definition. This is done without the benefit of a functional measure or a Lagrangian.

Peter Orland: Your post came, while I was answering Peter’s. I shall gather my wits and reply to yours asap.

deconstructed:

Yes, I know what Minlos’ theorem says. I gave a class on it once, for whatever that’s worth…

And I don’t think there’s anything wrong with talking about constructive field theory in a post memorializing Arthur Wightman. So if you’ll allow me to pester you a bit more… I still don’t quite know what you’re trying to convey to me with your comment, and I’d be happy to learn. In particular, in lattice SU(2) Yang-Mills, everything physically relevant looks real-valued to me. The Wilson action is real, and the Haar measure on gauge fields is real, so the lattice path integral measure is real. The observables should all be generated from characters of products of holonomies, and the characters of SU(2) are real valued. (Forgive the restriction to SU(2). I thought it best to get concrete.) So it looks to me like the relevant generating functional is also real-valued.

So what precisely about my comment are you objecting to?

As a side comment: I doubt the Standard Model can be constructed mathematically. The Higgs sector is going to cause trouble.

Well, I didn’t say it would happen that constructive theory is faster than any heuristic approach, Mr. Peter Orland. I’m just saying that it will get harder and harder to glean information about the complete theory without having the full theory. Do you contest that perturbative calculations in QCD – strongly interacting as it is – are getting harder by the loop?

But that’s not even the issue. Even if you could calculate more and more diagrams, we’re not even sure that our theory is approximated anymore by the stuff we calculate. (I think recent quark gluon plasma observations at the LHC indicate that the strong interactions do in fact cause deviations from the calculations, if you don’t get goose bumps from some minor conceptual issues as long as the “predictions” fit.)

Can you get predictions out of a theory that has no mathematical basis (yet)?

Hi deconstructed,

No, I can’t do any of the things you ask. I am just saying that QFT is a complicated problem and we need to view it from many different angles. Usually insight comes first, then theorems. Constructive FT is usually better for the latter, which I agree is important. Balaban’s stuff is great (though I can’t make the Polish “w” on my keyboard), but so are other ideas.

We can’t just be hammers (and hit nails) or screwdrivers (and provide OJ+vodka). We have to be able to do everything.

Regards,

P.O.

Since ‘rigorous’ vs ‘non-rigorous’ methods are being discussed, it seems the right place to ask the following:

How can we trust ‘non-rigorous’ methods that have little mathematical justification while dealing with QFTs that have no observational support? That is, I can see that all our non-rigorous QFT methods can be justified for the SM because it fits with experimental data. But, is there any justification to using the same methods for things like SuperYM?

What if, the SM is a very special case where the non-rigorous methods just happen to give the same answer as the hypothetical rigorous formulation of QFT?

I can certainly see why some fields move faster than others.

A.J.: You’ll have to excuse my very brief comment at the beginning. Our discrepancy shows how differently we think, even if we both seem to come from the mathematical physics community.

What I meant was that, starting from the classical Yang-Mills differential equations (or density), when you build a “time-zero algebra” in order to go to Euclidean space and quantize (Osterwalder-Schrader reconstruction), the “measure” you encounter will not from the outset be real-valued. At this stage you’re not even allowed to use gauge invariance, because that’s what you want to prove for your quantized theory. Of course, as soon as you have constructed the whole theory, you should be able to find a “section” through your solution space, aka predictions, where everything you’re interested in is real (although that doesn’t ensure the existence of the measure, as you know).

Of course, starting from an approximation which is already obviously supporting a measure will have certain advantages. But other approximations not supporting a measure might have other advantages.

While writing this it occurs to me that calculations for the LHC are probably done using lattice gauge theory. Maybe I should reevaluate my starting point.

KP: All your questions are excellent.

@KP: Which non-rigorous methods do you have in mind?

@AJ

For the sake of being concrete, how about path integrals? If I am not mistaken, it is not really clear that the “measure” on the space of field configurations even exists in a mathematically rigorous sense.

In any case, the Wick ‘rotation’ used to define the partition function is only valid in stationary spacetimes. So, we know at least one situation in which the path integral is ill-defined but QFT supposedly exists.

@deconstructed: Thank you for the comments. It’s educational to see other perspectives.

Regarding LHC, from what I recall, it’s a mix. The highest energy processes can be treated perturbatively, but as you get away from the beam collision point, you need numerical simulation to understand what’s going on.

Speaking of ‘rigor’ in QFT. I want to make a technical remark that may seem like a triviality, but I think is important for the perception of the mathematical foundations of QFT.

It is of course true that physicists often use heuristic mathematical manipulations in their calculations. And just as Peter Orland has pointed out, these calculations are replete with insights that are then routinely turned into completely rigorous proofs by mathematical physicists and mathematicians (it is also not excluded that the same person could wear more than one of these hats). So there is no particular lack of rigor in the mathematical treatment of QFT. And there are examples where I think rigorous methods do correct erroneous thinking that was guided by more naive heuristic methods.

What is true is that these rigorous proofs do not necessarily establish the exact results that we would want. Namely, the best physical predictions that we can get out of the Standard Model (though fully rigorous) are not numbers but rather formal power series in the theory’s coupling constants. It really doesn’t matter whether people use path integrals (whose mathematical foundations are “shaky”) to arrive at these answers, because by now there do exist fully rigorous (though somewhat different) methods that are known to give equivalent results.

What is missing is not rigor but ‘strictness’ (using ‘strict’ as the opposite of ‘formal’). By that, I mean that we are not currently able (with a few low dimensional exceptions) to replace the above mentioned formal power series by actual functions, whose asymptotics these series represent. And until we do, as already brought up by deconstructed and KP, we do not have reliable estimates on the errors that we get by truncating the asymptotic power series. That is, until then, we do not have reliable error bars around our theoretical predictions.
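The distinction between a formal series and a strict answer shows up already in a zero-dimensional toy (illustrative integral and coupling, chosen by me): the perturbation series for Z(g) = (2π)^(-1/2) ∫ exp(−x²/2 − g x⁴) dx has zero radius of convergence, and the best a truncation can do is an irreducible, exponentially small error.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def exact(g):
    # The 'strict' answer: the actual function of g
    val, _ = quad(lambda x: np.exp(-x ** 2 / 2 - g * x ** 4), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

def dfact(n):  # double factorial, with (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def partial_sum(g, N):
    # Z(g) ~ sum_n (-g)^n (4n-1)!! / n!  -- a divergent asymptotic series
    return sum((-g) ** n * dfact(4 * n - 1) / factorial(n) for n in range(N + 1))

g = 0.005
errs = [abs(partial_sum(g, N) - exact(g)) for N in range(25)]
best = int(np.argmin(errs))
print(best, errs[best])  # the error first shrinks, then blows up again
```

Truncating near the optimal order leaves an error that no further perturbative work can remove, which is exactly the “no reliable error bars” problem in miniature.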

This lack of ‘strictness’ is the real problem with the foundations of QFT. Unfortunately, I think it is rather hard to predict at the moment whether it will be solved by a satisfactory construction of field-theoretic path integrals or by some other means. I think it would pay not to be dogmatic on this point.

KP:

I think the best answer I can give is that it’s a good idea to use methods that have been tried in more than one experimentally tested physical model, that it’s safer to use methods which make sense in regularized approximations, and that it’s safest to use methods which have been shown to work in rigorous examples.

The path integral is one of these. It works in QED, QCD, the Standard Model, and probably dozens of low energy approximations to these, like the Skyrme model. It’s grounded in lattice path integrals which arise by repeatedly inserting resolutions of the identity into correlation functions, which is about as mathematically kosher as it gets. The continuum limit of these lattice approximations has been constructed rigorously in a number of low-dimensional examples, and in these examples, everything works pretty much as the physicists think it should. This gives me some confidence that what works for Standard Model calculations also works for SO(99) gauge fields with Dirac fermions in some weird representation.
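That resolution-of-the-identity construction can be carried out end to end in ordinary quantum mechanics (a minimal sketch with arbitrary grid and step parameters): each factor ⟨x|e^(−εH)|x′⟩ becomes a transfer matrix on a position grid, and its largest eigenvalue yields the ground-state energy.

```python
import numpy as np

# Harmonic oscillator H = p^2/2 + x^2/2, Euclidean time step eps.
# Symmetric Trotter split: <x|e^{-eps H}|x'> ~ e^{-eps V(x)/2} K(x,x') e^{-eps V(x')/2}
eps = 0.05
x = np.linspace(-6, 6, 400)
dx = x[1] - x[0]
X, Xp = np.meshgrid(x, x, indexing="ij")
K = np.exp(-(X - Xp) ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)  # free kernel
T = dx * K * np.exp(-eps * (X ** 2 + Xp ** 2) / 4)  # symmetric transfer matrix

# Largest eigenvalue lambda ~ e^{-eps * E0}:
E0 = -np.log(np.linalg.eigvalsh(T)[-1]) / eps
print(E0)  # approaches the exact ground-state energy 1/2 as eps -> 0
```

Multiplying many copies of T together is precisely the lattice path integral; the continuum (small-eps) limit here is completely under control, which is the kind of grounding the comment above appeals to.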

I am a little confused by this discussion. There are no people working on rigorous methods for the standard model (SM), because we do not expect that the SM can be rigorously defined (because of the Higgs sector and the U(1) gauge group). We believe that QCD can be rigorously defined, and we expect that chiral gauge theories can be defined as well. We think that in the case of QCD we already have a construction (as a limit, via euclidean lattice gauge theory), but of course there is no proof that the limit exists. I am not quite sure what the consensus is regarding chiral fermions.

There are a number of non-perturbative results in 4d that have been proven using physicist’s methods (not as many as we would like, obviously), for example the Seiberg-Witten result for the low energy effective action of N=2 SUSY YM, and proving these results is an obvious goal for more rigorous methods.

Dear Professor Woit,

The heyday of constructive QFT during the 1970s stopped short of 4 dimensions, with a general supposition of triviality for scalar fields.

The Clay institute problem looks like Jaffe’s attempt to revive it for Yang-Mills fields.

Unfortunately, even modified Wightman axioms (see, e.g., Chapter 10 of Bogoliubov, N. N., Logunov, A. A., Oksak, A. I., and Todorov, I. T., General Principles of Quantum Field Theory, Kluwer, 1990) are in serious conflict with the simplest cases of the Gupta-Bleuler theory of quantum electromagnetic fields, as well as with commonly used local renormalizable gauges (see, e.g., Strocchi, F., Selected Topics on the General Properties of Quantum Field Theory, World Scientific, 1993).

There was a vivid discussion among W. Heisenberg, P. Jordan, and W. Pauli of the corresponding “Volterra mathematics” for possible applications to functional Schrödinger operators.

This approach has been realized in my solution of Yang-Mills Millennium problem (the latest version “Mass gap in quantum energy spectrum of relativistic Yang-Mills fields”, arXiv:1205.3187).

The paper abstract:

A non-perturbative and mathematically rigorous quantum Yang-Mills theory on 4-dimensional Minkowski spacetime is set up in the framework of a complex nuclear Kree-Gelfand triple. It involves an infinite-dimensional symbolic calculus of operators with variational derivatives and a new kind of infinite-dimensional ellipticity.

In the temporal gauge and Schwinger first order formalism classical Yang-Mills equations become a semilinear hyperbolic system for which the general Cauchy problem (with no restriction at space infinity) is equivalent to one with a family of periodic initial data. Yang-Mills quartic self-interaction and the simplicity of a compact gauge Lie group imply that the energy spectrum of the anti-normal quantization of Yang-Mills energy functional of periodic initial data is a sequence of non-negative eigenvalues converging to infinity and, by caveat, has a mass gap at the spectral bottom. Furthermore, the energy spectrum (including the mass gap) is self-similar relative to an infrared cutoff: it is inversely proportional to the initial data period.

According to Wikipedia, “Since 2009, Alexander Dynin claims to have proved the Yang-Mills Millennium Problem. Nevertheless, the physics community seems to be turning a deaf ear to him, apparently because they feel incompetent to assess his unorthodox mathematical methods.”

Regards,

Alexander Dynin

Professor of Mathematics,

Ohio State University

Dear Professor Dynin:

Although I do not know whether or not Wightman functions are definable in your theory, I would like to note that there are serious mathematical difficulties in formulating gauge theories in axial-type (i.e., axial, light-cone and temporal) gauge in the Wightman-like formalism. For a review, please see

N. Nakanishi, Critical Review of the Theory of Quantum Electrodynamics, in T. Kinoshita (ed.), Quantum Electrodynamics (World Scientific, 1990).

@AJ

I agree with your view that using the tools we already have is a good idea. At the same time, I feel that finding a more rigorous formulation is also warranted.

@Thomas

Do you mean to say that if the SM is the correct theory excluding gravity (as the LHC results seem to indicate thus far), then we will be stuck with a ‘correct’ physical theory that cannot be made mathematically rigorous? Or am I misunderstanding this?

Thomas: Just because “we believe” – or rather, just because there are heuristic calculations suggesting – that the SM is not a mathematically well-defined theory, that doesn’t mean nobody should work on settling the question. I want to know whether it exists or not, and I hope I’m not alone in that wish.

P wrote: “I’m not sure about what precise definitions of “rigorous” would make mathematicians happy…”

The definition is pretty simple. In a “rigorous” approach to a subject, you start by laying out a set of axioms and rules for deduction. Then everything else you derive using those. Ideally you also show they’re consistent: that is, it’s impossible to derive a contradiction. In practice, you often try to show they’re relatively consistent: any contradiction would lead to a contradiction in some widely accepted set of axioms, like ZFC.

It’s easy to make small chunks of quantum field theory rigorous, just by precisely stating the rules used. The hard part is that quantum field theory as practiced by physicists uses many different bunches of rules, and it’s hard to precisely state them all, much less show they’re consistent or organize them into something elegant.

Nonetheless this has been successfully done in a bunch of cases, and there’s been a lot of progress now that more mathematicians are getting interested in quantum field theory.

@KP

Yes, but we know that gravity exists (and presumably dark matter).

@deconstructed

I think it is not correct to dismiss the existing work on triviality of $\phi^4$ and $U(1)$ gauge theory as mere heuristics. Having said that, I think it would still be valuable to provide a lattice definition of the complete standard model, and then show that this theory is indeed just QCD plus free fields.

Thomas,

I think what deconstructed means is that triviality of phi^4 in four dimensions is not proved. The papers of Froehlich and of Aizenman proved it is trivial in more than four dimensions, but not in exactly four.

Just a small correction to what Peter Orland said above. The methods of constructive QFT are not limited to super-renormalizable theories. For instance massive Gross-Neveu in 2d which is only asymptotically free in the UV has been given at least three different rigorous constructions: by Gawedzki-Kupiainen, Feldman-Magnen-Rivasseau-Seneor and finally more recently by Disertori-Rivasseau. What is needed is that running couplings remain small in the range of scales under consideration. Super-renormalizability is not essential.

Dr. Abdesselam,

Actually, what I said was that constructive methods appear to be limited to theories for which renormalization can already be controlled. I did not say only super-renormalizable theories could be understood this way. The Gross-Neveu model, for example, can be studied with 1/n expansions, which gives a very good picture of its behavior in any dimension (even in more than 2D).

Dr Orland,

Sure. “renormalization can already be controlled” means heuristically and of course this precedes the corresponding constructive result which proves that the renormalization of the model can be controlled rigorously and nonperturbatively. By the latter I mean “not simply in the sense of formal power series” which is weaker than what physicists might understand by nonperturbative, i.e., pertaining to strong coupling phenomena for instance.

Arthur Wightman seems also to have been deeply involved in early work on BPHZ renormalization, even if he apparently didn’t publish on it at the time (although he later co-edited a book on renormalization with G. Velo). Klaus Hepp’s fundamental paper contains a very substantial acknowledgement to Wightman.