Several months ago Erick Weinberg had told me that his recollections of the story of the calculation of the Yang-Mills beta function were different from David Politzer’s. Erick actually did independently do the beta function calculation (for the case with scalars). At the time we talked he thought he had gotten the sign right but the coefficient wrong; now he has checked it and says the coefficient is right. He has posted his thesis on the arXiv; equation 6.68 is the beta function. From the comments after this equation, you can see that he was aware that this meant that perturbation theory would break down in the infrared. Like ‘t Hooft, though, who also did this kind of calculation, he wasn’t aware of the significance of asymptotic freedom in the ultraviolet for explaining the SLAC deep-inelastic scattering results.

Fabien Besnard has a new blog (in French), which is quite interesting. His latest post is a report from a Paris conference celebrating the Einstein centenary. He’s shocked by the comments of string cosmologist Thibault Damour that Popper was wrong and that scientific theories don’t need to be falsifiable.

The New York Times has an article about the actress Danica McKellar and her work in mathematical physics. She was working with Lincoln Chayes while an undergraduate at UCLA. Lincoln and his then-wife Jennifer (also a mathematical physicist, now at Microsoft Research) were graduate students with me at Princeton. I have many happy memories of them and their impressive leather outfits, and our joint trips down to the punk-rock club City Gardens in Trenton.


### 19 Responses to Some Quick Links

stevep: That was much clearer and more illustrative! I agree that questions of falsifiability should not immediately throw out ad hoc hypotheses. And I fully agree that there are other criteria for throwing stuff out, as you illustrate nicely.

Since we are using illustrations, I too will take the opportunity to show how falsification can work. I was working in a group where we wanted to set up a model for a particular class of plasma processes used to make thin films. This class shows a hysteresis effect.

Earlier models assumed the hysteresis (!), so they could not be falsified. We set up a model from surface mass balances that showed this effect without assuming it. That model was widely accepted because of that (and because it fit nicely without ad hoc parameters). I saw it work, so that is one reason why I feel strongly that falsification can be a tool in the toolbox too.

2. stevep says:

Torbjorn: I can be less vague by talking about stuff I work with. Economic models often start by trying to characterize an individual’s or organization’s optimization problem. Then they solve for the agent’s optimal policy (say, an investment decision) as a function of some exogenous parameters (say, a set of prices and technical features of the input/output relationship).

There are plenty of potential points of empirical falsification in these situations–we could have the wrong objective function, the wrong exogenous description, or the agent may not optimize but do something else. All of these turn out, in general, to be hard to observe and test. Nevertheless, some of these models are useful and others are not, even when their falsifiability is held constant (say, at a very low level). When they are useful, they help us understand what is likely to be going on in a given problem and give us a start on figuring out what advice to give to someone in such a situation.
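A toy numerical version of that setup may make it concrete. All the functional forms and numbers below are invented for illustration, not taken from any particular model: an agent chooses an input level to maximize profit, and the optimal policy falls out of the first-order condition as a function of exogenous prices.

```python
def optimal_input(p_output, w_input, alpha=0.5):
    """Profit-maximizing input x for a toy technology output = x**alpha
    (0 < alpha < 1), with profit = p_output * x**alpha - w_input * x.
    The first-order condition alpha * p_output * x**(alpha - 1) = w_input
    gives x* = (alpha * p_output / w_input) ** (1 / (1 - alpha))."""
    return (alpha * p_output / w_input) ** (1 / (1 - alpha))

# With alpha = 0.5, output price 10, and input price 1, the first-order
# condition reads 5 / sqrt(x) = 1, so the optimal input is x* = 25.
print(optimal_input(10.0, 1.0))  # 25.0
```

Each of the falsification points just mentioned corresponds to a piece of this sketch: the objective (profit), the exogenous description (the prices and the exponent `alpha`), and the assumption that the agent actually optimizes.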

Since you prefer to think about throwing theories out, the bad models have some combination of a) definitions of the agent’s choice set and constraints that are hard to map intuitively to real-world problems, b) lots of technical detail that obscures the primary mechanism, c) strong assumptions about functional forms that are hard to characterize intuitively, d) omitted variables that are of equal or greater importance to those included (although abstraction is OK if done consciously and without fooling oneself). There’s probably some other stuff, too, but those are the main bad things to look for. Usually, you can trade some of these off against each other, too, i.e. stronger assumptions can cut out extraneous technical detail. But lots of models get weeded out at review (or published but ignored) based on these kinds of considerations.

stevep: I must be dense (or vacuum-headed 😉) since the details of your argument are vague to me. But it is interesting.

The problems are these: you seem to discuss philosophy; I prefer to discuss use. You are interested in how to make a theory useful; I try to find out what makes it useless, which IMO is what we are discussing and is much easier to find.

You discuss predictive formal theories that may have trouble being falsified. But their predictions are tested for falsification by large errors. The objection you make about small errors should go into the ‘slippery notion’ category.

You discuss nonpredictive formal theories, which are logical and mathematical theories and perhaps prototheories. The domains of logic/mathematics are tested when parts of them are used in theories about the real world. Prototheories should eventually be falsifiable.

You discuss a floor tiling model that I can’t agree with. It is the tiling model we are working on that is being tested, not arithmetic, which has been amply verified earlier. This is exactly a matter of trust, and why falsification is good.

“Sorry to be so long-winded.”

It’s OK as long as the ride is interesting.

4. stevep says:

Torbjorn: Perhaps I can clarify.

You said: “Information theory is a widely tested theory. It is used and verified in data communications, data compression, and other fields, sometimes directly on channel capacity and error rates. I can’t find the reference, but IIRC Shannon did verifications in his first paper. Shannon entropy corresponds closely to thermodynamic entropy which you may see as another field to falsify it in.”

I agree that the theory is widely used because it is useful–that was my premise. My point was that these “verifications” are not empirical tests in the Popperian sense, driven by observation. They are all done by the application of formal logic. The conclusions of communication theory follow deductively from the assumptions, with no room for contingent empirical tests. There is no conceivable set of observations of communications systems that could cast doubt on the correctness of communication theory. That’s not “slippery,” it’s the nature of deductive logic. If you think you’ve empirically falsified the theory, it must be because you inappropriately applied it.

The only “empirical” aspect to Shannon’s theory is that his assumptions about the abstract transmitters, channels, and the rest map directly and cleanly onto their real-world counterparts. The correctness of that mapping is intuitively obvious and no one bothers to “test” it–if we found that the information counts didn’t add up, we’d look for missing bit generators or absorbers, not worry that the theory is in danger of “falsification.” The intuitive correctness of that mapping is also why the theory is so useful.

You expressed puzzlement about how logical omniscience (or its lack) relates to falsifiability. My point about logical omniscience is that a theory can make a contribution by clearing up logical inconsistencies in our thinking, even if it has no direct empirical falsifiability. The proof that there is no greatest prime number is valuable because we don’t necessarily grasp it intuitively before we see the proof. Yet the theorem is not empirically falsifiable. Contradictions in logic are not empirical tests, because we can postulate any abstract system we like without it having any necessary correspondence to the world we live in.

You say: “I think you allude to Peter’s ‘slippery notion’ of falsifiability. This is indeed a problem. One could also say that the theory is falsified and that a new one is needed. If you redefine your categories or their application, I could say that it is a new theory already.”

No, I was not talking about modifying the theory (although that is an interesting set of issues). I am saying that if you count the tiles on your floor, discover that there are 5 rows and 4 columns, but end up with 22 tiles when you count them (not the 20 that arithmetic predicts), you are not going to modify your theory of arithmetic. Instead, you are going to postulate that extra tiles pop up when you’re not looking, or some other modification of the application situation–not the arithmetic theory that appears to be “falsified.”
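The tile example can be spelled out in a few lines (a trivial sketch using the numbers from the paragraph above): when the count and the prediction disagree, the blame falls on the application of the theory, not on arithmetic.

```python
rows, cols = 5, 4
predicted = rows * cols   # arithmetic predicts 20 tiles
observed = 22             # the anomalous count

# Faced with the mismatch, we revise the application of the theory
# (miscounting, tiles appearing when unobserved, a wrong mapping of
# "rows" and "columns" onto the floor) rather than arithmetic itself.
if observed != predicted:
    revise = "application (counting / mapping)"
else:
    revise = "nothing"
print(predicted, observed, revise)
```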

You say: “These are criteria out of a vacuum. Verification of theories can be done without them. c) is actually counterfactual; since you want general theories, they tend to be simple instead of consisting of several particular ones. (Due to Occam’s razor, if you wish.) Instead they may be, and usually are, complicated to use in the particular case.”

My head may bear some resemblance to a vacuum, and these criteria are indeed not the result of prolonged analysis. But there is a logic to them. I’m interested in what makes a theory useful.

If a theory is like arithmetic or communication theory or queuing theory–mathematically derived from a set of premises not subject to test–then it can’t be useful by ruling out contingent states of the world. Rather, it helps us because we can use it to figure out the non-obvious consequences of our premises. If we’re interested in real-world problems, then we need the theory to be clear about how to map the real world into those premises, and ideally it should be easy to figure out the conclusions of those premises.

Your point about Occam’s razor is a good one. Simple theories of great generality may be hard to use in specific situations, even though their parsimony increases the chances that they are predictive. There is thus a tradeoff in theory usefulness between these two considerations. But with theories whose predictivity is not in question, like arithmetic or communication theory, there is no real gain from Occam’s razor (unless simplicity makes a theory easier to remember).

Sorry to be so long-winded.

5. Alejandro Rivero says:

I am happy with E. Weinberg uploading his old papers. Some months ago, in his duty as PhysRev editor, he rejected one of my papers, and a couple of weeks later I became really disturbed: I discovered that in the rejection he was using a standard template… instead of suggesting pointers to his old work, which could have been of some value in improving mine!

What world do we live in, if the bureaucratic roles of senior researchers block them from guiding the younger ones?

stevep:

I have several problems with your commentary.

“One problem with falsification as a criterion is that it rules out something like Shannon’s communication theory. Shannon’s theory is the foundation of network engineering, but an empirical “test” of it is obviously ridiculous–it follows mathematically from assumptions and definitions about “sources,” “channels,” “coding,” etc.”

Information theory is a widely tested theory. It is used and verified in data communications, data compression, and other fields, sometimes directly on channel capacity and error rates. I can’t find the reference, but IIRC Shannon did verifications in his first paper. Shannon entropy corresponds closely to thermodynamic entropy which you may see as another field to falsify it in.
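For a concrete instance of the kind of direct check on channel capacity mentioned here, the Shannon–Hartley formula for an additive-white-Gaussian-noise channel can be evaluated in a couple of lines. The formula is standard; the bandwidth and SNR figures below are merely illustrative.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits/s,
    for an AWGN channel of bandwidth B and linear signal-to-noise
    ratio S/N."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-grade channel at 30 dB SNR (linear SNR = 1000):
c = shannon_capacity(3000.0, 1000.0)
print(c)  # roughly 29.9 kbit/s, in line with classic dial-up modem rates
```

Measured throughputs of real systems sitting just below this bound are the sort of indirect verification being pointed at here.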

“A big reason why they can be useful is because we are not logically omniscient–we do not instantly know all the deducible conclusions of the things we believe.”

I fail to see what this means for falsifiability.

“Some of these conclusions may be impossibility theorems, but they still are not empirically falsifiable.”

By observing a contradiction to the claimed impossibility.

“Communication theory tells us that certain things are impossible, but they are logically, not contingently, impossible; if you ever saw an event that seemed to violate the theory, you would have to go back and (assuming no errors in data) redefine how you applied the categories of the theory to the phenomena.”

I think you allude to Peter’s ‘slippery notion’ of falsifiability. This is indeed a problem. One could also say that the theory is falsified and that a new one is needed. If you redefine your categories or their application, I could say that it is a new theory already.

“A theory that is not empirically falsifiable must be judged by how much a) the entities it proposes to reason about are operationally definable, b) it includes the factors important to distinguishing situations, and c) it is easy to use in situations we care about.”

These are criteria out of a vacuum. Verification of theories can be done without them. c) is actually counterfactual; since you want general theories, they tend to be simple instead of consisting of several particular ones. (Due to Occam’s razor, if you wish.) Instead they may be, and usually are, complicated to use in the particular case.

7. stevep says:

One problem with falsification as a criterion is that it rules out something like Shannon’s communication theory. Shannon’s theory is the foundation of network engineering, but an empirical “test” of it is obviously ridiculous–it follows mathematically from assumptions and definitions about “sources,” “channels,” “coding,” etc. The nature of Shannon’s contribution was to draw out the surprising conclusions of mundane assumptions and definitions, not to make bold predictions about contingent facts. Communication theory establishes the basic logical constraints on signal transmission under different noise regimes, etc. (Warning: I am not an electrical engineer or an expert in communication theory–just an interested bystander.)

Falsifiability is a fine thing when you can get it, but lots of useful intellectual frameworks need not be falsifiable and can still say useful things about the world. A big reason why they can be useful is because we are not logically omniscient–we do not instantly know all the deducible conclusions of the things we believe. Some of these conclusions may be impossibility theorems, but they still are not empirically falsifiable. Communication theory tells us that certain things are impossible, but they are logically, not contingently, impossible; if you ever saw an event that seemed to violate the theory, you would have to go back and (assuming no errors in data) redefine how you applied the categories of the theory to the phenomena.

A theory that is not empirically falsifiable must be judged by how much a) the entities it proposes to reason about are operationally definable, b) it includes the factors important to distinguishing situations, and c) it is easy to use in situations we care about. There are probably other criteria, too, but I can’t think of them right now.

I am not qualified to judge whether string theory is on the road to becoming a good non-falsifiable theory like communication theory (or queuing theory, or accrual accounting), but the discussions I have read so far make me wonder. It doesn’t sound like the three criteria listed above are going that well. And certainly most of the people pursuing it have the ambition that it be empirically falsifiable, so even if it did turn out the way I’ve described, they wouldn’t be too happy.

Thank you Clark, it was educational.

“Merely to suggest that its problems can’t just be seen as simple falsification. From my limited knowledge I’d say it has more than enough problems to still be questionable though. Let’s just not buy into Popper to attack Superstrings.”

However, here I don’t agree. If falsification is useful at all, it should be used. Some of the anthropic principles that have been used fail that requirement, IIRC. But here I am quickly going off-topic, because I don’t know enough about the current subject to take it any further. I think Lubos Motl says similar things on his blog, though…

9. Clark Goble says:

Torbjorn, I think Kuhn ends up being the most problematic of the philosophers, if only because of the problem of “paradigm” having a stable meaning. (He acknowledges this problem in his later writings, but doesn’t completely fix it.) Still, I think the later neo-Kantians, as I think most take Kuhn to be, do recognize that the categories through which the world is presented to us are in part socially determined. That means that falsification is partially socially determined. Kuhn wouldn’t go as far as Feyerabend. But I think all of those people have some points, although one can debate how much of an impact it makes in practice. And I think Kuhn is right that this is because we have some dominant paradigms (or frameworks, if we are to adopt the more positivist approach that I think Quine still favors somewhat).

The problem is that Superstring theory is, as I understand it, attempting to be one of those frameworks. As such, discussions of falsification become rather difficult, especially when for the phenomena in question there isn’t a dominant system it is competing against. Instead there are other frameworks with at least as many problems as Superstring theory. Given that fact, I’m not sure how falsification even in practice makes much sense.

One might instead just ask for some novel predictions of something unexpected. Yet thus far they can’t offer this.

Please note I’m not suggesting falsification isn’t useful, although I think the Popper view of it is hard to buy into. Rather, falsification, testability, predictions and much else all work together. It just isn’t as simple as Popper presents it unless one is comparing a small theory against a dominant overarching one. Even there I think it can run into problems. (Experimental error, some other phenomena at work, etc.) So I think simplicity issues always enter in.

This isn’t to defend superstrings. (And I’m not enough of a theorist to be able to.) Merely to suggest that its problems can’t just be seen as simple falsification. From my limited knowledge I’d say it has more than enough problems to still be questionable though. Let’s just not buy into Popper to attack Superstrings.

10. $tringer says:

From: http://physicsmathforums.com/showthread.php?p=200#post200

The Inmates ($tring Theorist$) Are Running The Asylum

$tring theory has done far more damage to physics than just $tring theory itself. $tring theory’s central postulate is that there is nothing more to be asked, and that all government funding for all of science should thus go to $tring theory.

$tring theory has fostered a class of fundamentally dishonest, hand-waving, conjecturing, posing, preening, vogueing pretenders. Corruption allows them to make more money from lying than seeking the truth, so they have no incentive to do physics. The fashionista class has bled over into other fields–even the experimentalists raising millions upon millions to test $tring theory’s hoax–they’re in on the con too. They’re all in on the joke, and should you speak out against them, they laugh at you. They call you a crank when you question their ridiculous theories that, as someone pointed out here, have no laws, nor postulates, nor any predictions that can be tested. When you ask them to draw a cube in dimensions 8-10, they jeer, sneer, and put you on the blacklist so you’ll never be a peer. And everyone lives in fear of not being a peer, because peer review is how they further the untrue.

$tring Theory is about one thing–money. Book deals, government grants, TV shows–it’s a big-time tax-funded circus. A theory of nothing–an elite insider’s club for those smart enough to learn the rules of accepting and living the lie, but too stupid to ever think on their own. The worst have risen to the top. It’s not the first time in all of history, and it never lasts long.

I have dated many beautiful, elegant women, but none of them were subsidized by NSF nor the government nor student loans.

–caltechpostdoc

Gosh, sorry! Of course I mean Clark, not Michael!

I am not doing research anymore and foundational questions did not matter much at the time. But I am still curious, so here goes:

‘Undermine’ seems to be a vague term in English; I found both ‘weaken’ and ‘destroy’. I would agree with the former but not the latter. Which did Michael mean?

Feasible falsifiability seems to be a good tool to make us trust theories and debunk much junk or faith-based reasoning. There are a lot of problems, as Peter and Michael mention, but it is still doable.

Kuhn, Lakatos, Feyerabend and Quine criticize falsifiability. From a very short overview of the latter two I get the feeling they are confused philosophers with critiques that do not mean much if one wants to use falsifiability.

Kuhn’s and Lakatos’ criticisms are more social and practical, about the problems of not allowing ad hoc hypotheses. Enforcing falsifiability would diminish social influence on theories. And since we can’t enforce falsifiability on each and every statement in non-formal theories, ad hoc hypotheses would survive as long as they are needed.

In short, I agree with Peter, and can agree with Michael if he means the weaker version of his statement. Applied to string theory, it seems that most or all of the anthropic principles used on several occasions would be immediately out, to the benefit of the theory and the lessening of criticism, until the whole theory can be verified as more than a new part of QFT, or whatever the conclusion was on Cosmic Variance.

If anyone can explain more on the problems of falsifiability I would certainly appreciate that.

13. Clark Goble says:

Alas, I didn’t do grad work in superstring theory. (Although I did for a while consider doing work in spinors and Clifford algebras, which I guess would have led me towards loop theory, given my inclinations.)

As to whether Quine- or Kuhn-styled critiques apply, I suppose it would depend. I think both of them would argue that there are competing interpretations that would undermine falsification. That is, one community would say things have been falsified while the other would say they haven’t. The problem with string theory is that I don’t see that they’ve even *gotten* that far yet. But I can see why some would suggest this as a problem in the future.

14. Peter says:

Clark,

I think the kinds of critique of Popper that I assume you have in mind (Kuhn, Quine?) aren’t really relevant in this context. While falsifiability can sometimes be a slippery notion, up until now physicists haven’t had any trouble agreeing what it means in this particular case and that it is an important criterion for distinguishing whether a proposed idea about theoretical particle physics is vacuous or not. The only physicists challenging this now are ones whose pet theory is failing the test, and all evidence is that this is just because they don’t want to admit failure.

If you can identify how one of the standard problems with falsifiability is relevant to this particular case, that would be interesting.

15. Clark Goble says:

Surely critiques of Popper have been around long enough that no one should be surprised that many reject his overly simplistic views of scientific theory. I don’t see why someone would be shocked by this. It has been widely discussed for probably 30-40 years.

16. quantum says:
17. Carl Brannen says:

Re: [Peter Woit]I’m certainly not sure what is the exact relation between QFTs in curved spaces with Euclidean signature and with Minkowski signature, especially when there are spinors, but something interesting is going on. Even in flat space, free field QFT, if you try and directly formulate it in Minkowski space you run into trouble (the propagator is given by an integration contour that goes through poles) and you have to do something. Mathematically the simplest way of saying what you have to do is to formulate the theory in Euclidean space and analytically continue.[/Peter Woit]

The short way of describing the interesting thing that is going on is to note that QFT is equivalent to a quantum statistical mechanics for a space-time where time is imaginary (i.e. carries the same signature as the space coordinates) and cyclic. Quantum statistical mechanics is simply classical statistical mechanics for waves (which are not distinguishable) as opposed to particles.
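The equivalence alluded to here is the standard Euclidean-time identity (a textbook relation, not specific to the proposal that follows): Wick-rotating $t \to -i\tau$ turns the quantum partition function into a Euclidean path integral with imaginary time compactified on a circle of circumference $\beta = 1/k_B T$:

```latex
% Wick rotation t -> -i*tau: thermal equilibrium at temperature
% T = 1/beta corresponds to imaginary time living on a circle.
Z \;=\; \operatorname{Tr}\, e^{-\beta H}
  \;=\; \int_{\phi(\tau + \beta)\,=\,\phi(\tau)} \mathcal{D}\phi\;
        e^{-S_E[\phi]},
\qquad t \to -i\tau, \quad \tau \sim \tau + \beta .
```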

The problem, of course, is that time is not cyclic. However, if you start with a space-like metric (such as would be used as the path length $ds$ along the track of an observer or particle) and promote proper time from being a parameter to a coordinate, you will have a Euclidean space-[proper]time. If you assume that proper time is not classically observed as a coordinate because it is small and cyclic, you will have exactly what you need to establish a quantum statistical mechanics as the theory underlying quantum field theory.

Now all this has VERY IMPORTANT implications for particle theory. As currently written, elementary particles and fields are specifically designed to avoid any possibility that a non-Lorentz-compatible theory will slip through. This is partly due to the success of this technique, and partly due to the fact that Lorentz symmetry has not been experimentally disproved. If the gravitation people begin moving into Euclidean relativity, rather than Einstein’s relativity, the particle people will eventually have to follow.

I find it very gratifying that Stephen Hawking is moving into Euclidean gravitation. I find it even more gratifying that the particle physicists gave me a 3-year head start on figuring out what the Dirac equation looks like in a world where Lorentz symmetry is only accidental:
http://brannenworks.com/PHENO2005.pdf

Carl

[Apologies for pumping a personal theory on your blog. I believe that you allowed each of us one such post. It won’t happen again without your permission. Feel free to edit for length.]

18. I have posted a reply to Lubos Motl’s critique of loop quantum gravity on my weblog http://universalwatch.blogspot.com/

19. Anonymous says:

At the end, Fabien Besnard uses an expression I hadn’t heard before, which I think is a reference to this fable of La Fontaine:
http://www.lafontaine.net/lesFables/afficheFable.php?id=122

to let the prey go and chase the shadow
lâcher la proie pour l’ombre

or, as in this story, not the shadow but the reflection in a pond: the image or appearance being taken for the real goal

I wish he would correct his spelling of Karl Popper’s name