Defenders of certain failed speculative theories like to accuse those who point to their failure of being “Popperazzi”, relying on mistaken and naive notions about predictions and falsifiability due to Karl Popper. That’s never been the actual argument for failure, and two excellent pieces have just appeared that explain some of the real issues.
- Sabine Hossenfelder’s latest blog entry is Predictions are overrated, a critique of the naive view that you can evaluate a physical theory simply by the criterion “Does it make predictions?”. She goes over several important aspects of the underlying issues here, making clear this is a complex subject that resists people’s desire for a simple, easy-to-use criterion for evaluating a scientific theory.
- Over at Aeon, Jim Baggott writes about this under the headline How science fails, focusing on the life and work of philosopher of science Imre Lakatos. I wish I had been aware of the ideas of Lakatos when I wrote a chapter about the complexities of evaluating scientific success or failure in my book Not Even Wrong, since he was concerned with exactly the sorts of issues I was grappling with there.
One of the main ideas of Lakatos is that you should conceptualize the problem in terms of characterizing a research program as “progressive” or “degenerating”. As relevant new experimental and theoretical results come in, is the research program showing progress towards greater explanatory power or is it instead losing explanatory power, for instance by adding new complex features purely to avoid conflict with experiment? One way I like to think of this is that it’s hard to come up with an absolute measure of success of a research program, but you can more easily evaluate the derivative: is some new development positive or negative for the program?
I don’t think there’s any question but that supersymmetry, GUTs, and string theory are classic examples of degenerating research programs. In 1984-5 there was great hope for a certain idea about how to get a unified theory out of string theory (compactification on Calabi-Yaus), but everything we have learned since then has made this hypothesis one with less and less explanatory power.
The Lakatos framework has the feature that there is no absolute notion of failure. It always remains possible that the derivative will change: for instance, the LHC will find superpartners, or a simple compactification scheme that looks just like the real world will be found. The not-so-easy question to answer is when to give up on a degenerating research program. I think right now prominent string theorists are taking the attitude that it’s past time to give up work on the idea of string theory unification (and they already have), but not yet time to admit failure publicly (since, you never know, a miracle may happen…).
And now, for something completely different: If you want something more entertaining to read about particle physics, I highly recommend Tommaso Dorigo’s Anomaly! (see review here). The one problem with that book was that it stopped in the middle of the story (end of Tevatron Run 1). He now is making available some chapters (see here and here) he wrote that cover the later, Run 2, part of the story.
The whole question of what precisely a prediction is has had a very stimulating analysis in the philosophy of science under the concept of the “empirical content of a theory.” One hard problem here is the theory-dependence of our apparatus: to interpret a reading at the LHC, you need not only the particle physics but also the electronics, solid-state physics, etc. to understand the numbers. I enjoyed the book “The Structure and Dynamics of Theories” by Stegmüller (based on the work of Joseph Sneed), which makes sense of this.
Another good post is by Stacy McGaugh, Predictive Power in Science. He discusses varying levels of prediction/explanation, with the “gold standard” being a priori predictions because they can’t be fudged.
I simply can’t agree that “predictions” are overrated, unless one is so inclusive as to count a whole lot of obvious rubbish. Of course the mere ability to make some prediction doesn’t qualify as good science. But the inability to make any prediction is surely enough to disqualify.
“Explanatory power” is “elegance” with more letters. I doubt there’s a way to quantify the constituents of hypotheses and theories in any universally reliable sense, so as to allow us to compute the magical ratio of explanations to assumptions. Sure, some things obviously need to be slashed by Occam’s razor. Other things are not so cut-and-dried. Plenty of people have come up with ideas that lacked the desired economy, and were left thinking, rightly, that “there’s gotta be a better way”. But the idea did the job, at least for a time. That’s perfectly scientific.
Why can’t it be enough to “work sufficiently accurately to encompass known observations and predict new phenomena, such that it can be accepted provisionally?”
I’m not sure your suggestion “work sufficiently accurately to encompass known observations and predict new phenomena, such that it can be accepted provisionally” differs from what people are actually asking for. It sounds as if you want consistency with known facts plus non-trivial predictions, with the understanding that acceptance must be provisional, as in all other science.
Predictions serve as more than a test of theories, of course. We should not forget that science is done by humans and comes with a huge baggage of sociology. My otherwise extremely diplomatic PhD supervisor had a remark about people who “can explain everything and predict nothing”. Non-trivial predictions that get confirmed demonstrate the hard way that somebody has understood something about the way Nature works.
Thanks for the link. Was about to mention Stacy’s post, but I see someone already did!
Sabine seems to embrace only explanatory power, discarding predictive power as a criterion of good science. But there is always the problem of how to define a good explanation. She suggests a good explanation should have fewer assumptions and accommodate more data. This is reasonable, but it is not enough. For example, to certain people the statement “God created everything” seems to be a good explanation, and an extremely simple one. The problem with this “explanation” is that the statement itself has no predictive power at all. So both simple, non-ad hoc explanatory power and novel successful predictions are necessary to define good science.
Klavs: Sure, as long as we can get everyone to agree on what “non-trivial” means. Generally, that’s in the same league as “porn”, i.e., you know it when you see it. Except when you don’t. Obviously it’s damn hard to nail down. I tend to like stock, anodyne “definitions” of the sort I quoted above simply because there’s hope they invite enough consensus to get people off the subject of definitions and on to the work. I wouldn’t claim they’re terribly useful as a guide, but as soon as one tries to get more particular there’s a whole lot to debate, and little to show for it afterward.
But for heaven’s sake, let’s not trivialize the importance of a prediction…at least of the sort we know to be great when we see it.
We don’t all need to agree on a definition of the (non-)trivial nature of predictions to appreciate them.
Just like we don’t need to agree on how precisely to define dirty clothes before we agree that laundry is a good invention.
I think any physicist should at a minimum have read Alan F. Chalmers’ What Is This Thing Called Science? (4th edition).
It is also interesting to note that physics textbooks in the US were rewritten after the Second World War and the Sputnik crisis to target engineers, removing all the ‘too philosophical’ chapters. This is very nicely described in David Kaiser’s article “Turning physicists into quantum mechanics,” Physics World (May 2007): 28-33.
Like progressive and degenerating research programs, there are progressive and degenerating comment threads, and this one is degenerating in the usual way. If you don’t have something new and interesting to say that is specifically about the material in the posting, please resist temptation.
Many thanks for citing my release of additional material from Anomaly!
Concerning Sabine’s defense of theories that make failed predictions, I will repeat here what I commented on Twitter about it. Since she has been bashing supersymmetry theorists for quite some time now, precisely because of their failure to make falsifiable predictions and their undeterred insistence on the possible veracity of their theory (along with attacks on new accelerator projects), I noted that her latest entry seemed inconsistent. But consistency, as they say, requires you to be as ignorant as you were a year ago…
Since Sabine hasn’t been writing editorials for the New York Times arguing for shutting down the future of what I and my colleagues do, I can afford to be more charitable. I think her criticism was of the naive idea that you can easily tell good from bad theory by use of the identification “makes predictions=good theory”. Unfortunately it’s not so easy, and that’s one reason I decided to write a whole book about the problem of evaluating SUSY/GUTs/string theory, instead of a paragraph saying “no predictions” (lots of people never read the book, just a review or two, and so are convinced that the content of that book is nothing but that kind of paragraph).
In the SUSY case, many of the prominent theorists involved decided pre-LHC to stick their necks out and claim to have a specific prediction falsifiable at the LHC (weak scale SUSY). This was quite unusual. Thanks to you and your colleagues, science progressed, with heads chopped off and their previous owners reduced to the behavior of proverbial chickens with a similar experience. This hasn’t stopped some of them from continuing to award each other $3 million prizes, but has very seriously damaged their credibility and influence. So, yes, it would have been a good idea for Sabine to acknowledge that sometimes the naive model does work (and we desperately need more of that…)
Regarding “I don’t think there’s any question but that supersymmetry, GUTs, and string theory are classic examples of degenerating research programs. ”
Do you think the attempt to use supersymmetric particles to explain the supposedly anomalous events recorded by the ANITA experiment is legitimate, or simply an attempt to resuscitate supersymmetry?
Popularly written about here : https://www.livescience.com/63692-standard-model-broken-supersymmetry-new-physics.html
Referring to this preprint: https://arxiv.org/pdf/1809.09615.pdf
I would like your thoughts Prof. Woit, as it is beyond my ability to judge.
The great thing about SUSY is that you can explain any weird event or two using it, but no one takes this kind of activity seriously. Those 2018 media claims were bad enough, but the coverage of ANITA has now completely jumped the shark: the NY Post today has this
“In a scenario straight out of “The Twilight Zone,” a group of NASA scientists working on an experiment in Antarctica have detected evidence of a parallel universe — where the rules of physics are the opposite of our own, according to a report.”
There doesn’t appear to be anything new here, I can’t figure out what idiot told the press about “evidence for a parallel universe”. If I had any association with ANITA, I’d be suing the Post for defamation.
This kind of nonsense really should just be ignored, I’m only commenting on this because one needs a little bit of comic relief these days.
There’s a CNET story (“No, NASA didn’t find evidence of a parallel universe where time runs backward”) that suggests the NY Post cribbed from the un-paywalled portion of a New Scientist story from April:
My guess is that PI has a pretty well-funded PR staff.
Sabine has admirably defeated a straw man without even giving the standard lines from the likes of Paul Thagard (“falsifiability provides no criterion for rejecting astrology as pseudoscience”) that usually accompany “disproofs of Popper.” She doesn’t seem to have read Lakatos carefully, and possibly not even Popper. Arguments defending explanatory power over predictive success have all the problems that led Popper to see a clear difference between the predictive theory of Einstein and the explanatory-only theories of Freud, Adler, and Marx. Popper saw making correct predictions as possibly more of a vice than a virtue; it is very wrong to pin predictive success on Popper. Astrology makes many true predictions, but they are not bold. Popper admired bold predictions (falsifiable on that basis, not merely trivially falsifiable) for all the reasons that philosophical Bayesians do; without the theory the prediction should be utterly surprising, with the theory, less so. Eddington’s experiment was his exemplar for this. Of course science seeks good explanations, but with no element of refutability, theory choice based on explanatory power descends into pure relativism.
I note that the price of “Anomaly!” is no longer one. Looking forward to reading it.