Do We Need to Change the Definition of Science?

Media hype about how the LHC is going to test string theory continues: see Will String Theory Be Proven and here:

String theory has come under attack because some say it can never be tested; the strings are supposed to be smaller than any particle ever detected, after all. But Arkani-Hamed says the Large Hadron Collider could lead to the direct observation of strings, or at least indirect evidence of their existence.

A recent New York Times article ends with another Arkani-Hamed quote about what to expect at the LHC:

He pointed out that because of the dice-throwing nature of quantum physics, there was some probability of almost anything happening. There is some minuscule probability, he said, “the Large Hadron Collider might make dragons that might eat us up.”

Obviously I’m being unfair in putting these two quotes together, but they both raise a basic question about the philosophy of science. When can we legitimately say that a theory is testable and makes a scientific prediction? The most straightforward examples of scientific predictions are cases where we have high confidence that a certain experimental result has to happen if a theory is right: such a theory satisfies Popper’s falsifiability criterion. But many theoretical ideas are not so tightly constrained, being compatible with a range of possibilities. This range generally comes with some notion of probability: certain experimental results are more likely to come out of the given theory, others less likely. This may allow you to gain confidence in a theory even if it is not falsifiable, by seeing the things the theory says are likely and not seeing the things it says are unlikely. The problem with the idea that the LHC is going to test string theory by seeing strings is that, according to the standard framework of string theory, this is just very unlikely. Saying that an experiment is going to test your theory when it is extremely unlikely to provide any evidence for or against it is highly misleading. You’re always free to say “this experiment is unlikely to test my theory, but who knows, I may get incredibly lucky and something unexpected will come out of it that will vindicate me”. But that’s not really a “test” of your theory, that’s wishful thinking.

There’s a new article in New Scientist closely related to this, by Robert Matthews, entitled Do we need to change the definition of science? It’s about claims being made that multiverse studies show that we need to re-examine conventional ideas about what is science and what isn’t. I’m quoted saying the sort of thing that you might expect:

I never would have believed that serious scientists would consider making the kinds of pseudoscientific claims now being made…

an outrageous way of refusing to admit failure…

The basic problem with the multiverse is not only that it makes no falsifiable predictions, but that all proposals for extracting predictions from it involve massive amounts of wishful thinking.

Max Tegmark argues against a straw man:

Some people say that the multiverse concept isn’t falsifiable because it’s unobservable – but that’s a fallacy

noting that just because some implications of a theory aren’t directly observable doesn’t mean the theory is untestable. If a theory passes many convincing tests involving things we can observe, and the theoretical structure is tight enough, then we have good evidence about what is likely to be going on with phenomena we can’t observe. This is certainly true: if the string theory landscape made lots of testable predictions so that we had good reason to believe in it, and the same structure implied a multiverse, that would be good reason to believe in the multiverse. The problem is that the landscape makes no predictions and we have no reason to believe in it. It’s not a real, testable scientific theory, but rather an untestable endpoint of a failed theory. As such it implies nothing one way or another about the existence of a multiverse.

Matthews quotes various people arguing for a “Bayesian” view of science, that what is going on is that experimental observations probabilistically provide evidence for and against theories, with the falsifiability case of probability zero or one not usually occurring. This may be a good way of thinking about how science actually works. But by this criterion, string theory unification and the multiverse remain pseudo-scientific, as no one has been able to come up with proposed experimental tests that have a significant chance of providing such evidence for or against these theories.
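
In Bayesian terms, here is a toy sketch of how such gradual evidence accumulation works (the likelihood ratios below are made-up numbers, purely for illustration):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Evidence shifts belief gradually, instead of jumping to probability 0 or 1
# as in strict falsification.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

odds = 1.0  # start undecided: P(theory) = 0.5
for K in [3.0, 3.0, 0.5, 3.0]:  # three favorable results, one unfavorable
    odds = update_odds(odds, K)
print(odds, odds / (1 + odds))  # odds 13.5, i.e. P(theory) ~ 0.93
```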

30 Responses to Do We Need to Change the Definition of Science?

  1. “Matthews quotes various people arguing for a “Bayesian” view of science, that what is going on is that experimental observations probabilistically provide evidence for and against theories, with the falsifiability case of probability zero or one not usually occurring.”

    The big problem with this is called the “Paradox of the Ravens” and is due to the philosopher Hempel. If my hypothesis is “All Ravens are Black”, then whenever I see something that isn’t black and isn’t a raven, whatever it may be, whether a nugget of gold or a horse, that has counted as an “experiment” which “confirms” my hypothesis.

    People have been trying to change the definition of science throughout the 20th century.

  2. anon. says:

    “This is certainly true: if the string theory landscape made lots of testable predictions so that we had good reason to believe in it, and the same structure implied a multiverse, that would be good reason to believe in the multiverse.” – PW

    Even a theory which makes tested predictions isn’t necessarily the truth, because there might be another theory which makes all the same predictions plus more. E.g., Ptolemy’s excessively complex and fiddled epicycle theory of the Earth-centred universe made many tested predictions about planetary positions, but belief in it led to the censorship of an even better theory of reality.

    Hence, I’d be suspicious of whether the multiverse is the best theory – even if it did have a long list of tested predictions – because there might be some undiscovered alternative theory which is even better. Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools. Mixing beliefs with science quickly makes the fundamental revision of theories a complete heresy. Scientists shouldn’t begin believing that theories are religious creeds.

  3. Icecycle says:

    Not being a scientist I really don’t see the need to change the definition of science; however, much of our current knowledge seems to be based on faith.
    (I know, a swear word in the first paragraph.)
    Why don’t we have a little funding for the fanatics out there; you know who you are; and just step into the fringe on occasion.
    If there is a multiverse (for instance) it is a black box and has to be investigated as a black box; we would need philosophy to even look at it.
    But philosophers (generally) don’t know physics (just look at Bertrand Russell’s explanation of relativity; I felt really bad for him when I found out he was a mathematician.)

    For myself; not being a scientist; I make myself believe a concept absolutely. Then, finding all the proof that fits, I make myself believe the same concept is wrong and find all the proof that makes it fail.
    Because; like everyone else; I tend to get too much of my personal beliefs into my world view.

    As a computer programmer I can’t get away with that nonsense.

  4. St. George says:

    Is the probability of getting a fire-breathing dragon out of the LHC more or less than that of directly observing a string?

  5. Peter Erwin says:

    The big problem with this is called the “Paradox of the Ravens” and is due to the philosopher Hempel. If my hypothesis is “All Ravens are Black”, then whenever I see something that isn’t black and isn’t a raven, whatever it may be, whether a nugget of gold or a horse, that has counted as an “experiment” which “confirms” my hypothesis.

    Why is this relevant? No real scientist, Bayesian or not, operates that way.

    A reasonable Bayesian would argue that the probability of seeing a non-black nugget of gold is independent of the hypotheses “All ravens are black” and “Not all ravens are black” [or hypotheses like “No ravens are black”, “Half of all ravens are black”, etc.]. So actually seeing a non-black nugget of gold does not change the prior probabilities for any of the relevant hypotheses.

    Someone taking a non-Bayesian approach (e.g., a naive Popperian, or someone who prefers a “frequentist” approach to probabilities and hypothesis testing) would operate in essentially the same way: scientific hypotheses about the color of ravens make no predictions about the color of gold nuggets, and so observations of nugget colors do not test the raven hypotheses.
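
    A minimal sketch of the update being described here (the hypotheses, priors, and likelihoods are made-up numbers, purely for illustration):

    ```python
    # Bayesian update: posterior is proportional to likelihood times prior.
    def update(priors, likelihoods):
        unnorm = {h: likelihoods[h] * p for h, p in priors.items()}
        total = sum(unnorm.values())
        return {h: v / total for h, v in unnorm.items()}

    # Illustrative priors for the two raven hypotheses.
    priors = {"all ravens black": 0.7, "not all ravens black": 0.3}

    # A gold nugget is equally probable under both hypotheses, so the
    # posterior equals the prior: the observation confirms nothing.
    print(update(priors, {"all ravens black": 0.01, "not all ravens black": 0.01}))

    # A non-black raven is impossible if all ravens are black, so all
    # posterior weight shifts to the competing hypothesis.
    print(update(priors, {"all ravens black": 0.0, "not all ravens black": 0.05}))
    ```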

  6. Peter Erwin says:

    anon. said:
    Popper’s argument was that scientific theories can never be proved, only falsified. If theories can’t be proved, you shouldn’t believe in them except as useful calculational tools.

    Ironically, that’s not unlike what some Jesuit astronomers suggested to Galileo during his trial: go ahead and use the Copernican model if it makes good predictions, but don’t claim that it actually describes reality in any fashion — it’s just a useful calculational tool!

    The point about scientists “believing” in theories is not that they accept them as religious dogma. It’s that they’re making positive statements about which theories seem to be better descriptions of reality, and about the relative amount of observational and experimental support for them.

    The problem with Popper’s argument is not that it’s wrong per se, it’s just that it’s incomplete. It has no way of distinguishing between, say, the heliocentric model of the Solar System (which has successfully passed an enormous array of increasingly stringent tests) and some preliminary hypothesis about the spectrum of density fluctuations in the early universe (which has been tested by the WMAP data, but not by anything else yet). All Popper allows is a passing grade for both (“not yet falsified”). In practice, however, scientists do accord more “belief” to the first than they do to the second. That, I think, is what the Bayesian argument is about. (It’s also, as Peter Woit notes, about how to accommodate tests that provide statistical limits, rather than idealized “confirm or deny” results.)

  7. Zathras says:

    Peter Erwin,

    A reasonable Bayesian could also say that seeing the nugget of gold acts as “confirmation” of the raven hypothesis, but only by raising the probability of the proposition’s truth an infinitesimal amount e.

    Come to think of it, e might be the same order of magnitude as the probability of confirming string theory.

  8. weichi says:

    “All Popper allows is a passing grade for both (“not yet falsified”).”

    It’s been a while since I read Popper, but I don’t think this is correct. He certainly acknowledges that theories can be submitted to more or less severe tests, and I thought that he states that the more severe the tests, the more we should trust the theory (or something along these lines). Am I misremembering this?

  9. Joseph Triscari says:

    Unfortunately I cannot read the whole article because I don’t have a subscription to New Scientist.

    I wonder if the relevant point about a move to a Bayesian philosophy is not the way evidence is aggregated but the fact that in a Bayesian philosophy the priors are admittedly subjective. In my understanding, this is the central point of the Bayesian view. The opposing view – which I think is called Frequentist – requires all inputs to a model to be probabilities that can conceivably be measured.

    A Bayesian approach is fine for reasoning under uncertainty because many times, priors *are* subjective and there’s nothing to be done. On the other hand, openly subjective priors are a facet of the Bayesian philosophy that can be abused in practice by shifting inputs to fit data (and declaring that it’s OK because, after all, they’re subjective).

    I am curious to know in what way a Bayesian philosophy of probability is being applied to justify ST.

  10. Gil Kalai says:

    Joseph, in the Bayesian approach, initial probabilities are subjective, but with more and more evidence the emerging scientific conclusion will be the same regardless of the initial probabilities. In practice, the interpretation of new evidence, which is required for updating your initial subjective probabilities, is not entirely objective. Still, the Bayesian approach looks to me like a more realistic description than the Popperian approach of how science is practiced, and both approaches have problems.
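
    A toy numerical illustration of this washing-out of priors (the Beta-Binomial model and all numbers here are illustrative choices, not from the comment):

    ```python
    # Two agents hold very different Beta(a, b) priors about a coin's bias.
    # After updating on the same data, their posterior means converge.
    def posterior_mean(a, b, heads, tails):
        # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails) posterior.
        return (a + heads) / (a + b + heads + tails)

    heads, tails = 620, 380  # shared evidence: 1000 tosses
    print(posterior_mean(1, 1, heads, tails))   # flat prior:            ~0.620
    print(posterior_mean(50, 5, heads, tails))  # strongly biased prior: ~0.635

    # With ten times the evidence, the disagreement shrinks about tenfold.
    print(posterior_mean(1, 1, 6200, 3800))     # ~0.6200
    print(posterior_mean(50, 5, 6200, 3800))    # ~0.6216
    ```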

  11. Peter Erwin says:

    Joseph Triscari said:
    I wonder if the relevant point about a move to a Bayesian philosophy is not the way evidence is aggregated but the fact that in a Bayesian philosophy the priors are admittedly subjective. In my understanding, this is the central point of a Bayesian. The opposing view – which I think is called Frequentist – requires all inputs to a model to be probabilities that can conceivably be measured.

    My impression is that prior probabilities are, if anything, better viewed as the Achilles heel of the Bayesian approach: there’s no clear, obvious way to decide on what the prior probabilities should be, and this irks a number of people. The usual response by Bayesians is to point out that the choice of the prior ceases to matter once enough relevant observations have been used to update the probabilities. This amounts to admitting that arbitrary or subjective priors are a problem (not an advantage), while arguing that in most practical cases it doesn’t matter too much.

    The real difference between the Bayesian and Frequentist approaches is in the interpretation of “probability” and whether it makes sense to say things like “The probability of rain tomorrow is 90%.” See, for example, the posting and comments here.

    (There are also practical differences, in that a Bayesian approach arguably provides a more general and direct way of constructing tests of hypotheses, without requiring the assumptions — e.g., a Gaussian distribution for errors — that underlie most traditional frequentist hypothesis tests.)

  12. Peter Woit says:

    Joseph,

    The article was more about the multiverse than string theory. The discussion of the problems of falsification and the advantages of Bayesian ideas seemed to me to be a red herring. There just aren’t any conventional scientific tests of string theory or multiverse ideas. So arguing about the philosophy of science is irrelevant, unless you are trying to abandon the conventional understanding of what science is. Doing this to avoid confronting the failure of these ideas seems to me to be a big mistake.

  13. Joseph Triscari says:

    I agree with you, Peter, that it’s a mistake not to confront the failure of an unfalsifiable theory. I didn’t mean to invite an exploration of the differences between Bayesian and Frequentist theories of probability.

    The point I was trying to make (and I see I made it poorly) was that if one doesn’t wish to confront the failure of a theory, one might try to legitimize the theory using an established philosophy that acknowledges and sometimes encourages subjectivity – such as a Bayesian philosophy of probability.

  14. davetweed says:

    For completeness, I’ll point out that Peter Erwin’s point about subjectivity being a philosophical problem primarily applies in those cases where one is obtaining more than enough observations to sharply determine the posterior distribution, so that the question “why did you use a personal prior if it doesn’t actually matter?” comes up. This is generally the case in hard science (although even then…). The advantage of Bayesian model evaluation (to some extent in science, and more widely in technology) is strongest when you have relatively few relevant observations (e.g., due to medical ethics, response time constraints, etc.), so that the prior can make observations which would be inconclusive under frequentist tests (which often hide a universally held prior in them) more discriminatory when they coincide with various parts of the prior distribution. The personal (what’s called “subjective”, although I dislike that term because AFAIK you can’t have a subjective idea that’s wrong, whereas personal ideas can be wrong) nature of the prior is arguably not a problem in this case, as long as it is made explicit. It’s always struck me as strange that Bayesianism is widely touted for dealing with uncertainty (which I always think of in the sense of randomly inaccurate sensors) rather than for dealing with small numbers of observations.
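
    A small sketch of that small-sample point (the model and numbers are hypothetical): with only three observations an informative prior dominates the answer, while with hundreds it barely matters.

    ```python
    # Posterior mean of a success rate under a Beta(a, b) prior,
    # after observing k successes in n trials.
    def beta_posterior_mean(a, b, k, n):
        return (a + k) / (a + b + n)

    k, n = 3, 3  # e.g., a treatment that worked in 3 of 3 patients
    print(k / n)                            # naive frequentist estimate: 1.0
    print(beta_posterior_mean(1, 1, k, n))  # flat prior:       0.8
    print(beta_posterior_mean(2, 8, k, n))  # skeptical prior: ~0.385

    # With 300 successes in 300 trials, the skeptical prior is overwhelmed.
    print(beta_posterior_mean(2, 8, 300, 300))  # ~0.974
    ```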

  15. Peter Shor says:

    Most scientists have never really paid much attention to the definition of science anyway, have they?

  16. Pingback: Do We Need to Change the Definition of Science? : Mormon Metaphysics

  17. Peter Woit says:

    Peter Shor,

    I find it very strange to be involved with physicists in discussing what’s science and what isn’t. It always seemed to me that such discussions were for philosophers or for softer sciences (e.g., is economics a science…), with physics embodying the extreme of a hard science, where everybody is on the same page as to what counts as a testable prediction.

    Then string theory and the landscape came around….

  18. Peter Erwin says:

    I find it very strange to be involved with physicists in discussing what’s science and what isn’t. It always seemed to me that such discussions were for philosophers or for softer sciences…

    Might one argue that previous debates about “interpretations” of quantum mechanics, what “collapse of the wavefunction” actually means or entails, etc., had at least some “philosophy of science” flavor? That is, physicists ended up disagreeing about whether investigating interpretations was scientifically useful, or whether “interpretation” was even a part of science, and some of this did involve philosophical issues about science.

    (I don’t mean that this was the same kind or order of disagreement that may be happening around the landscape — no one, so far as I know, was suggesting that we didn’t need testable predictions or experimental results. Just that this isn’t the first time that physics has been involved in “philosophy of science” territory.)

  19. Peter Shor says:

    I have a sort of ambivalent view of the previous debates on the interpretation of quantum mechanics. On the one hand, as Feynman warned graduate students, most people who started thinking about the interpretation of quantum mechanics and stopped worrying about more mainstream physics never got anywhere, and eventually derailed their careers. On the other hand, thinking about the interpretation of quantum mechanics led David Deutsch to think about quantum computing, and from there to the first quantum algorithms.

    What, if any, lessons about string theory we should take from this I leave to other readers.

  20. Michael says:

    Peter,

    just a little reminder: You are an insufferable fool who shamelessly promotes his anti-science agenda for personal benefit. Shame on you and go to hell!

  21. Chris W. says:

    Philosophical concerns have always been at least a subtext in physics. In fact, I would argue that physics really amounts to an effort to confront what were originally philosophical problems with detailed observation, in such a way that we can actually learn something, i.e., discover that some of our preconceptions are wrong. If this sounds strange to most scientists, that’s because they have so little interest in the history of science, and tune much of it out.

    The great physicists of the 20th century were all interested in philosophical issues. Efforts to confront current problems in fundamental physics are hamstrung by naive empiricism just as much as by unhinged elaboration of formalism and metaphysics masquerading as physics.

    Even Feynman was interested in the philosophical preconceptions of physics and its practice, notwithstanding the fact that he considered overt preoccupation with philosophy to be an early sign of senility.

  22. Peter Woit says:

    My point about physicists and the philosophy of science was restricted to the specific question of the so-called “demarcation problem”. There have always been interesting and real questions in the philosophy of physics but, until recently, the question of whether particle theorists were doing science or pseudo-science was not one that ever came up. You just didn’t see leading figures in the field publicly making bogus claims about what it means to test a scientific theory.

  23. Clark says:

    I think there are two kinds of philosophy of science – prescriptive and descriptive. The prescriptive folks are effectively telling scientists how they ought to do science. (I think Popper partially falls into that category.) Then there are the descriptive folks who suggest we merely look at what the scientists are doing and excluding to understand what science is. Of course, in practice most do a little of both.

    The problem is that Popper was hardly the last word in philosophy of science. A lot of people have grave difficulties with his views. So it’s odd that so many scientists – especially physicists – seem to take Popper as if he were telling it like it was. There’s been quite a few decades of thought on things as well as arguments and counterarguments.

    Personally, I find the demarcation problem largely irrelevant. I don’t think there’s any way to tell what is or isn’t a science beyond looking at what the community of scientists excludes or includes. And that’s good enough for me.

    The string issue is interesting since it’s one of the rare cases where agreement breaks down. Given that, turning to the philosophers can be helpful (although I don’t think they’ll ultimately be able to resolve the problem either). I suspect what will happen is that research will continue, and that if string theory doesn’t make more practical progress scientists will stop working on it. A few decades after it’s ceased to be a concern to any but a few, scientists may start to think of it as a pseudoscience. But who knows. They may not. It may end up being viewed as science, but as an example of a dead end and something one shouldn’t emulate.

    What I do think is true though is that scientists – especially physicists – would benefit from doing a bit more reading in philosophy, especially philosophical histories of various ideas in physics. So, for instance, read a little Sklar, Fine, and so forth. Get a Philosophy of Science reader from Amazon. (There are several good ones.)

    Regardless of what you think of his physics, I think Lee Smolin made a very well thought out appeal a couple of years ago for physicists to engage philosophy more. Normally it doesn’t matter. But I think especially in theoretical physics it can be quite helpful. Despite what Feynman said about physicists and birds.

  24. Arun says:

    What is scientific and not scientific is not an eternal classification. The atomic theories of Democritus and Kanada, while containing a correct insight and pertaining to reality, only became scientific when the technological means to address them were developed.

    I think the history of science will show that many ideas were kept in limbo for a time because their development was not feasible at the time the idea was generated. (I hesitate to call such ideas theories.) One can easily imagine a plausible alternate history, where General Relativity was developed much earlier than any capability to test it.

  25. Zathras says:

    Those are good points Arun.

    In fact, under the lax standards of string theorists, one would consider the mathematical theories of Riemann, Lobachevsky, etc as having “developed [General Relativity] much earlier than any capability to test it.”

  26. Christine says:

    PhD means Doctor of Philosophy; we should not forget our roots. The aim of physics is to advance its frontier into metaphysics, trying to make the latter a smaller and smaller territory. For instance, cosmology was in the metaphysics territory before the 19th century, but gradually became a physical science in the 20th. History shows that the advancement of physics proceeds on two pillars: the scientific method *and* philosophy. When doing “standard science” the physicist might disregard the latter and focus on the former, but the *advancement* of scientific knowledge needs both. Philosophy about nature and observations of phenomena trigger the first logical questionings; science is often developed from these prerequisites. It does not develop from itself alone, since it is a process for gaining knowledge, not an end per se; otherwise it makes no sense at all.

    The relative emphasis on the scientific method and on philosophical inputs, however, varies from time to time, from person to person, and from one object of investigation to another. One thing, however, that must be invariant under this relative emphasis is the meaning of science, which must have a clear definition, agreed upon among its practitioners. If this meaning is what is being currently debated or revised, then one should ask why, and this is what puzzles me.

  27. Peter Woit says:

    Arun and Zathras,

    The issue of when a theory is practically testable is just another red herring in the string theory debate. String theory unification not only makes no predictions that are testable using current technology, it makes no predictions testable at any energy scale using any conceivable technology. That’s the problem, not that tests are not currently feasible.

  28. Arun says:

    Peter,
    I have no disagreement with you on string unification/theories of everything.

    To the extent that we consider as science the elaboration of mathematical formalisms that include scientific models (e.g., the study of QFTs other than the Standard Model), string theory other than unification has not yet departed from science.

    Recognition of the string unification failure also leads to the next puzzle – from which direction is progress likely to come?

  29. Peter Orland says:

    “Recognition of the string unification failure also leads to the next puzzle – from which direction is progress likely to come?”

    Arun,

    Though it all depends on what you mean by progress, I think the best hope is experiment.

  30. Matt says:

    This is a subject near to my heart, as my own website is named after a quote of Maxwell which some scientists could stand to re-read.

    In every branch of knowledge the progress is proportional to the amount of facts on which to build, and therefore to the facility of obtaining data.

    If string theory or the multiverse or anything else can’t build on facts, it ain’t science.
