Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a
Beyond Falsifiability blog post here.
Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research. The only explanation of the views he is arguing against is the following passage:
a number of highly respected scientists have objected strongly to the idea, in large part due to a conviction that what happens outside the universe we can possibly observe simply shouldn’t matter [4, 5, 6, 7]. The job of science, in this view, is to account for what we observe, not to speculate about what we don’t. There is a real worry that the multiverse represents imagination allowed to roam unfettered from empirical observation, unable to be tested by conventional means. In its strongest form, the objection argues that the very idea of an unobservable multiverse shouldn’t count as science at all, often appealing to Karl Popper’s dictum that a theory should be falsifiable to be considered scientific.
The problem here is that none of those references contain anything like the naive argument that if we can’t observe something, it “simply shouldn’t matter”, or one should not speculate about it, or it “shouldn’t count as science at all.” His reference 7 is to this piece by George Ellis at Inference, which has nothing like such arguments, and no invocation of falsifiability or Popper. Carroll goes on to refer approvingly to a response to Ellis by Daniel Harlow published as a letter to Inference, but ignores Ellis’s response, which includes:
The process of science—exploring cosmology options, including the possible existence or not of a multiverse—is indeed what should happen. The scientific result is that there is no unique observable output predicted in multiverse proposals. This is because, as is often stated by proponents, anything that can happen does happen in most multiverses. Having reached this point, one has to step back and consider the scientific status of claims for their existence. The process of science must include this evaluation as well.
Ellis here is making the central argument that Carroll refuses to acknowledge: the problem with the multiverse is that it’s an empty idea, predicting nothing. It is functioning not as what we would like from science, a testable explanation, but as an untestable excuse for not being able to predict anything. In defense of empty multiverse theorizing, Carroll wants to downplay the role of any conventional testability criterion in our understanding of what is science and what isn’t. He writes:
The best reason for classifying the multiverse as a straightforwardly scientific theory is that we don’t have any choice. This is the case for any hypothesis that satisfies two criteria:
- It might be true.
- Whether or not it is true affects how we understand what we observe.
This seems to me an even more problematic and unworkable way of defining science than naive falsifiability. This whole formulation is extremely unclear, but it sounds to me as if various hypotheses about supreme beings and how they operate would by this criterion qualify as science.
Carroll also ignores the arguments made in my letter in the same issue (discussed here), which were specifically aimed at the claims for multiverse science that he is trying to make. According to him, multiverse theory is perfectly conventional science which just happens to be hard to evaluate:
That, in a nutshell, is the biggest challenge posed by the prospect of the multiverse. It is not that the theory is unscientific, or that it is impossible to evaluate it. It’s that evaluating it is hard.
The main point I was trying to make in the piece Carroll ignores is that the evaluation problem is not just “hard”, but actually impossible, and if one looks into the reason for this, one finds that it’s because his term “the theory” has no fixed reference. What “theory” is he talking about? One sort of “theory” he discusses is eternal inflation models of a multiverse in which there will be bubble collisions. Some such models predict observable effects in the CMB. Those are perfectly scientific and easy to evaluate, just wrong (since we see no such thing). Other such models predict no observable effect; those are untestable. “Hardness” has nothing to do with it: the fact that there is some narrow range of models where tests are in principle possible but hard to do is true but irrelevant.
The other actual theory Carroll refers to is the string theory landscape, and there the problem is not that evaluating the theory is “hard”, but that you have no theory. As with bubble collisions, you have plenty of conjectural models (i.e. “string vacua”) which are perfectly well-defined and scientific, but disagree with experiment and so are easily evaluated as wrong. While many other conjectural models are very complex and thus technically “hard” to study, that’s not the real problem, and acquiring infinitely powerful computational techniques would not help. The real problem is that you don’t have a theory: “M-theory” is a word, not an actual theory. The problem is not that it’s “hard” to figure out what the measure on the space of string vacua is, but that you don’t even know what the space is on which you’re looking for a measure. This is not a “hard” question; it’s simply a question for which you don’t have a theory that gives an answer.
I do hope Carroll and other multiverse fans will someday get around to addressing the real arguments being made; perhaps then this subject could move forward from the sorry state it seems to be stuck in.
Update: Philosopher of science Massimo Pigliucci has a very good discussion of this debate at his blog. Sabine Hossenfelder has a piece at 13.7 on Scientific Theory and the Multiverse Madness.
Update: Coel Hellier has a blog posting here taking Carroll’s side of the debate.
Update: Yet another new argument for multiverse mania as usual science on the arXiv, this time from Mario Livio and Martin Rees. The same problems with the Carroll article recur here, including the usual refusal to acknowledge that serious counter-arguments exist. They give no references at all to anyone disagreeing with them, instead just knocking down the usual straw man: those unknown scientists who think that theorists should only discuss directly observable quantities:
We have already discussed the first main objection — the sentiment that envisaging causally disconnected, unobservable universes is in conflict with the traditional “scientific method.” We have emphasized that modern physics already contains many unobservable domains (e.g., free quarks, interiors of black holes, galaxies beyond the particle horizon). If we had a theory that applied to the ultra-early universe, but gained credibility because it explained, for instance, some features of the microphysical world (the strength of the fundamental forces, the masses of neutrinos, and so forth) we should take seriously its predictions about ‘our’ Big Bang and the possibility of others.
We are far from having such a theory, but the huge advances already made should give us optimism about new insights in the next few decades.
Livio and Rees do here get to the main point: we don’t have a viable scientific theory of a multiverse that would provide an anthropic explanation of the laws of physics. The causes for optimism that they list are the usual ones involving inflationary models that give essentially the same physics in other universes, not the different physics they need for anthropics. There is one exception, a mention of how:
accelerator experiments can (in principle) generate conditions in which a number of metastable vacuum solutions are possible, thereby testing the premises of the landscape scenario.
They give no reference for this claim and I think it can accurately be described as utter nonsense. It’s also (in principle) possible that accelerator experiments will generate conditions in which an angel will pop out of the interaction region bearing the laws of physics written on gold tablets. But utterly implausible speculation with no evidence at all backing it is not science.
The authors note that:
an anthropic explanation can be refuted, if the actual parameter values are far more ‘special’ than anthropic constraints require.
The problem with this is that you don’t have a theory that gives you a measure on parameter values, so you don’t know what is ‘special’ and what isn’t. As I keep pointing out, the fundamental problem here is even more basic than not having a probability measure on possible universes: we have no viable theory of what the space of possible universes is, much less any idea of how to calculate a measure on it. And no, we are not seeing any progress towards finding such a theory, quite the opposite over the past decades.
Truly depressing is that even the best of our journalists see this kind of article, written by two multiverse enthusiasts and giving no references or serious arguments for the other side, as “even-handed”.
Update: Two new excellent pieces explaining the problems with the multiverse. Ethan Siegel in particular explains the usually ignored problem that the kind of inflation we have any evidence for doesn’t give you different laws of physics, and ends with
The Multiverse is real, but provides the answer to absolutely nothing.
Sabine Hossenfelder explains four of the arguments generally given for why the Multiverse is science, answering them each in turn, with conclusions:
1. It’s falsifiable!
So don’t get fooled by this argument, it’s just wrong.
2. Ok, so it’s not falsifiable, but it’s sound logic!
So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.
3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.
So to the extent that it’s science as usual you don’t need the multiverse.
4. So what? We’ll do it anyway.
so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.
Excellent post and analysis, as ever Peter.
I recall once reading an article by Carroll in which he posited the small value of the cosmological constant as the paradigmatic example of how the “inflationary-string landscape multiverse” model was the only one that could account for the data.
“…Consider the multiverse. It is often invoked as a potential solution to some of the fine-tuning problems of contemporary cosmology. For example, we believe there is a small but nonzero vacuum energy inherent in empty space itself…The problem for theorists is not that vacuum energy is hard to explain; it’s that the predicted value is enormously larger than what we observe. If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately…”
To me, this is a very weak argument.
The logic seems to be: the value of the cosmological constant can’t be calculated from first principles. This indicates there may not be a “natural” mechanism to account for why it takes the value that it does. Likewise, quantum physics provides formulas for calculating what the Higgs mass should be, and the Higgs should be very, very heavy. Yet its actual value is light, requiring cancellations to a very fine degree of precision. “Naturalness” is thus probably out as a solution.
An “unnatural” explanation for the Higgs mass or cosmological constant is then sought, invoking “anthropic” reasoning. Perhaps the Higgs mass or the lambda value isn’t fixed precisely by any underlying theory, but can assume a wide range of values in different regions of the universe – that is, in different bubble universes in a multiverse.
We live in the life-permitting one, because we’re here – a selection effect.
But for the multiverse to work, you need string theory (which in turn needs SUSY, for most variants of it, I think) or eternal inflation…and you’ve covered the problems with both of those.
So in the end, according to Carroll, the “multiverse” is true…just because there’s allegedly no other show in town, even though nature – courtesy of the LHC – seems to be telling us otherwise about SUSY/string theory, which is a prerequisite for the landscape.
BTW, am I right in thinking that, on its own, cosmic inflation might result in bubble universes if it is “eternal” (rather than chaotic, which doesn’t lead to a multiverse), but that those universes or “patches” would all have the same constants without the existence of a string landscape provided by M-Theory (or, rather, M-not-so-Theory)?
It looks like a house of cards.
Yes, and invoking Popper’s “falsifiability” as a criterion for a scientific theory is itself a straw man, a product of an overly naive model of how science is done. If the people you cite want to invoke the philosophy of science, they should discuss a modern, i.e., post-1930s, philosophy of science, such as Imre Lakatos’ formulation of a progressive research program. Wikipedia explains:
“A Lakatosian research programme is based on a hard core of theoretical assumptions that cannot be abandoned or altered without abandoning the programme altogether. More modest and specific theories that are formulated in order to explain evidence that threatens the ‘hard core’ are termed auxiliary hypotheses. Auxiliary hypotheses are considered expendable by the adherents of the research programme—they may be altered or abandoned as empirical discoveries require in order to ‘protect’ the ‘hard core’. Whereas Popper was generally read as hostile toward such ad hoc theoretical amendments, Lakatos argued that they can be progressive, i.e. productive, when they enhance the programme’s explanatory and/or predictive power, and that they are at least permissible until some better system of theories is devised and the research programme is replaced entirely. The difference between a progressive and a degenerative research programme lies, for Lakatos, in whether the recent changes to its auxiliary hypotheses have achieved this greater explanatory/predictive power or whether they have been made simply out of the necessity of offering some response in the face of new and troublesome evidence. A degenerative research programme indicates that a new and more progressive system of theories should be sought to replace the currently prevailing one, but until such a system of theories can be conceived of and agreed upon, abandonment of the current one would only further weaken our explanatory power and was therefore unacceptable for Lakatos. Lakatos’s primary example of a research programme that had been successful in its time and then progressively replaced is that founded by Isaac Newton, with his three laws of motion forming the ‘hard core’.”
In these terms, string theory is a (chronically) degenerative research program. This is an argument that scientists should decide not to devote their efforts to this theory. Lakatos, though, recognized that senior scientists who have devoted many years to such a program will continue to pursue it until they retire. As you have often pointed out, abandoning this program would not at all diminish science’s explanatory power.
Yes, you accurately describe the CC argument, which raises a host of other issues (anthropics, naturalness). There are claims that this is “evidence for string theory”, basically because you think you can produce string vacua with any CC and you assume a flat probability distribution. So “string theory” is playing the same role here as “we have no idea what the CC physics is”. I don’t believe you can get non-trivial evidence for a theory this way.
Yes about eternal inflation models based on a single inflaton field. They don’t give you the CC argument you want; for that you need the string landscape, which is typically thought of as having hundreds of inflaton fields.
Yes, Carroll makes a big deal of claiming that he, unlike other physicists with their naive Popperism, has a serious understanding of modern philosophy of science. For him this doesn’t seem to include the progressive/degenerative research program distinction, likely because string theory provides a textbook example of the degenerative case.
Since Galileo and Newton, there are precious few cases, if any, where philosophy has been a helpful approach to progress in the quantitative experimental sciences. But there have been many examples in the past where it fostered sterile debates over what turned out to be irrelevant philosophical concepts, semantics, and personal beliefs. A recent novel feature in such debates is adding modern Bayesian mysticism to give personal beliefs the illusion of quantitative reasoning.
What a confusing article… it is really troubling to see a scientific hypothesis debated and defended on the grounds of assigning “a prior probability that the theory is true”… This is an impossible evaluation for an extraordinary claim like the multiverse.
He also states that “it is very hard on the basis of indirect evidence alone to send those credences so close to 0 or 1”, though I don’t know what “indirect evidence” he is talking about. Indirect evidence would, of course, be evidence.
He finishes with the obvious point that “There really might be a multiverse out there, whether we like it or not”, which is true as far as it goes, although I think he should have added that “there really might be NO multiverse out there” and that there is currently no way to know one way or the other (which is the reason the whole discussion is completely empty).
Polchinski provided a reductio ad absurdum argument against the Bayesianism business in a paper for the same proceedings as the Carroll one. He calculated a Bayesian probability of “over 99.7%” for string theory, and 94% for the multiverse.
One valuable point in Carroll’s blog post, which I didn’t know, is how to hear Dawid’s phrase “non-empirical theory confirmation”. I quote Carroll:
“It sounds like Dawid is saying that we can confirm theories (in the sense of demonstrating that they are true) without using any empirical data, but he’s not saying that at all. Philosophers use “confirmation” in a much weaker sense than that of ordinary language, to refer to any considerations that could increase our credence in a theory. Of course there are some non-empirical ways that our credence in a theory could change; we could suddenly realize that it explains more than we expected, for example. But we can’t simply declare a theory to be “correct” on such grounds, nor was Dawid suggesting that we could.”
My guess is that Carroll is still inappropriately cool with untestable theories, and that Dawid is even further gone. However, Carroll’s explanation of how Dawid uses the word “confirmation” has made me update away from dismissing them.
I don’t really want to start a discussion here of Dawid’s work, which I’ve written about extensively, see for instance
Carroll’s treatment of criticism of Dawid is a bit like the way he treats those critical of the multiverse: he again ignores their arguments and sets up a straw man to knock down. While those who haven’t read Dawid might be critical because of a misunderstanding over use of the word “confirmation”, many (actually, I think most…) who have read Dawid are critical for serious reasons that Carroll doesn’t address.
Bayesianism is not meant to enshrine subjective opinion (to give it a falsely elevated status by quantification). Rather, the Bayesian view recognizes that subjectivity exists, but provides a model for how the beliefs of rational agents should be updated and should converge, as more information becomes available. (I felt a need to add this small comment, but realize a general discussion of the topic might not belong on this forum.)
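The convergence claim in this comment can be made concrete with a toy calculation. The following sketch is purely illustrative (it is mine, not anything from the post or Carroll’s paper, and all the numbers in it are made up): two agents start with very different Beta priors about a coin’s bias, both update on the same flips via the standard conjugate Bayes rule, and their posterior estimates end up close together near the true value.

```python
# Toy illustration of Bayesian convergence (hypothetical numbers).
# With a Beta(a, b) prior on a coin's bias and observed heads/tails,
# the posterior is Beta(a + heads, b + tails), whose mean is
# (a + heads) / (a + b + heads + tails).

import random

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

def posterior_mean(a, b):
    """Mean of the Beta posterior after observing the shared flips."""
    return (a + heads) / (a + b + heads + tails)

optimist = posterior_mean(20, 2)   # prior mean ~0.91: expects a biased coin
skeptic = posterior_mean(2, 20)    # prior mean ~0.09: expects the opposite

# Despite starting far apart, both posterior means land near the true bias,
# and the gap between them shrinks as the data swamp the priors.
print(round(optimist, 2), round(skeptic, 2))
```

The point of the sketch is exactly the commenter’s: the priors are subjective, but the update rule forces the two agents’ beliefs toward each other as shared evidence accumulates — which is also why the multiverse case is different, since there the shared evidence never arrives.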
Peter, your notion of confirmation is simply too narrow. Since “The multiverse exists” is logically equivalent to “All nonexistent things are non-multiverses,” the existence of the multiverse is confirmed by unicorns, leprechauns, square circles…
Apart from the lack of scientific validity or usefulness of the multiverse, one thing that I find disappointing as a non-scientist is the intellectual banality of it all, and especially the way it is presented by its supporters. Compared to the radicalism of the ideas within early 20th century physics, the multiverse feels more like cheap sci-fi. A radical idea could be useful in shifting discourse even if it has little scientific merit, but the multiverse doesn’t seem to be anything other than a plot twist to hastily wrap up 20th century physics now that the studio has run out of ideas.
To the best of my understanding, such arguments run along the lines of Thomas Kuhn’s view of scientific progress (the best candidate theory at any given moment is endorsed by the community, even if it doesn’t fit all the facts, and progress is made by building upon it and adjusting as necessary). In my opinion, though, Kuhn described what usually happens, not what _should_ happen. In any case, certain individuals should stop calling everyone who simply appeals to logic a Popperazzi. But then again, speaking personally, I don’t care: strings and the multiverse are unscientific under any definition one picks.
“Scientific validity”, “usefulness” … unfortunately there’s a lot more at stake here. Inflation, string, parallel-universe and multiverse fans enjoy repeating how this is the next step on Copernicus’ path, rejoicing in rediscovering ways to keep the human race humble. But not being the centre of the universe is totally different from declaring the universe unreachable, and if you bring something so radical to the table then the burden of proof falls on you. (Yes, it’s very similar to the burden of proof for unseen creatures falling on those who believe in them.)
Also, Copernicus’ theory happened to explain previously inexplicable facts in an unambiguous way.
Karl Popper is considered a sort of fallibilist. Contrary to popular belief, he didn’t hold that non-scientific claims are meaningless; such unfalsifiable claims can often serve important roles in both scientific and philosophical contexts, even if we are incapable of ascertaining their truth or falsity. He maintained that while the particular unfalsified theory we have adopted might be true, we could never know this to be the case.
This fallibilism invites us to stick to the weak version of Popper’s falsificationist criterion, in the sense that falsification provides a methodological distinction based on the role that observation and evidence play in scientific practice. The aim of a partial refutation/improvement of a theory isn’t to highlight its inaccuracy but to point out its weak points, which also means improving the theory. Einstein’s theory of gravity partially refuted Newton’s theory and thereby improved it. Popper thought that a scientific theory that has passed experimental tests remains subject to future refutation and/or improvement. Thus, falsifiability is synonymous with testability.
Like Gödel’s incompleteness theorems, which don’t prevent the creative work of mathematicians, Popper’s fallibilism doesn’t hinder the imagination of theoretical physicists. Lashing out against the popperazi in order to defend a theoretical position is an approach that has little to do with Popper’s fallibilism.
I am always amazed at the quite unsophisticated level of discussion of the foundations of the scientific method by practicing scientists … probably due to the fact that scientists often, or exclusively, become interested in such issues only when these can become “useful” for certain otherwise indefensible/weak ideological biases and claims … or simply when they have no other resources for their arguments 😉
The actual critique of Popper’s “falsificationism” and “demarcation theory” of science has a quite long and respectable history that is systematically ignored by the protagonists of current debates. I have been a very close follower of Paul Feyerabend’s epistemological anarchism since I was 14 years old and studied these elementary things in high school (yes, it was usually normal to have some readings on epistemology at the beginning of high school) … and all this noise against “Popperism” looks very much like reinventing the wheel to any student who has had a chance to read and study the works of Lakatos, Kuhn, Feyerabend, etc. The problems with “abduction” and with founding science on inductive methods have even older roots in Hume. The problematic status of falsifiability for theories/models with free parameters has been repeatedly discussed, even in jokes (“with four parameters I can fit an elephant, and with five I can make him wiggle his trunk” – variously attributed to E. Fermi, J. von Neumann, F. Dyson, etc.).
The role of “crucial experimental validations/falsifications” has been repeatedly questioned by P. Feyerabend; similarly, the usage of fundamental theoretical principles has been criticized, as for instance in the case of “general covariance” by E. Kretschmann’s argument (variants of this argument can be adapted to show that essentially any sufficiently complicated theory can be made to satisfy certain abstract requirements and/or empirical observations).
The fundamental issue here (in my understanding) is not the “multiverse”: theoretical physics has, in one way or another, always been dealing with the existence of “multiverses”. In the most naive sense, theories do not necessarily specify a unique “history” of the physical system under study; such a system can actually be in many different “states”, each of them corresponding to a different evolution (in some cases theories do not even specify a unique selection criterion for dynamical laws), and the operational specification of the “exact state of the system” might be out of reach even in principle. The quite beautiful popularization book “Farewell to Reality” by Jim Baggott is one of the rare cases where an attempt is made to identify some of the principles and assumptions usually hiding behind scientific practice: very often a “realist reductionist trinitarian” point of view (Ratzinger’s Trinity), manifested in a) the assumption of the existence of some fundamental “ontological truth”, b) the belief that such truth influences “empirical reality”, and c) faith in the ability of scientists (human or otherwise) to “obtain reliable information” about such truth from empirical testing of nature. Such principles are usually complemented by equally essential criteria like “Occam’s razor” (“simple” theories that “compactify information” are preferred) and the related “Copernican symmetry principle” (which is actually one of the strongest motivations for the introduction of multiverses).
The real issue here, I repeat, is that some theoretical physicists are, exceptionally, becoming interested in epistemology because they cannot find other reasonable ways to defend and justify the support received by certain theories currently undergoing “degenerative involution” … theories that they a priori consider to be “truth”.
The invocation of “Bayesian abduction” as a substitute for Popper’s falsificationism does not seem to be an improvement here, since the specification of Bayesian prior probabilities is influenced by arbitrary sociological input, which in this case just means that the currently “most established” (?) hypotheses and conjectures will decide the output (“The role of priors is crucial” – S. Carroll, pg. 8).
The general epistemological position of S. Carroll (and his somewhat religious view of “Science” – “Poetic Naturalism” – with the consequent need to find a new absolute methodological criterion for separating “True Science” from “Non-Science”) is quite clear in his recent book “The Big Picture” (but also in some of his technical writings on the “ontological role”, as against the “epistemic role”, of the wave-function in QM).
You already discussed some of the underlying problems of such a philosophical point of view in some detail in a previous post, if I remember correctly.
The panorama of epistemology is way more complex today … and I would personally suggest that those who are not comfortable enough with strong “anti-realist” proposals (P. Feyerabend) take a closer look at positions like “structural realism” 🙂
Judging from the comments above, there is a prevalent notion, held by both pro-multiverse and anti-multiverse partisans, that the validity of these ideas has something to do with the philosophy of science.
I think the multiverse is nonsense on its face. There is no mathematics in it, and it can’t be used to calculate anything. And, although I’m not proud to say it, my knowledge of Popper’s work is rock-bottom zero.
To be a true scientist, I think it is certainly helpful if one has the ability to “calculate” but that ability must be accompanied by a certain level of reasoning skills.
When it comes to why one would study “the multiverse” in the first place, I think Thomas Reid, founder of the Scottish school of Common Sense put it best: “If there are certain principles, as I think there are, which the constitution of our nature leads us to believe, and which we are under a necessity to take for granted in the common concerns of life, without being able to give a reason for them — these are what we call the principles of common sense; and what is manifestly contrary to them, is what we call absurd.”
Peter Orland/Chris Kennedy,
I agree that there’s a very strong common sense argument here. What’s mystifying though is that otherwise very smart and sensible people take this seriously. Why is this? What can be done about it?
For a good example I just noticed this on Twitter
Preskill is an eminently sensible theorist, but even he seems unable/unwilling to recognize the obvious common sense point (which Hossenfelder is making) that the fact that the string landscape/eternal inflation theory predicts nothing at all about anything means that it is not a conventional scientific puzzle but just an excuse for a failed research program. Yes, theorists who use this excuse generally couple their use of it with “dismay”, but they still hang on to the excuse as a lifeline.
In the interest of moving the field forward, let me offer a few comments which I hope everyone can agree with.
-No one, not even Popper, proposes that theories are falsifiable up and down by single experiments. After absorbing Lakatos, Feyerabend and Kuhn, we can agree that what we evaluate are, first of all, research programs and second, competing theories within research programs.
-It is more precise to talk about the falsifiability, not of theories, but of predictions, which may be consequences of one or more theories.
-There are cases where the asymmetry implied by falsifiability holds, and cases where it doesn’t. But in all contexts, science proceeds by the testing of testable predictions by experiment.
-Within a fixed research program, theories that imply falsifiable, or at least testable, predictions are generally to be preferred to theories that make no predictions. One reason is that they will, when they succeed, provide tighter and more insightful explanations.
-Research programs that generate theories that imply falsifiable predictions are generally to be preferred to research programs that generate none.
-Some theories involving stochastic evolution of a population on a landscape do generate falsifiable predictions. An example is modern population biology. Indeed, having made a great many falsifiable predictions, which were confirmed, it is now considered established science.
-These predictions were possible because one can deduce features common to all fit individuals, which are then the basis of falsifiable predictions.
-Therefore, it is not impossible that a cosmological theory might be constructed to emulate the success of population biology, and involve stochastic evolution of a population on a landscape, in a way that likewise leads to falsifiable predictions.
-Unfortunately, the multiverse of eternal inflation is not of this type. It fails to imply falsifiable predictions because our universe must be considered a highly atypical member of the ensemble, so there are no common features our universe shares with the typical case, which could provide testable predictions. Instead, there are members of the ensemble consistent with wide ranges of possible values of parameters of effective field theory. (One possible exception is the prediction of a very small negative curvature, but this is not testable, at least so far.)
-One can construct cosmological models that posit our universe is a typical member of the ensemble their postulates generate, and that do imply falsifiable predictions. An example is cosmological natural selection.
“What can be done about it?”
More voices are needed.
It’s not easy because, as in many other instances of destructive human behaviour, those who are fed by it have much more motivation to devote energy to it. But the crisis is real and can harm science in the very long term.
More voices are needed to tell the public (and, because we’ve reached that point, undergrads) that the big words they hear about have amounted to exactly nothing. And probably more courage to stop treating select colleagues as people who can just go on doing their job like it doesn’t matter.
I’m becoming more and more convinced that it’s a waste of time to engage in abstract argumentation about this issue, decoupled from discussion of any specific theory/model. I’ll leave it to you to do this for cosmological natural selection models, these are not what the currently vocal multiverse proponents have in mind. What they are arguing for are:
1. eternal inflation models based on single inflaton fields, models which (as they never mention) don’t give them what they want: the variety of possible universes needed for anthropics.
2. string landscape models, for which they have no viable theory that can say what the landscape is, much less calculate anything relevant about it.
The bottom line here is that these people are, in a highly misleading way, trying to sell a pseudo-scientific story to the public and their colleagues. This is pseudo-science simply because there is no theory there, nor even a plausible hope of finding one.
By the way, there’s yet more of this that just came out
I’ve appended commentary about this to the blog entry.
I don’t want to be seen as biased, but is it fair to say that it is mostly theorists who are multiverse proponents? How many experimentalists and observationalists buy into the multiverse? It seems an experimentalist would be highly skeptical (I am) if there is no reasonable way to test it.
Yes, it’s mostly theorists who are the problem here. But sometimes experimentalists (especially in observational cosmology) do buy into the mania, generally just because it functions well as something to get the public interested in their work. As a random example, look at the last part of this TEDx talk.
Saying that you’re going to find evidence for a multiverse is a lot more exciting than saying you’re looking at dust, no?
A problem here is that experimentalists, knowing less about the details of the speculative models involved, are sometimes less able to distinguish between what Carroll claims is going on here (a serious scientific model that is hard to test) and what really is going on (no viable model).
I would think that experimentalists and observationalists would generally be agnostic, at least publicly, as in: “When you have a suggestion and some guidance for a genuine test that we might conceivably carry out, we’re happy to hear it. Until then, we have plenty of other work to do.”
Of course, after some decades, considerable skepticism about (and loss of interest in) the likelihood of a genuine test ever being proposed can be expected. After all, the raison d’être of experiment and observation is the testing of ideas that actually admit tests.
What on earth is “Consolidation of Fine Tuning”?
According to Google, it’s a Templeton-funded project at Oxford: http://finetune.physics.ox.ac.uk/
“The causes for optimism that they list are the usual ones involving inflationary models that give essentially the same physics in other universes, not the different physics they need for anthropics.”
I saw this also in the comment section in your discussion with Coel Hellier. Is it correct to say such models simply imply that whatever lies beyond the horizon of our universe is probably a region (another universe, or whatever the name) where the physical laws remain the ones we know? In other words, isn’t this just the same as saying the universe is infinite, but calling it a multiverse just to include the region we actually know? It would be helpful if such models did not call this a multiverse.
I think the Livio-Rees essay deserves more credit. I appreciated that the essay avoids the usual diatribes about the purpose or rules of science, about which no one will ever agree, and dives head first into the actual issues. Why “evenhanded”? They don’t assert that properties like the value of Λ are fine-tuned, rather that they might be, and they clearly articulate the immense (OK, some would say insurmountable) challenges involved in figuring out whether these properties really are fine-tuned. For instance, this paragraph:
“We are currently far from having any theory that determines the values of Λ or Q or the dark matter density (and we know even less about the relative likelihood of various combinations of these constants or how they might be correlated). Still less do we have a cosmological model that can put a ‘measure’ on the probability density of various combinations. But if we did, we would then have another way of testing — and in principle refuting — whether the ‘fine tuning’ was due to anthropic selection. We could do this by examining whether we live in a ‘typical’ part of the anthropically allowed multiverse, or whether the tuning was even more ‘special’ than anthropic constraints required.”
I know you don’t disagree with any of that, since it’s accurate; I imagine you just don’t like the sense of optimism conveyed by “But if we did, we would then have…” I don’t feel particularly optimistic myself about the future prospects, but whether one does or doesn’t, there’s still a crisis.
I understand your point that Livio/Rees are careful to say “this is all speculative” and that’s why you think of them as “even-handed”. But whether this is speculative is not really the question (few would describe it as not-speculative, as something with significant evidence). The question is whether it’s legitimate science at all, and they write in their abstract:
“Although the concept of a multiverse is still speculative, we argue that attempts to determine whether it exists constitute a genuinely scientific endeavor.”
coming down strongly on one side of that debate. My comments here were specifically addressed at their argument for this, which I think doesn’t hold up at all, for reasons I explain.
There is in principle a distinction between points causally disconnected from us because they’re just too far away, and those causally disconnected because they’re in a different “bubble”. For better or worse, the first kind of points are usually said to be in our universe, the second kind in a different one.
The crucial distinction here is a different one: if you are talking about more than one universe, do your other universes have a sufficient variety of laws of physics to allow for an anthropic explanation of our laws? Refusing to make that distinction, and claiming evidence for multiverse models with no such variety of laws as evidence for an anthropic multiverse, is absolutely standard behavior now among multiverse advocates (Livio/Rees being just another example).
I appreciate your point, and I also am glad to see you are joining us who have been saying there is a crisis for some time.
But I would like to point out that there is a theory that “…determines the values of Λ or Q”, by putting “…a ‘measure’ on the probability density of various combinations.”
This theory is cosmological natural selection and, as was discussed in the 1992 paper and the 1997 book, CNS gives a non-anthropic explanation for the values of Lambda and Q. The result is that the anthropic principle is not needed to explain this fine tuning. This works because we live in a typical member of the ensemble, as was discussed in my comment above.
I’ve added to the posting links to excellent new discussion of this from Ethan Siegel and Sabine Hossenfelder that have just appeared.
So what’s Sabine’s solution? Anybody’s solution? 1. No multiverse; 2. I read on her blog that Sabine doesn’t like the idea that reality is a computer simulation (who does?); 3. Some still believe in a complete, consistent theory explaining all reality with equations that can be written on a T-shirt (good luck with that); 4. Then there’s possibly a creator.
The link above to fine tuning is interesting … http://finetune.physics.ox.ac.uk/
Isn’t this last the way forward as it’s exploring fine tuning tentatively without assuming what any final theory will be? It looks like a heck of a project.
Why does Sabine or any other scientist have to provide a solution? Part of science is knowing when you don’t know something. The situation here is that we don’t know whether there’s a multiverse with different physical laws, since we have no viable testable theory/model. When science can’t provide the answer to a problem, you’re welcome to advertise your evidence-free speculation, attribute the answer to God/simulation overlords, or deal with the problem in any other way you feel like. You just can’t claim that doing this is doing normal science (as Carroll/Livio/Rees are claiming).
Peter, I’m not frustrated at anyone in particular (honest), it’s just that once you rule out a multiverse to explain our particular universe with life in it, I think one has to look at this fine tuning issue very carefully. Maybe investigations then need to be carried out beyond what is called “normal science”. Does anybody else think like this?
No one is ruling out multiverse explanations; they’re just noting that we have no evidence for such explanations or any plausible way of getting any. Sure, plenty of people react to this situation by wanting to go beyond “normal science” and pursue, for instance, metaphysics instead of physics. No reason they shouldn’t, but they shouldn’t be misleading people by claiming that they’re doing the usual sort of science when they’re not.
I see people saying “there’s a crisis, what can we do about it”. I think that’s nonsense. The real situation is that there are some things we know about nature, other things we don’t; and the human race has more ideas and more data than ever before, about the things we don’t yet understand. Not only is there every reason for optimism, there’s no shortage of concrete things to work on. It’s not like we’re lost in space with no clue what to do.
I think we don’t have much in the way of good ideas (lots of bad ones) about certain fundamental issues in physics, but don’t think this is a “crisis”. The people heavily invested in SUSY/GUTs/string theory unification do have a “crisis” on their hands as their ideas have clearly failed and the bottom is falling out of their investment. Some of them are attempting to prop up their investment by creating a crisis for physics in general, by trying to change the definition of what successful science is in order to avoid acknowledging failure.
This is off topic but does https://arxiv.org/abs/1801.08160 imply the first shovel full of dirt on the coffin that is string theory, and will the string theorists see it as such?
Pingback: 15 Years of Multiverse Mania | Not Even Wrong