Wilczek Goes Anthropic

A few weeks ago one Nobel prize winner put out an article promoting the adoption of anthropic reasoning as a new paradigm for doing theoretical physics. More recently another Nobelist, Frank Wilczek, has to some degree followed suit. Wilczek is one of four authors of a new paper entitled Dimensionless constants, cosmology and other dark matters, which first appeared on the arXiv November 29th, then in a slightly revised version on December 8. The other authors are Tegmark, Aguirre and Rees, with Tegmark’s name appearing first, indicating the work is more his than his co-authors’.

I wasn’t sure quite what to make of this paper when it first came out, especially how much it reflected Wilczek’s own point of view on anthropism. Last Friday I attended talks by Wilczek and Tegmark at the 6th Northeast String Cosmology Meeting organized by the Institute for Strings, Cosmology and Astroparticle Physics here at Columbia.

Wilczek’s talk was entitled “Enlightenment, Knowledge, Ignorance, Temptation”. He explained that these corresponded to categorizing parameters of physical theories according to whether life depended on them or not and whether we have a good idea for what determines them or not. Choosing the two possible answers to these two questions gives four cases:

Enlightenment: Parameters that life depends on, and we think we have a good idea about what determines them. Here his example was the proton mass, very small on the Planck scale, but we think we know why: logarithmic running of coupling constants.

Knowledge: Parameters that life doesn’t depend on, and we think we have a good idea about what determines them. One example he gave was strong CP violation, which is irrelevant to life, but very small, perhaps because of axions.

Ignorance: Parameters that life doesn’t depend on, and we don’t have a good idea about what determines them. This includes most of the standard model parameters, as well as just about all parameters in theories that go beyond the standard model.

Temptation: Parameters that life depends on, and we don’t have a good idea about what determines them. The examples he gave were the electron and up and down quark masses.

He said that his talk would concentrate on “Temptation”, the temptation being that of using anthropic argumentation. He noted that David Gross believes this is a dangerous opiate, causing people to just give up instead of really solving problems. The one anti-anthropic point he made was to put up a graphic showing agreement of lattice QCD spectrum calculations with experiment, saying the lesson was that sometimes real calculations turn out to be possible even though people had at times doubted this. So one should try to “limit the damage”, not go wild and use anthropics inappropriately, trying to save as much beautiful physics as one can even when anthropic reasoning is forced on us.

The rest of his talk though showed a significant amount of enthusiasm for the new anthropism. He referred to people like his co-author Rees who have been promoting the anthropic point of view for years as “unhonored prophets”. Given the paucity of experimental data relevant to explaining where things like standard model parameters come from, he said that at least anthropics gives lots of new questions so one has something to do when one gets up each day which might be fruitful. He attacked the idea of using “pure thought”, without consulting the physical world, saying this hasn’t worked, not 20 years ago, not now, not in the future. I presume he had string theory in mind when he said this, noting out loud that it might annoy some people in the room.

The main idea about anthropics he was trying to push is that anthropic calculations are “just conditional probability”, making much of the equation

$$f(p)=f_{\text{prior}}(p)\, f_{\text{selec}}(p)$$

for the probability of observing some particular value p of the parameters, given some underlying theory in which they are determined only probabilistically, by some probability distribution fprior(p). The second factor fselec(p) is supposed to represent “selection effects”, and it is here that anthropic calculations supposedly have their role. In the paper the authors argue that “Including selection effects is no more optional than the correct use of logic”. In the way physics has traditionally been done, one hopes that the underlying theory determines p (i.e. fprior(p) is a delta-function), making selection effects irrelevant. The authors attack this point of view, writing:

to elevate this hope into an assumption would, ironically, be to push the anthropic principle to a hedonistic extreme, suggesting that nature must be devised so as to make mathematical physicists happy.

At no point in his or Tegmark’s talks, or anywhere in their paper, do they address the central problem with the anthropic principle: there’s a huge issue about whether you can get falsifiable predictions out of it, and thus whether you’re really doing science. In this context the problem is that if fprior(p) is not peaked somewhere but is more or less flat, then everything depends on fselec(p); and if you calculate fselec(p) anthropically, all you are doing is seeing what you can conclude from known laws of physics and the fact that we exist. In the end what comes out of this kind of calculation is some probability distribution that had better be non-zero at the values of the parameters we observe, since otherwise you’ve done the calculation wrong.
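The flat-prior worry is easy to make concrete with a toy numerical sketch (my own illustration, with a made-up Gaussian selection factor, not anything from the paper): multiply a flat fprior on a grid of parameter values by an assumed fselec and normalize. What comes out is just the normalized selection function itself, so nothing about the underlying theory survives into the “prediction”.

```python
import numpy as np

# Toy illustration (mine, not from the Tegmark et al. paper) of
# f(p) = f_prior(p) * f_selec(p) when the prior is flat: the predicted
# distribution is entirely the assumed selection function, so the
# underlying theory contributes nothing.
p = np.linspace(0.0, 10.0, 1001)            # grid of parameter values
f_prior = np.ones_like(p)                   # flat prior: theory says nothing
f_selec = np.exp(-0.5 * (p - 3.0) ** 2)     # assumed Gaussian selection effect

f = f_prior * f_selec
f /= f.sum()                                # normalize on the grid

# The "prediction" is just the normalized selection function.
print(np.allclose(f, f_selec / f_selec.sum()))  # True
```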

There is a particular sort of physical model one can hope to falsify this way. If one assumes our universe is a randomly chosen point in a “multiverse” of possibilities, and looks at an observable that is supposed to have a more or less flat probability distribution in the ensemble given by the multiverse, then one can argue that we should be at some region of parameter space containing the bulk of the probability in the anthropically determined fselec(p), not far out in some tail where the probability distribution is vanishingly small. There are plenty of examples of this already. The proton lifetime is absurdly long compared to bounds from anthropic constraints, so any model of a multiverse that doesn’t have some structure built into it that generically suppresses proton decay sufficiently is ruled out. This includes the string theory landscape, so one of the many mysteries of the whole anthropic landscape story is why its proponents don’t take their own arguments seriously and admit that their model has been falsified already. It also applies to Tegmark’s favorite idea, that of the existence of a Level IV multiverse of all possible mathematical structures, an idea he also promotes in the paper with Wilczek.

Wilczek also discussed one particular axion cosmology model in which fprior(p) can be calculated. In these models one has the relation
$$\xi_c\sim f_a^4\sin^2\frac{\theta_0}{2}$$
for the axion dark matter density in terms of the Peccei-Quinn symmetry breaking scale $f_a$ and the misalignment angle $\theta_0$ of the axion field at the Peccei-Quinn symmetry breaking phase transition. To make this agree with the observed dark matter density, if one assumes the misalignment angle is some random angle then the Peccei-Quinn scale has to be about $10^{12}$ GeV. If one wants to make the Peccei-Quinn scale the GUT or Planck scale, one has to find some reason for the misalignment angle to be very small. The proposal here is that this happens for anthropic reasons, since if the angle were not small it would cause an amount of dark matter incompatible with our existence. For these small angles the above formula implies that the probability distribution for the dark matter density caused by such axions satisfies
$$f_{\text{prior}}(\xi)\sim \frac{1}{\sqrt \xi}$$
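The small-angle counting behind this is simple to verify numerically. Assuming only what the formula above says (this is my own sketch, not a calculation from the paper): if $\theta_0$ is uniformly distributed and small, then $\xi\propto\sin^2(\theta_0/2)\approx\theta_0^2/4$, and a change of variables gives $f_{\text{prior}}(\xi)\propto 1/\sqrt\xi$, i.e. a cumulative distribution growing like $\sqrt\xi$.

```python
import numpy as np

# Monte Carlo check (my own sketch, not from the paper) that a uniformly
# distributed small misalignment angle theta_0 gives an axion density
# prior f_prior(xi) ~ 1/sqrt(xi): for small angles xi ~ theta_0^2/4, so
# the CDF is P(xi <= x) = sqrt(x / xi_max) and the median sits at xi_max/4.
rng = np.random.default_rng(0)
theta_max = 1e-3                                # "anthropically small" angles
theta0 = rng.uniform(0.0, theta_max, 1_000_000)
xi = np.sin(theta0 / 2.0) ** 2                  # density, up to the f_a^4 factor

xi_max = np.sin(theta_max / 2.0) ** 2
print(np.median(xi) / (xi_max / 4.0))           # ratio close to 1
```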

The Tegmark et al. paper contains an elaborate calculation of fselec for the dark matter density that goes on for eleven pages or so, involving a bafflingly long list of “anthropic” considerations about galaxy, star and planet formation, as well as many possible dangers that could have disrupted the evolution of life, such as disruption of the Oort cloud of comets. I’ll freely admit to not having taken the time to follow this argument. The end result for fselec as a function of $\sqrt\xi$ is a probability distribution with the measured dark matter density corresponding to something close to the peak.

I’m not sure exactly what conclusions one can or should draw from this calculation. So many different facts about our specific universe are being folded into this that it’s not clear to me that there isn’t some circular reasoning going on. This is a general problem with “anthropic” arguments: if you assume that life couldn’t exist if the universe was much different than it is, you smuggle all sorts of information about the way the world is into your “anthropic” calculation, after which it is not too surprising that it “predicts” the universe has more or less the properties you observe.

What we really care about in these arguments is whether they can be used to extract any information whatsoever about fprior, the physics we are trying to get at. In this axion cosmology case we have a prediction for this distribution, and the calculation shows it is consistent with the observed dark matter density, but as far as I can tell, all sorts of other quite different distributions would work too. So, I’m still confused about exactly what this calculation has told us about the underlying axion cosmology physics that it is supposed to address, other than that it is not obviously completely inconsistent.

Tegmark’s talk at Columbia was titled “Measuring and Predicting Cosmological Parameters”. The “measuring” part was a summary of some of the impressive experimental evidence for the standard cosmological model. The “predicting” part was pretty much pure promotion of anthropism, including a long section on reasons why the electroweak symmetry breaking scale is anthropic and some comments making fun of David Gross (“even he couldn’t predict the distance from the earth to the sun. Laughter…”). The only actual “predictions” mentioned were the results about the axion cosmology model mentioned above and described in detail in the Tegmark et al. paper, as well as the well-known Weinberg anthropic “prediction” for the cosmological constant.

All in all, I found these two talks and the Tegmark et al. paper pretty disturbing. They seem to me to be part of a highly ideological effort to sell the Anthropic Principle as science. The paper devotes two pages to a detailed list of standard model parameters, and makes various statements about the probability distribution function on this large number of parameters, even though it has nothing to say about almost all of them, and I think there’s a strong argument that the anthropic program inherently will never have anything useful to say about most of these parameters. Many of Wilczek’s remarks were more modest, but the paper he has signed his name to is highly immodest in its claims for anthropism. Together with Weinberg and Susskind’s anthropic campaigns, it seems to me that more and more theorists are going to join this bandwagon. Neither Wilczek nor Tegmark is a string theorist (and Wilczek is clearly somewhat skeptical about the whole idea), but there seems to be an unholy alliance brewing between them and Susskind and his followers. The only prominent person in the field standing up to this publicly is David Gross, and it is very worrying to see how little support he is getting.

Update: A preprint by Frank Wilczek corresponding to his talk last week entitled Enlightenment, Knowledge, Ignorance, Temptation has appeared. It is a contribution to the same conference as the one Weinberg contributed Living in the Multiverse to, I gather in honor of Martin Rees. Wilczek’s preprint announces a “new zeitgeist”, that anthropic arguments are in the ascendancy. One quite strange thing in the preprint is that he suggests an anthropic explanation for the long proton lifetime in terms of doing anthropic calculations involving future observers.

He does say there are drawbacks to the new order (a loss of precision and of targets to calculate), but on the whole he seems to embrace the new anthropic paradigm rather whole-heartedly, seeing it as a lesson in humility for those who had the hubris to believe it was possible to understand more about the universe through “pure thought.”

Update: Two of the authors of the paper discussed here (Aguirre and Tegmark) wrote in with some comments that are well worth reading (as well as those from Smolin and others about his own proposal). Aguirre points to an interesting paper of his On making predictions in a multiverse (see also an earlier paper with Tegmark), which addresses some of the conceptual issues that were bothering me about this sort of calculation. It points out many of the problems with this kind of calculation, and I don’t really share the author’s optimism that they can be overcome.

Lee Smolin mentioned to me a somewhat related workshop that was held this past summer at the Perimeter Institute, on the topic of Evolving Laws, especially “do the laws of nature evolve in time?” Audio of the discussions at the workshop is available.


107 Responses to Wilczek Goes Anthropic

  1. Aaron Bergman says:

    But I would still classify her as an apologist for ST.

    How depressing, that we must classify everyone.

  2. Adrian said:

    >I can’t read French but I trust you to be right about this one.

    you don’t have to : there is a link to an english version at the top of the page.

  3. Arun says:

    Yes, Aaron, since we have little hope of saying “correct” and “incorrect” about people’s work, we are reduced to classifying the people, sort of like what happens in any humanities department. The level of objectivity that characterized particle physics is being eroded.

  4. Who says:

    Arun:…**since we have little hope of saying “correct” and “incorrect” about people’s work, we are reduced to… level of objectivity … is being eroded.**

    maybe there is a ray of hope regarding falsifiability. judging by what Smolin said earlier, string ideas were constructed to have thorough Poincaré invariance, naturally, and so in 2008 if GLAST finds some energy dependence of the speed of gamma rays, that will falsify string. I presume that string can then be discarded as physics for that reason, if for no other.

    conversely there are several non-string approaches to quantum gravity which do naturally and unavoidably predict energy dependence of the speed of photons—I think Smolin-style LQG does, for one. If GLAST does NOT find energy dependence in arrival times of gamma-ray burst photons that would seem to me like a good reason to discard Smolin-type LQG and any other QG approach with that feature.

    By the way this does not affect CDT as far as I know.

    I looked at the Stanford GLAST website recently and it appeared that it was still on schedule for a launch in 2007. Perhaps someone will wish to correct me, but that does seem to introduce a little “correct” and “incorrect” into the picture—which Arun found lacking.

  5. island says:

    Anthropic “Principle”?
    One thing that is certainly unbounded is human hubris.

    I think that it’s equally arrogant to think that we can detach ourselves from nature to presume that we can remove ourselves from the natural process to conclude that nature doesn’t specially produce intelligent life for the thermodynamic contributions that intelligence enables. As soon as you do that, then you realize how stupid it is to think that we humans could be alone in that effort if there’s a real thermodynamic need for it.

    Especially when there is writing-on-the-wall evidence that the leap from apes to humans enabled us to take advantage of energy sources that we would not be able to tap into otherwise. Etceteras and so on, the list of stuff that humans do in this effort is about as endless as the growing list of anthropic coincidences.

    And then start looking from there… ASK the questions, don’t *automatically* knee-jerk react with lame representations that are designed to ignore the full implications of the principle.

    Somehow it never occurred to anyone that instead of the world being perfectly tuned for our existence, our existence is actually perfectly tuned for this world, not the other way around, just like the case of oxygen on earth.

    That means that we’re a part of the local ecobalance, and you’re right, so don’t assume that we’re any different than squirrels that bury nuts when we warm the climate, this counterbalances the *cumulative* accelerating runaway tendency toward glaciation that is predicted by Milankovitch models, which have been verified spectrographically from ice core samples taken in Greenland.

    This balance between cumulative runaway tendencies, the greenhouse effect, vs the tendency toward glaciation… is yet another anthropic coincidence for which we are *innately* contributing players… go figure?

    Deal with it, you don’t have a choice, so don’t be so arrogant as to presume that we can fool mother nature, the tug-o-war between “big business” and the “green movement” self-regulates the process, not the biggest brains in science, who *would* have us stagnate and die. Deal with it, we’re not detached from the process, we’re specially required ‘fungi’ that design “fairy-rings” per our inherent predisposition to do so. Nothing more… NOR LESS CONNECTED.

    Bostrom introduced the Self Sampling Assumption in an attempted extension of the Copernican Principle into the time domain, but it has been proven that this is of no real-world significance to cosmology, because theories of everything that have constrained parameters guarantee the emergence of man in a world in which man is known to exist. And that’s where the ill-considered idea from skeptics comes from, they then conclude that “just so” isn’t a significant factor, because that’s a given or we simply aren’t going to be here. I say that it is ill-considered reasoning because this line of thinking fails to take into account the fact that “just so” falls between cumulatively runaway tendencies.

    The anthropic principle fell from only one of these weirdly balanced coincidences, the vast number of these life-permitting “ecobalanced” conditions that have been discovered since that time serve to compound the significance exponentially by orders of magnitude with each additional coincidence that is discovered. So it isn’t even close to rational to offer up, multiverse-like “what if” scenarios about other forms of life and conditions that MIGHT exist elsewhere.

    One thing that string theory teaches is that you can lose touch with real physics in imaginary universes, so be very careful when you project beyond what is observed to “what-if” things are different elsewhere in the universe scenarios, because you’re appealing to religion when you do that. Just because we don’t know something isn’t an excuse to assume whatever you want.

    Nick Bostrom’s ideas are interesting and relevant where apparently chaotic, (subtly determined?), scenarios are applicable, but he fails to take into account the fact that an anthropic explanation for the fine-tuning of the universal constants is supposed to be embedded into humans by the universal scale mechanism that enables or requires human existence. That justifies the selection effect, in this case, because anthropic bias is supposed to be an innate characteristic of the universe.

    He correctly notes that there can be areas of low entropy, (which are necessary to life), within the greater whole of our entropic, expanding universe, but he failed to equate the predominant entropic prejudice of our universe to the anthropic bias, as supported by conventional Big Bang Theory and The Standard Model of Particle Physics.

    The anthropic principle notes that “Anthropic Bias” is, by definition, the natural expression for universal scale favoritism toward humans.

    The principle becomes a powerful form of support for our best theories if we are thermodynamically connected to the process and there is no way to set yourself apart from this because the underlying direction of all action in a big bang induced expanding universe is ultimately entropic, per conventional Big Bang Theory as supported by the latest confirmed observational evidence.

    Barring quantum fantasies… any occurrence within the system is, therefore, a result of the tuning of the constants that were set at t=10^-43. This includes humans in all their glory, and the weak entropic anthropic argument would support this via the fact that it is observationally proven that the human is comparatively one of nature’s more preferred methods for efficiently dissipating energy at a regulated but increasing rate.

    Dicke got his coincidence from Dirac’s Large Numbers Hypothesis, but Dirac’s cosmological model was wrong… gravity falling off with expansion. It is logical to think that repairing Dirac’s cosmology would necessarily complete and clarify the anthropic principle, while removing the tautologous nature of it. Oh that’s right, we’ve accepted assumptions that take us beyond all that, no wonder Einstein died believing what he couldn’t prove, rather than accept that crap.

    If you stick to the observed universe like the AP is telling you to do, then you can only derive that a multiverse is for dreamers, and so are infinities, uncertainty, absolute cosmic singularities… etc. idealizations. We have to distinguish between an idealized vacuum and the lowest actual energy state, which is non-zero.

    It’s equally arrogant to assume that there isn’t some good reason for the anthropic principle in one universe as it is to intentionally bury OBVIOUS significance, (as Lenny pointed out), in many imagined universes.

    Quit pretending that you don’t see the writing on the wall.

    At least Lenny has the guts to admit that the “appearance” of purpose in nature is real.

    Deal with it.

  6. Michael Bacon says:

    It’s far too skeptical to say that when you project beyond what is observed you’re appealing to religion. What we know about the world is far greater than what we observe. More interesting is the idea that if GLAST finds some energy dependence of the speed of gamma rays, that will falsify string theory. If not, non-string approaches like Smolin-style LQG, which can accommodate energy dependence, are faced with a similar situation. I was wondering what folks thought of that?

  7. island says:

    It’s far too skeptical to say that when you project beyond what is observed you’re appealing to religion.

    That’s true. I worded it improperly and should have said that the anthropic principle is telling us that we shouldn’t project beyond what is observed to assume that idealizations can actually exist… or you’re appealing to religion.

  8. Michael Bacon says:

    Not sure what you mean by “assume” and “idealizations”. Nevertheless, regardless of the meanings, there is much beyond what is observed that actually exists.

  9. Chris W. says:

    Hey, look at the papers linked by ‘Dissident’ in his comment on Cosmic Variance (“Where the dark matter is”). Many of you are aware of the recent work of Lauscher and Reuter (latest in hep-th/0511260). Reuter et al. are arguing for observable manifestations of renormalization effects in QG at galactic and larger scales. Along with the connections of the same work to recent results in CDT, I find all this very encouraging. It seems to me that Wilczek of all people should be paying close attention to this stuff instead of wasting his time with the Multiverse and AP.

  10. Chris W. says:

    PS (credit where credit is due): I believe Aaron Bergman has been closely following work in CDT as well as the papers of Lauscher and Reuter. Any comments, Aaron?

  11. Aaron Bergman says:

    I haven’t been following very closely — I have my own research, after all. I read a bit on CDT, and while it’s intriguing, it still feels very preliminary and somewhat ad hoc to me. A lot of people believe that any such UV fixed point is going to be extremely sensitive to the matter content in the theory, so it’s not clear what significance a pure GR calculation would have. In that, as best I know, this is the first time anyone’s ever found something that’s not horribly crumpled up, I’ll be paying attention to where it’s going. It’d be nice to see some black hole stuff with it.

    I haven’t looked at the Reuter stuff myself, but Jacques has, and it seems pretty persuasive to me.

  12. island says:

    …there is much beyond what is observed that actually exists

    Like what, goblins? or did I stumble through a side door into the church of infinities?

    I’ll leave a tip in the plate when they pass it around, the point that I made was that the observational evidence indicates that you’d have to fix Dirac’s Cosmology to see if it repairs his Large Numbers Hypothesis before you can make that claim, since we are talking about the affect that an entropic anthropic cosmological principle would have on this model.

    A failure to look for evidence is popular where the AP is concerned, but that’s all starting to change now that the string theorists have started abusing it to support their belief system, like creationists do.

    Better lose the lame arguments that are designed to downplay its significance though, because they won’t fly, they just make you appear as antifanatics who don’t care what the actual science is.

  13. It is not clear to me whether the framework below can be implemented in string theory. If this could be so, then will string theory accommodate Lorentz violation as well? Does it depend on how far string theory reproduces the standard model?

    This is taken from Mattingly’s review (gr-qc/0502097):

    The most conservative approach for a framework in which to test Lorentz violation from quantum gravity is that of effective field theory (EFT). Both the standard model and relativity can be considered EFT’s and the EFT framework can easily incorporate Lorentz violation via the introduction of extra tensors. Furthermore, in many systems where the fundamental degrees of freedom are qualitatively different than the low energy degrees of freedom, EFT applies and gives correct results up to some high energy scale. Hence following the usual guideline of starting with known physics, EFT is an obvious place to start looking for Lorentz violation.


    “(…)the first place to look from an EFT perspective is all possible renormalizable Lorentz violating terms that can be added to the standard model. In [Ref] Colladay and Kostelecky derived just such a theory in flat space – the so-called (minimal) Standard Model Extension (mSME).

    [Ref] Colladay, D., and Kostelecký, V.A., “Lorentz-violating extension of the standard model”, Phys. Rev. D, 58, 1160

    Thank you,

  14. Brett says:


    The original motivation for developing the standard model extension came from string theory! Alan Kostelecky was an expert in string field theory, and he and Stuart Samuel wrote several papers in which they found minima of the string potential with odd properties. Perhaps the oddest property was that some of the minima had spontaneous Lorentz violation. (I think it turns out that none of the Lorentz-violating minima are actually the global vacua of the theories, though, unfortunately).

    Moving to effective field theory was motivated in large part by the fact that there are no realistic string theories. (There are no sufficiently realistic quantum gravity theories of any type.) But, knowing that string theory or some other high-scale physics could break Lorentz symmetry, it was immediately natural to consider how this could be encoded in an effective field theory. Having an effective field theory of Lorentz violation allows us to translate empirical verifications of relativity into bounds on specific coefficients, and these effective field theory bounds should not be sensitive to the underlying physics that gives rise to the Lorentz violation.

    The effective field theory has now been developed for both quantum field theory and GR. The GR part is very interesting, because it apparently does not allow for explicit Lorentz symmetry breaking; any Lorentz violation must be spontaneously induced. There are a few other conditions on what form the Lorentz violation can take, but there’s still a lot of very simple questions whose answers are unknown.

  15. Who says:

    about falsifiability, something interesting came up in QG recently

    there is this different but related question which is sometimes asked of string theorists
    What would make you give up string theory?
    It was e.g. asked in the Panel Discussion at Toronto Strings ’05 and Steve Shenker replied “You’re not supposed to be asking that!”
    From string theorists I’ve never seen a serious answer.

    but Laurent Freidel recently gave a serious answer to that as regards LQG—he works on the spinfoam approach—in his invited talk at October Loops’05.

    spinfoam is a kind of path integral, or sum over histories, treatment of spacetime. Freidel has introduced matter in the 3D case and shown that an effective QFT appears as the zero-gravity limit. he gets feynman diagrams and vertex amplitudes out of the spinfoams in the limit as newton’s G goes to zero.

    he was reporting on the first steps of an effort to extend this to 4D.
    the requirement of getting the right effective QFT as a flat limit, of getting the correct feynman diagrams out of the spinfoam, should according to him determine a unique spinfoam QG model. IF IT DOES NOT he says, then he would stop researching spinfoam QG.

    There is a video of Laurent Freidel talk at Loops ’05. I downloaded it earlier and just watched it again. I was impressed—there is a kind of “falsifiability” here. It is not so much empirical (except that feynman diagram vertex amplitudes are supported empirically) as theoretical consistency. but he is obviously serious about it. for people who think like that and expect that, it is an exciting time in spinfoam.

    something has worked in the 3D case and they are starting to check it and if it does not work in 4D that means, for them, that spinfoam goes on the scrap heap. but if it does work then it passes a kind of test.

    see if you can download the Freidel talk from the Loops05 site. I think they may have taken the videos offline because I can’t download it now.

    the relevant preprint is
    and references therein

  16. Christine says:

    Dear Brett,

    Thank you for your comment.

    Best wishes

  17. Who says:

    Wilczek posted this today
    Enlightenment, Knowledge, Ignorance, Temptation
    Frank Wilczek
    10 pages, 5 figures. Summary talk at “Expectations of a Final Theory”, Trinity College, Cambridge, September 2005
    “I discuss the historical and conceptual roots of reasoning about the parameters of fundamental physics and cosmology based on selection effects. I argue concretely that such reasoning can and should be combined with arguments based on symmetry and dynamics; it supplements them, but does not replace them.”

  18. Chris W. says:

    I just read through Wilczek’s preprint. It is beautifully written; whatever one may think of the ultimate fate of these notions one could hardly ask for a better introduction. String theory and its current travails play only a supporting role in it.

    Some thoughts occurred to me after I finished it:

    The so-called anthropic coincidences are derivable and interesting only because so much is governed by dynamical laws. We could hardly argue that a certain small set of parameters must be close to their measured values for life to exist unless those parameters were embedded in a theoretical structure that is largely unique, precisely formulated and testable, and broadly applicable. That is, if the structure of the world, and its changes from one moment to the next, were completely arbitrary then the notion of selection effects would have no meaning; we couldn’t sensibly reason about them. So the possible need to invoke cosmological selection effects, unless we approach it in a glib and superficial way, should force us to think deeply about the nature and basis of physical laws, as well as their relationship to the possibility of discovering them.*

    For these reasons I am deeply suspicious of assertions that are tantamount to saying that in the multiverse all physical laws are possible. This implies that the dynamical laws that we have formulated and successfully tested are in fact purely accidental—just aspects of one “point” in the multiverse—so that the very basis for analyzing the anthropic coincidences threatens to collapse. Everything about the observed universe becomes a selection effect, and we are forced to conclude that the laws, the fundamental constants, and the existence of life are one big arbitrarily selected package.

    (Of course I’m setting aside the fact that defining life, and identifying the features of the observed universe upon which life really depends, is still a fairly dicey enterprise.)

    [* Note that, in some sense, all living things must discover something about physical laws, through evolution as well as in the adaptive behavior of individuals, in order to effectively negotiate their environments and survive long enough to reproduce.]

  19. Chris W. says:

    By the way, Sean Carroll’s latest post on Cosmic Variance nicely complements this discussion.

  20. Michael Bacon says:

    If spinfoam does work in 4D it adds evidence that it fits the quantum formalism and that the math surrounding the sum over histories approach is appropriate in the circumstances. I’m curious how the information representing the physical objects constituting the spinfoam, whatever its ultimate appearance, commute with their nearest neighbors. Perhaps quantum computation theory has something to add in this regard.

  21. Arun says:

    Which fundamental physical parameters, when changed to within a factor of 2, would make carbon-based life impossible?

  22. Arun says:

    Further question – would the “Standard Model” with a single generation of leptons and quarks instead of three (hence the scare quotes) be as consistent as the Standard Model? Would such a universe with such physics support life? While one generation instead of three would change details of the big bang and nucleosynthesis and so on, presumably a carbon atom doesn’t change all that much; planets and chemistry might still be possible? If the answer is yes, then doesn’t that put paid to the anthropic principle? i.e., if the anthropic principle can’t distinguish between one generation and three, then it isn’t of much use, is it?

  23. Max Tegmark says:

    Hi Mark,

    I recently discovered your blog entry about our paper, and found it both interesting and amusing. Next time you spot me at a meeting, please stop by and say hello during a coffee break, as it would be fun to discuss these issues.

    My opinion is that this anthropic debate is so spirited and long-lived because the two sides are largely talking past each other, addressing different questions. In this spirit, I think it would help if we both state clearly what points we’re trying to make. The essence of what you write seems to be “the anthropic principle is unscientific and I don’t like it”. How would you phrase your core points, and what precisely do you mean by “the anthropic principle”?
    Specifically, how would you classify your critique?
    a) The calculations in our paper are uninteresting and a waste of time.
    b) Our paper contains incorrect statements.
    If it’s (a), then I respect your viewpoint and merely disagree with it.
    If it’s (b), then which specific statements do you object to?

    This may surprise you, but I don’t like using the phrase “anthropic principle”, because there are many different definitions of
    it floating around, all of which I believe to be either tautological or false. The question that I’m personally interested in is how to confront a mathematical theory of physics (given by some Lagrangian or whatever) with observation, which I think you agree involves computing its predictions with selection effects taken into account.
    For example, what does the pre-inflationary axion model predict, and is it ruled out? A key goal of our paper was to address these questions.


  24. Who says:

    Dear Max,
    I thought it was a serious flaw in your paper that it failed to address Smolin’s cosmological natural selection proposal (CNS) and the accompanying arguments presented in

    Scientific Alternatives to the Anthropic Principle

    —excerpt from abstract—
    …We show however that it is still possible to make falsifiable predictions from theories of multiverses, if the ensemble predicted has certain properties specified here. An example of such a falsifiable multiverse theory is cosmological natural selection. It is reviewed here and it is argued that the theory remains unfalsified. But it is very vulnerable to falsification by current observations, which shows that it is a scientific theory. …

    Since the CNS proposal is the simplest and most directly testable multiverse model, and it has not yet been refuted by observation as far as I know, I think it is incumbent on you to describe CNS and to say why you think it can be ruled out. Or explain why it is not testable by current astronomical observations, if you believe it is not.

    It might further the discussion if you did this.

    Cheers 😉

  25. woit says:

    Hi Max,

    Thanks a lot for taking the time to write here, I’d certainly enjoy talking to you in person about these issues sometime. Here are some comments, I hope they address the questions you asked.

    First, some context. My background is in mathematics and particle physics, not cosmology, and what motivates me is the idea of trying to find an improvement of the standard model, using new mathematical ideas. I think Wilczek would classify me as the sort who is trying to get somewhere by “pure thought”, something which he doesn’t believe will ever work. I’d claim that, given the lack of any new helpful experimental input, particle theorists don’t really have any choice at the moment except to go this admittedly much more difficult route. I’ve never found string theory a particularly appealing idea for unification (although it gives interesting insights into strongly coupled gauge theories and has led to some important new mathematics), and think the way it has completely dominated mathematically-minded particle theory for the last twenty years has been a disaster for the subject. By now it should be clear that it is simply a highly speculative idea that failed, and the field desperately needs to acknowledge this and move on to trying other things. The whole string theory landscape program, especially in its “anthropic” version, seems to me to be a retreat from the very idea of doing science, motivated by a refusal to admit failure. For more about this, read my recent review here of Susskind’s new book.

    Like just about any particle theorist, I’ve spent some time looking at what is going on in cosmology and hoping it will provide some new insights into how to get beyond the standard model. So far it seems to me cosmology has provided some interesting hints, but unfortunately they’re no more than hints. We’re agreed that what needs to be done here is to find ways of confronting our mathematical models with the real world. When dealing with models that involve a statistical ensemble of universes and observables that are only probabilistically determined, sure, one has to take into account selection effects.

    The scientific part of my critique of your paper was not that I thought any of it was incorrect. The major part of it, your anthropic calculation of the selection effect for the dark energy density, was more complicated than I have the time or interest to follow. You end up with a probability distribution for the dark energy, with the observed value near the peak. What was unclear to me was exactly what conclusions you are claiming can be drawn from this. Ignoring the axion cosmology prior, it seems you’re just claiming that there’s an anthropic window in the dark energy density, and observations show we’re in the middle of it.

    But, as a particle physicist, what I really want to know is what is left over when you remove the selection effect. Exactly what does your calculation say about the axion cosmology model? Have you provided any new evidence for it, other than that it’s not obviously inconsistent? Can you rule out a flat dark energy density prior in favor of the axion cosmology one? Your paper didn’t seem to seriously address these kinds of questions, or the more general question of when you can expect to get genuine, falsifiable predictions out of this kind of calculation.

    That’s the scientific critique, there’s also a more general critique, one I made in my posting. Especially the early part of your paper seemed to me a heavily ideological push for the idea of anthropic determination of the standard model parameters, without anything to really back this up. Given the on-going disaster in particle theory these days due to the string theory landscape, I think this is really unhelpful.

    Anyway, thanks again for writing in. If you’d like to write something more than a short comment in response, I’d be more than happy to put it up as a new posting, so people would be more likely to see it and comment on it if they wish.


  26. Aaron Bergman says:

    For my part, I think probabilistic arguments across multiverses are nonsensical. Otherwise, I don’t see a satisfactory response to the doomsday argument. So, you can file me under (a).

  27. Who says:

    Peter to Max: “If you’d like to write something more than a short comment in response, I’d be more than happy to put it up as a new posting, so people would be more likely to see it and comment on it if they wish.”

    That would be great! Please consider doing this, Max!

  28. Max Tegmark says:

    Hi Peter,

    Thanks for your thoughtful response. I’m glad to hear that you didn’t feel any of it was incorrect.

    I agree that it will be interesting if one can make stronger statements about the axion model; as described in the paper, this will require improved astrophysics calculations that I hope somebody will make. You described the early part of the paper as a heavily ideological push for the idea of anthropic determination of the standard model parameters. The intent was rather to push for an open mindset on the issue, since we frankly don’t know how many parameters will ultimately be computable from others.

    Regarding the CNS critique by “Who”: The pre-inflationary axion model is a complete physical theory, whereas Smolin’s “Cosmological natural selection” is not (we lack a mathematical description of how black holes spawn universes; see also http://arxiv.org/abs/hep-th/0407266). Moreover, my guess is that the hypothesis is ruled out by the low observed fluctuation level (~1/10^5), since raising it would lead to more black holes.

    Finally and most importantly, I’m glad that many of you disagree with my views! As argued in http://arxiv.org/abs/physics/0510188, I think that diversity in the physics community is more useful than an ideological monoculture, since it motivates physicists to tackle unsolved problems with a wide variety of approaches.


  29. Who says:

    Hi Max,
    personally I view your “low observed fluctuation level” not as a fundamental physical constant but more of an ad hoc result. It could be symptomatic of more basic parameters established prior to the beginning of expansion. But this is largely a matter of preference. It is personal opinion on your part (as you say) and equally on mine!

    I would have expected you to mention CNS in your paper and to say why, in your opinion, it is probably ruled out. This would have given others a convenient opportunity to argue that it has not yet been ruled out.

    There have been quite a few papers in the past year relating to the question of how black holes might spawn universes. Much of the work (QG modeling black hole collapse) was by people who reported in the Friday (14 October) session of Loops ’05. The main topic was LQC, but the same people work on quantum models of gravitational collapse and, for what it’s worth, the bounce mathematics has turned out to look rather similar in both cases.

    You say “we lack a mathematical description” and cite a 2004 paper by Leonard Susskind responding to Smolin, but, as I say, there has been quite a bit of mathematical description since then. I also do not believe Susskind’s rebuttal stands unchallenged.

    Be that as it may, my point is that the CNS idea has not been disposed of and should have been discussed. If you think it is not falsifiable, or that it has already been falsified, then you should at least give your reasons.

    In case you would like to look up the past year’s QG papers modeling gravitational collapse and bounce, I will post some arxiv numbers later. Some of the relevant authors, if you want to look up their recent work yourself, would be Abhay Ashtekar, Martin Bojowald, Viqar Husain, Oliver Winkler, Leonardo Modesto, Parampreet Singh.

    To be fair, one should note that Smolin’s CNS proposal is testable without reference to any specific mathematical description of black hole collapse and bounce. The CNS conjecture is that some reproductive/evolutionary mechanism has fine-tuned the constants for black hole production. CNS challenges us to find even one fundamental constant (hope you will pardon me if I decline to view your “low observed fluctuation level” as a fundamental constant) which if it were better tuned would result in substantially greater black hole abundance.
    That is something one can use to test—and possibly falsify—CNS, even before one has a complete mathematical theory of the conjunction of black hole and big bang events.


  30. Lee Smolin says:

    Dear Max and Who

    Thanks for mentioning the CNS proposal. The issue of the level of δρ/ρ (the fluctuation level) was discussed in detail and resolved in the first paper published on the model, Did the universe evolve?, Classical and Quantum Gravity 9 (1992) 173-191, summarized in Life of the Cosmos, pp. 309-310.

    Given that this was the first theory based on the “landscape*” I would hope that people would look at the literature before dismissing it.

    The point is that in simple one-field inflation models the fluctuation level is proportional to the inflaton self-coupling constant. However, the number of e-foldings in inflation is related to (if I recall right) the inverse of the square root of the inflaton self-coupling. The result is that the cost of raising the self-coupling to produce primordial black holes or make more galaxies is that there are fewer e-foldings in inflation and the resulting universe is exponentially smaller. Hence CNS predicts that the self-coupling should be as small as possible consistent with some steady level of black hole production; i.e. many more black holes are produced by a slow but steady rate of black hole production in an exponentially larger universe than by a burst of primordial black hole production in an exponentially smaller universe.

    This seems to characterize our universe where most black holes are made as supernova remnants. It also leads to a prediction, which is that inflation is governed by one parameter and not by a complicated potential with more than one parameter in which the fluctuation amplitude and number of e-foldings would be independent. So were there any evidence for an inflaton potential governed by more than one parameter the theory would be ruled out. I made this prediction in 1992 and so far it has held up.
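    The tradeoff in the argument above can be sketched numerically. Everything in this toy model is an illustrative assumption with unit constants, not from Smolin’s paper: the black-hole production rate per unit volume is taken proportional to the self-coupling λ (since δρ/ρ ∝ λ), the number of e-foldings is taken as N_e = c/√λ, and the post-inflation volume scales as e^(3 N_e).

    ```python
    import math

    def log_black_hole_count(lam, c=3.0):
        """Toy log of the total black-hole count for inflaton self-coupling lam.

        Illustrative assumptions (not from Smolin's paper):
          - production rate per unit volume scales like lam, since the
            fluctuation level delta-rho/rho is proportional to lam;
          - the number of e-foldings is N_e = c / sqrt(lam);
          - the post-inflation volume scales like exp(3 * N_e).
        Work in logarithms to avoid floating-point overflow.
        """
        n_e = c / math.sqrt(lam)
        return math.log(lam) + 3.0 * n_e

    # Lowering the self-coupling reduces the fluctuation level, but the
    # exponentially larger volume more than compensates, so the total
    # count grows: the "slow but steady in a big universe" conclusion.
    for lam in (1e-2, 1e-3, 1e-4):
        print(f"lam = {lam:.0e}: log(count) ~ {log_black_hole_count(lam):.1f}")
    ```

    For small λ the log-count grows like 3c/√λ, so under these assumptions the exponential volume factor always dominates the merely linear loss in production rate.
    
    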

    I can endorse Who’s comments: recently we have very good detailed support from the quantum theory of gravity that black hole and cosmological singularities bounce. There is not yet a detailed description of how the parameters mutate in a bounce, but this is certainly plausible given current views of the “landscape”.

    Max, if these were your main objection to CNS, would you now agree that the theory is viable?

    I would go further: CNS is the only landscape theory proposed so far that makes falsifiable predictions. Should this not make it the leading candidate for an explanation of the choices of parameters in the case that the landscape is real?



    *and indeed the origin of the term, which comes from “fitness landscape”.

  31. Anthony Aguirre says:

    Hi Lee,

    This is a nice argument, but does not terribly ease my misgivings about CNS, which are approximately 5-fold.

    First, the initial field value will also come into the number of e-foldings, and (absent a concrete model) that may or may not be correlated with the inflaton coupling.

    Second, other things could lead to ultra-numerous black holes, e.g. a very high baryon/photon ratio. This physics should be unrelated to inflation, and variable by tuning the coupling constants. Or, we could introduce tilt into the perturbation spectrum to create scads of primordial black holes. Or is there a size requirement?

    Third, I’ve never been clear on the ‘number of black holes’: per unit what? If per unit photon (or dark matter particle), we can easily maximize this by increasing the baryon (or dark matter) to photon ratio. If per unit volume, this is of course time-dependent. And in all cases we must worry about infinite universes (note that a universe starting out finite can still spawn an infinite universe, as in open inflation). These are more-or-less the same semi-intractable issues eternal inflation has to deal with.

    Fourth, CNS still requires, even if we accept the nascent models for creating baby universes from black holes, some way of passing-down constants with small variations.

    Last, CNS does not answer the question of why the universe supports life, unless black holes are alive. It merely transforms the question into ‘why does a high black-hole formation rate correlate with life?’. An alternative that *would* explain things, I would say, would be if advanced civilizations were responsible for creating the baby universes. But that is pretty science-fictiony and I suspect you would rather avoid going down that path.

    Thus I would be very pleased if CNS could be made to work and create a strongly peaked probability distribution for the observed parameters. (Note, in regard to our recent paper, that CNS is not, as I see it, an alternative to the scenario we present, but a special case of the ‘multiverse’ explanation in which p_prior is strongly peaked around certain parameter values related to black hole formation. Unless axions are involved in black holes, I’m not sure it would impact the argument.) But it seems to me that there are even more missing pieces, and about as many very difficult problems of principle to deal with, as in the eternal inflation scenario.



  32. Anthony Aguirre says:

    Hi All,

    An interesting discussion, and I agree with Max that it is good for parties with very different viewpoints to discuss this in a way that is “not talking past each other”, i.e. without making assumptions about what the other parties are actually saying. Those interested in a more detailed (and, I think, not particularly pro-‘anthropic’) discussion of what responses one might take to having non-unique constants, by one or two of the authors of the paper under discussion, might enjoy hep-th/0409072 or astro-ph/050651.


  33. Chris W. says:


    Was that supposed to be astro-ph/0506519 (“On making predictions in a multiverse: conundrums, dangers, and coincidences”)?

  34. woit says:


    Thanks for your very interesting comments. I’m sure the paper Chris mentions is the one you had in mind. It looks like it goes into much greater detail in addressing the kinds of questions that your more recent paper raised in my mind. I’ll add something to the main posting pointing to it.

  35. Aaron Bergman says:

    I’ll ask my usual question for people who make arguments along these lines: is the human race going to end soon? Or, if you don’t like the Doomsday argument, there’s Olum’s argument that we should be in some star-spanning civilization.

    I just don’t see how the notion of probability makes any sense in these situations and, even if it did, what we gain from thinking about it.

  36. Lee Smolin says:

    Hi Anthony,

    Thanks for your criticisms. Briefly, CNS works well for the parameters of the standard model of particle physics, and I don’t think this can be disregarded. For every variation of those parameters for which definite conclusions could be reached, the conclusion was that the number of black holes made decreased. The case is not as strong for some of the cosmological parameters. I don’t take these to be as important, as we don’t know which, if any, cosmological parameters are determined by parameters of a fundamental theory. But there are some answers to your points, which I’ll repeat and then address:

    “… the initial field value will also come in to the number of e-foldings, and (absent a concrete model) that may or may not be correlated with the inflaton coupling….”

    Fine, but the mechanism that determines the initial field value may be independent of the inflaton coupling to leading order; if so, then this decouples, i.e. for any initial field value my argument works.

    What about “… a very high baryon/photon ratio…?”

    If I recall correctly, this affects nucleosynthesis and many other things, as this gives a cold or tepid big bang. I’ll check.

    “…Or, we could introduce tilt into the perturbation spectrum to create scads of primordial black holes..”

    The question again is what the consequences are for other processes; I’ll think about this.

    “…I’ve never been clear on the ‘number of black holes’: per unit what?”

    The answer is per unit bounce, which means per unit black hole in the previous universe.
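    Counting “per unit bounce” makes the CNS ensemble a branching process: each universe yields some mean number of black holes, each of which bounces into one daughter universe. A toy Galton-Watson sketch (the yields 2.0 and 3.0 below are made-up illustrative numbers, not physical estimates) shows why parameter values that maximize black-hole production come to dominate the ensemble:

    ```python
    def expected_universes(mean_bh, generations):
        """Expected universe count after `generations` bounces, assuming each
        universe spawns `mean_bh` black holes on average and each black hole
        bounces into exactly one daughter universe (toy Galton-Watson model;
        the numbers are illustrative, not physical).
        """
        return mean_bh ** generations

    # Two hypothetical lineages with different average black-hole yields:
    # the higher-yield lineage exponentially dominates the ensemble, which
    # is the selection effect CNS appeals to.
    for g in (1, 10, 30):
        low = expected_universes(2.0, g)
        high = expected_universes(3.0, g)
        print(f"after {g:2d} bounces, high-yield fraction = {high / (low + high):.6f}")
    ```

    Because the counts multiply each generation, even a modest per-bounce advantage makes a randomly chosen universe almost certainly one whose parameters are near a local maximum of black-hole production.
    
    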

    “Fourth, CNS still requires, even if we accept the nascent models for creating baby universes from black holes, some way of passing-down constants with small variations….”

    Granted. This remains to be filled in. But still it is possible to deduce falsifiable predictions. This is not surprising, Darwin could make predictions in spite of being ignorant of DNA and its replication.

    “…Last, CNS does not answer the question of why the universe supports life, unless black holes are alive. It merely transforms the question into ‘why does a high black-hole formation rate correlate with life?’…”

    There is a clear correlation. You only get a lot of black hole production in universes with many massive stars. You only get many massive stars when the initial mass function (IMF) is power law out to many solar masses rather than exponentially damped. While this is incompletely understood, there is evidence the high end of the IMF is power-law because there are processes which efficiently cool giant molecular clouds (GMC’s) and because the GMC’s are well shielded from star light. Both the cooling and shielding involve carbon and oxygen. (This is described in some detail in Life of the Cosmos). Hence the chemical complexity life requires may exist because it is needed for a universe to efficiently produce stars massive enough to leave black hole remnants.

    “…An alternative that *would* explain things, I would say, would be if advanced civilizations were responsible for creating the baby universes. But that is pretty science-fictiony and I suspect you would rather avoid going down that path….”

    I don’t. But Harrison (of Harrison-Zeldovich) has written a paper which goes down that path, as has Louis Crane.

    I hope this helps,



  37. ksh95 says:

    “…Darwin could make predictions in spite of being ignorant of DNA and its replication…”

    Darwin had evidence that during a “species-bounce” a chicken would not mutate into a donkey. The important word here is small: without small changes one is left with a random mess.

    “…CNS does not answer the question of why the universe supports life…”

    Why should CNS (or any theory, for that matter) answer *that* question? An equally valid question could be: why does ksh95 like soup?

  38. Anthony Aguirre says:

    Chris W.:

    That is indeed the article; sorry about the typo.


    This is probably a discussion that we should have over a nice meal sometime, and hopefully I can make it out to PI soon to do so. I would love to understand how CNS could work, and would be happy to be convinced. Perhaps my biggest stumbling block now, as I think about it, is the invocation of inflation. If inflation is involved, then it is easy for inflation to be future-eternal, and if this were possible for *any* set of constants, then universes involving eternal inflation, which spawn infinitely many black holes, would immediately dominate the ensemble, and we are back where we started. If you have a good idea for allowing inflation while precluding the possibility of eternal inflation, that would be wonderful (as much as I enjoy thinking about eternal inflation, I would not mourn its passing, I have to say 😉). But, for example, in the case with possible multiple vacua, I see no way to prevent it from occurring at least for some parameter values, and I think evolution would find a way to exploit this mechanism for infinite reproduction!



  39. Moshe says:

    Peter, since this discussion is going into your territory, I am curious what your opinion is of cosmological natural selection as an alternative to anthropic reasoning. Feel free to ignore this if you haven’t looked at it in sufficient detail.

    Happy holidays!



  40. woit says:

    Hi Moshe,

    I haven’t looked at CNS in enough detail to have an intelligent comment on it. The crucial question for anything like this is whether it generates testable predictions. Here Smolin claims it does, and his discussion here with Aguirre looked like it could be very illuminating, but I haven’t taken the time to really understand the arguments on both sides (life is short, and it’s exam period here these days…)

    In general, what I really care about, and am willing to invest time in trying to carefully understand, are new physical ideas that explain something about particle theory, or new mathematical ideas that might somehow be useful in better understanding particle theory. This means there are a lot of topics in cosmology and quantum gravity that I’ve never studied in a really serious way; CNS is one of them.

  41. Lee Smolin says:


    To Anthony on eternal inflation: I understand the arguments that go from inflation to eternal inflation, but I find myself unconvinced by them, especially in the context in which inflation would arise in a small region to the future of the bounce of a black hole singularity. I would trust inflation in general much more if we had solved the cosmological constant problems. The cosmological constant problem arises not in a fundamental theory but only at the effective field theory level and, as pointed out by Dreyer in hep-th/0409048, there may be a physical mechanism, such as those proposed by him or by Volovik, which dynamically adjusts the vacuum non-perturbatively close to zero, in which case there is no inflation. Of course then I would lose the argument I made, but there is no problem in principle, as the horizon problem is already solved by the fact that the universe arises from a bounce.

    On the other hand, WE KNOW that there are many black holes, perhaps as many as 10^19 in our Hubble volume. And we have good theoretical evidence that black hole singularities bounce. So only one element is missing for the CNS scenario: that the change of vacuum resulting from a bounce results in small changes in standard model parameters.

    For the time being we must assume this, although it is possible this assumption may be tested soon. But once we do, we get quite a bit.

    My impression, if I can say so, is that many cosmologists undervalue the positive successes of CNS. It EXPLAINS otherwise mysterious features of our universe, such as the setting of the parameters to make carbon and oxygen abundant, not because of life but because of their role in cooling GMCs. It also EXPLAINS the hierarchy problem and the scale of the weak interactions, because these can also be understood to be tuned to extremize black hole production. Further, it EXPLAINS two otherwise improbable features of galaxies: why the IMF for star formation is power law and why disk galaxies maintain a steady rate of massive star formation.

    Moreover, CNS makes a few real predictions: that the upper mass limit of neutron stars is less than 1.6 solar masses, and that inflation is governed by one parameter. Made in 1992, these predictions could easily have been falsified, but they have stood up.

    CNS makes these genuine explanations and predictions without having to invoke the AP, whereas eternal inflation requires invoking strongly the AP just to be plausible.

    It seems to me CNS has a much longer list of successes than eternal inflation, which so far as I can tell explains NOTHING about the particle physics standard model parameters and makes no falsifiable predictions.

    There are always many theories with attractive features; my understanding of how science is supposed to work is that we are to pick, out of the list of otherwise attractive theories, those few that genuinely explain and predict over those that don’t.

    The problems I set out to solve were 1) why the neutron is a bit heavier than the proton, 2) why their difference is comparable to the electron mass, 3) the value of the Fermi constant, 4) the value of the strange quark mass, and 5) the hierarchy problem. These are real problems which have been around for decades. So far as I know, CNS is the only explanation proposed so far that is both genuinely explanatory and genuinely falsifiable in terms of ongoing observations.

    If you ask me to discount all this because of a highly speculative theory of the very early universe that explains nothing, makes no falsifiable predictions, and may easily disappear when the cosmological constant problem is solved, I am afraid I don’t think this is consistent with the methodology of science as I understand it. I understand reasonable people may differ, and I have the greatest respect for you, Max, and other cosmologists, but I must admit I am also puzzled by your apparent judgements of what is more and less reliable in present theory.

    Also, to Michael Bacon: Methods from quantum computation are being used to solve the problem you mention, which is how distinct particle states emerge from a spin foam. See D. W. Kribs, F. Markopoulou, Geometry from quantum particles, gr-qc/0510052.



  42. Moshe says:

    Thanks Peter, and good luck with finalizing the semester.

  43. Who says:

    In his preceding post, Lee Smolin says

    The problems I set out to solve were 1) why the neutron is a bit heavier than the proton, 2) why their difference is comparable to the electron mass, 3) the value of the Fermi constant, 4) the value of the strange quark mass, and 5) the hierarchy problem. These are real problems which have been around for decades. So far as I know, CNS is the only explanation proposed so far that is both genuinely explanatory and genuinely falsifiable in terms of ongoing observations.

    [MY COMMENT] This is a key paragraph and I want to connect the dots and see what I understand and what I don’t, and maybe get some help explicating, if possible. The above connects with page 31 of the paper (Scientific Alternatives to the AP, hep-th/0407213). Here is an excerpt from Smolin’s paper:

    The crucial conditions necessary for forming many black holes as the result of massive star formation are,

    1. There should be at least a few light stable nuclei, up to helium at least, so that gravitational collapse leads to long lived, stable stars.

    2. Carbon and oxygen nuclei should be stable, so that giant molecular clouds form and cool efficiently, giving rise to the efficient formation of stars massive enough to give rise to black holes.

    3. The number of massive stars is increased by feedback processes by which massive star formation catalyzes more massive star formation. This is called “self-propagated star formation,” and there is good evidence that it makes a significant contribution to the number of massive stars produced. This requires a separation of time scales between the time scale required for star formation and the lifetime of the massive stars. It requires, among other things, that nucleosynthesis should not proceed too far, so that the universe is dominated by long-lived hydrogen-burning stars.

    4. Feedback processes involved in star formation also require that supernovas should eject enough energy and material to catalyze formation of massive stars, but not so much that there are not many supernova remnants over the upper mass limit for stable neutron stars.

    5. The parameters governing nuclear physics should be tuned, as much as possible consistent with the foregoing, so that the upper mass limit of neutron stars is as low as possible.

    The study of conditions 1) to 4) leads to the conclusion that the number of black holes produced in galaxies will be decreased by any of the following changes in the low energy parameters:

    • A reversal of the sign of ∆m = m_neutron − m_proton.

    • A small increase in ∆m (compared to m_neutron) will destabilize helium and carbon.

    • An increase in m_electron, of order m_electron itself, will destabilize helium and carbon.

    • An increase in m_neutrino …, will destabilize helium and carbon.

    • A small increase in α will destabilize all nuclei.

    • A small decrease in α_strong, the strong coupling constant, will destabilize all nuclei.

    • An increase or decrease in G_Fermi of order unity will decrease the energy output of supernovas. One sign will lead to a universe dominated by helium.

    [my comment: not sure the Delta and alpha characters will display correctly. Some of the above seems readily understandable; some does not. In one case where I failed to understand, I shortened the quote by elision; refer to the original. I would be glad if anyone volunteered to clarify these points, even the more self-evident ones, instead of my having to.]

  44. Pingback: Not Even Wrong » Blog Archive » Trackback Censorship

  45. Frank Wilczek says:


    I’d like to encourage readers interested in my views on these subjects to read the original papers, which are rather more nuanced and substantial than the summaries presented here. Thanks, and happy holidays.

  46. woit says:

    I’d like to second Wilczek’s encouragement to all to read the original papers discussed here. In this posting, as in just about all others, what you are getting here is not an accurate, objective summary of a talk, article or book, but my own commentary on one or more specific aspects, written under the assumption that readers will read the original source for themselves.

    In addition, in cases like this particular one, it should be taken into account that my commentary is very much not objective, but highly colored by certain specific concerns which the regular readers of this blog should be well aware of.

  47. Who says:

    Hello Frank Wilczek,
    I’m a long-time fan of your writings—like the combination of ideas and style.

    Here I am responding directly to the Tegmark et al paper
    “Dimensionless constants…”
    of which you are co-author.

    What disturbs me about this paper is that it does not include a discussion of Smolin CNS, as described in
    “Scientific alternatives to the anthropic principle”

    Would you please explain why the CNS hypothesis does not merit consideration when one discusses the explanation or non-explanation of basic dimensionless constants? Has CNS, in your view, already been falsified by observation?

  48. woit says:


    I probably shouldn’t try and answer for Wilczek, who I gather from his comment is not particularly happy with my summary of parts of his papers and talk, but there’s an obvious answer to your question. His paper with Tegmark is not a review of the various attempts to get scientific predictions out of anthropic arguments. It not only doesn’t mention Smolin’s CNS hypothesis, but it also doesn’t discuss the far more popular string theory anthropic landscape program being pursued by many string theorists. The paper is specifically about the dark matter density, looking at axion cosmology and WIMPs as hypotheses for the origin of dark matter, as well as relevant anthropic selection effects. Given what they are trying to do in that paper, I don’t see any particular reason the authors should be addressing the CNS hypothesis there.

  49. Who says:


    let me be a bit more specific about where I think the omission occurred, and then either you or Wilczek can judge if my objection is valid. In the Tegmark et al paper, on page 1, in the introduction, section 1B, quote:

    —quote from Tegmark et al—

    B. The origin of the dimensionless numbers

    So why do we observe these 31 parameters to have the particular values listed in Table 1? Interest in that question has grown with the gradual realization that some of these parameters appear fine-tuned for life, in the sense that small relative changes to their values would result in dramatic qualitative changes that could preclude intelligent life, and hence the very possibility of reflective observation. As discussed extensively elsewhere [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], there are four common responses to this realization:

    1. Fluke: Any apparent fine-tuning is a fluke and is best ignored.

    2. Multiverse: These parameters vary across an ensemble of physically realized and (for all practical purposes) parallel universes, and we find ourselves in one where life is possible.

    3. Design: Our universe is somehow created or simulated with parameters chosen to allow life.

    4. Fecundity: There is no fine-tuning, because intelligent life of some form will emerge under extremely varied circumstances.

    Options 1, 2, and 4 tend to be preferred by physicists, with recent developments in inflation and high-energy theory giving new popularity to option 2.

    Like relativity theory and quantum mechanics, the theory [of inflation] is deepening our understanding of the nature of physical reality. First of all, inflation is generically eternal [25, 26, 27, 28, 29, 30, 31], so that even though inflation has ended in the part of space that we inhabit, it still continues elsewhere and will ultimately produce…

    …More dramatically, a common feature of much string theory related model building is that there is a “landscape” of solutions, corresponding to spacetime configurations involving different values of both seemingly continuous parameters (Table 1) and discrete parameters…


    Peter, you say an obvious answer to why CNS was not mentioned is that
    “His [Wilczek’s] paper with Tegmark is not a review of the various attempts to get scientific predictions out of anthropic arguments.” But I would reply that CNS is not an attempt to get scientific predictions out of anthropic arguments. The theory has no anthropic character at all—it doesn’t refer to life or consciousness, or depend logically on their existence.

    However, the paper does review four possible ways of reacting to the “realization that some…parameters appear fine-tuned for life.”

    I believe that until or unless CNS is refuted it should be included among the possible responses, and that it does not fit neatly into any one of the four listed by the paper. I suppose that according to the CNS view it can be considered a fluke that conscious life can arise where evolution has tuned the constants for abundant black holes. And that in this view there is a single universe capable of branching where, as in the case of a stellar mass black hole, gravitational collapse leads to a bounce. So it is perhaps closest to responses 1 and 2—but not adequately represented by either!

  50. Aaron Bergman says:

    CNS is subsumed in #2.

    Hope this helps!

Comments are closed.