A few weeks ago one Nobel prize winner put out an article promoting the idea of adopting anthropic reasoning as a new paradigm for theoretical physics. More recently another Nobelist, Frank Wilczek, has to some degree followed suit. Wilczek is one of four authors of a new paper entitled Dimensionless constants, cosmology and other dark matters, which first appeared on the arXiv on November 29, then in a slightly revised version on December 8. The other authors are Tegmark, Aguirre and Rees, with Tegmark’s name appearing first, indicating that it is more his work than that of his co-authors.
I wasn’t sure quite what to make of this paper when it first came out, especially how much of it reflected Wilczek’s own point of view on anthropism. Last Friday I attended talks by Wilczek and Tegmark at the 6th Northeast String Cosmology Meeting, organized by the Institute for Strings, Cosmology and Astroparticle Physics here at Columbia.
Wilczek’s talk was entitled “Enlightenment, Knowledge, Ignorance, Temptation”. He explained that these corresponded to categorizing parameters of physical theories according to whether life depends on them or not, and whether we have a good idea of what determines them or not. The two possible answers to each of these two questions give four cases:
Enlightenment: Parameters that life depends on, and we think we have a good idea about what determines them. Here his example was the proton mass, very small on the Planck scale, but we think we know why: logarithmic running of coupling constants.
Knowledge: Parameters that life doesn’t depend on, and we think we have a good idea about what determines them. One example he gave was strong CP violation, which is irrelevant to life, but very small, perhaps because of axions.
Ignorance: Parameters that life doesn’t depend on, and we don’t have a good idea about what determines them. This includes most of the standard model parameters, as well as just about all parameters in theories that go beyond the standard model.
Temptation: Parameters that life depends on, and we don’t have a good idea about what determines them. The examples he gave were the electron and up and down quark masses.
He said that his talk would concentrate on “Temptation”, the temptation being that of using anthropic argumentation. He noted that David Gross believes this is a dangerous opiate, causing people simply to give up instead of really solving problems. The one anti-anthropic point he made was to put up a graphic showing the agreement of lattice QCD spectrum calculations with experiment, saying the lesson was that sometimes real calculations turn out to be possible even though people had at times doubted this. So one should try to “limit the damage”: not go wild and use anthropics inappropriately, but save as much beautiful physics as one can even when anthropic reasoning is forced on us.
The rest of his talk, though, showed a significant amount of enthusiasm for the new anthropism. He referred to people like his co-author Rees, who have been promoting the anthropic point of view for years, as “unhonored prophets”. Given the paucity of experimental data relevant to explaining where things like the standard model parameters come from, he said that at least anthropics provides lots of new questions, so one has something potentially fruitful to do when one gets up each day. He attacked the idea of using “pure thought”, without consulting the physical world, saying this hasn’t worked: not 20 years ago, not now, and not in the future. I presume he had string theory in mind when he said this, noting out loud that it might annoy some people in the room.
The main idea about anthropics he was trying to push is that anthropic calculations are “just conditional probability”, making much of the equation

f(p) = f_prior(p) f_selec(p)

for the probability of observing some particular value p of the parameters, given some underlying theory in which they are determined only probabilistically, by some probability distribution f_prior(p). The second factor, f_selec(p), is supposed to represent “selection effects”, and it is here that anthropic calculations supposedly have their role. In the paper the authors argue that “Including selection effects is no more optional than the correct use of logic”. In the way physics has traditionally been done, one hopes that the underlying theory determines p (i.e. f_prior(p) is a delta-function), making selection effects irrelevant. The authors attack this point of view, writing:
to elevate this hope into an assumption would, ironically, be to push the anthropic principle to a hedonistic extreme, suggesting that nature must be devised so as to make mathematical physicists happy.
At no point in his or Tegmark’s talks, or anywhere in their paper, do they address the central problem with the anthropic principle: there’s a huge issue about whether you can get falsifiable predictions out of it, and thus whether you’re really doing science. Here the problem takes the following form: if f_prior(p) is not peaked somewhere but is flat (or more or less flat), then everything just depends on f_selec(p), and if you calculate that anthropically, all you are doing is seeing what you can conclude from the known laws of physics and the fact that we exist. In the end what comes out of this kind of calculation is some probability distribution that had better be non-zero at the values of the parameters we observe; otherwise you’ve done the calculation wrong.
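The point that a flat prior leaves everything to the selection function can be made concrete in a few lines. This is my own toy numerical sketch, not anything from the paper: the parameter grid and the Gaussian "anthropic window" are entirely hypothetical.

```python
import numpy as np

# Toy sketch (hypothetical numbers, not the paper's calculation):
# the posterior for a parameter p is f(p) ∝ f_prior(p) * f_selec(p).
p = np.linspace(0.0, 10.0, 1001)
dp = p[1] - p[0]

f_prior = np.ones_like(p)                        # flat prior over the ensemble
f_selec = np.exp(-0.5 * ((p - 4.0) / 1.5) ** 2)  # hypothetical anthropic "window"

posterior = f_prior * f_selec
posterior /= posterior.sum() * dp                # normalize to unit area

# With a flat prior, the normalized posterior coincides exactly with the
# normalized selection function: the anthropic factor does all the work.
selec_norm = f_selec / (f_selec.sum() * dp)
assert np.allclose(posterior, selec_norm)
```

Whatever is put into f_selec comes straight back out as the "prediction", which is exactly why the falsifiability worry above bites.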
There is a particular sort of physical model one can hope to falsify this way. If one assumes our universe is a randomly chosen point in a “multiverse” of possibilities, and looks at an observable that is supposed to have a more or less flat probability distribution in the ensemble given by the multiverse, then one can argue that we should be in a region of parameter space containing the bulk of the probability in the anthropically determined f_selec(p), not far out in some tail where the probability distribution is vanishingly small. There are plenty of examples of this already. The proton lifetime is absurdly long compared to the bounds from anthropic constraints, so any model of a multiverse that doesn’t have some structure built into it that generically suppresses proton decay sufficiently is ruled out. This includes the string theory landscape, so one of the many mysteries of the whole anthropic landscape story is why its proponents don’t take their own arguments seriously and admit that their model has already been falsified. The same argument applies to Tegmark’s favorite idea, that of the existence of a Level IV multiverse of all possible mathematical structures, an idea he also promotes in the paper with Wilczek.
Wilczek also discussed one particular axion cosmology model in which f_prior(p) can be calculated. In these models one has the relation

ρ ∝ f_a^(7/6) θ_0^2
for the axion dark matter density in terms of the Peccei-Quinn symmetry breaking scale and the misalignment angle of the axion field at the Peccei-Quinn symmetry breaking phase transition. To make this agree with the observed dark matter density, if one assumes the misalignment angle is some random angle, then the Peccei-Quinn scale has to be about 10^12 GeV. If one instead wants the Peccei-Quinn scale to be the GUT or Planck scale, one has to find some reason for the misalignment angle to be very small. The proposal here is that this happens for anthropic reasons, since if the angle were not small it would produce an amount of dark matter incompatible with our existence. For these small angles the above formula implies that the probability distribution for the dark matter density ρ produced by such axions satisfies

f_prior(ρ) ∝ ρ^(-1/2)

(a flat distribution in the misalignment angle, together with ρ ∝ θ_0^2 at fixed Peccei-Quinn scale, gives dθ_0 ∝ ρ^(-1/2) dρ).
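The small-angle scaling is easy to check with a quick Monte Carlo. This is my own sketch, in arbitrary units with a hypothetical small-angle range: a uniformly distributed misalignment angle, with the density scaling as the square of the angle, induces a density distribution falling off as ρ^(-1/2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch (arbitrary units, hypothetical angle range): a uniform
# misalignment angle theta with rho ∝ theta**2 induces a prior on the dark
# matter density with f_prior(rho) ∝ rho**(-1/2).
theta_max = 0.1
theta = rng.uniform(0.0, theta_max, size=1_000_000)
rho = theta ** 2                                 # density ∝ (misalignment angle)^2

counts, edges = np.histogram(rho, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Change of variables: f(rho) = (1/theta_max) * d(theta)/d(rho)
#                             = rho**(-1/2) / (2 * theta_max)
predicted = centers ** -0.5 / (2.0 * theta_max)

# Away from the bin edges the histogram tracks the rho^(-1/2) law
ratio = counts[5:-5] / predicted[5:-5]
assert np.all(np.abs(ratio - 1.0) < 0.1)
```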
The Tegmark et al. paper contains an elaborate calculation of f_selec for the dark matter density, which goes on for eleven pages or so and involves a bafflingly long list of “anthropic” considerations about galaxy, star and planet formation, as well as many possible dangers that could have disrupted the evolution of life, such as disruption of the Oort cloud of comets. I’ll freely admit to not having taken the time to follow this argument. The end result for f_selec as a function of the dark matter density is a probability distribution with the measured dark matter density corresponding to something close to the peak.
I’m not sure exactly what conclusions one can or should draw from this calculation. So many different facts about our specific universe are being folded into it that it’s not clear to me that there isn’t some circular reasoning going on. This is a general problem with “anthropic” arguments: if you assume that life couldn’t exist if the universe were much different from what it is, you smuggle all sorts of information about the way the world is into your “anthropic” calculation, after which it is not too surprising that it “predicts” a universe with more or less the properties you observe.
What we really care about in these arguments is whether they can be used to extract any information whatsoever about f_prior, the physics we are trying to get at. In this axion cosmology case we have a prediction for this distribution, and the calculation shows it is consistent with the observed dark matter density, but as far as I can tell, all sorts of other quite different distributions would work too. So I’m still confused about exactly what this calculation has told us about the underlying axion cosmology physics it is supposed to address, other than that it is not obviously completely inconsistent.
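The worry that quite different distributions would work too can be illustrated numerically. In this toy sketch (hypothetical numbers throughout, not the paper's calculation), several very different priors, multiplied by the same selection function, each leave non-negligible posterior density at the "observed" value:

```python
import numpy as np

# Toy illustration (hypothetical numbers throughout): very different priors,
# combined with one and the same selection function, all give posteriors
# consistent with the "observed" value -- so the consistency check by itself
# pins down very little about f_prior.
rho = np.linspace(0.01, 10.0, 1000)
drho = rho[1] - rho[0]
rho_obs = 2.0                                    # hypothetical observed density

f_selec = np.exp(-0.5 * (rho - 2.0) ** 2)        # hypothetical anthropic window

priors = {
    "rho^(-1/2)": rho ** -0.5,      # the axion-model scaling discussed above
    "flat":       np.ones_like(rho),
    "rho^(-1)":   rho ** -1.0,
}

density_at_obs = {}
for name, f_prior in priors.items():
    post = f_prior * f_selec
    post /= post.sum() * drho                    # normalize each posterior
    density_at_obs[name] = np.interp(rho_obs, rho, post)

# Every one of these priors leaves substantial posterior density at the
# observed value: none of them is ruled out by the observation.
assert all(v > 0.05 for v in density_at_obs.values())
```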
Tegmark’s talk at Columbia was titled “Measuring and Predicting Cosmological Parameters”. The “measuring” part was a summary of some of the impressive experimental evidence for the standard cosmological model. The “predicting” part was pretty much pure promotion of anthropism, including a long section on reasons why the electroweak symmetry breaking scale is anthropic and some comments making fun of David Gross (“even he couldn’t predict the distance from the earth to the sun. Laughter…”). The only actual “predictions” mentioned were the results about the axion cosmology model described in detail in the Tegmark et al. paper, as well as the well-known Weinberg anthropic “prediction” for the cosmological constant.
All in all, I found these two talks and the Tegmark et al. paper pretty disturbing. They seem to me to be part of a highly ideological effort to sell the Anthropic Principle as science. The paper devotes two pages to a detailed list of standard model parameters, and makes various statements about the probability distribution function on this large number of parameters, even though it has nothing to say about almost all of them, and I think there’s a strong argument that the anthropic program inherently will never have anything useful to say about most of these parameters. Many of Wilczek’s remarks were more modest, but the paper he has signed his name to is highly immodest in its claims for anthropism. Given Weinberg’s and Susskind’s anthropic campaigns as well, it seems to me that more and more theorists are going to join this bandwagon. Neither Wilczek nor Tegmark is a string theorist (and Wilczek is clearly somewhat skeptical about the whole idea), but there seems to be an unholy alliance brewing between them and Susskind and his followers. The only prominent person in the field standing up to this publicly is David Gross, and it is very worrying to see how little support he is getting.
Update: A preprint by Frank Wilczek corresponding to his talk last week, entitled Enlightenment, Knowledge, Ignorance, Temptation, has appeared. It is a contribution to the same conference as the one Weinberg contributed Living in the Multiverse to, I gather in honor of Martin Rees. Wilczek’s preprint announces a “new zeitgeist”: that anthropic arguments are in the ascendancy. One quite strange thing in the preprint is that he suggests an anthropic explanation of the long proton lifetime, in terms of anthropic calculations involving future observers.
He does say there are drawbacks to the new order (a loss of precision and of targets to calculate), but on the whole he seems to embrace the new anthropic paradigm rather wholeheartedly, seeing it as a lesson in humility for those who had the hubris to believe it was possible to understand more about the universe through “pure thought.”
Update: Two of the authors of the paper discussed here (Aguirre and Tegmark) wrote in with some comments that are well worth reading (as well as those from Smolin and others about his own proposal). Aguirre points to an interesting paper of his On making predictions in a multiverse (see also an earlier paper with Tegmark), which addresses some of the conceptual issues that were bothering me about this sort of calculation. It points out many of the problems with this kind of calculation, and I don’t really share the author’s optimism that they can be overcome.
Lee Smolin mentioned to me a somewhat related workshop held this past summer at the Perimeter Institute, on the topic of Evolving Laws, especially the question “do the laws of nature evolve in time?” Audio of the discussions at the workshop is available.