A few weeks ago one Nobel prize winner put out an article promoting the idea of adopting anthropic reasoning as a new paradigm for how to do theoretical physics. More recently another Nobelist, Frank Wilczek, has to some degree followed suit. Wilczek is one of four authors on a new paper entitled "Dimensionless constants, cosmology and other dark matters", which first appeared on the arXiv November 29th, then in a slightly revised version on December 8. The other authors are Tegmark, Aguirre and Rees, with Tegmark's name appearing first, indicating it's more his work than that of his co-authors.

I wasn’t sure quite what to make of this paper when it first came out, especially how much it reflected Wilczek’s own point of view on anthropism. Last Friday I attended talks by Wilczek and Tegmark at the 6th Northeast String Cosmology Meeting organized by the Institute for Strings, Cosmology and Astroparticle Physics here at Columbia.

Wilczek’s talk was entitled “Enlightenment, Knowledge, Ignorance, Temptation”. He explained that these corresponded to categorizing parameters of physical theories according to whether life depended on them or not and whether we have a good idea for what determines them or not. Choosing the two possible answers to these two questions gives four cases:

Enlightenment: Parameters that life depends on, and we think we have a good idea about what determines them. Here his example was the proton mass, very small on the Planck scale, but we think we know why: logarithmic running of coupling constants.

Knowledge: Parameters that life doesn’t depend on, and we think we have a good idea about what determines them. One example he gave was strong CP violation, which is irrelevant to life, but very small, perhaps because of axions.

Ignorance: Parameters that life doesn’t depend on, and we don’t have a good idea about what determines them. This includes most of the standard model parameters, as well as just about all parameters in theories that go beyond the standard model.

Temptation: Parameters that life depends on, and we don’t have a good idea about what determines them. The examples he gave were the electron and up and down quark masses.

He said that his talk would concentrate on "Temptation", the temptation being that of using anthropic argumentation. He noted that David Gross believes this is a dangerous opiate, causing people to just give up instead of really solving problems. The one anti-anthropic point he made was to put up a graphic showing agreement of lattice QCD spectrum calculations with experiment, saying the lesson was that sometimes real calculations turn out to be possible even though people had at times doubted this. So one should try to "limit the damage", not go wild and use anthropics inappropriately, trying to save as much beautiful physics as one can even when anthropic reasoning is forced on us.

The rest of his talk though showed a significant amount of enthusiasm for the new anthropism. He referred to people like his co-author Rees who have been promoting the anthropic point of view for years as “unhonored prophets”. Given the paucity of experimental data relevant to explaining where things like standard model parameters come from, he said that at least anthropics gives lots of new questions so one has something to do when one gets up each day which might be fruitful. He attacked the idea of using “pure thought”, without consulting the physical world, saying this hasn’t worked, not 20 years ago, not now, not in the future. I presume he had string theory in mind when he said this, noting out loud that it might annoy some people in the room.

The main idea about anthropics he was trying to push is that anthropic calculations are "just conditional probability", making much of the equation

$$f(\mathbf{p})=f_{\text{prior}}(\mathbf{p})\,f_{\text{selec}}(\mathbf{p})$$

for the probability of observing some particular value $\mathbf{p}$ of the parameters, given some underlying theory in which they are determined only probabilistically, by some probability distribution $f_{\text{prior}}(\mathbf{p})$. The second factor $f_{\text{selec}}(\mathbf{p})$ is supposed to represent "selection effects", and it is here that anthropic calculations supposedly have their role. In the paper the authors argue that "Including selection effects is no more optional than the correct use of logic". In the standard way physics has traditionally been done, one hopes that the underlying theory determines $\mathbf{p}$ (i.e. $f_{\text{prior}}(\mathbf{p})$ is a delta function), making selection effects irrelevant in this context. The authors attack this point of view, writing:

*to elevate this hope into an assumption would, ironically, be to push the anthropic principle to a hedonistic extreme, suggesting that nature must be devised so as to make mathematical physicists happy.*
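For what it's worth, the factorization itself is just Bayes' theorem, conditioning on the datum "an observer exists" (the notation here is mine, not the paper's):

$$f(\mathbf{p})\equiv P(\mathbf{p}\mid\text{obs})\propto P(\mathbf{p})\,P(\text{obs}\mid\mathbf{p})=f_{\text{prior}}(\mathbf{p})\,f_{\text{selec}}(\mathbf{p})$$

so $f_{\text{selec}}(\mathbf{p})$ is, up to normalization, the probability of observers arising given parameter values $\mathbf{p}$. Nothing in the probability theory is controversial; the controversy is entirely over whether $f_{\text{selec}}$ can be computed in any meaningful way.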

At no point in his or Tegmark's talks, or anywhere in their paper, do they address the central problem with the anthropic principle: there's a huge issue about whether you can get falsifiable predictions out of it, and thus whether you're really doing science. In this context, the nature of the problem is that if $f_{\text{prior}}(\mathbf{p})$ is not peaked somewhere but is flat (or more or less flat), then everything just depends on $f_{\text{selec}}(\mathbf{p})$, but if you calculate it anthropically, all you are doing is seeing what you can conclude from known laws of physics and the fact that we exist. In the end what will come out of this kind of calculation is some probability distribution that had better be non-zero for the values of the parameters we observe, otherwise you've done the calculation wrong.

There is a particular sort of physical model one can hope to falsify this way. If one assumes our universe is a randomly chosen point in a "multiverse" of possibilities, and looks at an observable that is supposed to have a more or less flat probability distribution in the ensemble given by the multiverse, then one can argue that we should be in some region of parameter space containing the bulk of the probability in the anthropically determined $f_{\text{selec}}(\mathbf{p})$, not far out in some tail where the probability distribution is vanishingly small. There are plenty of examples of this already. The proton lifetime is absurdly long compared to bounds from anthropic constraints, so any model of a multiverse that doesn't have some structure built into it to generically suppress proton decay sufficiently is ruled out. This includes the string theory landscape, so one of the many mysteries of the whole anthropic landscape story is why its proponents don't take their own arguments seriously and admit that their model has already been falsified. It also applies to Tegmark's favorite idea, that of the existence of a Level IV multiverse of all possible mathematical structures, an idea he also promotes in the paper with Wilczek.

Wilczek also discussed one particular axion cosmology model in which $f_{\text{prior}}(\mathbf{p})$ can be calculated. In these models one has the relation

$$\xi_c\sim f_a^4\sin ^2\frac{\theta_0}{2}$$

for the axion dark matter density in terms of the Peccei-Quinn symmetry breaking scale $f_a$ and the misalignment angle $\theta_0$ of the axion field at the Peccei-Quinn symmetry breaking phase transition. To make this agree with the observed dark matter density, if one assumes the misalignment angle is a random angle, the Peccei-Quinn scale has to be about $10^{12}$ GeV. If one wants the Peccei-Quinn scale to be the GUT or Planck scale, one has to find some reason for the misalignment angle to be very small. The proposal here is that this happens for anthropic reasons, since if the angle were not small it would produce an amount of dark matter incompatible with our existence. For these small angles the above formula implies that the probability distribution for the dark matter density caused by such axions satisfies

$$f_{\text{prior}}(\xi)\sim \frac{1}{\sqrt \xi}$$
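The $1/\sqrt{\xi}$ behavior follows because for small angles $\xi\propto\theta_0^2$, so a flat distribution in $\theta_0$ gives $P(\xi < x)\propto\sqrt{x}$. Here's a quick numerical check (a sketch of my own, not a calculation from the paper): sample $\theta_0$ uniformly, drop the $f_a^4$ normalization, and compare the cumulative distribution of $\xi=\sin^2(\theta_0/2)$ with its analytic form $(2/\pi)\arcsin\sqrt{\xi}$, whose density indeed goes like $1/\sqrt{\xi}$ near zero.

```python
import math
import random

# Toy check of the axion misalignment prior. With theta_0 drawn
# uniformly from [0, pi] and xi = sin^2(theta_0 / 2) (f_a^4 factor
# dropped), the exact CDF is P(xi < x) = (2/pi) * arcsin(sqrt(x)),
# so the density behaves like 1/sqrt(xi) as xi -> 0.

random.seed(0)
N = 200_000
xis = [math.sin(random.uniform(0.0, math.pi) / 2.0) ** 2 for _ in range(N)]

def empirical_cdf(x):
    """Fraction of sampled xi values below x."""
    return sum(xi < x for xi in xis) / N

for x in (0.05, 0.25, 0.5):
    analytic = (2.0 / math.pi) * math.asin(math.sqrt(x))
    print(f"P(xi < {x}): sampled {empirical_cdf(x):.3f}, analytic {analytic:.3f}")
```

At $x=0.25$ the analytic value is exactly $(2/\pi)\arcsin(1/2)=1/3$, and the sampled fraction lands on top of it; the pile-up of probability at small $\xi$ is the integrable $1/\sqrt{\xi}$ divergence quoted above.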

The Tegmark et al. paper contains an elaborate calculation of $f_{\text{selec}}$ for the dark matter density, involving all sorts of "anthropic" considerations. It goes on for eleven pages or so and involves a bafflingly long list of considerations about galaxy, star and planet formation, as well as many possible dangers that could have disrupted the evolution of life, such as disruption of the Oort cloud of comets. I'll freely admit to not having taken the time to follow this argument. The end result for $f_{\text{selec}}$ as a function of $\sqrt\xi$ is a probability distribution with the measured dark matter density corresponding to something close to the peak.

I’m not sure exactly what conclusions one can or should draw from this calculation. So many different facts about our specific universe are being folded into this that it’s not clear to me that there isn’t some circular reasoning going on. This is a general problem with “anthropic” arguments: if you assume that life couldn’t exist if the universe was much different than it is, you smuggle all sorts of information about the way the world is into your “anthropic” calculation, after which it is not too surprising that it “predicts” the universe has more or less the properties you observe.

What we really care about in these arguments is whether they can be used to extract any information whatsoever about $f_{\text{prior}}$, the physics we are trying to get at. In this axion cosmology case we have a prediction for this distribution, and the calculation shows it is consistent with the observed dark matter density, but as far as I can tell, all sorts of other quite different distributions would work too. So I'm still confused about exactly what this calculation has told us about the underlying axion cosmology physics it is supposed to address, other than that it is not obviously completely inconsistent.

Tegmark's talk at Columbia was titled "Measuring and Predicting Cosmological Parameters". The "measuring" part was a summary of some of the impressive experimental evidence for the standard cosmological model. The "predicting" part was pretty much pure promotion of anthropism, including a long section on reasons why the electroweak symmetry breaking scale is anthropic, and some comments making fun of David Gross ("even he couldn't predict the distance from the earth to the sun." Laughter…). The only actual "predictions" mentioned were the results about the axion cosmology model described in detail in the Tegmark et al. paper, as well as the well-known Weinberg anthropic "prediction" for the cosmological constant.

All in all, I found these two talks and the Tegmark et al. paper pretty disturbing. They seem to me to be part of a highly ideological effort to sell the Anthropic Principle as science. The paper devotes two pages to a detailed list of standard model parameters, and makes various statements about the probability distribution function on this large number of parameters, even though it has nothing to say about almost all of them, and I think there's a strong argument that the anthropic program inherently will never have anything useful to say about most of these parameters. Many of Wilczek's remarks were more modest, but the paper he has signed his name to is highly immodest in its claims for anthropism. Together with Weinberg's and Susskind's anthropic campaigns, it seems to me that more and more theorists are going to join this bandwagon. Neither Wilczek nor Tegmark is a string theorist (and Wilczek is clearly somewhat skeptical about the whole idea), but there seems to be an unholy alliance brewing between them and Susskind and his followers. The only prominent person in the field standing up to this publicly is David Gross, and it is very worrying to see how little support he is getting.

**Update:** A preprint by Frank Wilczek corresponding to his talk last week, entitled "Enlightenment, Knowledge, Ignorance, Temptation", has appeared. It is a contribution to the same conference as the one Weinberg contributed "Living in the Multiverse" to, I gather in honor of Martin Rees. Wilczek's preprint announces a "new zeitgeist", that anthropic arguments are in the ascendancy. One quite strange thing in the preprint is that he suggests an anthropic explanation for the long proton lifetime, in terms of anthropic calculations involving future observers.

He does say there are drawbacks to the new order (a loss of precision and of targets to calculate), but on the whole he seems to embrace the new anthropic paradigm rather whole-heartedly, seeing it as a lesson in humility for those who had the hubris to believe it was possible to understand more about the universe through “pure thought.”

**Update:** Two of the authors of the paper discussed here (Aguirre and Tegmark) wrote in with some comments that are well worth reading (as are those from Smolin and others about his own proposal). Aguirre points to an interesting paper of his, "On making predictions in a multiverse" (see also an earlier paper with Tegmark), which addresses some of the conceptual issues that were bothering me about this sort of calculation. It points out many of the problems with this kind of calculation, and I don't really share the author's optimism that they can be overcome.

Lee Smolin mentioned to me a somewhat related workshop that was held this past summer at the Perimeter Institute, on the topic of Evolving Laws, especially "do the laws of nature evolve in time?" Audio of the discussions at the workshop is available.

God, I’m so sick of this crap. And people wonder why I do math these days….

The proton lifetime, by the way, has often been mentioned by people against anthropic reasoning. It is not alone in Wilczek's "ignorance" category. The existence of such parameters is one of the reasons that Nima would like to claim that only superrenormalizable couplings are anthropically selected. It was also some of the motivation for his "friendly landscape" paper.

Remember, BTW, the vote in Toronto. Most people don’t go for this stuff.

The anthropic arguments remind me of the ones you will encounter from people into bible codes. Typically, they have found some code, and then they calculate the probability that a book like the bible would at random produce this code. Of course, it turns out to be really small, thus "proving" divine intervention. The point is of course that they have singled out precisely the code they already found, and in any random string of letters, they might very well not have found this particular code, but they probably would have found another code, which they then would have argued was of divine origin.

So the anthropic people have found the code that is life in the universe, and in particular the kind of life that exists here on Earth. And it seems really unlikely to them, implying not a divine origin of the universe, but an almost infinite number of existing universes. But life, and sometimes really bizarre forms of life, is known to flourish in the most unlikely conditions here on Earth. Why not in another kind of universe? All that's needed is some process of replication, variation, and some kind of selection, maybe plus or minus some conditions on the mutation rate, et cetera. Have the anthropic people been able to produce anything towards a proof that no kind of life is possible in some hypothetical universe? If they did, they'd at least have a methodology.

Finally, why single out life? Why not single out the Iraq war or the act of seeing a squirrel? Or why not concern oneself with universes in which it is possible to go out and drink beer?

It may be worth mentioning that besides a Nobelist, Wilczek is editor in chief of Elsevier’s Annals of Physics. Let’s just hope that his anthropic leanings do not permeate the journal’s content.

Apart from the points Peter mentions against anthropic reasoning, I see two others. The first: anthropic reasoning necessarily begins this way: there is intelligent life –> there is life –> there is carbon chemistry. After this, one can make so-called predictions which I think are just very indirect measurements of some observables, which remind me (in the best case) of the very clever way Einstein measured Avogadro's number thanks to the fact that the sky is blue. To call this sort of thing a 'prediction' one has to be confident in the chain of implications I mentioned above, which is very weak indeed since it is based mainly on our lack of imagination and on our very poor understanding of what intelligence is. One also has to assume the existence of a multiplicity of uninhabited universes and use the indifference principle, which is well known to lead to crazy conclusions on some occasions. For instance, suppose I am struck with sudden amnesia about who I am. I could argue that I am Chinese by using this principle. However, looking around me would soon provide me with clues that I'm in fact French. I feel that it's the same with the anthropic argument. That is, you assume the standard model + GR is the end of the story, you vary parameters as if they were independent, knowing it will give crazy results if they happen not to be so, etc. Perhaps this is the only thing to do when you give up any hope of finding a more precise theory. Except that 'giving up' is not precisely the right attitude for a scientist, I think.

Everyone is tired (in cosmology at least) of talking about the anthropic principle. Every time it comes up everyone rolls their eyes and mutters something about philosophy and that it's 'boring'.

When approaching a problem as a working scientist, I see no valid reason to think of fprior as anything other than a delta function. If I can't calculate something, I assume it's either because I'm stupid or because I have insufficient information about the system, such that it might naively appear that fprior is smeared out into a nontrivial ensemble.

Historically, the number of times a physics problem turned out to involve something other than a delta-function fprior is vanishingly small compared to the times a purely deterministic explanation or a selection effect was at play.

So it seems to me, from a historical perspective, this is just another example of people jumping the gun and giving up too early.

And perhaps it's really not so surprising, is it? We are implicitly making an intellectual leap of tens of orders of magnitude between observable physics and intellectual gaming; does it astonish anyone then that there is an understanding gap, such that you could separate unknowns away into the prior distributions?

It would be interesting if anthropic reasoning could predict something we do not yet know. E.g. the exact Higgs mass.

As long as anthropics only ‘predict’ what we know already, the whole philosophy is not very convincing.

Does anybody want to comment on this paper?

http://xxx.lanl.gov/abs/physics/0512062

From the abstract: “I make some comments about brane world scenarios and their potential to strengthen the Fermi Paradox.”

In the text: "My solution to the Fermi paradox is compatible with the speculations that some UFO's could be true alien spacecrafts."

The classification Wilczek does is a good idea in order to distinguish anthropicism from empiricism. To me, anthropic reasoning is a trick to introduce empirical measurements (under the disguise of "life normal conditions") into the theory.

I join Gross's objections; any ad-hoc theory is an opiate if it is not perceived as ad-hoc, and it can cause retirement from the fight. Matthews' pill killing Barrow. I was not surprised by Weinberg's take on anthropicism; his book promotes Effective Field Theory, which is a lesser narcotic but with similar effects. I am not sure about Wilczek, even after reading your report.

Please, Wolfgang and others, no space aliens and UFOs here. The anthropic stuff is bad enough.

Aaron,

Which crap are you sick of? Peter’s crap? The anthropic crap? Wilczek’s crap?

Anthropic “just so” stories.

Peter,

OK. But I think this type of paper (about the ‘subanthropic principle’) is the logical next step in the line of anthropic reasoning.

By the way, when will Frank Tipler and the ‘Physics of Immortality’ re-appear?

It's disappointing that so many otherwise smart people are caving in to this landscape rubbish. For one thing, it will not ever yield one prediction whatsoever. The only real hope for string theory, or its new incarnation as the yet-to-be-discovered M-theory, is for it to have enough symmetry to find a unique vacuum. I am very partial to many of the ideas and discoveries surrounding string theory, but if I came across it now I'd think twice before putting in all the effort.

I think the reason for all this landscape crap is that string theorists have yet to make the next big discovery in superstrings. They say one comes along every ten years or so. Since AdS/CFT not much has happened from a fundamental standpoint. It seems to me that string theory is foundering because we've reached an impasse. String/M-theory will not move forward until the next major piece of the puzzle has been discovered. The next major step is to create a consistent second-quantized covariant version of string/M-theory. String theory in the 60's gave way to QCD. Perhaps the current string/M(atrix) theories will give way to a local field theory with enough symmetry to give a single unique true vacuum state. Or perhaps not. It really gives me the willies when some of the most intelligent men who've walked the earth invoke anthropic arguments in the context of some supposed string landscape.

JDB,

Field theories with lots of symmetry tend to have more vacua, not fewer. Very crudely, more symmetry equals more representations.

I don't understand Wilczek's categories. He claims to know that life doesn't depend on the smallness of CP violation? How could he possibly know this? Does it get any more pathetic than this? I am sad to see Wilczek go in this direction.

Jason, what makes you think that once superstring theory is able to find a unique vacuum and make predictions, things will be OK? You guys have worked on the crackpot theory for too long and forget that a scientific theory not only needs to make predictions, but, more importantly, make CORRECT predictions. The day that superstring theory makes predictions will also be the day that the idea is dead, because there is only one in 10^500 chance your "unique vacuum" will actually match the real world. The world is 3+1 dimensional! There has been nothing suggesting anything different from that fact. It will take a truly long time before superstring theory will eventually be able to make a prediction at all. The real tragedy is that by then you will realize how wrong you guys were and what a waste of time and effort it had been, to dwell on a crackpot idea for so long, just because it was a paid day job to research crackpots.

Off topic: Lubos just posted about Riemann's hypothesis and states that "a proof may possibly follow from string theory".

I have greatest respect for Wilczek but I don’t understand where he is going with this. This material would be acceptable as a speculative afterthought chapter in a popular book or as a dinner discussion.

There was a discussion on anthropic stuff between Dyson and Gould (the biologist) many years ago and it went like this:

Dyson: "The numerical coincidences are striking. All the fine balancing in QED, for example. The universe must have expected our arrival – I have this hope, I feel this way."

Gould: "If you look at astrophysics models from the early 20th century, you can see now that the models were completely wrong. But then (as now) their authors were saying: 'what an incredible coincidence, if this aether behaved a bit differently and comets were less populous, we would not be here, no life possible, so these parameters must have been pre-arranged!' So there seems to be a strange allure to falling back on anthropism whenever we lack a real understanding."

milkshake,

I had an Earth Science teacher who gushed about how amazing the world is. If the oxygen content of the atmosphere were slightly different, we couldn't exist. The world is perfectly tuned to our existence. Somehow it never occurred to him that life existed here before oxygen.

Somehow it never occurred to anyone that instead of the world being perfectly tuned for our existence, it is actually our existence that is perfectly tuned for this world, not the other way around, just like the case of oxygen on earth.

Peter

Your point about the circularity of Anthropic reasoning is dead right I think. People complain about philosophy—but this is *bad* philosophy, bad logic. Interestingly I don’t know of a single professional philosopher who has backed this idea.

And as for explaining anthropic reasoning using conditional probability—that is truly a risky business. There are more abuses of conditional probability out there than all of us have had hot dinners.

cheers

Speaking of the conversation between Freeman Dyson and Stephen Jay Gould, and the gushings of Kris Krogh's earth science teacher, NPR's *Speaking of Faith* is starting a two-part series on Albert Einstein this week (tonight on some NPR stations). Guests include Freeman Dyson and Paul Davies. The latter is described as an astrobiologist.

As I write this, Dyson is saying that "nature is much more than a set of equations"—more like the rain forest surrounding a mountain peak than the pristine sterility of the peak itself. This immediately followed a turn of the discussion to Einstein's break with his contemporaries on quantum mechanics.

(For what it’s worth, Dyson is on record as saying that he finds the complexity and diversity of nature more interesting than any prospect for the unification of its laws.)

Anthropic “Principle”?

One thing that is certainly unbounded is human hubris.

Even if one suspects that the existence of life could teach us something important about fundamental physics, it seems to me that the formulation of the Anthropic Principle as a selection effect is a woefully inadequate way to approach the problem. Given what we have learned about the basis of heredity and evolutionary change, why not ask a question like this:

Life depends for its continued existence on the fact that the laws of physics allow metastable configurations of matter, and also on the possibility of such configurations repairing and reproducing themselves, albeit with variations (errors) that produce changes of form and patterns of behavior in successive generations. To what extent does the allowed existence of such systems, understood computationally, constrain or dictate the laws of physics?

This problem formulation is reminiscent of questions about fundamental limits of computation, which have received considerable attention from workers in quantum computation and others. Of course it is by no means unexplored territory, and would require the input of some novel ideas to throw any additional light on our current problems. I don't know whether this is possible, but the current approach seems only marginally better than numerology. In fact, the main motivation for it, aside from the simple logic of selection effects, seems to be that it allows string theorists, particle physicists, and astrophysicists to continue applying familiar concepts, techniques, and arguments in this new context, unsatisfying though the results may be. That's easier than a deep reconsideration of fundamental assumptions and the development of fundamentally new ideas.

Talking about predictions (that is a strong word these days!), I am doing a survey on what quantum gravity models predict (again, the ugly word) about the GZK cutoff. I could not find (up to now, but still investigating) even one single string theory paper solely dedicated to this problem. On the other hand, I did find at least 2 LQG papers on it (from the same authors, though):

For those who want to know more about this issue, just go to my blog page. In there, I list today several references of interest on this matter.

What does the Anthropic Principle say about the GZK cutoff? Does the cutoff exist because we are here to observe it? Or does the cutoff not exist because we are here to not observe it? (note: I am being sarcastic here. The AP never convinced me as a valid scientific approach. It really bores me).

Best wishes

Christine

Hi all,

As the blog post I’m responding to is now archived, I thought I’d weasel in on this one… I will be interviewing Lisa Randall shortly on her “Warped Passages” for my podcast program on authors, academics and intellectuals, and I’m eager for your input!

In the spirit of open-source research, I’m looking for good questions to ask her that are appropriate for an intelligent but mainstream (ie. non-scientific) audience. Please do pitch in, and of course I will credit you (and your blog) when I ask her your question(s)!

If you’d like to take a look at ThoughtCast, please go to http://www.thoughtcast.org, and thank you!

— Jenny

*Life depends for its continued existence on the fact that the laws of physics allow metastable configurations of matter*

Here's something else that bothers me about the AP and similar arguments. Who says that the above is true (stable matter, carbon-based life, etc.)? It seems to me that simple complexity may be a necessary and sufficient condition for life. Can complexity arise in quark-gluon plasma? How about gravitationally based complexity in a universe where big G is much, much larger?

If life is more diverse than our carbon-based examples suggest, Quantoken's point becomes very relevant.

*…instead of the world being perfectly tuned for our existence, it is actually the case that instead our existence is perfectly tuned for this world…*

Here, the anthropic principle is even less "predictive" than before. The point being that all of this is a never-ending discussion, built on likelihoods, I-believe-thats, it-could-be-possible-thats, and what-ifs.

We would do well to remember that Congress only funds nuclear missiles, semiconductors, spacecraft, and productivity enhancements.

Well, I guess, on the bright side: at least they're *starting* to think about logic.

Some of us began suspecting that all's not well with Wilczek already after reading his review of "The Road to Reality". Post-Nobel stress syndrome, maybe?

Peter,

You're right that Tegmark's favorite idea – the existence of a Level IV multiverse of all possible mathematical structures – is subject to the same problems. What do you think generally about a Level III multiverse like the one supposed to be underlying the physical computations performed by quantum computers?

Michael,

I’m not really very fond of many worlds interpretations of QM, and not sure exactly what Tegmark has in mind here. The formulation you give “underlying the physical computations performed by quantum computers” doesn’t seem to me to necessarily correspond to anything except QM itself, and I don’t see the necessity of bringing the multiverse into discussions of QM.

Jenny,

An interesting question for Randall might be to ask her what she thinks about the whole “Anthropic Principle” controversy, discussed in several blog entries here recently, including this one. No need to credit me or the blog with this question.

[Jenny: The following might prompt some questions for Lisa. It reinforces my suspicion that higher-dimensional theories lead into a quagmire not unlike the multiverse and anthropic arguments.]

New paper on arXiv.org:

*Extra symmetry in the field equations in 5D with spatial spherical symmetry* (gr-qc/0512067). From the text:

To Christine Dantas: What LQG implies for the GZK cutoff depends on whether the symmetry of the ground state shows broken Lorentz invariance or deformed Poincare invariance. The former is what Alfaro and Palma and other authors assume through their choice of an ansatz for the ground state. But there is increasing evidence (but no proof so far) for the latter conclusion. There is proof that Poincare invariance is deformed but not broken in 2+1 dimensional quantum gravity coupled to matter and there are heuristic arguments such as hep-th/0501091. If the ground state has deformed Poincare symmetry the expectation is that GZK threshold does not differ measurably from that of ordinary Lorentz invariant theories, as shown in several papers, including gr-qc/0312089, astro-ph/0008107 and gr-qc/0312124.

I agree it’s strange that there are no papers making the obvious point that, since string theory is constructed to be Poincare invariant, it predicts that ordinary special relativity should hold to arbitrarily small distances. This would appear to be the only testable prediction string theory is capable of making. The reason is perhaps that string theorists would prefer to keep open the option of finding a consistent string vacuum with whatever symmetry is observed experimentally.

Thanks,

Lee

Peter,

Thanks for the response. I’ll take not corresponding to “anything except QM itself” as a compliment.

I guess then that you would not agree with Deutsch and others that the quantum theory of computation is the clearest and simplest language and mathematical formalism for setting out quantum theory itself?

“I agree it’s strange there are no papers making the obvious point that as string theory is constructed to be Poincare invariant it predicts that ordinary special relativity should hold to arbitrarily small distances.”

It’s easy to break Lorentz invariance by turning on a background field.

Michael,

Sorry, but I’ve never spent much time reading about or thinking about the quantum theory of computation (or is it the theory of quantum computation?), so I don’t really have any views about it.

My views about quantum mechanics are different than some people’s. I think the basic formalism itself, embodied in the path integral and in the idea of describing the world in terms of a Hilbert space, with self-adjoint operators as observables, is something closely connected to representation theory, and extremely mathematically deep. Sure, the interpretational issues that arise as one tries to understand how classical behavior emerges from the quantum formalism are very tricky, but that’s no reason to search for a replacement for the QM formalism itself. There may be interesting issues arising from quantum computing, I’m just not very well informed about them.

Peter,

Thanks again. I agree that there is no reason to search for a replacement for the QM formalism itself.

Peter,

(slightly offtopic)

Do you know of any references that try to put path integrals on a more mathematically rigorous footing, in the field theory context, without going to the Euclideanized version of the path integral?

Thanks for the suggested questions — I really appreciate it. Also, I read an article about how this blog is quite a lightning rod. Hooray!!

-Jenny

“I guess then that you would not agree with Deutsch and others that the quantum theory of computation is the clearest and simplest language and mathematical formalism for setting out quantum theory itself?”

The clearest and simplest language for setting out quantum theory is quantum theory. Quantum computing is just an application of quantum theory.

“Do you know of any references which try to put path integrals on a more mathematically rigorous footing, in the field theory context without going to the Euclideanized version of the path integral?”

Ugh. I suppose there’s Glimm and Jaffe, but the impression I get from afar is that constructive field theory has been a spectacularly unsuccessful endeavor.

For the last, I suppose you could ask Lisa about how she fell off a mountain a few times, but it looks like that’s actually fairly widely covered. You could ask her how she feels about top-down vs. bottom-up approaches to physics. You could ask what she feels might happen when the LHC turns on in a few years.

I actually think the middle one might be the most interesting.

JC

There is a talk on this at KITP site by Graeme Segal of Oxford. You can watch the entire thing (and there are not so many interruptions in this one).

Cheers

Jenny,

I do not think that you should be interviewing String theorists, or apologists for String theory at all. The public are not completely stupid and the more exposure the String theorists get the more the public will realise that their hugely speculative idea has very little chance of connecting with reality. Those who are content to pursue ideas simply because they find them mathematically intriguing, and who show little or no interest in finding experimental verification, should not be held up as examples of application of the scientific method.

Find some genuine scientists. Molecular biology, for example, is an exciting area now – I am sure that you will find much more interesting material here.

Aaron says, “It’s easy to break Lorentz invariance by turning on a background field.”

In standard field theories, like Maxwell, it costs energy to break Lorentz or translation invariance, which is why for these theories the vacuum is the only Poincare invariant state. Are there explicit examples in string theory where it doesn’t cost energy to break Poincare invariance of the uncompactified dimensions? Also, are there consistent string vacua with deformed Poincare invariance? With Magueijo we began to investigate this and found partial results in hep-th/0401087.

Lee

Adrian Heathcote said :

>Interestingly I don’t know of a single professional philosopher who has backed this idea.

Well, there is at least Nick Bostrom, who apparently is writing a paper with Tegmark. Incidentally, Bostrom also supports the simulation argument, which in my view is subject to the very same kind of logical problems as anthropic reasoning.

As for Tegmark and his Level IV multiverse, it’s an idea I find more interesting than the Level 1–3 multiverses, for the following reason: you don’t care at all about what happens in the Level 1–3 multiverses; they don’t constrain you at all, any more than what happens in a work of fiction does. But you do care about what “happens” in the mathematical world — our world would be very different if you changed some mathematical formula or theorem. This is why I think of mathematical structures as more real than parallel universes.

Dr. Smolin,

Thank you very much for your comment on the GZK cutoff issue.

Best wishes

Christine

FB said

“Well, there is at least Nick Bostrom, who apparently is writing a paper with Tegmark. Incidentally, Bostrom also supports the simulation argument, which in my view is subject to the very same kind of logical problems as anthropic reasoning.”

I can’t read French but I trust you to be right about this one.

I hadn’t heard of Nick Bostrom previously, but having looked at some of his stuff I feel no better about the AP.

BTW, a sure sign that someone is talking unmitigated nonsense with the AP is when they say things like “the cosmological constant’s value is *caused* by the existence of humans.” This is the most foolish abuse of the concept of causation I know of. There should actually be a law against it.

Hi,

I just want to understand something about anthropic reasoning. Suppose I find that the probability of observing a value of X close to what we actually observe is 99%. So what? Suppose instead I find that this probability is 10^{-100}%. Then what?

Is it that beneath this kind of reasoning lies the metaphysical assumption that our universe is probable? If so, how is that assumption justified?


“Those who are content to pursue ideas simply because they find them mathematically intriguing, and who show little or no interest in finding experimental verification, should not be held up as examples of application of the scientific method.”

And what exactly makes you think that Lisa Randall fits this description?

“Are there explicit examples in string theory where it doesn’t cost energy to break Poincare invariance of the uncompactified dimensions?”

I was thinking of the D-branes with a Moyal product, which is a limit of a background with a B-field.

“Also, are there consistent string vacua with deformed Poincare invariance?”

Not that I know of. Some people have thought about q-deformed de Sitter symmetries, though.

“Those who are content to pursue ideas simply because they find them mathematically intriguing, and who show little or no interest in finding experimental verification, should not be held up as examples of application of the scientific method.”

“And what exactly makes you think that Lisa Randall fits this description?”

Not as guilty as many, I agree, and not to be bracketed with those who advocate “anthropic” capitulation. But I would still classify her as an apologist for string theory.