There’s an article in this week’s Nature by Geoff Brumfiel entitled Outrageous Fortune about the anthropic Landscape debate. The particle physicists quoted are ones whose views are well-known: Susskind, Weinberg, Polchinski, Arkani-Hamed and Maldacena all line up in favor of the anthropic Landscape (with a caveat from Maldacena: “I really hope we have a better idea in the future”). Lisa Randall thinks accepting it is premature, that a better understanding of string theory will get rid of the Landscape, saying “You really need to explore alternatives before taking such radical leaps of faith.” All in all, Brumfiel finds “… in the overlapping circles of cosmology and string theory, the concept of a landscape of universes is becoming the dominant view.”

The only physicist quoted who recognizes that the Landscape is pseudo-science is David Gross. “It’s impossible to disprove,” he says, and notes that because we can’t falsify the idea it’s not science. He sees the origin of this nonsense in string theorists’ inability to predict anything despite huge efforts over more than 20 years: “‘People in string theory are very frustrated, as am I, by our inability to be more predictive after all these years,’ he says. But that’s no excuse for using such ‘bizarre science’, he warns. ‘It is a dangerous business.’”

I continue to find it shocking that the many journalists who have been writing stories like this don’t seem to be able to locate any leading particle theorist other than Gross willing to publicly say that this is just not science.

For more about this controversy, take a look at the talks by Nima Arkani-Hamed given today at the Jerusalem Winter School on the topic of “The Landscape and the LHC”. The first of these was nearly an hour and a half of general anthropic landscape philosophy without any real content. It was repeatedly interrupted by challenges from a couple of people in the audience, I think David Gross and Nati Seiberg. Unfortunately one couldn’t really hear the questions they were asking, just Arkani-Hamed’s responses. I only had time today to look at the beginning part of the second talk, which was about the idea of split supersymmetry.

**Update:** One of the more unusual aspects of this story is that, while much of the particle theory establishment is giving in to irrationality, Lubos Motl is here the voice of reason. I completely agree with his recent comments on this article. For some discussion of the relation of this to the Intelligent Design debate, see remarks by David Heddle and by Jonathan Witt of the Discovery Institute.

Hi Jack,

I will explain why showing S to be false is not as complicated as you might think.

For starters, in his paper Smolin explains why simply observing a neutron star above a certain mass would falsify S.

Just take a look at the paper

http://arxiv.org/abs/hep-th/0407213

around page 33

He has worked out one way of falsifying the theory, saving us the trouble.

But more generally, look at what S actually says and you will see that it is rather straightforward to falsify it by empirical observation.

——here’s part of your comment—-

You say, “S is not a premise, but a statement which Smolin challenges us to show is false—-by empirical means” ….

Proving a negative is a high bar to jump – much more difficult than proving a positive….

——–endquote———

I will explain that in THIS case proving the negative of S is NOT a high bar to jump! It is more like proving a positive.

Here is what S says:

P is the parameter space—-if there are 31 dimensionless constants going into the standard models of particle physics and cosmology then P is just a 31 dimensional space—-a point p in P is a list of 31 numbers.

F(p) measures the abundance of black holes. To falsify the statement all you need to do is FIND ONE DIRECTION IN WHICH TO CHANGE p SUCH THAT F(p) INCREASES.

The statement says that changing p (from the measured value) in ANY direction will cause the abundance of BH to DECREASE.

To refute that you just need to find ONE direction in which a change will make it INCREASE. Just find one uphill direction and you’ve done it! The theory is toast!

The changes involved have to be significant, in the sense of exceeding noise or measurement uncertainties; that is discussed somewhat in the paper, but these are the broad outlines.
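The falsification recipe described above can be sketched numerically. The Python fragment below is only an illustration of the logic, not real physics: `is_falsified` and the toy fitness function `F_toy` are hypothetical stand-ins (the actual black-hole abundance F is not known in closed form and would require astrophysical calculation to evaluate). The point is just the shape of the test: perturb the measured parameter point p one coordinate at a time, and ask whether any perturbation raises F(p) by more than the uncertainty threshold.

```python
import numpy as np

def is_falsified(F, p, step=1e-3, noise=1e-6):
    """Smolin-style test: the hypothesis says p is a local maximum of F.
    It is falsified if ANY single-coordinate change raises F(p)
    by more than the noise/uncertainty threshold."""
    base = F(p)
    for i in range(len(p)):
        for sign in (+1.0, -1.0):
            q = p.copy()
            q[i] += sign * step
            if F(q) - base > noise:
                return True   # one uphill direction found: theory is toast
    return False              # no uphill direction: theory survives this test

# Toy stand-in for the black-hole abundance (NOT the real F):
# a smooth function with its maximum at the origin of the 31-dim space.
F_toy = lambda p: -np.dot(p, p)

p_measured = np.zeros(31)                 # pretend the measured constants sit at the peak
print(is_falsified(F_toy, p_measured))    # False: survives
print(is_falsified(F_toy, np.ones(31)))   # True: an uphill direction exists
```

The asymmetry Who is stressing is visible in the code: a `True` result from a single direction settles the matter, whereas a `False` result only means the theory survived one more round of testing.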

Who,

It appears that you are confusing direction in ordinary 3D space with direction in Smolin’s P space, which is very, very different.

Directions, gradients, etc. in P space cannot be measured; therefore Smolin’s premise is no more scientific than the anthropic string landscape.

Juan R.

Center for CANONICAL |SCIENCE)

I thought someone might be amused by the suggestion that Smolin’s parameter space is R^31. The dimensionality is taken from the year-end 2005 paper

Dimensionless constants, cosmology and other dark matters, by Tegmark, Aguirre, Rees, and Wilczek. http://arxiv.org/abs/astro-ph/0511774

There was some discussion of that paper here at Peter’s blog including posts by Smolin, Aguirre, and Tegmark.

http://www.math.columbia.edu/~woit/wordpress/?p=310

Smolin doesn’t say how many independent fundamental constants there are in the combined standard models of particle physics and cosmology—naively speaking the number of knobs on the universe-machine. But Tegmark et al give a list of 31 dimensionless numbers.

Anyway, suppose P is that space of dimensionless parameters, and suppose p is the list of parameters which we measure. Then the statement of the theory that you have to falsify—the hypothesis—is this:

you can’t find a small change in p that makes F(p) increase

(As in all such situations, there is the usual need for reasonableness and good faith, because measurement and calculation involve uncertainty; you would have to show that F(p) increases *significantly* as some parameter is varied.)

The most puzzling objection to Smolin’s proposal which I have seen so far is the one offered by Anthony Aguirre in posts #101 and #105 of the thread referred to above. They are easy to find because the thread has 107 comments.

Here is Aguirre’s main point from #105:

Surprisingly, since this comes from Aguirre, I do not see that the objection is relevant. Aguirre argues that SOME black hole might accidentally have an infinite number of offspring, but he does not contend that this is TYPICAL. On the other hand, Smolin’s hypothesis concerns a local maximum, assuming mediocrity. It is not affected by rare instances of “eternal inflation”, should they occur. So Smolin’s fitness function F(p), the number of black holes in a single generation, is typically finite. The hypothesis is that it evolves towards a local maximum and (rare instances of eternal inflation notwithstanding) will typically BE at a local maximum. But this can be CHECKED. So the challenge stands:

can you show a change in some parameter which would result in our spacetime having more black holes?

Who

Thanks for your comments. I downloaded the paper. I will read it but I suspect it will take me a week or two to get through it.

Hi Jack,

I am glad you have downloaded the paper and intend to have a look at it. I may be able to save you time by giving some specific page references.

Only a part of the paper is about Smolin’s black hole natural selection idea and the opportunities to test it. We can narrow the focus some.

To recap, the paper I referred you to is:

http://arxiv.org/abs/hep-th/0407213

**Scientific alternatives to the anthropic principle**

(Contribution to “Universe or Multiverse”, ed. by Bernard Carr et al., to be published by Cambridge University Press)

The section on cosmological natural selection begins on page 28

with the section “5.2 Natural Selection”

Section “6 Predictions of Natural Selection” begins on page 30

and runs through page 36.

So I believe that everything relevant is contained in pages 28 thru 36. I look forward to any comments you have, particularly about the testability.

The testability issue speaks to the main point made by David Gross in the remarks which Peter quoted at the beginning of this thread.

“It’s impossible to disprove,” he says, and notes that because we can’t falsify the idea it’s not science.

Smolin has offered a hypothesis with some potential for explaining the values of key constants which, however, IS subject to disproof. So it IS science by David Gross’s standards. This testable hypothesis ought to be mentioned along with the prospect (which Weinberg and Wilczek appear willing to contemplate) of GIVING UP amid the string landscape’s welter of possible vacua. The hypothesis may indeed be incorrect and it may be possible to falsify it—perhaps that should be first on our list of things to do.

Each time, the discussion becomes more interesting.

Who said,

and now adds

Anyway, suppose P is that space of dimensionless parameters, and suppose p is the list of parameters which we measure. Then the statement of the theory that you have to falsify—-the hypothesis—is this:

you can’t find a small change in p that makes F(p) increase

Very well: I now accept, as you have informed me, that P is not 3D space. Then you may be aware that the experimental methodology would be:

1) Measure the number of BHs in our universe (or region). As far as I know, nobody has yet measured even one; however, Smolin curiously talks as if we found BHs every day.

2) Take p_1 = G for our universe. Then *vary* the value of G in our laboratory, so that the point p in ‘phase’ space changes to p’.

3) Now measure again the number of BHs in this new universe.

4) Repeat 1)-3) for the rest of the parameters, from p_2 to p_31.

5) If our universe is optimized for the production of BHs, then the S premise is correct.

May I continue with this nonsense?

—

Anthropic nonsense: universe is optimized for human life.

New ‘BHolic’ nonsense: universe is optimized for BHs.

—

Juan R.

Center for CANONICAL |SCIENCE)

Hello Juan, you say

I am not sure you have correctly stated the Anthropic position (defeatists like Susskind may not actually claim the universe is “optimized for” human life; they may merely be asserting that it is “compatible with” human life).

The point made by Peter, in quoting David Gross at the beginning of this thread, is that whatever they are claiming is not science because it is not falsifiable. There is no way to demonstrate that the universe is NOT compatible with human life.

By contrast the BH natural selection conjecture IS FALSIFIABLE. This is the significant difference, and the main point I am stressing here.

I am waiting for you to read and comment on pages 28-36 of the paper because several ways to falsify it are discussed there.

The proposed tests of BH natural selection depend on some established physics that has been independently confirmed by experiment, such as Gen Rel, QED, QCD. I trust that you will grant that it is legitimate to use older well-established branches of physics in designing ways to test new theories.

The aim is not to prove that the theory is correct, by testing for optimality in every direction.

The aim is to show that the theory is INCORRECT, by finding just ONE direction in which one can vary the parameters so as to increase BH abundance.

Scientific theories are never proven to be correct because they can never be tested in infinitely many cases. In 1919 when Eddington tested Gen Rel, he did not prove that Gen Rel is correct. He made an unsuccessful effort to show that it was incorrect. If he had observed the stars of the Hyades to be in substantially different positions, this would have falsified Gen Rel.

But apparently he didn’t and Gen Rel survived that test. And continues to survive. That is what scientific theories do: they CONTINUE TO SURVIVE.

And that is what Smolin’s BH natural selection theory is doing today. It continues to survive astronomical testing. The theory predicts that you cannot have a two-solar-mass neutron star—because if you could, then the top quark mass could be revised so as to make neutron stars more apt to collapse (this is discussed in the paper). But astronomers are constantly finding neutron stars and measuring their masses. Some day they may find one which can be reliably shown to have a mass of 2 solar masses or more. That would invalidate the BH natural selection theory.
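The neutron-star test just described is simple enough to state as a check against a catalogue of measured masses. Everything below is a hypothetical illustration: the function `prediction_falsified` and the catalogue numbers are made up, and the 2-solar-mass bound is simply the figure quoted above from the paper. The only substantive point is that the mass must exceed the bound *reliably*, i.e. beyond measurement uncertainty.

```python
# Smolin's prediction (as described above): no neutron star with a
# reliably measured mass of 2 solar masses or more should exist.
MAX_NS_MASS = 2.0  # solar masses, the bound discussed in the paper

def prediction_falsified(measurements, n_sigma=3.0):
    """measurements: list of (mass, uncertainty) pairs in solar masses.
    The prediction fails only if some star's mass exceeds the bound
    even after subtracting n_sigma of measurement uncertainty."""
    return any(m - n_sigma * err >= MAX_NS_MASS for m, err in measurements)

# Hypothetical catalogue entries (placeholder numbers, not real data):
catalogue = [(1.44, 0.02), (1.35, 0.05), (1.97, 0.04)]
print(prediction_falsified(catalogue))  # False: 1.97 - 3*0.04 is below 2.0
```

Note that a star measured at, say, 1.97 ± 0.04 does not falsify the prediction under this criterion, which is exactly the “reasonableness and good faith” caveat about uncertainties mentioned earlier in the thread.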

This is only one of what I suspect are a great many possible ways to test, and potentially falsify, the theory.

Just as people have thought of several different ways to test Gen Rel itself, so they can invent a variety of ways to test this theory, and I hope they do 🙂

Your steps 1) through 5) are essentially an argument that optimality can never be proven and that BHNS can never be shown to be correct. Right! You can’t show that. But this is not the issue. The point is that there are ways to show that BHNS is incorrect.

Urs,

I want to make sure I understand the sense of what you said in this thread earlier

http://www.math.columbia.edu/~woit/wordpress/?p=321#comment-7237

“The problem with the landscape discussion is not that it is a priori pointless to think about a theory in which the parameters of the standard model are not uniquely specified.”

I think this means that there IS a problem with the landscape discussion. But the problem is NOT that it involves considering a theory in which the standard model parameters are undetermined.

As I interpret this rather complicated sentence (please correct me if I misunderstand), it suggests that it is OK and quite scientific for a theory not to determine the fundamental constants.

Maybe this is drifting towards a kind of “apologia” for string theory that says “hey, string theory is OK and quite scientific, don’t criticize it because it fails to determine the fundamental constants! Lots of other OK theories do not do that!”

In the Susskind NYT Book Review thread,

http://www.math.columbia.edu/~woit/wordpress/?p=329#comment-7668

you blurred the distinction between testable scientific theories and string thinking by arguing that having an infinite space of solutions is GENERIC, so what is especially deficient about string?

Urs, 18 Jan, 3:28 PM

“I tried to point out how a theory generically has infinitely many solutions. So it’s not clear to me why it should be a problem if some theory has only finitely many.”

I think the message is: why should string be considered unscientific if theories generically have an infinite space of solutions?

It helps to keep examples of alternatives in mind. General Relativity is an example of a theory with instant falsifiability. If it had happened to predict the wrong light-bending angle, checked in 1919, then it is simply wrong. If it predicted the wrong clock speeding with altitude, constantly checked by the GPS, then it is simply wrong.

As you pointed out, Gen Rel has an INFINITE SPACE OF SOLUTIONS—I believe these are associated with different possible initial conditions—and the key point is SO WHAT? Because that does not affect the instant falsifiability. The theory can still make predictions which, if not borne out, refute it.

Maybe we are dealing with a sort of apologetics here, and some of this could be smoke or a red herring. What matters is the testability/falsifiability, not the “infinite solution space” or the “undetermined parameters”.

A theory can have an infinite solution space and undetermined parameters and still commit to decisive predictions. If string is non-committal and accommodating to any future experimental result, then that is probably the problem. And it doesn’t help, or is at best only a distraction, to point out that other scientific theories have undetermined parameters and infinite solution spaces.

Basically this amplifies the David Gross quote in the original post.

Only once you have detectors that are sensitive enough to see the required effects.

**Only once you have detectors that are sensitive enough to see the required effects.**

I am not sure how to interpret this, Urs. I think that when Gen Rel was published in 1915 there WERE already detectors able to see the effects.

Or, if there were none already sitting on the shelf, one could see a clear way to build them. One could say in good faith that it was reasonably practical to test—it was in actual practice VULNERABLE to empirical disproof.

I see a moral obligation for theoreticians to make their theories be vulnerable. Because the unity of the scientific community depends on the ability of members to settle their differences of opinion by empirical means.

If a theoretical construct is not vulnerable, it does not seem like proper science to me. This depends on people being reasonable and discussing in good faith. If one can see how to build an instrument within reasonable time and cost, then the instrument does not have to be already available when one makes the theory. One wants to see that the theoreticians are at least sincerely TRYING to make their theory vulnerable to practical tests.

At this point it would help to compare string with some other QG approaches—because it involves a judgement call about what is reasonable to expect. Currently my test case is the Laurent Freidel paper hep-th/0512113. It looks to me that if the result is extended to 4D then it is falsifiable with instruments that are planned or under construction. Please say if you disagree, since I would value your point of view on that very much.

The reason for considering alternatives is that one has to judge what is reasonable to expect (pure logic, outside the context of alternatives, does not seem to make for a successful discussion).

best wishes

Sure. But in 1815 there were not.

Today, all of high energy physics is suffering from the lack of good detectors. Theory is far ahead of experiment, unfortunately.

“Theory is far ahead of experiment, unfortunately.”

Indeed! I thought it was the other way round, with theory being unable to catch up with experiments. Perhaps I’ve missed a paper on arXiv.org that predicted the masses of quarks, coupling constants, and other Standard Model parameters. How careless of me.