Progress on increasing luminosity at the LHC has been going extremely well, with peak luminosity recently reaching over 7×10^{32} cm^{-2}s^{-1}. So far the integrated luminosity is over 200 pb^{-1}, well on the way to the extremely conservative nominal goal for the year of 1000 pb^{-1}. By fall, with the shutdown of the Tevatron at the end of FY 2011, the LHC experiments should be in a position to start overtaking the Tevatron and seeing evidence of a Standard Model Higgs if it is there.

It’s still very early to know how Fermilab will do in the US FY2012 budget, other than that the Tevatron will definitely not be there. However, the Obama administration is supportive in its budget proposal, and this document from the Republicans running the relevant House committee is encouraging for HEP research. Democrats and Republicans seem to agree that science research is a good thing in general, and HEP research is not one of the categories that annoy Republicans and that they suggest cutting (applied research that could be done by private companies, climate science research, environmental research, ITER). One member of the committee is freshman Republican Randy Hultgren, who represents the district that includes Fermilab, and he added his own addendum to the report, emphasizing support for HEP research. Hopefully the Republicans will want to help re-elect him by getting him anything he asks for…

With the bizarre US budgeting process of recent years, though, whatever the appropriate Congressional committee decides may turn out to be irrelevant, with last-minute budget cuts appearing from mysterious sources to get things under whatever numbers end up being agreed to.

The New Yorker has a profile this week of David Deutsch. I still can’t figure out what his argument is that if a quantum computer works, that means there are multiple universes.

Lots of people are asking me what I think of ‘t Hooft’s new paper. The answer so far is just that I don’t understand it. He’s doing something unusual with how he handles conformal symmetry, and I think one needs an expert on that to weigh in.

Mathoverflow continues to amaze me, providing the sort of high-quality discussion that the internet was always supposed to provide, but rarely did. For example, see this recent question, which asks about the relationship between two different ways of encoding the geometry of a manifold. One way to do this is to choose a metric, the other is to choose a connection on the frame bundle. For arbitrary bundles, there’s an infinity of possible connections and they have nothing to do with the metric, but the frame bundle carries extra structure (the vierbeins, in physicist’s language). Given a metric, this extra structure can be used to pick out a unique connection (called the Levi-Civita connection), which satisfies two conditions: orthogonality and zero torsion. The question asked is about whether one can go the other way: given a connection, is there a unique metric for which it is the Levi-Civita connection?
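For concreteness (this is the standard textbook characterization, not something specific to the MathOverflow thread), the two conditions that pick out the Levi-Civita connection ∇ of a metric g, and the local formula they force, are:

```latex
% Metric compatibility ("orthogonality"): parallel transport preserves g
X\, g(Y,Z) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z)

% Zero torsion: the connection is symmetric
\nabla_X Y - \nabla_Y X = [X,Y]

% Together these determine the connection uniquely; in local coordinates
% the Christoffel symbols are
\Gamma^{k}_{ij} = \tfrac{1}{2}\, g^{kl}\left(\partial_i g_{jl}
                  + \partial_j g_{il} - \partial_l g_{ij}\right)
```

The question of going the other way is then asking when this map from metrics to connections can be inverted.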

The answers given include one by Fields Medalist Bill Thurston, whose comments reflect his background as a topologist, and another by MSRI director Robert Bryant, whose answer is that of a geometer who has delved deeply into the subject, including its roots in the work of Élie Cartan. The fact of the matter is that the relationship between these two structures is not one-to-one, for reasons that are well explained. This may be of interest to physicists thinking about the quantization of gravity. In that subject, one basic question is which fundamental variable to pick to “quantize”, and the conventional choice is the metric, even though in non-gravitational physics the conventional choice is the connection. Philosophically though, the gauge symmetry involved in gravity is something like local translation symmetry, and the right analog of a Yang-Mills connection might be not a connection on the frame bundle, but something like the vierbein, but that’s a whole other story….

I’d like to echo how amazing and wonderful it is to see answers by Bill Thurston and Robert Bryant to a question on MathOverflow, not only because each one is a world class mathematician but because each brings a completely different perspective to the same question. It allows the rest of us, especially students, to see how differently the same question can be approached by different mathematicians.

Regarding ‘t Hooft’s new paper: I always thought that this (local conformal symmetry) is the only direction in which further progress could be made on the theory side. (The whole mass/Higgs and hierarchy problems are closely related to this.) It’s nice that he’s working on that topic.

The physics stackexchange site allows homework-level questions, too, and should rather be compared to the math stackexchange site than to MathOverflow. The success of MathOverflow can be attributed to the strict moderation, allowing high-quality questions only – and of course the existence of a consensus about the meaning of “high quality” in the global math community in the first place. Is there a similar consensus in the physics community or even in the hep-th community? I don’t think so…

If you support a certain glamorously worded version of the many-worlds interpretation of quantum mechanics, the ability for a quantum computer to put itself in a superposition of a bunch of states, do interesting things in each of those states, and then ‘recombine’ and cough up the answer to a problem would be evidence for ‘many universes’. But so would various simpler applications of the superposition principle. I don’t think it’s really that big a deal. It’s just a way of talking.

If you don’t like this way of talking, you can just translate it into an interpretation that you like better – if you do it right, you’ll never disagree on any observed phenomena. That means you can argue endlessly about which way of talking is ‘right’ and never reach a conclusion. But it also means you don’t have to.
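To make the “superpose, compute in each branch, recombine” picture concrete, here is a minimal state-vector simulation of Deutsch’s own algorithm in plain Python (a pedagogical sketch supplied here, not code from the article or from Deutsch): it decides whether a one-bit function f is constant or balanced with a single oracle call, and the answer is extracted precisely by interference when the branches recombine.

```python
def hadamard(state, qubit):
    """Apply a Hadamard gate to `qubit` of a state vector, given as a
    list of 2**n (real) amplitudes; bit `qubit` of the index selects it."""
    s = 2 ** qubit
    out = state[:]
    r = 2 ** -0.5
    for i in range(len(state)):
        if i & s == 0:
            a, b = state[i], state[i | s]
            out[i] = r * (a + b)
            out[i | s] = r * (a - b)
    return out

def oracle(state, f):
    """U_f |x>|y> = |x>|y XOR f(x)>, with basis index 2*x + y."""
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        out[2 * x + (y ^ f(x))] += amp
    return out

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced,
    calling the oracle only once."""
    state = [0.0, 1.0, 0.0, 0.0]     # start in |x=0>|y=1>
    state = hadamard(state, 1)       # superpose the input qubit x
    state = hadamard(state, 0)       # ancilla y into (|0>-|1>)/sqrt(2)
    state = oracle(state, f)         # f is evaluated "in both branches"
    state = hadamard(state, 1)       # recombine: the branches interfere
    prob_x0 = state[0] ** 2 + state[1] ** 2
    return 'constant' if prob_x0 > 0.5 else 'balanced'
```

For example, `deutsch(lambda x: 0)` returns `'constant'` while `deutsch(lambda x: x)` returns `'balanced'`. Note that nothing in the code forces a many-worlds reading over any other; how to talk about the interference step is exactly the point of contention.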

For various reasons the physics stack exchange site is not likely to ever have the same quality as MathOverflow (I don’t think it’s the level of the questions but the quality of the answers that is the issue, but that’s for another place). There is a proposal for a higher-quality physics site devoted mainly to research questions here:

http://area51.stackexchange.com/proposals/23848/theoretical-physics
Once it has enough supporters we’ll see exactly what is needed to have high quality discussion, but I understand the intention is to aggressively pursue something like the MO model. I suspect it won’t be easy…

In a short interview Deutsch was once asked:

“How does quantum computation shed light on the existence of many worlds?”

He responded as follows:

“Say we decide to factorise a 10,000-digit integer, the product of two very large primes. That number cannot be expressed as a product of factors by any conceivable classical computer. Even if you took all the matter in the observable universe and turned it into a computer and then ran that computer for the age of the universe, it wouldn’t come close to scratching the surface of factorising that number. But a quantum computer could factorise that easily in seconds or minutes. How can that happen?

Anyone who isn’t a solipsist has to say the answer was produced by some physical process. We know there isn’t enough computing power in this universe to obtain the answer, so something more is going on than what we can directly see. At that point, logically, we have already accepted the many-worlds structure. The way the quantum computer works is: the universe differentiates itself into multiple universes and each one performs a different sub-computation. The number of sub-computations is vastly more than the number of atoms in the visible universe. Then they pool their results to get the answer. Anyone who denies the existence of parallel universes has to explain how the factorisation process works.”

This was a simple response to a complicated question in the popular press. Of course, he goes into greater detail in his work, and especially in his two books.

That may not satisfy you — but it gives a pretty good outline of how his argument runs.

“Anyone who isn’t a solipsist has to say the answer was produced by some physical process.” You know, like, maybe, the Schrödinger equation in the observable universe?

Not including quantum mechanics in your definition of “computing power” seems a little restrictive. Is there a precise definition/reason for sticking to classical physics?

Mike,

I did read that argument in the article, and it made no sense to me; it still doesn’t. All it seems to give is that there is more to the universe than classical physics, not that there are other universes.

While I am here, it seems to me the basic flaw in Deutsch’s argument is that it suggests a speedup in quantum computing far beyond what is actually attained. It is also strange that the argument does not need or make reference to decoherence, which is a necessary ingredient in any “many-worlds” view.

@Mike:
Deutsch’s argument is fallacious. The key term is “classical computer.” There do, in fact, exist physical processes working within this universe that can accomplish what he is stating: they are part of quantum mechanics. Unless you believe quantum mechanics requires multiple universes, this isn’t a proof. It is circular reasoning.

Moreover, he is forgetting that prime factorization has not been proven to require more than polynomial time – it is just that the asymptotically most efficient classical algorithm known is not polynomial. He would do best to stick to a fact that is well known: simulation of a quantum system by a classical computer is, in fact, exponential in the number of qubits.

As for the article: from the abstract (I don’t have access to the paper itself), I object to a few claims made in the New Yorker article itself. For example, “With one millionth of the hardware of an ordinary laptop, a quantum computer could store as many bits of information as there are particles in the universe.” That requires a very odd definition of storage. Storage usually implies retrieval. You cannot retrieve more than N classical bits out of an N-qubit system. Or “Tells how Deutsch came to propose a universal computer based on quantum physics, which would have calculating powers that Turing’s computer (even in theory) could not simulate.” This is kind of sloppy, since in theory a Turing machine could in fact simulate any quantum system – it may just take more than the age of the universe/more than the matter available in the universe to do so.

Of course, these could be just mistakes by the writer rather than by Deutsch himself.
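The exponential-cost point in the comment above is easy to make quantitative: a brute-force classical simulation stores one complex amplitude per basis state, i.e. 2^n of them for n qubits, so memory doubles with every added qubit. A back-of-the-envelope sketch (the 16-bytes figure is just the size of a double-precision complex number, and the ~10^80 atom count is the commonly quoted estimate):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full state vector of n qubits: one complex128
    amplitude for each of the 2**n basis states."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Every extra qubit doubles the cost ...
assert statevector_bytes(31) == 2 * statevector_bytes(30)

# ... so ~30 qubits already takes 16 GiB of amplitudes,
assert statevector_bytes(30) == 16 * 2**30

# and by ~266 qubits the number of basis states alone exceeds the
# commonly quoted ~10^80 atoms in the visible universe.
assert 2 ** 266 > 10 ** 80
```

Note that this bounds only the naive simulation; whether any cleverer classical method does fundamentally better is the open complexity question (P vs. BQP) raised elsewhere in this thread.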

Peter,

I know you’ve seen at least part of the following as well. I think it’s informative, but of course, if you wish to delete it I understand and am not interested in cluttering up your blog.

Dan,

The Schrödinger equation, as applied to a quantum computer, certainly “predicts” the right results, yes — but it doesn’t “explain” how they arise.

Deutsch has addressed this distinction by analogy to Einstein’s general theory of relativity:

“Our best theory of planetary motions is Einstein’s general theory of relativity, which, in the early twentieth century, superseded Newton’s theories of gravity and motion. It correctly predicts, in principle, not only all planetary motions but also all other effects of gravity to the limits of accuracy of our best measurements. For a theory to predict something “in principle” means that as a matter of logic the predictions follow from the theory, even if in practice the amount of computation that would be needed to generate some of the predictions is too large to be technologically feasible, or even too large to be physically possible in the universe as we find it.

Being able to predict things, or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorise the formula from which I could, if I had the time and inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorising those archives directly? The formula is easier to remember – but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also state the historical positions of the planets more accurately, because the archives contain observational errors. Yet, even though the formula summarises infinitely more facts than the archives do, it expresses no more understanding of the motions of the planets. Facts cannot be understood just by being summarised in a formula, any more than by being listed on paper or memorised in a brain. They can be understood only by being explained. Fortunately, our best theories contain deep explanations as well as accurate predictions. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of curved space and time. It explains how, precisely and in complete generality, this geometry affects and is affected by matter. That explanation is the entire content of the theory. Predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.

Moreover, what makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton’s theory can. It is that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself. As we shall see, one of the most valuable, significant and also useful attributes of human thought generally, is its ability to reveal and explain the fabric of reality.

Yet some philosophers, and even some scientists, disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider any consistent explanation that a theory may give for its predictions to be as good as any other, or as good as no explanation at all, so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an “instrument” for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations, is a fallacy and a conceit. They do not see how anything that a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more memorable and entertaining. The Nobel prize-winning physicist Steven Weinberg was in an instrumentalist mood when he made the following extraordinary comment about Einstein’s explanation of gravity:

“The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons [as in pre-Einsteinian physics] or to a curvature of space and time.”

Weinberg and the other instrumentalists are mistaken. It does matter what we ascribe the images on astronomers’ photographic plates to. And it matters not only to theoretical physicists like myself, whose very motivation for formulating and studying theories is the desire to understand the world better. (I am sure that this is Weinberg’s motivation too: he is not really driven by an urge to predict images and spectra!) For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology “oracle” which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. 
Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction – even perfect, universal prediction – is simply no substitute for explanation.”

I don’t buy that either. Quantum mechanics “explains” what is going on perfectly well in this case, it’s a compelling model of physics, far more powerful and successful than the classical model. Insisting that anything that counts as an explanation must retain features of the classical model seems unreasonable. To me, multiple universes actually “explain” nothing in this case. They provide a way to look at things that some people might find attractive, but as far as I can tell they just introduce a lot of extraneous structure that raises more questions than it solves.

I think there is a false dichotomy here: it’s not “instrumentalists” vs. “many worlds interpreters”. It’s “many worlds interpreters” vs. people who do not find it compelling for whatever reason. It is disingenuous to present “many worlds” as the only viable interpretation or world-view.

“It is disingenuous to present “many worlds” as the only viable interpretation or world-view.”

No it isn’t. It might ultimately be wrong. But it’s certainly not disingenuous to compare the philosophical basis of the MWI to instrumentalism (only predictions are important) or solipsism (the calculations just happen in an abstract “black box” unrelated to any explanation of the actual physical entities and processes taking place).

I promise Peter — that will be my last comment on this point 🙂 Thanks again for allowing it to continue, probably much longer than you would like.

You concede that it might ultimately be wrong, yet you persist in denying that there are other interpretations that do not fall into instrumentalism or solipsism: consistent histories, pilot wave, stochastic mechanics, objective collapse, etc. All have their faults (the latter actually makes predictions that have been falsified), but they are not all less viable than many worlds, nor are they all less established than many worlds in the foundations of physics community, and it is therefore disingenuous of you (as it is for Deutsch) to present many worlds as the only world view.

I realize Peter does not want the comment section to turn into a debate on the interpretation of QM. But I can’t resist mentioning my own take on this. The many-worlds interpretation seems to me to be the most direct way of interpreting the mathematical formalism of QM. As far as I understand, it is just saying that the ultimate reality is a wave function on configuration space. It is not adding anything new to the QM formalism; it is just saying what it is. Pilot wave, stochastic mechanics, etc., on the other hand, seem to me to be adding new physics which is not motivated by experiment.

Yes, unless it’s interesting and about David Deutsch, please don’t continue here with the usual sort of discussion of QM interpretational issues. Personally I find this kind of discussion just depressing, tedious and unenlightening. There are interesting issues here, but people seem devoted instead to over-simplified sloganeering. Enough.

From my perspective as an outsider to both fields, there is a very large cultural difference between math and theoretical physics. Several Fields medalists not only contribute to MathOverflow but blog about technical mathematics as well. There is nothing comparable in the physics world; heck, you can’t even get physicists to comment with their real names on blogs! My guess is that a physicsoverflow site will not work until physicists at the very top level of research are willing to actively participate. It’s a shame, because I think that such a site would be very useful.

@outside_math I am pretty sure MathOverflow would not have worked well during the Grundlagenstreit with Hilbert and Brouwer going after each other.

And PhysOverflow will not work well as long as the community disagrees about whether string theory is physics or wishful thinking.
As long as you have Lubos and others behaving the way they do, any sane person will either stay away from such websites or not post under their good name.

I’ve been working on my own blog post about the New Yorker piece, but briefly: I agree with Peter that quantum computing has no direct implication for the Many-Worlds debate. (Ironically, Deutsch also agrees: he thinks that Many-Worlds is already an experimentally-established fact, regardless of whether quantum computers are built.)

Assuming the plausible conjecture that quantum computers can’t be efficiently simulated by classical ones (i.e., P≠BQP), a scalable quantum computer would show that Nature has “computing resources” that exponentially exceed anything described by classical physics in certain respects. But there’s then the further question of whether you want to describe those computing resources in terms of “parallel universes.”

(Incidentally, it’s not just factoring that’s not known to be hard for classical computers, but also simulating quantum mechanics in general.)

I used to be on an e-mail list of David Deutsch’s followers, to which the Big Man occasionally contributed himself. Most of his followers are, of course, non-physicists, and they did not much appreciate questions being raised about whether or not his views on physics can be justified.

I eventually got bored, but it was sociologically interesting to see a physicist who had real, live, and *very* intense followers!

I’ve also followed David’s work on MWI for many years. At one point in his career he explicitly recognized some of the technical problems with MWI: e.g., the “preferred-basis” problem and the “probability-measure” problem. He eventually decided that these were not problems, after all, though I was never able to understand his published reasons as to why he changed his mind.

Deutsch has written widely on politics, child-rearing (!), and, of course, various areas of science, and often has interesting, insightful comments on many of those subjects. However, I found myself a bit turned off by the constant implication that David had found the final answer to every question, answers that should not be questioned.

Anyway, if the “String Wars” ever end, and you find yourself drawn to further explorations in the side-lanes and by-ways of the sociology of physics, you will find much to explore in the Deutsch phenomenon.

(1) The multiverse implications of the capabilities of a quantum computer make no sense…since the latter relies on (essentially) nothing that Bohr did not know, and (2) (the short subject that you all ignore) it is navel-gazing (if not hypocritical) to hope for a Fermilab-district congressman to save HEP funding. Where is Aristophanes when we need him?

Peter and others,

See a link to this dark matter conference at STScI, which is being webcast live. See in particular the talk on the LHC (which discusses the rumor about a Higgs result): http://www.stsci.edu/institute/conference/spring2011


Support the new site: http://physics.stackexchange.com/

The community has quite a few experts on hep, including a persistent critic of this blog.

The author thanks ….R. Bousso, …P. Mannheim, … for discussions.

huh.

I can tell you, without even reading it, that it’s going to be a controversial paper that won’t be taken seriously.

@anon: so math has Fields medalists and phys has Lubos.

Kind of sums up the sorry state of physics if you ask me.

Regarding ‘t Hooft’s new paper:

I always thought that this (local conformal symmetry)

is the only direction in which further progress could be

made on the theory side. (The whole mass/higgs and

hierarchy problems are closely related to this)

It’s nice that he’s working on that topic.

The physics stackexchange site allows homework level questions, too, and should rather be compared to the math stackexchange site than to math overflow. The success of math overflow can be attributed to the strict moderation, allowing high quality questions only – and of course the existence of a consensus about the meaning of “high quality” in the global math community in the first place. Is there a similar consensus in the physics community or even in the hep-th community? I don’t thinkt so…

If you support a certain glamorously worded version of the many-worlds interpretation of quantum mechanics, the ability for a quantum computer to put itself in a superposition of a bunch of states, do interesting things in each of those states, and then ‘recombine’ and cough up the answer to a problem would be evidence for ‘many universes’. But so would various simpler applications of the superposition principle. I don’t think it’s really that big a deal. It’s just a way of talking.

If you don’t like this way of talking, you can just translate it into an interpretation that you like better – if you do it right, you’ll never disagree on any observed phenomena. That means you can argue endlessly about which way of talking is ‘right’ and never reach a conclusion. But it also means you don’t have to.

For various reasons the physics stack exchange site is not likely to ever have the same quality as Mathoverflow ( I don’t think it’s the level of questions, but the quality of the answers which is the issue, but that’s for another place). There is a proposal to have higher quality physics site devoted mainly to research questions here:

http://area51.stackexchange.com/proposals/23848/theoretical-physics

Once it has enough supporters we’ll see exactly what is needed to have high quality discussion, but I understand the intention is to aggressively pursue something like the MO model. I suspect it won’t be easy…

In a short interview Deutsch was once ask:

“How does quantum computation shed light on the existence of many worlds?”

He responded as follows:

“Say we decide to factorise a 10,000-digit integer, the product of two very large primes. That number cannot be expressed as a product of factors by any conceivable classical computer. Even if you took all the matter in the observable universe and turned it into a computer and then ran that computer for the age of the universe, it wouldn’t come close to scratching the surface of factorising that number. But a quantum computer could factorise that easily in seconds or minutes. How can that happen?

Anyone who isn’t a solipsist has to say the answer was produced by some physical process. We know there isn’t enough computing power in this universe to obtain the answer, so something more is going on than what we can directly see. At that point, logically, we have already accepted the many-worlds structure. The way the quantum computer works is: the universe differentiates itself into multiple universes and each one performs a different sub-computation. The number of sub-computations is vastly more than the number of atoms in the visible universe. Then they pool their results to get the answer. Anyone who denies the existence of parallel universes has to explain how the factorisation process works.”

This was a simple response to a complicated question in the popular press. Of course, he goes into greater detail in his work, and especially in his two books.

That may not satisfy you — but it gives a pretty good outline of how his argument runs.

“Anyone who isn’t a solipsist has to say the answer was produced by some physical process. ” You know, like, maybe, the schrodinger equation in the observable universe?

Not including quantum mechanics in your definition of “computing power” seems a little restrictive. Is there a precise definition/reason for sticking to classical physics?

Mike,

I did read that argument in the article, and it made no sense to me, still doesn’t. All it seems to give is that there is more to the universe than classical physics, not that there are other universes.

While I am here, seems to me the basic flaw in Deutsch argument is that it suggests a speedup in quantum computing far beyond what is actually attained. It is also strange that the argument does not need or make reference to decoherence, which is a necessary ingredient in any “many-world” view.

@Mike:

Deutsch’s argument is fallacious. They key term is “classical computer.” There do, in fact, exist physical processes working within this universe that can accomplish what he is stating: they are part of quantum mechanics. Unless you believe quantum mechanics requires multiple universes, this isn’t a proof. It is circular reasoning.

Moreover, he is forgetting that prime factorization has not been proven to be above polynomial – it is just that the asymptotically most efficient algorithm known is not polynomial. He would best stick to a fact that is well known, that simulation of a quantum system by a classical computer is, in fact, exponential in the number of quantum states.

As for the article: from the abstract (I don’t have access to the paper itself), I object to a few claims made in the New Yorker article itself. For example, “With one millionth of the hardware of an ordinary laptop, a quantum computer could store as many bits of information as there are particles in the universe.” That requires a very odd definition of storage. Storage usually implies retrieval. You cannot retrieve more than N classical bits out of an N-qubit system. Or “Tells how Deutsch came to propose a universal computer based on quantum physics, which would have calculating powers that Turing’s computer (even in theory) could not simulate.” This is kind of sloppy, since in theory a Turing machine could in fact simulate any quantum system – it may just take more than the age of the universe/more than the matter available in the universe to do so.

Of course, these could be just mistakes by the writer rather than by Deutsch himself.

Peter,

I know you’ve seen at least part of the following as well. I think it’s informative, but of course, if you wish to delete it I understand and am not interested in cluttering up your blog.

Dan,

The Schrödinger equation, as applied to a quantum computer, certainly “predicts” the right results, yes – but it doesn’t “explain” how they arise.

Deutsch has addressed this distinction by analogy to Einstein’s general theory of relativity:

“Our best theory of planetary motions is Einstein’s general theory of relativity, which, in the early twentieth century, superseded Newton’s theories of gravity and motion. It correctly predicts, in principle, not only all planetary motions but also all other effects of gravity to the limits of accuracy of our best measurements. For a theory to predict something “in principle” means that as a matter of logic the predictions follow from the theory, even if in practice the amount of computation that would be needed to generate some of the predictions is too large to be technologically feasible, or even too large to be physically possible in the universe as we find it.

Being able to predict things, or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorise the formula from which I could, if I had the time and inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorising those archives directly? The formula is easier to remember – but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also state the historical positions of the planets more accurately, because the archives contain observational errors. Yet, even though the formula summarises infinitely more facts than the archives do, it expresses no more understanding of the motions of the planets. Facts cannot be understood just by being summarised in a formula, any more than by being listed on paper or memorised in a brain. They can be understood only by being explained. Fortunately, our best theories contain deep explanations as well as accurate predictions. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of curved space and time. It explains how, precisely and in complete generality, this geometry affects and is affected by matter. That explanation is the entire content of the theory. Predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.

Moreover, what makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton’s theory can. It is that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself. As we shall see, one of the most valuable, significant and also useful attributes of human thought generally, is its ability to reveal and explain the fabric of reality.

Yet some philosophers, and even some scientists, disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider any consistent explanation that a theory may give for its predictions to be as good as any other, or as good as no explanation at all, so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an “instrument” for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations, is a fallacy and a conceit. They do not see how anything that a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more memorable and entertaining. The Nobel prize-winning physicist Steven Weinberg was in an instrumentalist mood when he made the following extraordinary comment about Einstein’s explanation of gravity:

“The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons [as in pre-Einsteinian physics] or to a curvature of space and time.”

Weinberg and the other instrumentalists are mistaken. It does matter what we ascribe the images on astronomers’ photographic plates to. And it matters not only to theoretical physicists like myself, whose very motivation for formulating and studying theories is the desire to understand the world better. (I am sure that this is Weinberg’s motivation too: he is not really driven by an urge to predict images and spectra!) For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology “oracle” which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. 
Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction – even perfect, universal prediction – is simply no substitute for explanation.”

Mike,

I don’t buy that either. Quantum mechanics “explains” what is going on perfectly well in this case, it’s a compelling model of physics, far more powerful and successful than the classical model. Insisting that anything that counts as an explanation must retain features of the classical model seems unreasonable. To me, multiple universes actually “explain” nothing in this case. They provide a way to look at things that some people might find attractive, but as far as I can tell they just introduce a lot of extraneous structure that raises more questions than it solves.

Peter,

I understand your view — I guess we just disagree on what constitutes a good “explanation”. Thanks for the chance to discuss.

I think there is a false dichotomy here: it’s not “instrumentalists” vs. “many worlds interpreters”. It’s “many worlds interpreters” vs. people who do not find it compelling for whatever reason. It is disingenuous to present “many worlds” as the only viable interpretation or world-view.

“It is disingenuous to present “many worlds” as the only viable interpretation or world-view.”

No it isn’t. It might ultimately be wrong. But it’s certainly not disingenuous to compare the philosophical basis of the MWI to instrumentalism (only predictions are important) or solipsism (the calculations just happen in an abstract “black box” unrelated to any explanation of the actual physical entities and processes taking place).

I promise Peter — that will be my last comment on this point 🙂 Thanks again for allowing it to continue, probably much longer than you would like.

You concede that it might ultimately be wrong, yet you persist in denying that there are other interpretations that do not fall into instrumentalism or solipsism: consistent histories, pilot wave, stochastic mechanics, objective collapse, etc. All have their faults (the latter actually makes predictions that have been falsified), but they are not all less viable than many worlds, nor are they all less established than many worlds in the foundations of physics community, and it is therefore disingenuous of you (as it is for Deutsch) to present many worlds as the only world view.

I realize Peter does not want the comment section to turn into a debate on the interpretation of QM, but I can’t resist mentioning my own take. The many-worlds interpretation seems to me to be the most direct way of interpreting the mathematical formalism of QM. As far as I understand it, it just says the ultimate reality is a wave function on configuration space. It is not adding anything new to the QM formalism; it is just saying what the formalism is. Pilot wave, stochastic mechanics, etc., by contrast, seem to me to add new physics that is not motivated by experiment.

Physicsphile (and others),

Yes, unless it’s interesting and about David Deutsch, please don’t continue here with the usual sort of discussion of QM interpretational issues. Personally I find this kind of discussion just depressing, tedious and unenlightening. There are interesting issues here, but people seem devoted instead to over-simplified sloganeering. Enough.

Moshe,

From my perspective as an outsider to both fields, there is a very large cultural difference between math and theoretical physics. Several Fields medalists not only contribute to MathOverflow but blog about technical mathematics as well. There is nothing comparable in the physics world; heck, you can’t even get physicists to comment with their real names on blogs! My guess is that a physicsoverflow site will not work until physicists at the very top level of research are willing to actively participate. It’s a shame, because I think that such a site would be very useful.

@outside_math I am pretty sure MathOverflow would not have worked well during the Grundlagenstreit with Hilbert and Brouwer going after each other.

And PhysOverflow will not work well as long as the community disagrees if string theory is physics or wishful thinking.

As long as you have Lubos and others behave the way they do, any sane person will either stay away from such websites or not post with their good name.

If any of the quantum-computing-interested gentlemen or ladies is at CMU on April 29th at 1630 EDT (I won’t be):

http://www.cmu.edu/mcs/news/pressreleases/2011/4_25_Buhl2011.html

MIT’s Scott Aaronson to Present 2011 Buhl Lecture – “Quantum Computing and the Limits of the Efficiently Computable”.

Should be interesting.

Yatima: Thanks for the publicity! 🙂

I’ve been working on my own blog post about the New Yorker piece, but briefly: I agree with Peter that quantum computing has no direct implication for the Many-Worlds debate. (Ironically, Deutsch also agrees: he thinks that Many-Worlds is already an experimentally-established fact, regardless of whether quantum computers are built.) Assuming the plausible conjecture that quantum computers can’t be efficiently simulated by classical ones (i.e., P≠BQP), a scalable quantum computer would show that Nature has “computing resources” that exponentially exceed anything described by classical physics in certain respects. But there’s then the further question of whether you want to describe those computing resources in terms of “parallel universes.” (Incidentally, it’s not just factoring that’s not known to be hard for classical computers, but also simulating quantum mechanics in general.)

Peter,

I used to be on an e-mail list of David Deutsch’s followers, to which the Big Man occasionally contributed himself. Most of his followers are, of course, non-physicists, and they did not much appreciate questions being raised about whether or not his views on physics can be justified.

I eventually got bored, but it was sociologically interesting to see a physicist who had real, live, and *very* intense followers!

I’ve also followed David’s work on MWI for many years. At one point in his career he explicitly recognized some of the technical problems with MWI: e.g., the “preferred-basis” problem and the “probability-measure” problem. He eventually decided that these were not problems, after all, though I was never able to understand his published reasons as to why he changed his mind.

Deutsch has written widely on politics, child-rearing (!), and, of course, various areas of science, and often has interesting, insightful comments on many of those subjects. However, I found myself a bit turned off by the constant implication that David had found the final answer to every question, answers that should not be questioned.

Anyway, if the “String Wars” ever end, and you find yourself drawn to further explorations in the side-lanes and by-ways of the sociology of physics, you will find much to explore in the Deutsch phenomenon.

Dave Miller in Sacramento

Does “The New Yorker” magazine make you dumber?

Yes. The rhapsodic article about Deutsch’s views proves it.

(1) The multiverse implications of the capabilities of a quantum computer make no sense, since the latter relies on (essentially) nothing that Bohr did not know, and (2) (the short subject that you all ignore) it is navel-gazing (if not hypocritical) to hope for a congressman from Fermilab’s district to save HEP funding. Where is Aristophanes when we need him?

Thanks for the link to the discussion on connection coefficients. I really do wonder what the “right” quantity to quantize is.

Pingback: Shtetl-Optimized » Blog Archive » Better late than never

Peter and others,

See a link to this dark matter conference at STScI, which is being webcast live. See in particular the talk on the LHC (which discusses the rumor about a Higgs result):

http://www.stsci.edu/institute/conference/spring2011