There’s a new popular book out this week about the interpretation of quantum mechanics, Adam Becker’s *What is Real?: The Unfinished Quest for the Meaning of Quantum Physics*. Ever since my high school days, the topic of quantum mechanics and what it really means has been a source of deep fascination to me, and usually I’m a sucker for any book such as this one. It’s well-written and contains some stories I had never encountered before in the wealth of other things I’ve read over the years.

Unfortunately though, the author has decided to take a point of view on this topic that I think is quite problematic. To get an idea of the problem, here’s some of the promotional text for the book (yes, I know that this kind of text sometimes is exaggerated for effect):

A mishmash of solipsism and poor reasoning, [the] Copenhagen [interpretation] claims that questions about the fundamental nature of reality are meaningless. Albert Einstein and others were skeptical of Copenhagen when it was first developed. But buoyed by political expediency, personal attacks, and the research priorities of the military industrial complex, the Copenhagen interpretation has enjoyed undue acceptance for nearly a century.

The text then goes on to describe Bohm, Everett and Bell as the “quantum rebels” trying to fight the good fight against Copenhagen.

Part of the problem with this good vs. evil story is that, as the book itself explains, it’s not at all clear what the “Copenhagen interpretation” actually is, other than a generic name for the point of view the generation of theorists such as Bohr, Heisenberg, Pauli, Wigner and von Neumann developed as they struggled to reconcile quantum and classical mechanics. They weren’t solipsists with poor reasoning skills, but physicists trying to come to terms with the extremely non-trivial and difficult problem of how the classical physics formalism we use to describe observations emerges out of the more fundamental quantum mechanical formalism. They found a workable set of rules to describe what the theory implied for results of measurements (collapse of the state vector with probabilities given by the Born rule), and these rules are in every textbook. That there is a “measurement problem” is something that most everyone was aware of, with the example of Schrodinger’s cat making it clear. Typically, for the good reason that it’s complicated and they have other topics they need to cover, textbooks don’t go into this in any depth (other than often telling the story of the cat).
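For concreteness, here is a minimal statement of those textbook rules (standard notation, nothing specific to Becker’s book): expanding a state in eigenstates $|a_i\rangle$ of the measured observable,

```latex
|\psi\rangle = \sum_i c_i\,|a_i\rangle,
\qquad
\Pr(a_i) = |c_i|^2 = |\langle a_i|\psi\rangle|^2
\ \text{(Born rule)},
\qquad
|\psi\rangle \xrightarrow{\ \text{outcome } a_i\ } |a_i\rangle
\ \text{(collapse)}.
```

The measurement problem, in this language, is that the discontinuous, probabilistic collapse step is a separate postulate rather than something derived from the continuous Schrödinger evolution that governs everything else.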

As usual these days, the alternative to Copenhagen being proposed is a simplistic version of Everett’s “Many Worlds”: the answer to the measurement problem is that the multiverse did it. The idea that one would also like the measurement apparatus to be described by quantum mechanics is taken to be a radical and daring insight. The Copenhagen papering over of the measurement problem by “collapse occurs, but we don’t know how” is replaced by “the wavefunction of the universe splits, but we don’t know how”. Becker pretty much ignores the problems with this “explanation”, other than mentioning that one needs to explain the resulting probability measure.

String theory, inflation and the cosmological multiverse are then brought in as supporting Many Worlds (e.g. that probability measure problem is just like the measure problem of multiverse cosmology). There’s the usual straw man argument that those unhappy with the multiverse explanation are just ignorant Popperazzi, unaware of the subtleties of the falsifiability criterion:

Ultimately, arguments against a multiverse purportedly based on falsifiability are really arguments based on ignorance and taste: some physicists are unaware of the history and philosophy of their own field and find multiverse theories unpalatable. But that does not mean that multiverse theories are unscientific.

For a much better version of the same story and much more serious popular treatment of the measurement problem, I recommend a relatively short book that is now over 20 years old, David Lindley’s *Where does the Weirdness Go?*. Lindley’s explanation of Copenhagen vs. Many Worlds is short and to the point:

The problem with Copenhagen is that it leaves measurement unexplained; how does a measurement select one outcome from many? Everett’s proposal keeps all outcomes alive, but this simply substitutes one problem for another: how does a measurement split apart parallel outcomes that were previously in intimate contact? In neither case is the physical mechanism of measurement accounted for; both employ sleight of hand at the crucial moment.

Lindley ends with a discussion of the importance of the notion of decoherence (pioneered by Dieter Zeh) for understanding how classical behavior emerges from quantum mechanics. For a more recent serious take on the issues involved, I’d recommend reading something by Wojciech Zurek, for instance this article, a version of which was published in Physics Today. Trying to figure out what “interpretation” Zurek subscribes to, I notice that he refers to an “existential interpretation” in some of his papers. I don’t really know what that means. Unlike most discussions of “interpretations”, Zurek seems to be getting at the real physical issues involved, so I think I’ll adopt his (whatever it means) as my chosen “interpretation”.
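The decoherence mechanism that Zeh pioneered and Zurek has developed can be illustrated with a toy calculation. The sketch below is my own illustrative model, not taken from either author: a system qubit becomes entangled with N environment qubits, and the off-diagonal “coherence” term of the system’s reduced density matrix shrinks exponentially with N, while the diagonal (Born-rule) probabilities are untouched.

```python
import numpy as np

# Toy decoherence model (illustrative only): a system qubit starts in
# (|0> + |1>)/sqrt(2); each of N environment qubits starts in |0> and is
# rotated by a small angle theta_k only when the system is in |1>, so the
# environment "records" the system's state. The two environment branch
# states |E_0>, |E_1> are product states, and their overlap factorizes
# into single-qubit overlaps cos(theta_k/2).

rng = np.random.default_rng(0)
N = 100
angles = rng.uniform(0.5, 1.5, N)
overlap = np.prod(np.cos(angles / 2))   # <E_0|E_1>, shrinks with N

# Reduced density matrix of the system after tracing out the environment:
rho = 0.5 * np.array([[1.0, overlap],
                      [overlap, 1.0]])

print("coherence |rho_01| =", abs(rho[0, 1]))
print("probabilities      =", rho[0, 0], rho[1, 1])
```

With 100 environment qubits the coherence is already negligible for all practical purposes, which is the sense in which classical behavior emerges; note that nothing here selects one outcome, so this by itself does not dispose of the measurement problem.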

**Update**: For another take on much the same subject, out in the UK now is Philip Ball’s *Beyond Weird*. The US version will be out in the fall, and I think I’ll wait until then to take a look. In the meantime, Natalie Wolchover has a review at Nature.

**Update**: There’s a new review of What is Real? at Nature.

**Update**: Jim Holt points out that David Albert has a review of the Becker book in the latest New York Review of Books. I just read a print copy last night, presumably it should appear online soon here [Review now available here].

**Update**: Some comments from Adam Becker, the author of the book.

I won’t try to rebut everything Peter has said about my book—there are some things we simply disagree about—but I would like to clear up two statements he makes about the book that are possibly misleading:

Peter says that I claim the answer to the measurement problem is that “the multiverse did it.” But I don’t advocate for the many-worlds interpretation in my book. I merely lay it out as one of the reasonable available options for interpreting quantum mechanics (and I discuss some of its flaws as well). I do spend a fair bit of time talking about it, but that’s largely because my book takes a historical approach to the subject, and many-worlds has played a particularly important role in the history of quantum foundations. But there are other interpretations that have played similarly important roles, such as pilot-wave theory, and I spend a lot of time talking about those interpretations too. I am not a partisan of any particular interpretation of quantum mechanics.

Second, I don’t think it’s quite fair to say that I paint Bohr as a villain. I mention several times in my book that Bohr was rather unclear in his writing, and that sussing out his true views is dicey. But what matters more than Bohr’s actual views is what later generations of physicists generally took his views to be, and the way Bohr’s work was uncritically invoked as a response to reasonable questions about the foundations of quantum mechanics. It’s true that this subtlety is lost in the jacket flap copy, but that’s publishing for you.

Also, for what it’s worth, I do like talking about reality as it relates to quantum mechanics. But I suppose that’s hardly surprising, given that I just wrote a book on quantum foundations titled “What Is Real?”. I’d be happy to discuss all of this further over email if anyone is interested (though I’m pretty busy at the moment and it might take me some time to respond).

My 13 year old son has been reading some of the introductory texts on QM and his reaction on discovering Everett was to ask me, “So Everett’s Many Worlds has the same problem as the Copenhagen interpretation, plus an enormous number of additional universes that are not detectable. How is that better?”.

Good kid, but I’m going to need to find him a math tutor soon because he is overtaking my undergrad education.

Give me a little more time. I hope to be able to set the record straight in a book to be published probably in 2020. You can read the preamble here: http://www.jimbaggott.com/articles/a-game-of-theories/

Dear Prof. Woit,

I don’t understand what you mean by “the wavefunction of the universe splits, but we don’t know how”. We know exactly how; this is the whole point of Many-Worlds: it is just normal Hamiltonian evolution. What do you think is lacking?

It’s fine to say that ‘in either interpretation, we don’t know what happens at the moment of measurement’. But I disagree that this leaves MW and Copenhagen on an equal footing. Doesn’t Copenhagen posit an additional _mechanism_, that of ‘collapse’, whereas Everett’s view wouldn’t?

And are there not additional problems with Copenhagen, such as the fact that the ‘selection’ of a single branch is the only non-deterministic element of the whole system?

@Carl the non-detectability of the ‘other universes’ follows naturally from how we understand QM to work; it doesn’t count as evidence against the interpretation if we wouldn’t _expect_ to be able to detect them. Rejecting it on that basis would be an unreasonable swish of Occam’s Razor.

“Popperazzi” (play on Italian “paparazzi”) is the preferred spelling.

https://www.philosophersmag.com/index.php/footnotes-to-plato/77-string-theory-vs-the-popperazzi “String Theory vs the Popperazzi” by Massimo Pigliucci, 2015

David Brown,

Thanks. Fixed.

Mateus Araujo,

Just saying “the Schrodinger equation does it” doesn’t solve the measurement problem (for instance, the preferred basis problem). All you’re doing is saying that, though you don’t know how, the Schrodinger equation is somehow going to give precisely the same implications for physics as the collapse postulate.

Ben Jones,

Copenhagen says collapse happens in a measurement, it’s silent on what the theory of collapse is (adding specific new physics to explain collapse is not Copenhagen but something else).

Peter, I do not intend to enter the details of your discussion, since I have got tired of it. (If somebody should be interested, he/she may look at my website http://www.zeh-hd.de – especially the first two papers under “Quantum Theory”.) However, as a historical remark let me point out that in my understanding, von Neumann and Wigner were never part of the Copenhagen interpretation – rather they objected to it more or less openly (starting in Como). When Wigner used the term “orthodox interpretation”, he exclusively meant von Neumann’s book (including the collapse as a physical process – not just a “normal” increase of information), and I am told that he complained that, as a consequence, he was never invited to Copenhagen. Bohr always disagreed with any attempt to analyze the measurement problem in physical terms.

Essentially, I agree with Mateus Araújo and Ben Jones.

Regards, Dieter

This kind of discussion can get hopelessly confused very quickly. To my knowledge, the ‘collapse of the wavefunction’ was never part of the Copenhagen interpretation, which is based on some kind of unexplained ‘separation’ between the quantum and classical realms, or what John Bell would refer to as the ‘shifty split’. I believe the notion of a ‘collapse’ was introduced as a ‘projection postulate’ by von Neumann in his book Mathematical Foundations of Quantum Mechanics, first published in German in 1932 (in my English translation the projection postulate – a statistical, discontinuous process in contrast to the continuous, unitary evolution of the wavefunction – appears on p. 357). Influenced, I believe, by Leo Szilard, von Neumann speculates that the collapse might be triggered by the intervention of a (human) consciousness – what he refers to on p. 421 as the observer’s ‘abstract ego’.

All this really shouldn’t detract from the main point. The formalism is the formalism and we know it works (and we know furthermore that it doesn’t accommodate local or crypto non-local hidden variables). The formalism is, for now, empirically unassailable. All *interpretations* of the formalism are then exercises in metaphysics, based on different preconceptions of how we think reality could or should be, such as deterministic (‘God does not play dice’). Of course, the aim of such speculations is to open up the possibility that we might learn something new, and I believe extensions which seek to make the ‘collapse’ physical, through spacetime curvature and/or decoherence, are well motivated.

But until such time as one interpretation or extension can be demonstrated to be better than the other through empirical evidence, the debate (in my opinion) is a philosophical one. I’m just disappointed (and rather frustrated) by the apparent rise of a new breed of Many Worlds Taliban who claim – quite without any scientific justification – that the MWI is the only way and the one true faith.

Peter Woit,

There are dozens of papers explaining how you can model the measurement process in unitary quantum mechanics via entanglement and decoherence; Zeh’s and Żurek’s, among them. It’s not as if anybody is saying “the Schrödinger equation did it” without explaining how.

And the measurement problem is usually defined as the incompatibility between collapse and unitary evolution. So even just saying “the Schrödinger equation did it” does solve the problem, on this level. Of course, one must also explain emergence of classicality, permanence of records, probability, etc., but this is not the measurement problem.

hdz,

Thanks Dieter.

I should point out that Dieter Zeh is one of the major subjects of the book. He’s perhaps the best example of the story Becker wants to fit everything into: a “quantum rebel” who made significant advances in our understanding of foundations and the measurement problem, advances which were not initially recognized due to an entrenched “Copenhagen” ideology that denigrated any such work. Partly due to his work, I think for many decades now mindless Copenhagen ideology has not been such a problem (but we’re well on our way to mindless Many Worlds ideology being a major problem).

As he notes and the book describes, there’s controversy over who was on the Copenhagen bus, partly because it was never clear exactly what the Copenhagen Interpretation was, and partly because many people took somewhat different points of view at different times.

Mateus Araújo,

To me the (non-trivial) measurement problem is exactly the “emergence of classicality, permanence of records, probability, etc.,” problems you list, with entanglement and decoherence some of the insights needed to solve them. If you define the measurement problem as you do, then the solution you give by having quantum mechanics apply to the macroscopic world is a trivial one which most everyone expected anyway.

Peter Woit,

I’m claiming that this is not “my” definition of the measurement problem, but rather *the* standard definition. See for example Maudlin’s “Three measurement problems”, a canonical reference on the subject.

Of course, the solution *is* trivial; you merely need to give up collapse, or introduce hidden variables, or introduce physical collapse. The problem is that most people don’t want to do any of these, hence they are stuck with the measurement problem.

Look, I agree wholeheartedly with you that the interesting problems are those I listed; I’m just insisting on narrow definitions and standard terminology, otherwise it becomes impossible to talk to each other.

Mateus Araújo,

This whole subject is full of endless and heated arguments over problems that are meaningless and/or lacking in substance. Your definition of “the measurement problem” seems to me to insist on sticking to the substance-free aspect of a substantive problem, thus encouraging substance-free-discussion. What’s the correct term to refer to the substantive problem?

By the way, on the meaningless side, Becker’s book has a lot about “What is Real?” and claims that the problem with Copenhagen is that it denies the reality of the microscopic world, a discussion I thought it best to just ignore.

Peter Woit,

Indeed it is! And I think a great many heated and meaningless arguments happen because the parties are talking about different things =)

But I don’t know which problem you refer to as “the substantive problem”. Do you have in mind a collective name for the three I listed? I don’t think there exists a standard name for this set, except maybe as “problems of the Many-Worlds interpretation”. They are different problems, that are usually studied separately, and referred to by their individual names.

Mateus Araújo,

I guess I always thought “what happens to the cat?” was the substantive measurement problem, so I’m looking for a name for that.

Asher Peres once wrote that there are at least as many Copenhagen Interpretations as people who use the term, probably more. Among other problems, saying “the Copenhagen Interpretation” glosses over substantial differences between Heisenberg and Bohr, despite this issue having been discussed by Pauli, von Neumann, ….

Peter Woit,

You want unitarity to hold, so that atoms are described by quantum mechanics, and you want collapse to happen, so that the cat is definitely dead, but you can’t have both: the measurement problem!
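In symbols, using a standard two-outcome toy model (my notation, not anything from the thread): unitarity applied to atom, detector and cat together gives

```latex
\bigl(\alpha\,|\text{decayed}\rangle + \beta\,|\text{intact}\rangle\bigr)
\otimes |\text{ready}\rangle \otimes |\text{alive}\rangle
\ \xrightarrow{\ \hat{U}\ }\
\alpha\,|\text{decayed}\rangle|\text{fired}\rangle|\text{dead}\rangle
+ \beta\,|\text{intact}\rangle|\text{ready}\rangle|\text{alive}\rangle,
```

while the collapse postulate instead delivers exactly one of the two product states, with probabilities $|\alpha|^2$ and $|\beta|^2$. The entangled superposition and the single product state cannot both be the exact post-measurement state, which is the incompatibility being described.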

And the solution is again trivial, threefold. Give up on collapse: there are worlds with live cats and worlds with dead cats. Introduce hidden variables: the cat was always dead, you just didn’t know it. Introduce physical collapse: the superposition collapsed on the level of the geiger counter, so that the cat was always dead.

Maybe the measurement problem is like the problem of having your cake and eating it too. You know it’s impossible, but you really really want to!

I think we can agree that this is all a lot of fun. I’ve been studying these problems for more than 25 years, and I can discern relatively little progress in this time. But I’ve come to believe that it is helpful to distinguish between ‘reality’ (however you want to think about it) and the reality or otherwise of the *representation* we use to describe it.

We can likely all agree that the moon is there when nobody looks, and even that invisible entities like electrons really do exist independently of observation (‘if you can spray them then they are real’, as philosopher Ian Hacking once declared). But this doesn’t necessarily mean that the concepts we use in our representation of this reality should be taken literally as real, physical things. If we choose to agree that the wavefunction isn’t real (as Carlo Rovelli argues in his relational interpretation), then the QM formalism is simply an algorithm for coding what we know about a physical system so that we can make successful predictions. All the mystery then goes away – there is no problem of non-locality, no collapse of the wavefunction, and no ‘spooky’ action at a distance.

I don’t particularly like this interpretation, as it is obviously instrumentalist and can’t answer our burning questions about how nature actually does that. But it does help to convince me that this endless debate over interpretation is really a philosophical debate, driven by everybody’s very different views on what ‘reality’ ought to be like. And, as such, we’re unlikely to see a resolution anytime soon…

Jim,

I’d rather do almost anything with my time than try and moderate a discussion of what is “real” and what isn’t.

Any further discussion of ontology will be ruthlessly suppressed.

Trying to stick to physical issues, I have always wanted to see proponents of the many worlds interpretation derive the spectrum of the hydrogen atom from some clearly defined postulates (or solve some other basic physical problem). Until then it is not clear to me what the theory actually is saying. If anyone can refer to such a calculation I would be glad. Clearly it is not enough to postulate the unitary evolution of the Schrödinger equation, but I have never been able to pinpoint the other postulates of the many worlds interpretation.

Thanks for the nice post and discussion. I’m an experimentalist, who is more comfortable with opamps than operators. I have perhaps a naive question; Do any of these books discuss the DeBroglie-Bohm (Pilot wave) theory? Since all these interpretations are just a matter of taste, I find this hidden variable type theory easiest to swallow. In a double slit, the particle ‘knows’ about both slits, but still goes through (only) one of them. Is there any reason not to be happy with this picture?

Per Östborn,

The calculation in Many Worlds is exactly the same textbook calculation as in Copenhagen. It’s the same Schrodinger equation and you solve for its energy eigenvalues the same way. That is the problem: there’s no difference from the standard QM textbook.

The Many Worlds people might claim as an advantage over Copenhagen that you can imagine doing much harder calculations about how a Hydrogen atom interacts with its environment during a measurement process, giving insight into the question of why we see energy eigenstates and the Born rule. My point of view would be that Copenhagen never said you shouldn’t do exactly those calculations if you wanted to better understand what was happening during a measurement, it was just a rule for when you couldn’t do such calculations.
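To make the point concrete, here is the calculation both camps do, sketched numerically rather than via the exact analytic solution (my own minimal finite-difference version, in atomic units): the l = 0 radial Schrödinger equation for hydrogen, whose lowest eigenvalues reproduce the Bohr formula E_n = −1/(2n²) Hartree regardless of one’s interpretation.

```python
import numpy as np

# Hydrogen energy levels from the radial Schrödinger equation,
# in atomic units (hbar = m_e = e = 1). For l = 0 the equation is
#   -u''(r)/2 - u(r)/r = E u(r),   u(0) = u(R) = 0.
# Discretize on a uniform grid and diagonalize the Hamiltonian.

N, R = 1500, 60.0                  # grid points, box radius (a.u.)
h = R / (N + 1)
r = h * np.arange(1, N + 1)

# Tridiagonal Hamiltonian: second-order finite-difference kinetic term,
# Coulomb potential -1/r on the diagonal.
main = 1.0 / h**2 - 1.0 / r
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]      # three lowest eigenvalues, ascending
exact = [-0.5 / n**2 for n in (1, 2, 3)]   # Bohr formula

for e_num, e_ex in zip(E, exact):
    print(f"computed {e_num:+.5f}   exact {e_ex:+.5f}")
```

Nothing interpretation-dependent enters anywhere: the Hamiltonian, the boundary conditions and the eigenvalues are the same whether one thinks of the wavefunction as collapsing, branching, or guiding a particle.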

George Herold,

Becker’s book has a long and detailed section about the story of Bohm and Bohmian mechanics which you might find of interest. Personally I don’t find Bohmian mechanics compelling, both for reasons Becker mentions and for others having to do with the much greater mathematical simplicity of the conventional formalism when applied to our best fundamental theories.

Sorry, but I really don’t want to carry on more discussion here of Bohmian mechanics. I realize a lot of people are interested in it, but I’m not at all interested, so such a discussion should be conducted elsewhere.

Dear Peter and all,

Can I suggest a basic distinction between different approaches to quantum foundations?

The first class hypothesizes that the standard quantum mechanics is incomplete as a physical theory, because it fails to give a complete description of individual phenomena. If so, the theory requires a completion, that is a theory which incorporates additional degrees of freedom, and/or additional dynamics. Pilot wave theory and physical collapse models are examples of these.

The second class of approaches takes it as given that the theory is complete, so the foundational puzzles are to be addressed by modifying how we interpret the equations of the theory.

People engaged in the first class of approaches are trying to solve a very different kind of research problem than those in the second class. I would submit that both are worth pursuing, but that progress in physics will eventually depend on the success of the first kind of approach.

Thanks Lee,

That’s a very useful distinction. The Becker book contains a lot of material about such attempts to complete quantum mechanics with new physics that would entail a different resolution of the measurement problem. I didn’t discuss these here, largely because I’m much less optimistic than you that the search for this kind of completion will be fruitful.

Dear Peter,

It is not clear to me, but my impression is that MWI proponents claim that some of the standard postulates of QM can be removed (or replaced by other ones). Then their calculations from first principles must be different from the textbook ones, since they are forbidden to use the removed postulates, of course.

As you write, they seem to want to gain insight into why we see energy eigenvalues and why we should use the Born rule. This suggests that they argue that these features of QM can be derived from a smaller or different set of postulates than the standard one.

Per Östborn,

I’m no expert and don’t want to get into the details of exactly what “Many Worlds” is, I think there are many versions. One aspect of it though is yes, the idea that you can derive the relation to classical observables and the Born rule, not postulate them. There’s active research and debate on the extent to which you really can do this. I just want to point out that you naturally ask exactly the same questions in Copenhagen, whenever you decide you want to analyze in more detail what’s happening during a measurement. It’s exactly the same equation and physics.

I think the most interesting Copenhagen/anti-Copenhagen split was between Heisenberg, Bohr, and others on the one hand and Einstein, Schrodinger, and to some extent de Broglie on the other. There was a generational split and Heisenberg’s views may have prevailed because the older physicists died off.

I think I’m going to have to have that printed on a T-shirt.

I think it would be fair to say that the acceptance of the Copenhagen interpretation, or what various people took it to be, was substantially a function of the high regard in which Bohr and Heisenberg were held as pioneers in the development of quantum mechanics, combined with a strong desire to just “get on with it”—find a way to state problems and do calculations that most people felt they understood well enough to do research and make sensible progress.

Later, when a great deal of research had been done and generally accepted progress had been made, it’s understandable that the “yes, but…” questions about quantum theory would be resurgent, especially with the conundrums of quantum gravity looming in the background, along with the growing exploration of quantum phenomena at mesoscopic scales.

To my mind the current situation is reminiscent in some ways of Mach’s objections to the conventional understanding of Newtonian dynamics at a time (the mid- to late 19th century) when such concerns had little apparent significance for most working scientists. All this appeared in a different light with the advent of special and general relativity.

Peter Woit,

I think this is pretty much correct; one can also simply postulate the Born rule in Many-Worlds, as done in single-world theories, but it feels a bit wrong, as one should also explain what probabilities are. Hence the interest in deriving the Born rule in Many-Worlds, as this should shed some light on the issue. One can, though, adapt these derivations also to single-world quantum mechanics, as done by Saunders here.

I don’t think, however, that the derivations we have are completely satisfactory, and I’m personally trying to improve on them.

David Mermin, who didn’t particularly like the Copenhagen interpretation, found himself driven to use it when he was teaching quantum mechanics to computer scientists who didn’t know much about physics. (See his article “Copenhagen Computation”.)

Bohr, at Copenhagen, similarly was teaching quantum mechanics to physicists who didn’t know much about quantum mechanics. So maybe this explains why he gravitated towards the Copenhagen interpretation.

And then maybe Bohr came up with all this weird extraneous philosophy to try to convince himself that what he was teaching them actually made some sense.

Jim Baggott (or anyone else for that matter):

You attribute “collapse” or reduction of the wavepacket to von Neumann (1932). A clear statement (albeit much shorter) is in Dirac’s Principles, Sect 10. Unfortunately our University Library foolishly left the first edition (1930) on the open shelves, and it has long since vanished, so I’ve only traced this statement back to the 2nd ed… My question to anyone who can lay hands on a copy is, is this bit in the original edition, or did Dirac add it after reading von Neumann?

As I probably learned from one of Jim’s books, the “reduction” of the wavepacket is Heisenberg’s terminology, from the uncertainty principle paper, so von Neumann or Dirac were in any case just sharpening up Heisenberg’s insight.

It is a bit hard to know how to comment on a discussion of a book called “What is Real?” when it has been asserted that

“I’d rather do almost anything with my time than try and moderate a discussion of what is “real” and what isn’t.

Any further discussion of ontology will be ruthlessly suppressed.”

The question “What is real?” just is the question “What exists?” which is in turn just the question “What is the true physical ontology?” which is identical to the question “Which physical theory is true?”. Peter Woit begins by writing “Ever since my high school days, the topic of quantum mechanics and what it really means has been a source of deep fascination to me…”. But that just is the question: What might the empirical success of the quantum formalism imply about what is real? or What exists? or What is the ontology of the world? To say you are interested in understanding the implications of quantum mechanics for physical reality but then ruthlessly suppress discussions of ontology is either to be flatly self-contradictory or to misunderstand the meaning of “ontology” or of “real”. That is also reflected in the quite explicit rejection of any discussion of two of the three possible solutions to the Measurement Problem: pilot wave theories and objective collapse theories.

Has Foundations made real progress since Copenhagen? Absolutely! We have two key theorems: Bell’s theorem and the PBR theorem. The first tells us that non-locality is here to stay, so Einstein’s main complaint about the “standard” account—namely its spooky action-at-a-distance—cannot be avoided. Anyone who thinks that they have a way around it is mistaken. That lesson has not yet been learned, even though Bell’s result is half a century old. PBR proves that the wavefunction assigned to a system reflects some real physical aspect of the individual system: two systems assigned different wavefunctions are physically different. So if Carlo Rovelli or the QBists or Rob Spekkens thinks that the wavefunction isn’t real, in the sense that it does not reflect some real physical feature of an individual system, then PBR has proven them wrong. Bell and PBR lay waste to many approaches to understanding quantum theory, including Copenhagen and QBism. Jim Baggott suggests that we can “choose to agree” that the wavefunction isn’t real and somehow thereby eliminate non-locality and spooky action at a distance. No you can’t, for reasons given by both Bell and PBR. A theorem is a theorem, and we are not free to ignore it.

The choices are: additional (non-hidden!) variables (e.g. Bohm); objective collapses (e.g. GRW); Many Worlds (e.g. Everett). That’s it. Anything else has been ruled out by the empirical success of the quantum predictions. And this is no more a “philosophical” debate than any other dispute in physics is. It is a debate about what the correct physical understanding of the world is. Only once these real advances in our understanding have been generally acknowledged will we be in a position to make further progress.

Tim Maudlin,

Your comment has some similarities to Lee Smolin’s, wanting to focus on the “real” question of whether QM is all there is (his “second class”, your “Everett” choice) or new physics is needed (his “first class”, your Bohm or GRW). While there’s a lot in Becker’s book about the history of “new physics” proposals, and you and Smolin are quite right that this is a true fundamental question, more significant than empty discussions of “interpretations” corresponding to identical physics, the problem here is that I’m just not interested. As with any proposals for “new physics”, people make their own evaluations based on different experiences, and it’s a good thing that others who see things differently think more about such proposals.

This is a case where no such new physics proposals come with any experimental evidence in their favor, so personal criteria of what is worth spending time on are all one has. My own criteria for paying attention to speculative proposals weight heavily the mathematical structures involved. The proposals I’ve seen for supplementing QM invoke mathematical structures that to me seem quite a bit more complicated and ugly than the ones of QM, thus my lack of interest. Maybe someday someone will come up with a different proposal that’s more appealing. Til then, I’ll keep deciding not to spend more time thinking about such things.

I do not always agree with Tim Maudlin, but in this case I do almost completely. Clearly, the philosophical debate about reality is worthless for physicists, but physicists usually (and tacitly) understand it in the sense of a conceptually consistent description of “Nature” (of what we observe with our senses). So it is certainly not compatible with complementarity. Of course, everybody is free to propose new concepts and theories, but I have decided to wait until such an author presents some empirical success (what else can I do?). Mathematical structures have to be formally consistent, Peter, but that is no sufficient argument for their application to the empirical world. (Just consider Tegmark’s unreasonable Level IV of multiverses.) This is not only an argument against String Theory! (I have written a comment against Strings (or M-theory) in German long before “Not even wrong” appeared; it can be found on my website under “Deutsche Texte”.)

So my conclusion from Tim’s three choices is that the first two are quite possible (though no more than that), but we have to wait for empirical support. For now only Everett remains (until it is falsified) as a global unitary theory. Let me add that the concept of decoherence, which has clearly been experimentally confirmed, was derived from an extension of unitarity to the environment, while Everett is its further extension to the observer. So why is it “unmotivated” or incomplete? In my opinion there are no arguments but only emotions against Everett! So please take some time to study my website!
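The mechanism described here (decoherence from extending unitarity to the environment) can be illustrated with a toy model. This is my own minimal sketch, not taken from any particular paper: a qubit in (|0⟩+|1⟩)/√2 entangles unitarily with n environment qubits, and the off-diagonal element of its reduced density matrix is suppressed by the environment-state overlap, here cos(θ)ⁿ.

```python
import numpy as np

# Toy decoherence model: each environment qubit ends up in |e0> = |0> if the
# system is |0>, or |e1> = cos(theta)|0> + sin(theta)|1> if the system is |1>.
# Tracing out n such qubits multiplies the system's off-diagonal density
# matrix element rho_01 = 1/2 by the overlap <e0|e1>^n = cos(theta)^n.
def off_diagonal(n, theta=0.2):
    e0 = np.array([1.0, 0.0])
    e1 = np.array([np.cos(theta), np.sin(theta)])
    return 0.5 * np.dot(e0, e1) ** n

# Coherence decays toward zero as the environment grows.
for n in (0, 10, 100):
    print(n, off_diagonal(n))
```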

Thanks Peter for pointing out that the MWI is basically an inversion of the standard one, a point rarely mentioned. So the ‘real’ problem remains untouched: what are probabilities? Actually, surprisingly few physicists seem to be aware of the existence of Bertrand’s paradox, which has disturbing implications. The stance ‘it doesn’t matter what probabilities are as long as we know how to calculate them’, or saying ‘there is an axiomatic framework for them’, are just ways to avoid the question. Does something physical collapse, or is it our ignorance that ends in a measurement? It is rather obvious that a meaningful distinction between a description and its referent (to avoid saying ‘reality’) can become badly twisted when it is not clear what is not known.
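For readers unfamiliar with Bertrand’s paradox: asking for the probability that a “random chord” of a circle is longer than the side of the inscribed equilateral triangle gives different answers depending on how the chord is randomized. A quick Monte Carlo sketch of the three classic randomization methods (the function names are just illustrative labels):

```python
import math
import random

random.seed(0)
SIDE = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle

def mc(sample_chord, n=200_000):
    """Monte Carlo estimate of P(chord length > sqrt(3))."""
    return sum(sample_chord() > SIDE for _ in range(n)) / n

def endpoints():
    # Two uniformly random endpoints on the circle  ->  P = 1/3.
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(t1 - t2) / 2)

def radial():
    # Uniformly random distance of the chord from the center  ->  P = 1/2.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def midpoint():
    # Chord midpoint uniform in the disk (rejection sampling)  ->  P = 1/4.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

p1, p2, p3 = mc(endpoints), mc(radial), mc(midpoint)
print(p1, p2, p3)  # three different answers: ~1/3, ~1/2, ~1/4
```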

Peter Shor,

You wrote:

Does anyone ever teach Intro QM without implicitly using the Copenhagen interpretation? Or, to put it the other way around, does anyone teach Intro QM using MWI or the Bohm model?

To connect with your own work, it does seem to me that MWI is the “natural” way to think about quantum computing. Not that I think MWI is true: I think it has two fatal flaws — the “preferred basis” problem that Peter Woit has mentioned and the probability-measure problem.

Do you yourself see any affinity between MWI and quantum computing?

Dave Miller

@Paddy

Here is the first edition of Dirac for you to check: https://archive.org/details/in.ernet.dli.2015.177580

All,

Commenter atreat wants to argue with Tim Maudlin, but also points to a 319-comment thread at Scott Aaronson’s blog which includes (besides a lot of other interesting comments) extensive discussion with Maudlin about these topics. I encourage those interested in this argument to visit

https://www.scottaaronson.com/blog/?p=3628

and not to try and restart it here.

To me, the more interesting question is not about whether theory is complete, but… what experiment will displace or extend QFT as the underpinning of all microscopic theory? David Mermin sometimes pointed out that the fact we still use QFT with no new constants of nature was a failing of experimental high energy physics. I tend to agree with him, but still… increasing energy is still an enormously attractive route to finding real chinks in QFT. Also, delayed choice types of measurements are attractive. Unattractive to me… characterizing experiment as a servant of whatever trend is popular in the theory community… strings, multiverses, etc.

Paddy,

It’s certainly true that the idea that a quantum system ‘jumps’ into some eigenstate as the result of an observation or measurement is implicit in the early history of quantum mechanics, arguably as far back as Bohr’s 1913 atomic theory. Despite statements by Heisenberg and Dirac implying such ‘jumps’ as part of the measurement process, the reason we know this as ‘von Neumann’s theory of measurement’ is because he was the first to put it forward as an axiom of the theory, the collapse or projection being a statistical ‘process of the first kind’ vs. the unitary evolution of the wavefunction as a causal ‘process of the second kind’. I tend to trust Max Jammer on this – see his ‘Philosophy of Quantum Mechanics’, p. 474. Incidentally, von Neumann treated the apparatus as a quantum system, so departed from the Copenhagen interpretation’s ‘shifty split’ between quantum/classical domains.

Peter,

I sense the discussion of your review of Adam Becker’s book is now pretty well exhausted, but if you will allow I’d like to make a couple of final observations. I fully understand why you like to keep the discussion focused on what you consider to be meaningful questions – as posed here by Lee Smolin and to an extent by Tim Maudlin. I don’t want to make this seem too black-and-white but, generally, if you don’t think the wavefunction is ‘real’ then you’re likely to be satisfied that quantum mechanics is complete (Copenhagen, Rovelli, consistent histories, QBism). But if you prefer to think that the wavefunction is physically real (Einstein, Schrodinger, Bell, Leggett) then obviously something is missing – quantum mechanics is incomplete and the task is to find ways to reinterpret or extend the theory to explain away the mystery, and hopefully deepen our understanding of nature at the same time. Although the MWI in principle adds nothing to the QM formalism, I’d argue that it ‘completes’ QM by adding an infinite or near-infinite number of parallel universes. But all this comes back to a judgement about the ontological status of the wavefunction and hence a choice of philosophical position.

I don’t think any kind of ‘new mathematics’ will help to resolve these questions. In fact, my studies of the historical development of QM suggest instead a triumph of physical intuition over mathematical rigour and consistency, which is why von Neumann felt the need to step in and fix things in 1932. My money is therefore on new physics, as and when this might become available, perhaps as part of the search for evidence to support a quantum theory of gravity.

Can I leave you with a last thought? In the famous experiments of Alain Aspect and his colleagues in the early 80s, they prepared an entangled two-photon state using cascade emission from excited Ca atoms. We write this as a linear superposition of left and right circularly polarised photons, based on our experience with this kind of system. The apparatus then measures correlations between horizontally and vertically polarised photon pairs, detected within a short time window. So we *re-write the wavefunction in terms of the measurement eigenstates*, and we use the modulus-squares of the projection amplitudes to tell us what to expect. Now I can’t look at this procedure without getting a very bad feeling that all we’re really doing here is mathematically *coding* our knowledge in a way that gives us correct predictions (which we then confirm through experience). I really don’t like it, but I confess I’m very worried.
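The re-writing step can be made concrete numerically. The sketch below uses one common sign convention for circular polarization and an illustrative entangled state written in the circular basis (not the exact state of the Aspect experiment): projecting onto the linear measurement eigenstates and taking modulus-squares gives the Born-rule predictions.

```python
import numpy as np

# One common convention: |L> = (|H> + i|V>)/sqrt(2), |R> = (|H> - i|V>)/sqrt(2).
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
L = (H + 1j * V) / np.sqrt(2)
R = (H - 1j * V) / np.sqrt(2)

# A two-photon state prepared in the circular basis: |psi> = (|LL> + |RR>)/sqrt(2).
psi = (np.kron(L, L) + np.kron(R, R)) / np.sqrt(2)

# "Re-write in the measurement eigenstates": project onto |HH>, |HV>, |VH>, |VV>
# and take modulus-squares -- the probabilities the apparatus actually checks.
probs = {
    lbl: abs(np.vdot(np.kron(e1, e2), psi)) ** 2
    for lbl, e1, e2 in [("HH", H, H), ("HV", H, V), ("VH", V, H), ("VV", V, V)]
}
print(probs)  # HH and VV each ~0.5, HV and VH ~0: perfect linear correlation
```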

Jim,

My semi-joking threat to suppress ontology is due to the fact that I seriously don’t know what you or others mean when they say something is/is not “real”, or “physically real”. I have the same problem with Lee Smolin when he writes about time being “real/not real”. Hanging one’s treatment of a complex issue on this highly ambiguous four-letter word seems to me to just obscure issues (I started to write “real issues”…).

About MWI, I’m on board with the “everything evolves according to Schrodinger’s eq.” part, not so much with the “add an infinite number of parallel universes” part.

While I think new deep mathematical ideas may inspire progress on unification, in the case of the measurement problem, the argument from mathematical depth and beauty goes the other way. The basic QM formalism already is based on very deep and powerful mathematics. Here my problem is with things like Bohmian mechanics and dynamical collapse models which deface this beauty by adding ugly complexity not forced by experiment. The measurement problem to me is essentially the problem of understanding how the effective classical theory emerges from the fundamental quantum formalism. Mathematics may or may not be helpful here, I don’t know. The confusions I see in most discussions of QM interpretations aren’t ones that mathematics will resolve, they need to be resolved by careful examination of the physics one is discussing.

Some people seem to expect that the new physics needed to get quantum gravity will resolve the measurement problem, but I don’t see it. The measurement problem is there when you do a very low energy experiment, whereas the quantum gravity problem is about not understanding Planck-scale physics. I just don’t see how the quantization of gravitational degrees of freedom has anything at all to do with the measurement problem.

About the photon experiments. I don’t think the mysterious thing is the “re-write the wavefunction in terms of the measurement eigenstates” part, that is very simple and completely understood. The mystery is the “The apparatus then measures correlations between horizontally and vertically polarised photon pairs” part, where a macroscopic apparatus described in classical terms is coming into play.

Dear Peter,

I agree the statement “Time is real” is misleading and I regret using it. The more precise claim is that “time is fundamental” in the sense that there is no deeper formulation of the laws of physics that does not involve the evolution of the present state or configuration, by the continual creation of novel events out of present events, i.e. at no level is time emergent from a timeless formulation of fundamental physics. Were the Wheeler-DeWitt equation correct, this would not be the case.

I view my papers on the real ensemble formulation and relational hidden variables to be modest attempts to develop and explore new hypotheses to complete quantum mechanics.

To answer Peter, these are inspired by developments in quantum gravity, particularly the hypotheses that if time is not emergent, space may be emergent from a network of dynamically evolving relationships, as it is in several approaches to quantum gravity such as spin foam models, causal dynamical triangulations and causal set models.

But if space is emergent, so is locality, and hence so must be non-locality. We may therefore aim to show the origin of Bell non-locality from disorderings of locality that follow from its emergent nature.

This may be a way to realize the idea, which is currently popular, but was first stated by Penrose in his papers on spin networks, that the geometry of space and quantum entanglement may have a common origin.

Peter,

I honestly think you’ve answered your own question. You say “I don’t think the mysterious thing is the re-write the wavefunction in terms of the measurement eigenstates part – that is very simple and completely understood”. So what is it that exists? A quantum state made of circularly polarised states? A quantum state made of linearly polarised states? Or both, but involving a process that gets from one to the other that doesn’t appear anywhere in the formalism except as a postulate, based on a mechanism we can’t fathom? Or are we just using these ‘states’ as a convenient way of connecting one physical situation with another?

I noticed an item in the latest quant-ph update that may be of interest to the folks participating here: a critical evaluation of Wallace’s attempt to get meaningful probabilities out of (neo-)Everettian QM.

Jim,

The question “what exists?” is just as ill-defined as “what is real?”. The fundamental issue here is that I think I understand completely what a “quantum state” is (any one of the mathematically isomorphic descriptions), but I don’t understand completely what a “physical situation” is (it may involve some system under study, some macroscopic apparatus, my consciousness, and how they are interacting). Everett tells me that a “physical situation” is also just a quantum state, and I’m willing to believe him since I don’t have evidence against this, but that doesn’t change my lack of full understanding of the “physical situation” I’m somehow part of.

Another interesting article on the history of MWI by the same author here:

https://blogs.scientificamerican.com/observations/the-difficult-birth-of-the-many-worlds-interpretation-of-quantum-mechanics/

You write ‘The mystery is the “The apparatus then measures correlations between horizontally and vertically polarised photon pairs” part, where a macroscopic apparatus described in classical terms is coming into play.’

I fully agree with you on this. The problem is explaining “how” this happens. I don’t mean in a philosophical sense, I mean describing it using, somehow, just plain quantum mechanics of increasingly complicated sets of interactions that end up in a macroscopic apparatus that everybody agrees is classical. This is doable. It’s where the payoff is eventually going to come. The hard part is explaining it.

By increasingly complicated, I mean in terms of, say, measuring the energy of a high energy photon slamming into an LHC calorimeter via a cascade of events. Each event is small in quantum energy terms; collectively they add up to the answer, including both quantum uncertainty and statistical uncertainty. It’s not called a calorimeter for nothing … that says it’s classical.

On blogs like this one my viewpoint usually gets moderated out.

When you are stuck in a philosophical morass, the solution, if one exists, is likely to come from experiment. There is a growing experimental community working on building quantum computers who are worrying about wave function collapse as a very practical matter. It will be interesting to see which interpretation that community gravitates to.