What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics

For a zeroth slogan about quantum mechanics, I’ve chosen

What’s hard to understand is classical mechanics, not quantum mechanics.

The slogan is labeled by zero because it’s preliminary to what I’ve been writing about here. It explains why I don’t intend to cover part of the standard story about quantum mechanics: it’s too hard, too poorly understood, and I’m not expert enough to do it justice.

While there’s a simple, beautiful and very deep mathematical structure for fundamental quantum mechanics, things get much more complicated when you try to use it to extract predictions for experiments involving macroscopic components. This is the subject of “measurement theory”, which gives probabilistic predictions about observables, with its basic statement being the “Born rule”. This says that what one can observe are eigenvalues of certain operators, with the probability of each observation proportional to the norm-squared of the state’s component along the corresponding eigenvector. How this behavior of a macroscopic experimental apparatus described in classical terms emerges from the fundamental QM formalism is what is hard to understand, not the fundamental formalism itself. This is what the slogan is trying to point to.
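In symbols, and with the same simplification (all eigenvalues distinct), the rule reads:

```latex
% Born rule, non-degenerate case: observable O with orthonormal
% eigenvectors |o_i>, distinct eigenvalues o_i, normalized state |psi>.
\[
  O\,|o_i\rangle = o_i\,|o_i\rangle ,
  \qquad
  P(o_i) \;=\; \bigl|\langle o_i \,|\, \psi \rangle\bigr|^2 .
\]
```

For a degenerate eigenvalue one instead sums these terms over an orthonormal basis of the corresponding eigenspace.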

When I first started studying quantum mechanics, I spent a lot of time reading about the “philosophy” of QM and about interpretational issues (e.g., what happens to Schrödinger’s famous cat?). After many years of this I finally lost interest, because these discussions never seemed to go anywhere, getting lost in a haze of convoluted attempts to relate the QM formalism to natural language and to our intuitions about everyday physics. To this day this is an active field, but one that to a large extent has been left by the wayside as a whole new area of physics has emerged that grapples with the real issues in a more concrete way.

The problem though is that I’m just knowledgeable enough about this area of physics to know that I’ve far too little expertise to do it justice. Instead of attempting this, let me just provide a random list of things to read that give some idea of what I’m trying to refer to.

Other suggestions of where to learn more from those better informed than me are welcome.

I don’t think the point of view I take about this is at all unusual, maybe it’s even the mainstream view in physics. The state of a system is given by a vector in Hilbert space, evolving according to the Schrödinger equation. This remains true when you consider the system you are observing together with the experimental apparatus. But a typical macroscopic experimental apparatus is an absurdly complicated quantum system, making the analysis of what happens and how classical behavior emerges a very difficult problem. As our technology improves and we have better and better ways to create larger coherent quantum systems, I suspect that thinking about such systems will lead to better insight into the old “interpretational” issues.

From what I can see of this though, the question of the fundamental mathematical formalism of QM decouples from these hard issues. I know others see things quite differently, but I personally just don’t see evidence that the problem of better understanding the fundamental formalism (how do you quantize the metric degrees of freedom? how do these unify with the degrees of freedom of the SM?) has anything to do with the difficult issues described above. So, for now I’m trying to understand the simple problem, and leave the hard one to others.

There’s a relevant conference going on this week.

I’ve been pointed to another article that addresses in detail the issues referred to here: the recent Physics Reports article Understanding quantum measurement from the solution of dynamical models, by Allahverdyan, Balian and Nieuwenhuizen.

This entry was posted in Favorite Old Posts, Quantum Mechanics.

46 Responses to What’s Hard to Understand is Classical Mechanics, Not Quantum Mechanics

  1. Bernhard says:

    This is nice. Your slogan is more or less the same as my first supervisor’s when I was still an undergraduate. He was an expert in dynamical systems and just couldn’t accept that people wanted to “understand quantum mechanics without understanding classical mechanics” – which, according to him, was much harder. The “measurement problem” was his favorite subject.

    And finally: “But a typical macroscopic experimental apparatus is an absurdly complicated quantum system, making the analysis of what happens and how classical behavior emerges a very difficult problem.” was the main reason I at first didn’t want to get involved with experimental physics when I was young. How the heck did people believe what they were measuring…? 🙂

  2. Peter Morgan says:

    “But a typical macroscopic experimental apparatus is an absurdly complicated quantum system, making the analysis of what happens and how classical behavior emerges a very difficult problem.” When we enter into experimental contexts in which QFT is necessary for an accurate description of the statistics of recorded measurement events, the complexity becomes even more intractable. My related (admittedly idiosyncratic) worry is that for sufficiently detailed classical theories we would expect nonlinearity whenever we consider elaborate waveforms, whereas the Wightman axioms (and practice more generally) require linearity when we consider elaborate waveforms (because quantum fields are required to be operator-valued distributions, even though such a structure is not necessary for us to be able to construct a Hilbert space).

    One doesn’t need to understand classical trajectories to understand that quantum theory describes the statistics of observed and (computer-)recorded discrete measurement events, which may sometimes be obviously lined up but more generally are separated by far enough and mixed-up enough that we can’t be sure that they’re on trajectories at all (but nonetheless there are statistics of events).

  3. Matt Leifer says:

    “About 4000 articles a year are appearing on the arXiv at quant-ph.”

    Come on dude! The vast, vast majority of those articles have nothing to do with the problem you are talking about. The largest portion is about quantum computing, quantum information, and the like; there is a portion on the sorts of approaches to quantum foundations that you have indicated you have no interest in; and then there is a bunch of people writing papers about how to solve the Schrödinger equation and related differential equations, i.e. basically doing traditional applied-math sorts of things. That leaves probably not more than a dozen or so papers per year that you would actually judge relevant.

    More broadly though, I disagree with you, as you might imagine. The whole literature on decoherence mostly ignores the problem of ontology, i.e. what actually exists. It is one thing to get out of the formalism something that looks approximately like the Liouville or Hamilton equations, but quite another to be able to argue that this means that there are things that actually exist in the world that approximately look like classical particles and fields obeying these equations.

    Further, there is the whole problem of interpreting experiments that are thoroughly within the quantum world, e.g. things like nonlocality and contextuality. You are not going to be able to understand those things just by looking at a classical limit. And we are actually making both conceptual and mathematical progress in studying these things. It is not just interminable debates that go round and round in circles, as it may have been when you last looked.

    Even disregarding all of this, you are just flat out wrong that understanding the emergence of classicality has nothing to do with the structure of the theory. The same ideas about group representations and so forth that go into understanding how to quantize a classical theory show up in a dual role in decoherence and related subjects. Ignoring one whole side of this seems a bit strange. For the category theorists out there, it is like studying a functor without studying the adjoint functor.

  4. Tim Campion says:

    A couple of nitpicks: In this post, when you describe the Born rule, I think you mean to say that the probability of measuring observable $O$ of a state $\psi$ to have value $\lambda$ is given by the norm-squared of the projection of $|\psi\rangle$ onto the eigenspace of $O$ with eigenvalue $\lambda$.
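    In symbols (my own gloss of this general statement, for a self-adjoint $O$ with spectral projectors $P_\lambda$ and a normalized state $|\psi\rangle$):

```latex
% General (possibly degenerate) form of the Born rule:
% P_lambda is the orthogonal projector onto the eigenspace of O
% with eigenvalue lambda.
\[
  P(\lambda) \;=\; \bigl\| P_\lambda \,|\psi\rangle \bigr\|^2
             \;=\; \langle \psi \,|\, P_\lambda \,|\, \psi \rangle ,
  \qquad
  O \;=\; \sum_\lambda \lambda \, P_\lambda .
\]
```

    When every eigenvalue is distinct, each $P_\lambda$ projects onto a single eigenvector and this reduces to the statement in the post.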

    Also, I started flipping through your notes and I noticed that on the bottom of page 12, your definitions of irreducible and indecomposable representations are flipped around from what I learned in Fulton & Harris, or what you can find on Wikipedia.

  5. Peter Woit says:

    I think you’re missing the repeated parts of this posting where I explain that I’m well aware I know very little about what is going on in “quantum foundations” these days. The period I was referring to when I did make an attempt to follow this was back in the 1970s, quite likely before you were born. I was trying to be polite by not giving examples of what I meant by people going over the same kind of thing being discussed back then. To be slightly more specific I’ll just say I was thinking of several physicists and philosophers with a high public profile who go on about these issues in this way. The positive reference to the huge quant-ph literature was meant to indicate that there seem to me to be a wide variety of different sorts of interesting things going on with some relation to the “interpretation” question, too many for me to hope to follow.

    I repeat that I honestly would like to hear good suggestions for things to read to get a grasp of what people are working on, e.g. review/expository pieces aimed at surveying the subject for non-experts.

    We all have our own intellectual prejudices, one of mine is that I’ve never gotten any insight into anything from a discussion about ontology, but maybe I just haven’t read the right discussion.

    I don’t doubt that ideas about group representations and other aspects of the fundamental mathematical structures of the subject I’ve been writing about show up for instance in the analysis of decoherence. I just haven’t seen evidence though that thinking about decoherence feeds back into the kind of question I mentioned about fundamental structure. Again, quite possibly this is just because I don’t know that much about this, and if so I’d love to be enlightened.

  6. Peter Woit says:

    Thanks. You’re right, here in the blog posting what I wrote assumes all eigenvalues distinct.

    You’re also right about the notes. I was trying to keep things as simple as possible dealing with the “reducible but not decomposable” phenomenon as something beginners were encouraged to skip, but I messed that up. Will rewrite, maybe right now…

  7. ScentOfViolets says:

    We all have our own intellectual prejudices, one of mine is that I’ve never gotten any insight into anything from a discussion about ontology, but maybe I just haven’t read the right discussion.

    And I thought it was just me. Maybe ‘Shut up and calculate’ is subsumed in ‘I am now very bored with these sorts of discussions’. Really. In fact, let’s turn this around: has anybody ever gotten much in the way of insight from these types of discussions? Personally, I know a few people who have been motivated by them, but when pressed, they uniformly admit nothing much ever came out of all the jaw-jaw.

  8. Chris W. says:

    I think Peter Morgan is touching on my nagging worry, which is that properly understanding the complexities of the system being observed, in conjunction with the experimental apparatus, is, on the face of it, essential to making the observations and to properly drawing conclusions from them about the theory being tested.

    To put it another way, we don’t want quantum mechanics to be just a formalism. We want it to be the basis of an empirical science, and grappling with the absurd complexities of actual experimental apparatus as quantum systems would seem to be an unavoidable part of this. Of course, generations of physicists have been somehow negotiating these complexities day in and day out with a fair degree of self-assurance. The question is how they can manage to do this without kidding themselves.

    Maybe the answer is that they never know for sure that they aren’t kidding themselves. All they can do is think up and carry out various kinds of consistency and sanity checks, and design experiments that facilitate those checks. The basis of their confidence is the accumulated history of all that work. So the best policy is to attend to the specifics of such techniques and solutions, and avoid getting tangled up in “ultimate” questions that lead nowhere.

    Of course, this kind of practical response is great until it gets deeply stuck…

  9. Peter Woit says:

    “Shut up and calculate” is not my perspective, which is all about trying to understand why certain kinds of calculations work. The point of the posting was to identify a specific aspect of QM calculations that works for reasons I don’t understand (the Born rule). Understanding this seems to be a different sort of problem than that of those parts of theory where calculations work for reasons that I do think have a compelling explanation.

    The comments in the posting were intended to indicate what seem to me the reasons this is hard and I don’t understand it. I’m perfectly willing to believe others do though and could point me to something enlightening.

  10. abbyyorker says:

    I thought that Everett’s long thesis was worth reading, even if you do not like many worlds.

  11. Als says:

    “I repeat that I honestly would like to hear good suggestions for things to read to get a grasp of what people are working on, e.g. review/expository pieces aimed at surveying the subject for non-experts.”

    If you like topos theory, you may want to read this paper explaining Isham’s research program: http://arxiv.org/pdf/0803.0417v1.pdf

  12. Justin says:


    Given your reaction to the comments on ‘t Hooft’s foundations of quantum thread, I think this thread was a bit dangerous. Hopefully it doesn’t turn into that thread all over again. I’ve browsed and known people who are interested in these foundational issues. I find a lot of them (not all by any means) dogmatically defend Einstein’s views on quantum mechanics, and therefore these discussions always lead to advertising hidden variables, most notoriously Bohmian mechanics. If you’re actually interested in getting into this stuff, I think you will have to work hard to distinguish serious attempts to understand what is going on versus countless papers containing large quantities of pseudoscience about ‘photons telepathically mind-reading backwards in time’ (that’s almost an exact quote from one person I know working on this stuff). I wish you the best of luck.

  13. Bee says:

    I’m not sure your distinction is between classical and quantum; it seems to be between small and large systems. It is typically the intermediate regime that is difficult: the quantum-classical transition, the microscopic-macroscopic transition, the regime where N is neither small nor infinite.

    I change my mind on this every couple of weeks, but every once in a while I find it quite possible that our failure to find a theory of quantum gravity stems from us having missed something in the process of quantization, that maybe we’re just doing it wrong and at very high energies it works differently. Then again I think the string is the thing and then again I think asymptotically safe gravity is it, and then I start it all over again. So recently I’ve been doing some reading on statistical mechanics and it’s quite interesting how much progress has been made there in the ‘difficult’ regime of finite N, fluctuations and non-equilibrium and quantum things. Who knows, maybe that’s where the answer will come from.

  14. Ralph B says:

    I was reading Griffiths’ “Consistent Quantum Theory” when Wilczek pronounced it “bracing.” I was delighted!

  15. Francois says:

    Dear Peter,

    Like Ralph B, I also liked Griffiths’ book, and along the same line of thought, Roland Omnès’ review in Reviews of Modern Physics is quite clear. I recently tackled Bernard d’Espagnat’s book (Traité de Physique et de Philosophie… I gather from your postings that you read French?). It has two parts, one about physics (quantum physics really), one about philosophy of science. He takes quite a philosophical approach throughout the whole book, but it is written in a pedagogical manner accessible to the non-philosopher.

  16. Jasper says:

    I’ve been studying the measurement problem for the past year, and I’m currently satisfied with decoherence as a solution to the preferred basis problem and the “unobservability” of superpositions: how a measurement can only be performed in certain bases, and why superpositions disappear in the macro setting. This is because of the form of the possible interactions in physics. We cannot measure in the basis corresponding to non-local superpositions because all (or most) interactions are local, i.e., the environment becomes entangled with the local parts of a superposition. In the end, the result is that predictions for the system alone, disregarding the environment, are made using a density matrix that is diagonal in the basis defined by the system-environment interactions.
    Most importantly, the timescale on which this occurs can be calculated given the system and its interactions with the environment.

    The unsolved problem of single well-defined outcomes: QM tells us the probabilities for the possible outcomes, but why does an experiment give a single outcome?
    In my opinion this cannot be solved in QM, since the system becomes more and more entangled with the environment and the measurement apparatus and all the options within this whole state never disappear because of unitary evolution in QM. To me this is an interpretational problem of QM.

    To me, it comes from wanting QM to describe single runs, which it seems it does not. Most importantly one has to think of what a pure state means. Experimentally these states are only defined by a class of preparation procedures and can only be obtained by first doing rigorous statistical testing of these procedures. Simply put, I think pure states should be regarded as representing an idealized(!) state of a system with the most definite property obtainable in quantum theory for some set of observables.

    The assumptions that are necessary to identify a single quantum system with a pure state:

    1. During the verification of the preparation procedure, any possible deviation in the final measurement statistics due to the finite ensemble is neglected. Also, the final measurement statistics depend on additional assumptions about the quality of the measurement apparatus, such as the efficiency, the number of false detections (dark counts), and the dead time.
    2. The final selective measurement during the verification of the preparation procedure is assumed to be able to distinguish between all possible (non-degenerate) results perfectly.
    3. The fundamental and experimental uncertainties in the preparation procedure are neglected.
    4. The entanglement of the system with the preparation apparatus is neglected and the state of the system is characterized solely by the outcome indicated by the final measurement device instead of the full entangled state between preparation apparatus and system.
    5. The produced state is not affected by the presence or absence of the final measurement apparatus.
    6. The ubiquitous interactions between the system and the environment, which can also include the system’s internal degrees of freedom, are neglected.

    If the pure state is an idealization, it becomes quite difficult to talk about single runs, and it seems one can only talk about ensembles.

    In my opinion, QM is consistent with the view that although there are uncertainties about individual events, the statistical results of large numbers of events are robust with respect to small changes in the conditions of the experimental setting.

    However, this raises the question of whether it is possible to devise a way to find out more about individual events. To me this seems almost impossible.

  17. Xu Jia says:

    John Bell’s work. We just don’t have people with his clarity of thinking nowadays. I think if he had lived longer and received his Nobel Prize in time, many things would be different. In his words, there are simply no such things as “apparatus” and a “boundary between the classical and quantum world”.

  18. Matt Leifer says:


    OK then, let me recommend at least one thing. I think you might like this long review article by Klaas Landsman http://arxiv.org/abs/quant-ph/0506082 The thing I like about it is that he emphasises the dual role of quantization and the emergence of classicality.

  19. Peter Woit says:

    Thanks, that’s great. You’re addressing exactly the sort of question that I’ve been wondering about and have had trouble finding a clear discussion of. If you can recommend where to read more about these issues that would be very helpful.
    As a crude summary though, do you think it’s fair to say that the source of supposed probabilistic nature of QM is that we only have probabilistic information about our experimental apparatus? Or does this summary miss something very important?

  20. Too Distinguished says:

    I second Xu Jia’s comment that John Bell had the clearest perspective on these issues. And I agree with what some others have said, that complicated-sounding questions about the quantum-classical transition and measurement all reduce to the basic question of what, exactly, exists. The mathematical formalism of quantum mechanics works great, but when it comes to describing reality, it seems obvious that there have to be non-local hidden variables.

  21. Peter Woit says:

    Thanks a lot! I look forward to reading the Landsman review carefully, he’s great in general on the issue of the mathematical structure of the theory, I’d forgotten that he’s written also on this topic.


    Thanks for the reference!

  22. Peter Woit says:

    Justin has a good point that these discussions can quickly degenerate into unenlightening discussions of hidden variables. The ‘t Hooft posting was about a “hidden variable” theory, this isn’t, so no more of this please.

  23. nick herbert says:

    We house-broke quantum reality
    Taught Schrödinger’s Cat to purr
    Now daily life’s more uncanny
    Than atoms ever were.

  24. manfred Requardt says:

    By the way, there does exist an alternative to decoherence by environment. See for example arXiv:1009.1220. Instead of the environment, a restricted set of commuting macrovariables is introduced. It is like coarse graining and is based on a beautiful old paper by von Neumann.

  25. Curious Mayhem says:

    This is certainly a compelling way to look at quantum mechanics, instead of the convoluted path that starts with classical mechanics. The problem has always been what von Neumann called R, the reduction upon measurement, not U, the simple unitary transformation in the Hilbert space.

    And the right approach to come from the other direction is the “decoherence” approach — it’s completely physical and can, in part, be tested in a laboratory. That sweeps away the stifling metaphysical cud-chewing of “interpretations” of QM, like Copenhagen, many worlds, etc.

  26. vmarko says:


    “I’ve been studying the measurement problem for the past year, and I’m currently satisfied with decoherence as a solution to the preferred basis problem”

    I’d second Peter’s request — can you point us to a good review which discusses this? No matter how various people phrase decoherence as a solution to the preferred basis problem, I always fail to understand it, and always find holes in their arguments.

    “This is because of the form of possible interactions in physics. We cannot measure in the basis corresponding to non-local superpositions because all(/most) interactions are local”

    And this is one of the things where I disagree most. People must not ignore quantum gravity — locality of interactions is based on the assumption of a classical spacetime manifold, i.e. that gravity is not quantized. As soon as you try to quantize gravity, you must allow for quantum superpositions of different spacetimes, and as a consequence locality goes out the window. And then the pointer basis problem comes back at you in its full glory, all over again.

    So whatever you do to fix the pointer basis, *do not* base your argument on locality!

    I would really, really like to see a serious review which systematically discusses how a pointer basis comes about, especially if “induced” by decoherence. And then I would probably have additional questions for the author. Any references, please?

    Best, 🙂

  27. James Cooper says:

    These are targeted at a layman, and the site does have a crackpotish design, but I found them very helpful in understanding decoherence and explaining why the world must behave quantum mechanically

  28. Jasper says:

    Before I respond to some of the questions, I just want to say I have the same problem as Sabine (Bee): “I change my mind on this every couple of weeks”.

    It’s fun to see you recommend an article by my supervisor 😉 I’m a postdoc working with Klaas Landsman.


    “As a crude summary though, do you think it’s fair to say that the source of supposed probabilistic nature of QM is that we only have probabilistic information about our experimental apparatus?”
    I think that’s about right, but I’d rather say that if you wish to define a state of a single quantum system, we first need to obtain probabilistic information about the preparation procedure and about the detection procedure. It is the robustness of this probabilistic information against known and unknown “small” changes in the procedures that lets us say anything about quantum phenomena. It is hard to say what the exact conditions are in each run, and this is often not mentioned. In most considerations in QM it is common to jump automatically to pure states of single quantum systems, without considering the preparation and detection procedures, but does this description really reflect nature?

    As an example of some of the above: experimenters in quantum optics mostly take the polarizer to be in a certain direction, and after all photons in a finite ensemble pass the polarizer, they are verified by a subsequent detection procedure to always have a polarization in the direction of the polarizer. We can thus use these photons in other experiments and simply say they are in a pure polarization state after passing the polarizer. However, is (or was) the polarizer always truly in the specified definite direction? Is the polarization detection procedure always along this very precise axis? It would seem better to describe the photon as entangled with the polarizer, which has some spread around the mostly known direction; but this means that when restricting the state to the photon we should actually describe it by a density matrix, not a pure state.

    Also, Peter, if you would like to have a discussion on these topics, please feel free to contact me!

    Could you help me understand your problems with decoherence as a solution to the basis problem? I’m always interested in learning the views of others, especially to see if I missed something important..
    I think your theoretical argument about quantum gravity is valid, although I believe I can for the purpose of understanding the experimentally observed nature of reality ignore it for the moment, as the decoherence solution seems natural and satisfactory within this context. In my foundational research I try to stay as close to experiment as possible to not lead me astray. I do not know at the moment what is physical and observable about “superpositions of space-time” and what questions I am allowed to ask or not, and thus I simply try to solve the problems at hand in the best way possible. This is my take on research, you cannot solve all problems at once, simply solve a problem in one arena then try later to expand to the other. By using only observed effects of QM, namely entanglement, I do not see what is wrong with using the fact that the interaction between electrons and the EM-field is at a single (“local”) point to understand that if I have a superposition of the electron |x1>+|x2> that after coupling with the environment it will be |x1>|E1>+|x2>|E2>. In the end this will mean that if ~0 coherence disappears on the local level of the electron. The interaction “simply” dictates how the environment couples to the system, if for example it coupled differently and would be of the form (|x1>+|x2>)|E12>, it would not lead to loss of coherence.. As you now see, in this case the interaction determines the preferred basis, it is “localized” in position space, and not in, for example, the basis |x1>+|x2>.

    As for references:
    Unfortunately, there are no clear explanations, but the references you have are the best starting point.

    Schlosshauer’s book is very good and has had considerable influence on my thoughts. Also Schlosshauer’s “Elegance and enigma”, which is a book with interviews of researchers of all corners of the foundational arena and gives a clear overview of how people can think about QM.
    Zurek is also good, but his vision of the capabilities of decoherence might often be too far extrapolated for my taste.
    The second chapter of the book by Audretsch, “Entangled Systems: New Directions in Quantum Physics”, just for the clear explanation of some of the hidden assumptions people sometimes skip when interpreting experiments.

    I recommend the following article by De Raedt, Katsnelson, Michielsen:
    This article was also important for my view of QM. It overlaps with my overall view of what experiments say about nature and what can be inferred from them. However, as with almost any article, I don’t fully agree with it.

    Allahverdyan, Balian, Nieuwenhuizen for the references and the discussion: http://www.sciencedirect.com/science/article/pii/S0370157312004085
    Do not believe any claims on their part about “solving” the measurement problem, but their summary, discussions and references on the quantities relevant to this problem are quite clear. Simply skip most of the main text, unless you want the nitty-gritty details of their specific (but semi-realistic) model describing the entanglement of a certain measuring device with a system, the selection of a basis of detection, and the loss of coherence of the measuring device and of the system in this basis. They do not solve the single-outcome problem, but merely exhibit the decoherence solution to the superposition and basis problems.

  29. manfred Requardt says:

    Certainly entanglement is an important concept in QM. But in my view decoherence by environment is similar to friction in classical mechanics, i.e. it is of a rather contingent nature. I doubt that such an unsystematic interaction with the environment is really the solution to the quantum measurement process.
    A cornerstone of the decoherence philosophy is the so-called basis ambiguity (for macro states), which is then claimed to be resolved via the selection of a pointer basis by decoherence.
    In arXiv:1009.1220 we showed that the construction of a subalgebra of macro observables within the larger algebra of microscopic observables does exactly the same job. We then get mixtures in the classical world living above the microscopic quantum world, in which the corresponding many-body state is still pure.

  30. vmarko says:


    “Could you help me understand your problems with decoherence as a solution to the basis problem?”

    I’d be glad to discuss this with you, but I am afraid that Peter’s blog might not be the best way to do it. How about we take the discussion privately, via e-mail? Peter, can you connect us?

    The thing with quantum gravity revolves around the issue of the fundamentality of QM. If nature really obeys QM at the fundamental level, then QM should be capable of describing even cosmology. I just don’t see how the measurement problem can be solved by decoherence in that setup (no locality and no environment). And if QM is not fundamental but only an effective theory, then it is not really relevant, and this calls for the construction of some other, more fundamental theory.

    Btw, thanks for the references! I haven’t had a chance to look them up yet, but I certainly will soon.

    Best, 🙂

  31. Jim says:

    Nice post. I think part of the problem is that these issues of interpretation force us to make a simple choice. We either ‘shut up and calculate’ or we acknowledge that something has gone wrong somewhere or something’s missing. Adopting the second position involves indulging our inner metaphysician and speculating about what we think reality should or might be like. Unfortunately, most papers on the foundations of quantum theory never get back across the line – they’re stuck in a metaphysical no-man’s land with no prospect of saying anything that might eventually lead to some kind of empirical test. The few that have succeeded in getting back across the line – Bohm, Bell and Leggett among them – have inspired some awesome experiments which have deepened the mystery but which have also opened up new areas of research in quantum information.

    There’s actually been quite a bit of progress but the fundamental problems remain seemingly intractable: How should we interpret quantum probability? Quantum non-locality and the collapse of the wave function? ‘Spooky’ action-at-a-distance? All the different interpretational schemes are designed to circumvent some or all of these, including many worlds (which simply avoids them by invoking a multiplicity of different realities in which all possible measurement outcomes are realised).

    All these issues were well known and formed the basis of the famous Bohr–Einstein debate in the late 1920s. We’ve been living with these problems for nearly a century.

  32. Martin says:

    Re decoherence: something similar at http://arxiv.org/abs/cond-mat/0303290

    Unitary time evolution of the density matrix does not change its eigenvalues, yet such change is necessary for thermodynamics. It is similar with decoherence: the effect only appears once the description of the system is incomplete.
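    Martin’s first sentence can be checked directly: conjugating a density matrix by a unitary leaves its spectrum untouched. A minimal pure-Python sketch of the real 2×2 case (the populations 0.7/0.3 and the rotation angle are arbitrary illustrative values):

```python
import math

def eig2(h):
    # eigenvalues of a 2x2 real symmetric matrix [[a, b], [b, d]]
    a, b, d = h[0][0], h[0][1], h[1][1]
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    return sorted([(a + d - s) / 2, (a + d + s) / 2])

def conj(u, r):
    # U r U^T for 2x2 real matrices (orthogonal U, so this is unitary conjugation)
    ur = [[sum(u[i][k] * r[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[sum(ur[i][k] * u[j][k] for k in range(2)) for j in range(2)] for i in range(2)]

rho = [[0.7, 0.0], [0.0, 0.3]]               # density matrix, populations 0.7 / 0.3
theta = 0.9                                  # arbitrary rotation angle
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]
rho2 = conj(U, rho)                          # "time-evolved" density matrix
print(eig2(rho), eig2(rho2))                 # same spectrum both times
```

    Since the von Neumann entropy depends only on the spectrum, it too is constant under unitary evolution, so any apparent thermodynamic change has to come from an incomplete description.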

  33. Jan Reimers says:

    Two things have always bothered me about most discussions of this topic:
    1) Overemphasis on the term “measurement”, which implies that some sentient being is watching, and which then drags the conversation off in ridiculous directions. It is really just interactions generating a macroscopic effect that create the illusion of wave function collapse. This can happen in the lab, but it mostly happens everywhere else, including halfway between here and Andromeda, for example. Is there any reason we must call it the “measurement problem” rather than the “interaction problem”?

    2) Ignoring basis selection (einselection), as exemplified by the http://www.ipod.org.uk/reality/reality_decoherence.asp article. The density matrix becomes diagonal in a particular basis, but why is that basis selected? Local interactions select a basis that is localized in position space… how does that work, and is it a universal rule?


  34. James Cooper says:

    There’s actually been quite a bit of progress but the fundamental problems remain seemingly intractable: How should we interpret quantum probability? Quantum non-locality and the collapse of the wave function? ‘Spooky’ action-at-a-distance?

    Why would you say these problems are intractable? Of the last three, one could say the decoherence papers have answered them. Heisenberg himself understood much of it, in an admittedly vague way. There is no “collapse” of the wave function – it just decoheres. Once it does so (i.e. can no longer interfere with itself), that is what we call ‘reality’, which is just another word for the decoherence. Classical physics and classical intuition are an emergent phenomenon. Nature is not realist: your physical state is only an approximation and has no real meaning.
    As much as I hate to link to something Lubos wrote- the first 2 answers in this thread really sum up this way of thinking

    I think the first question (on probability) is the interesting one. Once we accept that nature is not deterministic (BTW, if it were, I couldn’t tell my fingers to type the keys they are typing!) and that quantum mechanics is complete, we can do away with Bohm and MWI and get back to the most interesting interpretation of all – consciousness: what is it and how does it work?

  35. Martin says:

    @Jan, point 2): this is specified by the observer. A system is typically described with a density matrix that corresponds to the energy of the interaction that yields the observation. E.g. at very low energies an atom can be described as a little ball without internal dynamics. At higher energies electronic transitions must be taken into account; at much higher energies the internal dynamics of the atomic nucleus starts to play a role, etc. This is why classical theories such as the Navier–Stokes equations work at low excitation energies (long wavelengths) even though they ignore the molecular microstructure of the system (which enters the theory through parameter values such as the viscosity).

  36. vmarko says:

    Folks, please stop telling everyone that decoherence solves the measurement problem. It doesn’t, until someone finds a solution to the preferred basis problem.

    Let me explain it a bit: suppose we are about to measure the spin of an electron, which is in the initial state |u>+|d> (I’ll be ignoring normalization factors of 1/sqrt(2) throughout). The “measurement” means to let it interact with the environment (i.e. the apparatus). After the interaction, we can write the state of the big system (electron plus environment) as |Psi> = |u>|Eu>+|d>|Ed>, where “Eu” stands for “environment has recorded the spin of the electron to be up”, and similarly for “Ed”.

    Now let me formulate the problem. I can introduce a different basis for the electron, |+> and |->, rotated by 45 degrees: |+>=|u>+|d>, |->=|u>-|d>, and similarly |E+> and |E-> for the environment. Then I can rewrite the final state as |Psi>=|+>|E+>+|->|E-> (it’s the same state as above; this is just a change of variables). The problem is now that in experiments we always find the apparatus in the states |Eu>,|Ed>, and never in |E+>,|E->. The former is somehow preferred over the latter by experiment, but not by theory (short of invoking the collapse postulate of the Copenhagen interpretation). This is called the “preferred basis problem” or “pointer basis problem”.

    I simply don’t see how decoherence can be invoked in any way to favor one basis over the other (and I am not alone in this). Until you can provide a clean resolution to the pointer basis problem, please stop endlessly repeating that decoherence solves the measurement problem. It doesn’t.
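    The rewriting in the two bases can be checked numerically. A pure-Python sketch, with a two-dimensional “environment” standing in for the full apparatus and real, equal amplitudes (the names and values are illustrative):

```python
import math

s = 1 / math.sqrt(2)
# amplitudes psi[system][environment] for |Psi> = (|u>|Eu> + |d>|Ed>)/sqrt(2)
psi = [[s, 0.0],
       [0.0, s]]

def rotate(p, B):
    # re-express the amplitudes with new basis B (rows = new basis vectors),
    # applied to both the system and the environment factor
    return [[sum(B[m][i] * B[n][e] * p[i][e] for i in range(2) for e in range(2))
             for n in range(2)] for m in range(2)]

def reduced(p):
    # trace out the environment: rho[i][j] = sum_e p[i][e] * p[j][e]
    return [[sum(p[i][e] * p[j][e] for e in range(2)) for j in range(2)] for i in range(2)]

B = [[s, s], [s, -s]]   # |+> = (|u>+|d>)/sqrt(2), |-> = (|u>-|d>)/sqrt(2); same for E
psi_rot = rotate(psi, B)

print(reduced(psi))     # ~[[0.5, 0], [0, 0.5]]: proportional to the identity
print(psi_rot)          # ~[[s, 0], [0, s]]: the same Schmidt form in the new bases
```

    With equal amplitudes the reduced density matrix is proportional to the identity and hence diagonal in every basis, so the state by itself singles out neither |Eu>,|Ed> nor |E+>,|E->.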

    HTH, 🙂

  37. Jim says:

    James Cooper
    vmarko answered your point on decoherence before I could respond, and I couldn’t have put it better. I haven’t looked at the literature for a while, but my understanding is that decoherence doesn’t solve what Roland Omnes called the ‘problem of objectification’. Roger Penrose called it the problem of objective reduction and there’s a nice quote from John Bell somewhere about decoherence being unable to explain how ‘and’ is replaced by ‘or’.

    I’d be wary of trying to resolve the problems by connecting measurement with consciousness. This has a long history (von Neumann credited consciousness as an explanation for his projection postulate – essentially the collapse of the wave function – in his famous book Mathematical Foundations of Quantum Mechanics, published in 1932, a view later championed by Wigner). Trying to shed light on something we don’t understand by invoking something else we don’t understand has always seemed an odd strategy to me.

  38. Jasper says:

    Simple answer to Marko’s problem: there are no realistic coupling terms in quantum theory which couple the system and environment in the second basis, just as in my original example of the spatial superposition. To measure the spin of an electron you need to use the electromagnetic coupling, and work from there.
    If you want to understand this important point, study a realistic model with a realistic coupling between spin, measuring device and environment. In the end the interaction terms determine the bases which are robust and in which coherence terms disappear at the level of the system and measuring apparatus.

    Also, as said, decoherence does not solve the single-outcome problem, a.k.a. the measurement problem – only the basis and superposition problems.

  39. vmarko says:


    “there are no realistic coupling terms in quantum theory which couple the system and environment in the second basis”

    I don’t understand this. The state |psi> that I wrote is the *final* state (the one describing the electron and the environment *after* the interaction), and I wrote the *same* state in two different bases. The properties of the interaction are important only when calculating the final state from the initial state. But once known, the same final state can be represented in two bases, and in experiments we always see only one of them and not the other. The interaction simply plays no role in this, as I see it.

    Can you explain your point more precisely?

    Best, 🙂

  40. Jasper says:

    This will be my last post, because this discussion is time-consuming. I recommend reading Schlosshauer, since it is very clear.

    The coupling picks out which components of the environment couple to which components of the system. The environment is thus correlated with specific information about the system; if the distinct environmental states are then almost orthogonal, you can say that at the level of the system the coherence has almost disappeared between the components picked out by the coupling mechanism.

    Example: an environment of light particles scattering off a massive particle which is in a superposition, via contact interactions:
    (|x1>+|x2>)|E> -> |x1>|E1>+|x2>|E2>
    The states |E1> and |E2> become more and more distinguishable as they encode many collisions of the light particles with the massive particle. The states become more and more orthogonal, so coherence disappears between |x1> and |x2>. Other states of the environment would not have this orthogonality. In fact, one can derive the timescale on which this orthogonality sets in…
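    That timescale can be made concrete in a toy model: if each collision multiplies the overlap <E1|E2> by a fixed factor |c| < 1 (an assumption of independent, identical scattering events; the values of c and the threshold are purely illustrative), the overlap decays exponentially in the number of collisions:

```python
import math

# Toy model: each collision of a light particle with the massive one
# multiplies the environment overlap <E1|E2> by a fixed factor c, |c| < 1.
c = 0.9        # per-collision overlap -- an illustrative value, not derived
eps = 1e-6     # threshold below which |E1>, |E2> count as effectively orthogonal

def overlap(n):
    # overlap after n collisions: c**n, i.e. exponential decay
    return c ** n

# "decoherence timescale", measured in number of collisions
n_dec = math.ceil(math.log(eps) / math.log(c))
print(n_dec, overlap(n_dec))   # roughly 132 collisions for these values
```

    The point of realistic derivations is to compute the analogue of c from the actual scattering cross-section, which is what makes the decoherence time so short for macroscopic superpositions.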

    Another example (hope this works, since I’m making it up as I go along): a qubit encoded in the hyperfine levels of an atom, monitored via stimulated photon emission.

    A qubit is encoded in two levels of an atom, where the state |u> corresponds to a higher energy level than |d>. We then couple to an EM field containing exactly one photon, with exactly the energy difference between the levels. Stimulated emission means that the EM field (read: environment) becomes entangled with the qubit:
    |u>|1> -> a|u>|1>+b|d>|2>
    Here |1> means one photon with the specific energy and |2> two photons. Note that |1> and |2> are orthogonal and that they couple in a clear way to the basis |u> and |d>.

    For the down state: |d>|1> -> c|d>|1>+d|u>|0>.
    A superposition of the qubit thus behaves as
    (|u>+|d>)|1> -> a|u>|1> + b|d>|2> + c|d>|1> + d|u>|0>
    Since |0>, |1> and |2> are orthogonal, when restricting to the qubit (read: tracing over the EM states of the density matrix) we can see that the coherence between |u> and |d> can decrease (note: this depends on the specific details of the coupling, i.e. the coefficients a, b, c, d).
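    Applying the two maps above to a superposition (|u>+|d>)|1>/sqrt(2) gives the combined state (a|u>|1> + b|d>|2> + c|d>|1> + d|u>|0>)/sqrt(2), and tracing over the photon-number states shows the loss of coherence explicitly. A pure-Python sketch, assuming a²+b² = c²+d² = 1, with illustrative numerical coefficients:

```python
# Combined final state (a|u>|1> + b|d>|2> + c|d>|1> + d|u>|0>)/sqrt(2),
# obtained by applying the two stimulated-emission maps to (|u>+|d>)|1>/sqrt(2).
a, b = 0.8, 0.6          # illustrative values with a^2 + b^2 = 1
c, d = 0.6, 0.8          # illustrative values with c^2 + d^2 = 1
s2 = 0.5                 # the overall (1/sqrt(2))^2 appearing in the density matrix

# amplitudes psi[system][env], env index = photon number 0, 1, 2
psi = [[d, a, 0.0],      # |u> paired with |0>, |1>, |2>
       [0.0, c, b]]      # |d> paired with |0>, |1>, |2>

def reduced(p):
    # trace over the orthogonal photon-number states |0>, |1>, |2>
    return [[s2 * sum(p[i][e] * p[j][e] for e in range(3)) for j in range(2)]
            for i in range(2)]

rho = reduced(psi)
print(rho[0][1])   # coherence a*c/2 = 0.24, down from the initial 0.5
```

    The off-diagonal element drops from 1/2 to a·c/2, and it vanishes entirely only when the field records the level perfectly (a = 0 or c = 0), matching Jasper’s remark that the loss of coherence depends on the coefficients.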

    Yes, after all this I can still write the density matrix in another way, but the orthogonal states of the EM field couple only in the above specific way to the atom, in the |u> and |d> basis, which is where coherence is lost. In other words, using the EM field we can only “observe” |u> or |d> of this atom, not a superposition |u>+|d>, and thus this basis is preferred.

    Maybe not the clearest example, and a bit too idealized, but the point is that for all realistic physical systems, with some thought, one can identify the preferred basis in which entanglement occurs and which results in the loss of coherence.

    Anyway, I hope you kind of see the idea. It’s better to read the book.

  41. manfred Requardt says:

    The problem of how a pointer basis is selected, i.e. the basis ambiguity, is discussed and, in my view, solved outside the realm of decoherence in the paper I mentioned above. I think decoherence is too cheap a solution to be correct under all circumstances.

  42. Martin says:

    @vmarko “The state |psi> that I wrote is the *final* state (the one describing the electron and the environment *after* the interaction)”

    No, the interaction is the measurement. After the measurement it is known whether the spin was up or down. What you wrote down is the situation in which the measurement has taken place but nobody is aware of the outcome: your |Psi> indicates what the experimenter might guess about what has been recorded. Probability has to do with observation. “Probability” does not mean anything without consciousness. This is why QM probability is Bayesian, like every probability. Forget Kolmogorov.

  43. vmarko says:


    “This will be my last post, because this discussion is time consuming.”

    At first I thought of giving up on the discussion, but then I changed my mind, since other people reading this blog might still be confused. I’m sorry that you don’t want to continue.

    In the first example, you said

    “The states become more and more orthogonal, thus coherence disappears between |x1> and |x2>. Any other states of the environment would not have this orthogonality.”

    The last statement is just false. If by “orthogonality” you mean that the scalar product between |E1> and |E2> is zero, then the basis |E+>=|E1>+|E2> , |E->=|E1>-|E2> is also orthogonal. Feel free to check.

    Orthogonality of the resulting basis plays no role in the discussion. The total Hamiltonian of system and environment is always self-adjoint, and its eigenvectors are always orthogonal. But there are also infinitely many other bases which are orthogonal as well, and I don’t understand the importance of the basis |E1>,|E2> in this example.

    Your second example is more complicated, due to the nonconservation of the number of photons, and because you didn’t specify all interactions. You said

    “Yes, after all this I can still write the density matrix in another way, but the orthogonal states of the EM field only couple in the above specific way to the atom,in the |u> and |d> basis which is where coherence is lost. In other words, using EM field we can only “observe” |u> or |d> of this atom, not a superposition |u>+|d>, and thus this basis is preferred.”

    Once you specify the final states that correspond to the initial states |u>|2>, |u>|0>, |d>|2> and |d>|0>, I will be able to change from the |u>,|d> basis to the |+>,|-> basis for the atom, and from the |0>,|1>,|2> basis to another orthogonal basis for the EM field (linear combinations of |0>,|1>,|2>) which couples to |+>,|-> in exactly the same way that |0>,|1>,|2> couples to |u>,|d>. So I still don’t see why the |u>,|d> basis is unique in any physical sense.

    What I think you’re trying to appeal to is the fact that the interaction Hamiltonian is generically local in the coordinate representation, as opposed to, say, the momentum representation (where it is nonlocal). And then you could say that the coordinate basis is somehow physically preferred because the interaction Hamiltonian is local *only* in that basis. This argument is OK, and it is good as long as you ignore quantum gravity. However, in QG the interaction Hamiltonian becomes nonlocal even in the coordinate basis, and generically there will be no basis in which H_int is local. That’s where the locality argument fails.

    I’ll make sure to read the Schlosshauer book.

    Best, 🙂

  44. nick herbert says:

    Elliot Tammaro has a nice review of the various quantum interpretations at http://lanl.arxiv.org/abs/1408.2093. In Tammaro’s view none of the present-day interpretations fare very well.

  45. HDZ says:


    You are making a trivial but common mistake. Your example is valid only if the amplitudes of the two components are exactly equal. (Feel free to check!) This ambiguity just reflects the fact that there is no unique diagonal representation of a density matrix in the case of degeneracy.
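    The “feel free to check” takes a few lines: with unequal amplitudes a ≠ b, the rotated amplitudes acquire off-diagonal terms, so the state no longer has the |+>|E+> + |->|E-> form. A pure-Python sketch (the amplitudes 0.8 and 0.6 are illustrative):

```python
import math

def rotate(p, B):
    # re-express amplitudes p[system][env] with new basis B (rows = new basis vectors)
    return [[sum(B[m][i] * B[n][e] * p[i][e] for i in range(2) for e in range(2))
             for n in range(2)] for m in range(2)]

s = 1 / math.sqrt(2)
B = [[s, s], [s, -s]]           # the rotated |+>, |-> basis (same for the environment)

a, b = 0.8, 0.6                 # unequal amplitudes, a^2 + b^2 = 1 (illustrative)
psi = [[a, 0.0], [0.0, b]]      # a|u>|Eu> + b|d>|Ed>
psi_rot = rotate(psi, B)
print(psi_rot)   # off-diagonal entries (a-b)/2 != 0: no |+>|E+> + |->|E-> form
```

    Only for a = b do the off-diagonal entries vanish, which is HDZ’s degeneracy point: the basis ambiguity exists only for the maximally mixed reduced state.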

    However, the point of decoherence is that macroscopic properties are “permanently” (repeatedly) and uncontrollably (irreversibly) measured by their environment with respect to a given basis. So even in the case of degeneracy, the “wrong” components would immediately be decohered again and again, while the “correct” ones (those characterized by the unavoidable environment) are “robust” under further decoherence (what Zurek later called the “pointer basis”). These components are thus dynamically “autonomous” after a measurement proper. If unitarity holds globally, it must necessarily lead to a superposition of many such autonomous “worlds”.

    If you want to avoid that consequence, you have to modify quantum theory (for example by a collapse). This is certainly not a matter of mathematical sophistication (as Peter Woit seems to believe). Formal limits of infinite numbers of particles or degrees of freedom have often been misused in order to hide an essential physical issue.

    You may like to have a look at Sect. 3 of http://lanl.arxiv.org/abs/1402.5498

  46. Peter Woit says:

    I don’t at all think that any of the issues discussed here are issues requiring mathematical sophistication, rather that they are complex issues of the physics taking place. My interest in more sophisticated mathematical frameworks for quantum mechanics is aimed in a very different direction (understanding aspects of the standard model that are still mysterious, as well as the question of unification with gravity).
