Is Quantum Mechanics a Probabilistic Theory?

There is a simple question about quantum theory that has been increasingly bothering me. I keep hoping that my reading about interpretational issues will turn up a discussion of this point, but that hasn’t happened. I’m hoping someone expert in such issues can provide an answer and/or pointers to places where this question is discussed.

In the last posting I commented that I’m not sympathetic to recent attempts to “reconstruct” the foundations of quantum theory along some sort of probabilistic principles. To explain why, note that I wrote a long book about quantum mechanics, one that delved deeply into a range of topics at the foundations of the subject. Probability made no appearance at all, other than in comments at the beginning noting that it appears when you have to come up with a “measurement theory” and relate elements of the quantum theory to expected measurement results. What happens when you make a “measurement” is clearly an extremely complex topic, involving large numbers of degrees of freedom, the phenomenon of decoherence and interaction with a very complicated environment, as well as the emergence of classical behavior in some particular limits of quantum mechanics. It has always seemed to me that the hard thing to understand is not quantum mechanics, but where classical mechanics comes from (in the sense of how it emerges from a “measurement”).

A central question of the interpretation of quantum mechanics is “where exactly does probability enter the theory?”. The simple question that has been bothering me is why one can’t just take as the answer the same place as in classical theory: in one’s lack of precise knowledge about the initial state. If you do a measurement by bringing in a “measuring apparatus”, and take into account the environment, you don’t know exactly what your initial state is, so you have to proceed probabilistically.

One event that made me think more seriously about this was watching Weinberg’s talk about QM at the SM at 50 conference. At the end of this talk Weinberg gets into a long discussion with ‘t Hooft about this issue, although I think ‘t Hooft is starting from some unconventional point of view about something underlying QM. Weinberg ends by saying that Tom Banks has made this argument to him, but that he thinks the problem is you need to independently assume the Born rule.

One difficulty here is that you need to precisely define what a “measurement” is before you can think about “deriving” the Born rule for results of measurements, and I seem to have difficulty finding such a precise definition. What I wonder about is whether it is possible to argue that, given that your result is going to be probabilistic, and given some list of properties a “measurement” should satisfy, the Born rule is the only possibility.

So, my question for experts is whether they can point to good discussions of this topic. If this is a well-known possibility for “interpreting” QM, what is the name of this interpretation?

Update: I noticed that in 2011 Tom Banks wrote a detailed account of his views on the interpretation of quantum mechanics, posted at Sean Carroll’s blog, with an interesting discussion in the comment section. This makes somewhat clearer the views Weinberg was referring to. To clarify the question I’m asking, a better version might be: “is the source of probability in quantum mechanics the same as in classical mechanics: uncertainty in the initial state of the measurement apparatus + environment?”. I need to read Banks more carefully, together with his discussion with others, to understand if his answer to this would be “yes”, which I think is what Weinberg was saying.

Update: My naive questions here have attracted comments pointing to very interesting work I wasn’t aware of that is along the lines of what I’ve been looking for (a quantum model of what actually happens in a measurement that leads to the sort of classical outcomes expected, such that one could trace the role of probability to the characterization of the initial state and its decomposition into a system + apparatus). What I learned about was

- the work of Klaas Landsman and Robin Reuvers on “A Flea on Schrödinger’s Cat”, in which a small unknown perturbation of the Hamiltonian determines the outcome of a measurement, and
- the detailed analysis by Allahverdyan, Balian and Nieuwenhuizen of an explicit model of a measurement (a spin component measured by a Curie-Weiss magnet): https://arxiv.org/abs/1107.2138, with a short version at https://arxiv.org/abs/quant-ph/0702135 and a discussion of the interpretational implications at https://arxiv.org/abs/1406.5178.

In these last references the implications for the measurement problem are discussed in great detail, but I’m still trying to absorb the subtleties of this story.

I’d be curious to hear what experts think of Landsman’s claim that there’s a possible distinct “instability” approach to the measurement problem that may be promising.

Update: From the comments, an explanation of the current state of my confusion about this.

The state of the world is described at a fixed time by a state vector, which evolves unitarily by the Schrödinger equation. No probability here.
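
In equations, just to fix conventions: the state $|\psi(t)\rangle$ obeys

$$ i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H\,|\psi(t)\rangle, \qquad |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle, $$

and since $e^{-iHt/\hbar}$ is unitary, $\langle\psi(t)|\psi(t)\rangle$ is constant. Everything here is deterministic.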

If I pick a suitable operator, e.g. the momentum operator, then if the state is an eigenstate, the world has a well-defined momentum: the eigenvalue. If I couple the state to an experimental apparatus designed to measure momenta, it produces a macroscopic, classically describable readout of this number. No probability here.

If I decide I want to know the position of my state, one thing the basic formalism of QM says is “a momentum eigenstate just doesn’t have a well-defined position; that’s a meaningless question. If you look carefully at how position and momentum work, you find that if you know the momentum, you can’t know the position”. No probability here.

If I decide that, even though my state has no well-defined position, I want to couple it to an experimental apparatus designed to measure the position (i.e. one that gives the right answer for position eigenstates), then the Born rule tells me what will happen. In this case the “position” pointer is equally likely to give any value. Probability has appeared.
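
To make the uniformity concrete: for a momentum eigenstate $|p\rangle$ in the usual delta-function normalization, the Born rule gives the position probability density

$$ |\langle x|p\rangle|^2 = \left|\frac{1}{\sqrt{2\pi\hbar}}\,e^{ipx/\hbar}\right|^2 = \frac{1}{2\pi\hbar}, $$

independent of $x$: every pointer value is equally likely, as stated above.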

So, probability appeared when I introduced a macroscopic apparatus of a special sort: one with emergent classical behavior (the pointer), specially designed to behave in a certain way when presented with position eigenstates. This makes me tempted to say that probability has no fundamental role in quantum theory: it’s a subtle feature of the emergence of classical behavior from the more fundamental quantum behavior, one that will appear in certain circumstances, governed by the Born rule. Everyone tells me the Born rule itself is easily explicable (it’s the only possibility) once you assume you will only get a probabilistic answer to your question (e.g. what is the position?).

A macroscopic experimental apparatus never has a known pure state. If I want to carefully analyze such a setup, I need to describe it by quantum statistical mechanics, using a mixed state. Balian and collaborators claim that if they do this for a specific realistic model of an experimental apparatus, they get as output not the problematic superposition of states of the measurement problem, but definite outcomes, with probabilities given by the Born rule. When I try to follow their argument, I get confused, then realize I am confused by the whole concept: tracking a mixed quantum state as it evolves through the apparatus until, at some point, one wants to talk about what is going on in classical terms. How do you match your classical language to the mixed quantum state? The whole thing makes me appreciate Bohr and the Copenhagen interpretation (in the form “better not to try and think about this”) a lot more…
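
To keep at least the decoherence part of the story straight, here is a toy numerical sketch (a cartoon of my own, not the Balian et al. model): a qubit in an exactly known pure state is coupled to n environment qubits whose states and couplings are unknown. Tracing out the environment suppresses the off-diagonal (interference) terms of the system’s density matrix exponentially in n, while the diagonal Born weights are untouched. This is the standard observation that decoherence selects a basis but by itself supplies no outcomes:

    import numpy as np

    rng = np.random.default_rng(0)

    # System qubit alpha|0> + beta|1>: a completely known pure state.
    alpha, beta = np.sqrt(0.3), np.sqrt(0.7)

    def random_state():
        # A random pure state of one environment qubit.
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        return v / np.linalg.norm(v)

    def random_unitary():
        # A random 2x2 unitary, via QR decomposition of a Gaussian matrix.
        m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(m)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    # Conditional dynamics: if the system is |1>, environment qubit k is
    # rotated by U_k; if it is |0>, nothing happens. Tracing out the
    # environment, the off-diagonal element of the system density matrix
    # picks up one factor of <e_k|U_k|e_k> (modulus at most 1, typically
    # well below) per qubit, while the diagonal entries |alpha|^2 and
    # |beta|^2 never change.
    n = 20
    rho_01 = alpha * np.conj(beta)
    for _ in range(n):
        e = random_state()
        rho_01 *= np.vdot(e, random_unitary() @ e)

    print("Born weights:", abs(alpha) ** 2, abs(beta) ** 2)  # 0.3, 0.7
    print("|rho_01| after decoherence:", abs(rho_01))        # ~ 0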

This entry was posted in Quantum Mechanics.

74 Responses to Is Quantum Mechanics a Probabilistic Theory?

  1. Another Anon says:

    If you want to avoid the tricky question “What is a measurement?”, just consider an electron orbiting a nucleus, jumping to a lower energy state. Einstein asked what caused the jump. Bohr said it was probabilistic. So there’s an example of fundamental probability entering QM which avoids the “What is a measurement?” problem.

  2. Charles Xu says:

    It doesn’t quite get at the interpretational issues you raised earlier (in that it assumes an underlying Hilbert-space description), but Gleason’s theorem https://en.wikipedia.org/wiki/Gleason%27s_theorem comes close to the kind of functional derivation of Born’s rule you’re asking about.

  3. Peter Woit says:

    Another Anon,
    The situation you describe involves not just a single particle in a potential (where an energy eigenstate will stay an energy eigenstate), but a particle coupled to a quantized electromagnetic field. In this case, you don’t know exactly what the initial quantum state of the electromagnetic field is, no? You also don’t know the state of the environment, and are going to have to perform a measurement to detect the photons that show that the electron has changed energy levels.

  4. There is a simple answer to your simple question: probability can come from lack of knowledge of the initial state.

    The problem is that it is not at all easy to recover the probabilistic predictions of quantum mechanics from this idea. The only way people have found to do it is with Bohmian Mechanics, which is not a satisfactory theory, in my opinion.

  5. Oren says:

    This, in particular the discussion around Gleason’s theorem, seems relevant. https://www.scottaaronson.com/democritus/lec9.html

  6. Peter Woit says:

    Mateus Araújo,

    I’m not interested in what I suspect ’t Hooft or the Bohmians would like to do, deriving the Born rule from some underlying more classical picture. I’d like to know if you can do it starting with standard QM initial states; can you point to literature where people try to do this? I seem to be hearing from some quarters that Gleason’s theorem or other very general arguments can constrain probabilities and perhaps give Born’s rule; is that not the case?

    Also, where can I find a precise definition of a “measurement”? Or, not generally, but specifically, a detailed analysis of exactly what is happening in some simple case of a measurement?

  7. Another Anon says:

    Mateus: “There is a simple answer to your simple question: probability can come from lack of knowledge of the initial state.”

    Yes it can, but that’s not the fundamental uncertainty of quantum mechanics – that is not about lack of knowledge.
    The uncertainty principle explains why quantum mechanics must be probabilistic. If you have two properties which don’t commute, then you have fundamental limitations on your knowledge. In which case, you can only ever talk about the probability of the position of a particle, say, rather than knowing it with certainty.
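
    (Quantitatively this is the Robertson relation: for any observables $A$, $B$ and state $\psi$, $\Delta_\psi A\,\Delta_\psi B \geq \tfrac{1}{2}\,|\langle\psi|[A,B]|\psi\rangle|$, so $[x,p]=i\hbar$ gives $\Delta x\,\Delta p \geq \hbar/2$.)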

  8. Perry Rice says:

    Not all measurements are projective, so that needs to be taken into account.

  9. Peter Woit says:

    Another Anon,
    The fact that quantum states are different from classical states is a separate issue, and has nothing to do with the question about the Born rule.

    For example, I can start with a state of a single particle. Quantum mechanics tells me what such states look like; there aren’t any that are both position and momentum eigenstates. If I start with a single particle in a momentum eigenstate, I can ask what happens when I “measure its position”. To do this I’m going to have to couple it to some very complicated (as a quantum system) measuring device + environment. Born’s rule says that the probability of any position will be the same. Can I derive this from some very general characterization of what a measuring device is and how an environment behaves?

  10. Peter Woit says:

    Perry Rice,
    Can you point to a precise definition of what a “measurement” is?

  11. Peter Woit,

    “I’d like to know if you can do it starting with standard QM initial states, can you point to literature where people try to do this?”

    So what you want to know is if one can get probability from a lack of knowledge of the initial quantum state? The only work I know in this direction is A Flea on Schrödinger’s Cat. The authors are more concerned with causing a collapse of the wavefunction by introducing a perturbation in the effective Hamiltonian of a system (which can always be interpreted as a change in the quantum state of the measurement apparatus), but since the perturbation determines the result of the measurement, lack of knowledge about the perturbation is the source of probabilities in their model. The paper is just a numerical analysis of a double-well model, so it is more of a hope than a worked-out solution.

    As for Gleason’s theorem, it shows that if the probability of an outcome is a (non-contextual) linear function of the projector associated with that outcome, then it must be given by the Born rule. So if you want non-Born probabilities you either need to give up on describing measurements with projectors or do some really funky stuff. But it says nothing about whether the probabilities come from lack of knowledge of an initial state.

    There are dozens of derivations of Born’s rule from general arguments; it is really hard to get a sensible theory without it.

  12. I can’t claim to be an expert here, but have you looked at Sebens and Carroll’s paper “Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics”?
    https://arxiv.org/abs/1405.7577

  13. RalphB says:

    Thank you so much for the link to Weinberg’s talk. It’s interesting that he lumps Everett in with the consistent quantum theory folks (Gell-Mann, Hartle, Griffiths). Consistent quantum theory defines measurements but I assume that’s not what you are looking for.
    Finally, I link to a short clip of Gell-Mann which basically summarizes why I like his approach to quantum theory.
    https://www.youtube.com/watch?time_continue=127&v=gNAw-xXCcM8

  14. Peter Woit says:

    Kevin S. Van Horn,
    I have looked at that, but it seems to me to be a more elaborate and less clear version of Zurek’s 2004 argument for a derivation of the Born rule; see here:
    https://arxiv.org/abs/quant-ph/0405161

    I should have pointed to Zurek’s argument in my posting. My question might be rephrased as “does something like Zurek’s argument give what I want?”

  15. Joking says:

    The standard version of Gleason’s theorem tells you that if you assume that your physical theory is a probabilistic interpretation of a Hilbert space structure, then those probabilities have to be calculated according to Born’s rule. So when people worry about “deriving the Born rule”, they usually don’t mean its particular form, but rather the very probabilistic interpretation itself. Asking for a “derivation” of this interpretation as you do is asking for a hidden variables description, which is impossible.

    For an “alternative” to Bohmian mechanics that seems less well known (and fails for different reasons), you might want to look at Wetterich’s work on “quantum mechanics from classical statistics”.

    Derivations of the Hilbert space structure from axioms of physical measurements were given by Jauch and his followers, at a time when Boolean lattices were a popular subject. At their best, the “information theoretic derivations” are a modern reincarnation of these older insights.

  16. Peter Woit says:

    Joking,
    I’m not asking for a hidden variables description. I think the quantum state is a complete description of the state of the world; there is nothing else. But when you discuss going from an initial state to a final state via a process that is supposed to qualify as a “measurement”, you are assuming a measurement apparatus of some kind (and an environment), and it seems to me that you aren’t going to know the initial state of this apparatus + environment.

    Can this lack of knowledge about the initial state explain why your predictions about measurements have to be probabilistic? Can Gleason or Zurek be invoked to explain why Born’s rule must govern these probabilities?
    I suspect what is confusing here are issues of circularity: you have to have some theory that tells you how to deal with your lack of knowledge of the initial state, and one may argue there is no way to do this without invoking Born’s rule itself.

    Still, what’s wrong with the argument that here is the place that probability enters, and Born’s rule is the only consistent way it can enter?

  17. alex says:

    I think there are two aspects. One is: within the mathematics of Hilbert spaces, non-Boolean lattices, or whatever mathematical structure one prefers for the formalism of QM, what liberty does one have in the mathematical form of the Born rule? This is a purely mathematical question, which has been, in my opinion and taste, pretty much sorted out since the 50s by Gleason’s theorem.

    Now, assume for a moment that we physically interpret this probability measure as some sort of real property of the particle, say its ‘propensity’. When this particle interacts with a device, we randomly get a value for, say, spin, and the probability distribution of those values fits the Born probability measure that the particle had before the interaction. One then asks: how is it that in such a complex dynamical interaction, the probability is somehow ‘transferred’ from the particle, before the interaction, to the distribution of the measured values? Is the sole role of the interaction just to produce a single outcome without affecting anything else? This is the second aspect, and, if I understand correctly, the one that troubles Peter. Indeed, this seems to be a complex physical question rather than a purely mathematical one, one that needs some physical modeling of the situation and some adequate physical insight into, and interpretation of, what’s going on.

    Since this problem has not been solved yet, one can either make an operationalist physical interpretation of the mathematical formalism of QM (in which one physically interprets the Born measure as the experimental frequency of values in measurements, which simply builds in the behaviour observed in the second aspect and takes it for granted in order to build a pragmatic interpretation), or, in the case in which the probability is, say, a propensity, use the law of large numbers to get the frequency and then either, again, accept that this probability is transferred to the results of experiments or try to investigate why this is the case. Only this second case has any right to do that, since the operationalist take simply incorporates the transfer into the very way in which one physically interprets the math.

    Back to that second approach, one can take three positions. Either the known quantum dynamics will explain how this happens (which leads to the measurement problem, since usual dynamics + no hidden variables + definite outcomes is a trilemma), and then one has to deny definite outcomes in the usual sense (this leads to many worlds or to Landsman’s flea on Schrödinger’s cat); or one adopts (non-local, contextual) hidden variables (in which the mechanism that makes the values definite is outside standard QM, an outside variable; in this case the probability is, of course, not a real property, but still one has to explain why the measurement context doesn’t change it, as in the other approach); or one denies the standard dynamics (as in GRW).

  18. Blake Stacey says:

    I think Adrian Kent’s critique of the Sebens–Carroll business is exactly on target.

    Everything I have seen in the Zurekian vein ends up being circular, though some arguments have a larger radius of curvature than others.

  19. Blake Stacey says:

    What I wonder about is whether it is possible to argue that, given that your result is going to be probabilistic, and given some list of properties a “measurement” should satisfy, the Born rule is the only possibility.

    As others have said, Gleason’s theorem is exactly this, but the “list of properties a ‘measurement’ should satisfy” is apt to be unsatisfying if one wants a story about energy flow or mechanical transformations. Instead, the assumptions are a list of conditions for expressing “measurements” in terms of Hilbert spaces. To each physical system we associate a complex Hilbert space, and each measurement corresponds to a resolution of the identity operator — in Gleason’s original version, to an orthonormal basis. The crucial assumption is that the probability assigned to a measurement outcome (i.e., to a vector in a basis) does not depend upon which basis that vector is taken to be part of. The probability assignments are “noncontextual”, as they say. The conclusion of Gleason’s argument is that any mapping from measurements to probabilities that satisfies his assumptions must take the form of the Born rule applied to some density operator. In other words, the theorem gives the set of valid states and the rule for calculating probabilities given a state.
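
    Stated compactly, for Hilbert spaces of dimension at least three: if $\mu$ assigns to each projection $P$ a number in $[0,1]$ such that $\mu(I) = 1$ and $\mu\left(\sum_i P_i\right) = \sum_i \mu(P_i)$ for mutually orthogonal $\{P_i\}$, then there is a unique density operator $\rho$ with

    $$ \mu(P) = \mathrm{Tr}(\rho\,P) \quad \text{for all } P. $$

    That trace formula is the Born rule.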

    I have a sneaky suspicion that a good many other attempted “derivations of the Born rule” really amount to little more than burying Gleason’s assumptions under a heap of dubious justifications.

  20. I agree with almost everything Mateus said. If you want the randomness of quantum measurement outcomes to be due solely to lack of knowledge about the initial state, and you also want to reproduce all the predictions of QM correctly, and you also don’t want an insane cosmic conspiracy (‘t Hooft’s “superdeterminism”), then you’re pretty much going to be forced to something like Bohmian mechanics—which, in some sense, does exactly what you’re asking for, but only at the cost of importing the entire ontology of ordinary QM and then adding something additional and underdetermined on top of it. A few people had placed hopes in so-called “psi-epistemic” theories, wherein each pure state would correspond to a probability distribution over underlying “ontic states,” with the distributions for non-orthogonal states overlapping each other—but I would say that no-go theorems have basically ruled that idea out, for explaining anything more complicated than a single qubit. Reproducing the probabilistic predictions of QM from ordinary statistical ignorance is not actually that easy! Think for example about the Bell experiment, or Shor’s factoring algorithm, or better yet a quantum algorithm that simply outputs a sample from some crazy distribution over n-bit strings (what’s now called a “quantum supremacy experiment”). If the probabilities in such cases merely reflected ignorance of the initial state, then it must be in a radically different way from what anyone is used to, a way that involves nonlocality and exponential computational complexity and so forth. Which … to my mind, if you’re going to have to swallow all those things anyway, then why not just accept the probabilities as fundamental and be done with it? 🙂

    The question “why the Born rule?” is a different one, at most indirectly related to the problem of finding an ignorance/statistical interpretation of QM. To my mind, Gleason’s theorem, and many other results, have made a pretty compelling case that once you’ve
    (1) assumed the “non-probabilistic” part of QM (the state space, orthogonality relations, etc.), and also
    (2) decided that you want SOME rule for converting amplitudes to probabilities,
    nothing other than the Born rule really makes internal mathematical sense. And intuitively, this is not surprising. After all, the non-probabilistic part of QM already singles out the 2-norm of the amplitude vectors as their most basic conserved property. So then if your probability law isn’t also based on the 2-norm, but on some other norm or whatever, then no duh you’re going to have trouble with conservation of probability!
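
    To spell out the conservation point in one line: unitarity preserves exactly the 2-norm, $\|U\psi\|_2^2 = \langle\psi|U^\dagger U|\psi\rangle = \|\psi\|_2^2$, while for $p \neq 2$ even a simple rotation changes the $p$-norm: $(1,0) \mapsto (1/\sqrt{2},\,1/\sqrt{2})$ takes $\sum_i |\alpha_i|^p$ from $1$ to $2^{1-p/2}$. A probability rule based on any norm other than the 2-norm would therefore not be conserved by the dynamics.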

  21. akhmeteli says:

    Peter Woit wrote: “Also, where can I find a precise definition of a “measurement”? Or, not generally, but specifically, a detailed analysis of exactly what is happening in some simple case of a measurement?”

    I would think https://arxiv.org/abs/1107.2138 (Phys. Rep. 525 (2013) 1-166) fits the bill. The authors model measurement of a component of a spin using a Curie-Weiss magnet. They derive a definite outcome of the measurement (as a practical result); strictly speaking this is incompatible with unitary evolution, but a change of the outcome can only happen after a very long time (of the order of the Poincaré recurrence time). They derive the Born rule as an approximate result.

  22. Peter Woit says:

    akhmeteli,

    Thanks! That’s exactly the sort of thing I’ve been looking for and not seeing. It’s a long paper and I’ll have to find some time to understand what they are doing.

    Scott,

    Your claims seem incompatible with Weinberg’s. He seemed to me to be saying that the problem with getting probability from uncertainty about the initial state is getting the Born rule; you seem to be saying that getting the Born rule is straightforward, but that there’s some other problem with the idea.

    I really need to look at and think about the paper akhmeteli points to. But, until I do that, here’s the kind of specific question about “measurement” I’m wondering about. As in my earlier comment, consider a free particle in a momentum eigenstate. I think that’s a complete description of the particle, one with no information about position. If you want to measure position, I don’t want to hear some formal abstract characterization of a position measurement. I’d like to see some physical system, describable by quantum mechanics, that you can couple to the free particle in such a way that the final state will give you a “position measurement”. My problem is that I haven’t seen any analysis of such a system. How do I know that the indeterminacy in the initial state of this “measurement system” doesn’t correspond to the indeterminacy in the “position” result?

    Blake Stacey,
    The Kent paper you link to seems to just be about the many-worlds part of Carroll-Sebens (which I’m willing to believe, with Kent, makes no sense). There’s nothing there about the Zurek argument. What’s the best paper critically dealing with the Zurek argument (note that Zurek claims he has no need for “many-worlds”)?

  23. akhmeteli says:

    to Peter Woit: You are very welcome. Yes, the paper is extremely long, so you may wish to look at their earlier work https://arxiv.org/abs/quant-ph/0702135 first (just 13 pages).

  24. Joking says:

    Peter:

    Can this lack of knowledge about the initial state explain why your predictions about measurements have to be probabilistic?

    No. Probabilities and Born’s rule apply even when you have complete knowledge of the entire pure state. So unitary evolution forbids this kind of mechanism. The environment is necessary to make the quantum mechanical probabilities behave like classical probabilities (with high probability, and for the pointer states).
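
    (The minimal example: a qubit in the pure state $|\psi\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ is a complete description, with nothing left to be ignorant of, yet a measurement in the $\{|0\rangle, |1\rangle\}$ basis gives each outcome with probability $|1/\sqrt{2}|^2 = 1/2$.)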

    Can Gleason or Zurek be invoked to explain why Born’s rule must govern these probabilities?

    Exactly!

    I suspect what is confusing here are issues of circularity: you have to have some theory that tells you how to deal with your lack of knowledge of the initial state, and one may argue there is no way to do this without invoking Born’s rule itself.

    Indeed, there is no non-circular derivation of the probabilistic interpretation of quantum mechanics. The standard fallacy is to say that a probabilistic description follows if we neglect terms that are small in the norm. But neglecting small norms because they correspond to low probability is exactly assuming what one is trying to derive!

    Still, what’s wrong with the argument that here is the place that probability enters, and Born’s rule is the only consistent way it can enter?

    There’s nothing wrong here. Assuming the relationship between physical measurement and mathematical theory to be probabilistic, Born’s rule is the only possibility.

  25. Jochen says:

    Are you looking for something like Mott’s 1929 analysis of alpha scattering in cloud chambers? It treats the atoms of the surrounding gas essentially as a quantum mechanical measuring device, and derives the classical straight trajectories from there. It’s often considered an important forerunner to decoherence theory. Here is a modern discussion of the paper: https://arxiv.org/pdf/1209.2665

    I’m not sure how much it really relates to the probabilistic aspect, though.

    On that front, I think a fundamental difference from the classical case is that there, you can at least in principle imagine getting rid of the ignorance about the state of the measurement apparatus, which isn’t the case for quantum mechanics. Although I suppose that basically ends up being kinda circular…

    Maybe things get clearer if we think about QM in the same arena as classical mechanics, i.e. phase space. Then the Liouville distribution becomes deformed into the Wigner distribution, which describes the quantum state. This now can’t ever be perfectly sharp, so if the interpretation of the Liouville distribution carries over, this would mean a residual fundamental uncertainty you can’t get rid of.

    Trouble with this is, of course, that the Wigner distribution isn’t a probability distribution, but rather a pseudoprobability, which can take negative values. So maybe your question becomes one of whether you can attach an ignorance interpretation to such a pseudoprobability?
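
    (For reference, the Wigner function of a pure state $\psi$ is

    $$ W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \psi^*(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy; $$

    its marginals reproduce the position and momentum distributions, but it can itself go negative: for the first excited state of a harmonic oscillator, $W(0,0) = -1/\pi\hbar$.)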

    I know I’ve seen arguments to that effect, but I don’t recall where right now…

  26. If this is a well-known possibility for “interpreting” QM, what is the name of this interpretation?

    I’d say this is the perspective of “algebraic quantum physics” in view of quantum probability theory (also “non-commutative probability”), as nicely surveyed for instance in Gleason’s “The C*-algebraic formalism of quantum mechanics”.

    Here the Born rule becomes but the theorem that expectation values in probability theory may equivalently be characterized as star-linear functionals on algebras of observables, commutative or not (algebraic quantum states).

    The alleged conundrum of “wave function collapse” disappears in this perspective, becoming just a part of the general formula for conditional expectation values.
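
    Concretely, in the simplest (projective) case: conditioning an algebraic state $\omega$ on having observed the outcome corresponding to a projection $P$ gives the new state

    $$ \omega_P(a) = \frac{\omega(P\,a\,P)}{\omega(P)}, $$

    the Lüders rule, which for commuting quantities reduces to ordinary Bayesian conditioning, $\mathrm{Prob}(a\,|\,P) = \mathrm{Prob}(a\ \&\ P)/\mathrm{Prob}(P)$.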

  27. Zurek’s derivation of the Born rule is, technically speaking, the same thing as Deutsch’s, but stripped of the decision theory stuff and the talk about Many-Worlds (deriving uniform probabilities for uniform superpositions from a symmetry argument, and using ancillas to reduce non-uniform superpositions to uniform superpositions).
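
    (The ancilla step in its simplest instance: to handle amplitudes $\sqrt{2/3}$ and $\sqrt{1/3}$, couple the system to an ancilla so that

    $$ \sqrt{\tfrac{2}{3}}\,|0\rangle + \sqrt{\tfrac{1}{3}}\,|1\rangle \;\longmapsto\; \tfrac{1}{\sqrt{3}}\left(|0\rangle|a_1\rangle + |0\rangle|a_2\rangle + |1\rangle|a_3\rangle\right), $$

    a uniform superposition of three orthogonal branches; the symmetry argument assigns each branch probability $1/3$, hence $P(0) = 2/3$, in agreement with the Born rule.)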

    He might refuse to talk about Many-Worlds, but it is essential for his proof that a measurement is a unitary process, which for me is rather Many-Worldy.

    But the main problem with his proof is that he avoids the hard question of what probability is: he just assumes that there is a probability that somehow makes sense. This is actually the only point about which I disagree with Scott: one cannot “just accept the probabilities as fundamental and be done with it”. One must explain what “fundamental probabilities” even are in order to accept them.

    The value of the Deutsch-Wallace argument is that they try to answer this, by proposing that probabilities are only subjective, and should be understood as tools to make decisions in a branching universe. On one hand I find it nice, as it explains why we should regard deterministic branching as a probabilistic process; on the other hand it is rather unsatisfactory, as the whole point about quantum probabilities is that they are somehow objective, and they don’t address this.

    I recently wrote a paper simplifying the Deutsch-Wallace argument and exploring its objectivity problem; it might interest you.

  28. Joking,

    Indeed, there is no non-circular derivation of the probabilistic interpretation of quantum mechanics. The standard fallacy is to say that a probabilistic description follows if we neglect terms that are small in the norm. But neglecting small norms because they correspond to low probability is exactly assuming what one is trying to derive!

    This circularity plagues the frequentist derivations of the Born rule, which were not even mentioned here. None of the proofs discussed here suffer from this problem.

    Woit,

    Your claims seem incompatible with Weinberg’s. He seemed to me to be saying that the problem with getting probability from uncertainty about the initial state is getting the Born rule; you seem to be saying that getting the Born rule is straightforward, but that there’s some other problem with the idea.

    The solution to the puzzle is that it is easy to get the Born rule from general arguments that are not about lack of knowledge of the initial state. If you insist on probabilities coming from lack of knowledge of the initial state, it is rather hard to get the Born rule, and the only success I know is Bohmian Mechanics.

  29. CM says:

    Dear Woit,

    the answers to your and Weinberg’s questions were given in 1932 by John von Neumann in his masterpiece “Mathematical Foundations of Quantum Mechanics”. There is a new 2018 English edition that makes the book more easily readable compared with the 1955 edition. Chapter IV, “The deductive development of the theory”, explains why probabilities appear and derives the “Born rule”.

  30. Peter: Weinberg points out, and I agree with him, that there seems to be no way to derive from the unitary part of QM that probabilities should ever enter the theory in the first place, without making some additional assumption. On the other hand, I don’t think he disagrees with the statement that, once you’ve decided you want some probability rule, there are strong mathematical arguments that choices other than the Born rule are going to lead to serious problems (superluminal communication and so forth). If you like, though, I can ask him next time I see him.

  31. Aris Papadopoulos says:

    The paper “Quantum Mechanics of Individual Systems” by James Hartle (American Journal of Physics 36(8)) attempts a “derivation” of the Born rule. I mention it only as an entry point into the literature, by a well-respected physicist, written in a readable fashion, meant to be understood even by undergraduate physics majors. Of course its conclusions are by no means universally accepted.

  32. GoletaBeach says:

    Gordon Baym’s QM text has a useful description of the Stern-Gerlach measurement process. I seem to remember that he bridges the gap between the idea that a spin component measurement always gets an eigenvalue and the Born interpretation of the wave function. So perhaps all there is is the Born wave function.

    The mathematical arguments are far less interesting to me than experiments that could distinguish between QM interpretations. I guess the Bell tests where choices are made in a causally disconnected fashion seem the most interesting, but I haven’t kept up.

  33. Lee Smolin says:

    Dear Peter,

    I’ve just picked up my head from doing the final corrections to my new book on realism in quantum foundations to find you asking, “where exactly does probability enter the theory?”

    My understanding, after a lot of study, is that you have the following options:

    1) Put the probabilities in at the beginning, as did Bohr, Heisenberg and von Neumann. This requires an operational approach which introduces measurement and probabilities as primitive concepts, i.e. through a “collapse” or “projection” postulate, which postulates Born’s rule and “eigenvalue realism”, or through a Hardy-style operational reconstruction. These are elegant but they do not answer your question, as measurement and probability are primitive concepts.

    2) You can attempt to derive probabilities from a formalism that has only unitary, Schrödinger evolution, which has no notion of probabilities to begin with. This is Everett’s MWI route.

    This is by now a very long story. It took me a lot of time to sort out for the book, and I had help from Saunders and Wallace and others. At best, there is no consensus amongst experts that this can be done. (This agrees with Scott’s remark, above.) The rough outline is

    i) the original version due to Everett fails, because you can show that with certainty there are branches of the wavefunction whose observers record measurements that disagree with Born’s rule. Because there is no primitive notion of probability you cannot say that these observers are improbable, in fact there are an infinite number of them, and also an infinite number whose observations agree with Born’s rule.

    ii) There are recently several very sophisticated attempts to derive subjective probabilities and the Born rule. These are centred at Oxford, were initiated by David Deutsch, and were developed in different versions by Hilary Greaves, Wayne Myrvold, Simon Saunders and David Wallace. These all use decoherence and also give up on recovering objective probabilities. Instead, they try (in one version), from the axioms of decision theory, to show that it is rational for an observer to bet (i.e. choose subjective probabilities) as if Born’s rule were true. (Even though objectively Born’s Rule is false.)

    If you read the literature you can only conclude, after some challenging technical arguments, that the experts disagree about whether this kind of approach succeeds or fails, and what the implications should be.

    3) Invent a new physical theory which gives a complete description of individual processes from which the quantum probabilities are derived from ignorance about the initial state. This would then be a completion of QM rather than an interpretation. de Broglie-Bohm and collapse models are existence proofs that this is a possible route. There are also other approaches of this kind, such as Adler’s trace dynamics and my real ensemble formulation.

    I have the impression you don’t find any of these 3 options satisfactory. The kind of answer to your question of where the probabilities come from would be one in which we start with QM without measurement, probabilities, etc., and derive them. But this was option 2, and a whole lot of very bright people have tried and failed to make it work (in a way that convinces all the experts).

    My personal view is that option 3) is the only way forward for physics. But I wouldn’t try to do more here than argue that unless some notion of subjective probability can be made to work, as in option 2), you simply cannot get an answer to your question. You then either need to conclude with Bohr that the only kind of theory of atomic phenomena is operational, and has probabilities and measurement as primitive terms, or agree with Einstein, de Broglie, Schrödinger, Bohm, Bell, etc., that QM requires a completion that gives a complete description of individual experiments.

    Thanks,

    Lee

  34. Peter Woit says:

    All,
    I noticed that Tom Banks in 2011 wrote up a long explanation of his views, posted at Sean Carroll’s blog, which attracted an interesting discussion involving Scott and others. I’ve added a link and some comments as an update to the posting here.

    Another good reference for work pointed to by akhmeteli is
    https://arxiv.org/abs/1406.5178
    This gives the kind of analysis of a measurement I’ve been asking for, but its implications for the question I’m wondering about are still unclear to me, since the authors are starting from a different “statistical interpretation” point of view. I need to find time to read this and think about it.

    Lee,
    Thanks for the comment. I look forward to seeing your book. In your categorization, I’m following option 2, and my question is being asked in that context.

    Scott,
    My initial thought was to ask Banks about his views, since those were what Weinberg was referring to, but as mentioned above, I see there is a good source already for his answer. If you do talk to Weinberg, I would be curious to know more about his discussions with Banks, whether he really would characterize Banks as seeing probability as having the same origin as in classical mechanics, and if so why he sees getting out Born’s rule as the problem with this.

  35. Blake Stacey says:

    I hesitate to say what the “best” critiques of the Zurekian program have been, but arXiv:1406.4126 and arXiv:1603.04845 may provide a reasonable jumping-off point.

  36. Peter Woit says:

    Blake Stacey,
    Those aren’t very convincing. Especially the second: four pages in large type?

    The question of how classical emerges from quantum is clearly a complex and subtle subject and I don’t see the value of crude arguments like Kastner’s four pager.

  37. Blake Stacey says:

    The problem is that in quantum foundations, the state of the literature is appalling. Clear statements of what is presumed in approach X, what is novel about Y, whether Z is seen as successful, etc., are harder to find than they should be. Issues that are, as you say, complex and subtle, get addressed in ways that appear at first to be sophisticated, but then turn out to be passing off intricate notation as subtlety. (Maybe this is not so different from the state of affairs in the study of non-Riemannian hypersquares, now that I think about it.) Sometimes the best one can do is a statement like Mateus Araújo’s above: “But the main problem with his proof is that he avoids the hard question of what probability is, he just assumes that there is a probability that somehow makes sense.” Yet statements that plain don’t make for papers in Stud. Hist. Phil. Mod. Phys.

    (Looking at what I posted before, I think I got snowed under by open arXiv tabs; the second link I meant to give was arXiv:1404.2635, which has a little evaluation of what decoherence can and can’t do at the very end.)

  38. Peter Woit says:

    Blake Stacey,
    Thanks, that Schlosshauer reference is much more promising. I was impressed by his book on the topic, should go back and look at that again.

    The literature on quantum foundations I definitely find frustrating. Part of this is its sheer size; part of it is that it’s difficult to figure out what people are actually saying, with, besides lots of notation and formalism, thickets of natural-language words applied to contexts outside of those where they have clear meanings.

  39. Blake Stacey says:

    An older paper by Schlosshauer and Fine, arXiv:quant-ph/0312058, goes into more detail about where probabilities are introduced so that probabilities can come back out.

  40. Bruno Galvan says:

    A possibility (maybe the only one) for answering “yes” to the question “is the source of probability in quantum mechanics the same as in classical mechanics: uncertainty in the initial state of the measurement apparatus + environment?” (without introducing hidden variables) is that there is no collapse, quantum states always evolve deterministically, and the different outcomes of a measurement depend on different initial microstates of system + apparatus + environment. A line of research in this sense has been developed by Schulman in his book Time’s Arrows and Quantum Measurement (from chapter 6 on). In order that the final state of the laboratory not be a superposition of macroscopically different states, the initial microstate must be extremely fine-tuned. To avoid this drawback Schulman proposes two-time boundary conditions, with a teleological flavor similar to that of the least action principle.

  41. Lee Smolin says:

    Dear Peter,

    I appreciate you are trying to follow path 2: “attempt to derive probabilities from a formalism that has only unitary, Schrödinger evolution, which has no notion of probabilities to begin with”. The point of my remark is that this is much harder than it seems at first. A lot of really smart people have devoted years to trying to make this work and have not convincingly succeeded. Several arguments, such as Everett’s original attempt and related arguments of Hartle, Finkelstein, Banks, etc., turn out to be circular because they sneak in a measure related to probability and/or a special role for measurement. Then there are issues with the use of decoherence, first pointed out by Abner Shimony: the dynamics is unitary and reversible, so there is a quantum Poincaré recurrence time after which the state recoheres. So if you attempt to argue that decoherence defines the branches, you can’t get an irreversible outcome to associate objective probabilities to.

    It thus seems you also have to give up objective notions of probability, so what you end up trying to show is that observers should choose their subjective probabilities as if Born’s rule were correct, when it is actually false. Would this much weaker notion of probability satisfy you?

    So my query to you: what are you willing to give up in your beliefs about probability to make route 2 succeed?

    Thanks,

    Lee

  42. Anonymous says:

    the dynamics is unitary and reversible, so there is a quantum Poincaré recurrence time after which the state recoheres. So if you attempt to argue that decoherence defines the branches, you can’t get an irreversible outcome to associate objective probabilities to.

    So what? The same problem exists in the derivations of thermodynamics from the equations of classical mechanics, because they are time-reversible, and even if the dynamics is chaotic the Liouville theorem still stands and you get Poincaré recurrences. Boltzmann knew that his “mechanical” entropy wasn’t really time-irreversible. Supposedly he riposted to Zermelo, who argued that the entropy will return to its original value after a Poincaré time, with “you’ll have to wait a long time”, and to Loschmidt, who argued that the entropy will return to its original value if the velocities of all particles were reversed, with “try to reverse them”. Has this ever been satisfactorily solved? I’ve read that Prigogine tried to solve it, but his mathematics didn’t quite work.

  43. Another Peter says:

    It seems you might be interested in looking at

    http://www.compphys.org/pdf/Ann.Phys347.45.2014.pdf

    and some of their several follow-up papers.

  44. RBG says:

    I’m not smart enough to know how useful it is, but this topic of probability in the foundation of quantum mechanics is discussed in detail in several chapters of Karl Popper’s autobiography Unended Quest.

  45. Dear Lee Smolin,

    Are you by any chance confusing Everett’s derivation with the old frequentist derivations by Graham, DeWitt, and others? Because those derivations do suffer from the problem you point out (and more), but Everett’s does not. He did derive the Born rule to start with, and having it he can legitimately claim that observing non-Born relative frequencies is improbable. The problem with his derivation is that he starts from extremely strong assumptions (essentially that the probability of a branch i depends only on its amplitude α_i, and that states must be normalised with the 2-norm), so nobody regards his derivation as better than just postulating the Born rule.
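
    (Spelled out, the argument is a Cauchy functional equation: if the measure $m(\alpha)$ of a branch depends only on $|\alpha|$, and is additive under coarse-graining, $m(\alpha) = m(\alpha_1) + m(\alpha_2)$ whenever $|\alpha|^2 = |\alpha_1|^2 + |\alpha_2|^2$ as 2-norm normalisation requires, then $g(t) := m(\sqrt{t})$ is additive, hence linear given mild regularity, and $m(\alpha) = |\alpha|^2$ up to an overall constant.)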

    Also, I don’t understand what you mean by “objectively Born’s Rule is false” in the Deutsch-Wallace approach. They regard Born’s rule as a guide to rational action, and prove that you should follow it. What’s false about it?

  46. Peter Woit says:

    Mateus Araújo,

    I finally got a chance to start looking at
    https://arxiv.org/abs/quant-ph/0405161
    which is the kind of thing I was looking for. I’ve gotten a lot out of other things Klaas Landsman has written; looking through his papers, I’m interested to see that he’s quite concerned with the subtleties of the classical limit, likely with one motivation being to see whether these give insight into the measurement problem.

  47. Peter Woit says:

    Lee,
    The problem here is that I lack much in the way of beliefs about probability to either stick to or give up. And I know that you’re correct that trying to get probability from the quantum formalism minus probability leads to great difficulties (and circularities).

    I guess my fundamental point of view though is that I find the QM axioms (minus probability) to be part of an extremely compelling mathematical picture, and the probability-based results to be extremely well tested, all of this so much so that I’m not optimistic you’re going to find room for something different. I’d be a lot more convinced there was a problem here that needed some new ingredient to explain if I had in hand a detailed quantum model of a realistic measurement, and could be sure that the apparent “measurement problem” was real, and not getting resolved in a subtle way.

  48. akhmeteli says:

    Peter Woit wrote: “I guess my fundamental point of view though is that I find the QM axioms (minus probability) to be part of an extremely compelling mathematical picture, and the probability-based results to be extremely well tested, all of this so much so that I’m not optimistic you’re going to find room for something different.”

    Irreversibility is also extremely well tested, yet, strictly speaking, it is incorrect. The results by Allahverdyan, Balian and Nieuwenhuizen that I quoted show that the projection postulate and the Born rule are good approximations, but still approximations, so they cannot be derived from quantum axioms minus probability as exact laws (let me also note that the projection postulate and the Born rule do not reflect the fact that any measurement requires some time to perform). On the other hand, if one believes that probability-based results are exact laws, one gets some strong results, such as violations of the Bell inequalities in quantum theory, that cannot be proven based on QM axioms minus measurement theory.

    As for “room for something different”… It is with some hesitation that I would like to mention mathematical results of my paper http://link.springer.com/content/pdf/10.1140%2Fepjc%2Fs10052-013-2371-4.pdf (Eur. Phys. J. C (2013) 73:2371 ). One of the models I consider there is scalar electrodynamics (Klein-Gordon field minimally interacting with electromagnetic field). It turns out the Klein-Gordon field can be algebraically eliminated (in the unitary gauge), and the resulting equations for the electromagnetic field describe its independent evolution. This local theory can be embedded into a quantum field theory using generalized Carleman linearization. Thus, even if a quantum field theory’s predictions are in agreement with experiments, the results of the experiments can also be explained by a local theory.

  49. GoletaBeach says:

    https://arxiv.org/abs/1406.5178 is a bit peculiar from an experimentalist’s perspective… maybe OK but hardly communicative… G. Baym, Lectures on QM, 1969 & 1973, end of chapter 14, is much better, at least for those who build experiments. But I’m not sure neutral atoms can be born in EPR-type states… photons present other experimental challenges… one would need the equivalent of Baym for the gamma measuring process.

    Without practical designs of apparatus, it may be fruitless to speculate.

  50. Peter Woit says:

    Goleta Beach,
    It stays away from the measurement problem, but for the experimentally inclined a wonderful book is Haroche and Raimond, Exploring the Quantum.

    All,
    Thanks for pointers that have been very helpful. I’ve added an update to the posting listing some of these. All of this has very much deepened my understanding (as well as my confusion…) about these issues.
