A couple of recent discussions about quantum mechanics that may be of interest:
- There’s a recent paper out by Don Weingarten that looks like it might have a different take on the fundamental “many-worlds” problem of, as he writes:
how in principle the definite positions of the macroscopic world emerge from the microscopic matter of which it is composed, which has only wave functions but not definite positions.
My naive feeling about this has always been that the answer should lie in a full understanding of the initial state of the measurement apparatus (+ environment): it is our imperfect probabilistic understanding of the initial state that limits us to a probabilistic understanding of the final state. I found Weingarten’s investigation of this intriguing, although I’m not sure that the language of “hidden variables” is a good one here, given the use of that language in other kinds of proposals. By the way, Weingarten is an ex-lattice gauge theorist whom I had the pleasure of first meeting long ago during his lattice gauge theory days. He at some point left physics to work for a hedge fund; I believe he’s still in that business now.
Luckily for all of us, Jess Riedel has looked at the paper and written up some detailed Comments on Weingarten’s Preferred Branch, which I suggest anyone interested in this topic look at. Discussion would best take place at his blog, a much better-informed source than this one.
- Gerard ‘t Hooft has a remarkable recent preprint about quantum mechanics, with the provocative title of Free Will in the Theory of Everything. I fear that the sort of argument he’s engaging in, trying to ground physics in very human intuitions about how the world should work, is not my cup of tea at all. Instead, what has always fascinated me about quantum mechanics is its grounding in very deep mathematical ideas, and the surprising way in which it challenges our conventional intuitions by telling us about an unexpected new way to think about physics at a fundamental level.
For more discussion of the paper, there are Facebook posts by Tim Maudlin here and here in which he argues with ‘t Hooft. I confess that I wasn’t so sure whether to take the time to read these, and after a short attempt gave up, unable to figure out precisely what the argument was about (and put off by Maudlin’s style of argument. Do philosophers really normally behave like that?). Links provided here in case you have more interest in this than I do, or better luck getting something out of it.
I won’t judge Maudlin’s style of discussion, nor do I know whether other philosophers have a similar style. But as far as the content of the discussion goes (at least the parts in the above links), I believe the following is the crux of the matter.
Prof. ‘t Hooft seems to be fighting against a theorem (Bell’s theorem). Tim Maudlin has studied the theorem top-to-bottom, upside-down and inside-out, and serves as a voice for the theorem to push back.
In short, Prof. ‘t Hooft’s argument is that one can have a violation of Bell’s inequality in a local deterministic theory if one additionally imposes the law of “conservation of ontology” (I don’t want to go into a technical specification of what this means). Maudlin’s counterargument is that, if one does impose that conservation law, then that law must be nonlocal (as per Bell’s theorem), rendering the full theory nonlocal.
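For reference, the theorem at stake can be stated compactly in its CHSH form (textbook material, not specific to either side of the exchange). In any local hidden-variable theory with outcomes $A(a,\lambda), B(b,\lambda) = \pm 1$ for settings $a, b$ and a settings-independent distribution $\rho(\lambda)$, the correlations

$E(a,b) = \int d\lambda \, \rho(\lambda) \, A(a,\lambda) \, B(b,\lambda)$

satisfy

$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,$

while quantum mechanics with entangled spins reaches $2\sqrt{2}$ (the Tsirelson bound). Any deterministic theory reproducing the quantum value must therefore give up locality or settings-independence somewhere.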
And then there are a gazillion off-topic side arguments going back and forth, which are ultimately irrelevant to the main point. But the bottom line seems to be the above.
IMO, I fear that Prof. ‘t Hooft is on the losing side here: while fighting against a theorem can be very instructive, insightful and educational, eventually it has to fail.
A short abstract of the whole discussion can be found in Maudlin’s second Facebook post, near the end, in the allegorical dialogue between Vladimir and Estragon.
HTH, 🙂
Marko
It seems that until quantum mechanics runs into an experimental problem, it will be difficult to choose among the proposed solutions to the measurement problem. That’s why experimental results like Vinante et al. 2016, https://arxiv.org/abs/1611.09776, are the most interesting (to me).
I read most of the exchange but was hampered by my ignorance of the meaning of “ontological state”. Google didn’t help me. Can anyone provide a definition?
a,
The notion and some properties of ontological states are on page 15 of Prof. ‘t Hooft’s arXiv paper.
HTH, 🙂
Marko
Peter,
I read all the way through both of the (long!) posts. And, I have read enough stuff by a lot of philosophers that I can answer your question: “Do philosophers really normally behave like that?”
No. Maudlin is much more comprehensible and much more knowledgeable about science than most philosophers I have read. I suppose the obvious example for incomprehensibility is Hegel (whom I thankfully have not read!), but Maudlin is more comprehensible even than most twentieth-century “analytic” philosophers. As for his knowledge of science, compare Maudlin to Popper’s writings on QM or J. L. Mackie’s various papers on science: both Popper and Mackie are usually very comprehensible, and often insightful, but they got all mixed up when they wrote about science.
I know a fair amount about Bell’s work, and Maudlin quite accurately describes that work.
The difference between Maudlin and ‘t Hooft can be summarized very easily:
1) ‘t Hooft uses “locality” to refer to equal-time commutation relations, as is common among physicists. Maudlin uses “locality” in a more general sense defined by John Bell that applies to a much broader class of theories, including ones in which ETCRs make no sense. ‘t Hooft seems fairly clearly not to understand that Bell’s sense of the word is not the same as his. (A rough side-by-side of the two definitions appears after this list.)
2) ‘t Hooft, Maudlin, Bell, and pretty much everyone who has carefully thought about this agree that a sort of “superdeterminism” is logically possible, in which what experimenters choose to measure is pre-ordained from the beginning of the universe, and that, in principle, this would sort of allow one to evade Bell’s theorem. Maudlin (and most other people who have considered this possibility) thinks this idea is infinitely bizarre. ‘t Hooft disagrees.
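To spell out point 1 with standard formulations (my paraphrase, not a quote from either party): “locality” as ETCRs is the microcausality condition of quantum field theory,

$[\mathcal{O}_1(x), \mathcal{O}_2(y)] = 0 \quad \text{for spacelike separation, } (x-y)^2 < 0,$

whereas Bell’s “local causality” is a factorization condition on a candidate theory’s probabilities,

$P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda),$

which makes sense even for theories with no field operators at all. Quantum theory satisfies the first and cannot be reproduced by any theory satisfying the second.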
On point 2, I am personally with Maudlin, but ‘t Hooft could prove us both wrong if he could come up with any sort of actual concrete comprehensible theory which is superdeterministic and which reproduces the violations of Bell’s inequality that QM predicts.
So far, ‘t Hooft has not done so.
I’m pretty sure he never will, but then I have been surprised before.
Dave
‘t Hooft’s new, free book explains a lot of his thinking and the “ontological state”. Given his assumptions, it seems he can argue against Bell. The book is: The Cellular Automaton Interpretation of Quantum Mechanics
http://www.springer.com/us/book/9783319412849
Ah, well. Style is a matter of taste. For what it is worth, after many decades of people trying to tell ‘t Hooft that his whole project is doomed by Bell’s theorem, he got engaged enough in this discussion to actually sort out that he has been using some terminology (in particular “superdeterminism”) in a completely idiosyncratic way, which has meant that everyone has been talking past each other for these many years. Everything is becoming clarified.
It’s actually the first time I have written anyone a play in order to make a point. I’m rather proud of it.
Hi, Tim. I’m not quite sure the play worked — but then I did not really like the original by Sam Beckett, either.
Yes, I think ‘t Hooft just does not understand that he is using terms differently from many people interested in the subject, although I think the main problem is the two different meanings of “locality”: i.e., I think it is clear enough what is going on with “superdeterminism.”
Anyway, congratulations to both you and ‘t Hooft for trying to get through the communication problems. And, just maybe ‘t Hooft will come up with something and surprise both you and me: betting against Gerard ‘t Hooft is not the safest of bets!
Dave Miller
I think Peter is right. It is evident that the microscopic conditions of the initial state of the measurement instrument and environment are only incompletely known, and hence the outcome will be of a stochastic nature. Furthermore, what Bell really tells us is that quantum theory and the micro state of quantum spacetime are intrinsically non-local. This can also be learned from, e.g., black hole physics. This kind of non-locality is, however, not of an ordinary nature.
Hi Peter. Sorry this is off topic, but I just wanted to let you know about a recent pro-multiverse article: https://www.forbes.com/sites/startswithabang/2017/10/12/the-multiverse-is-inevitable-and-were-living-in-it/
Peter, have you read Feynman’s interpretation in his 1985 book QED (not to be confused with the 1965 book written before he discussed the problem with David Bohm)? In it, while ignoring Bohm’s pilot wave, Feynman argues that the path integral of QFT explains wave function collapse in first quantization. I’ve never seen any discussion of this on any high-profile site.
Basically, the different possible interactions of long-lived (on-mass-shell) particles with random field quanta appearing in the vacuum provide a physical mechanism for indeterminacy. A short-lived pair-production loop near the path of an electron’s orbit will affect that path, causing indeterminacy. Thus, Feynman comments (in a footnote!) that the uncertainty principle of first quantization is explained by taking as real possibilities the many different interactions possible between electrons (or other particles) and randomly occurring vacuum particles!
feynman fan,
I hadn’t seen that claim from Feynman (or anyone else) that QFT automatically solves the measurement problem. Doesn’t sound plausible at all….
Peter: please see figure 65 in the 1985 book called QED by Feynman to be convinced!
Discrete exchanges of Coulomb field quanta cause indeterministic electron orbits, in that analysis:
‘I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding [amplitudes for all possible paths, by way of the path integral] for all the ways an event can happen – there is no need for an uncertainty principle!’ – Richard P. Feynman, QED, pp. 55-56.
‘… when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. … we have to sum the [wavefunction amplitudes for each possible path] to predict where an electron is likely to be.’ – Richard P. Feynman, QED, pp. 84-5.
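As a toy numerical illustration of “adding amplitudes for all the ways an event can happen” (a minimal sketch of my own, not anything from QED; the wavelength, geometry, and free-particle action approximation are all assumptions made up for the illustration):

import numpy as np

# Toy two-slit interference: one complex amplitude exp(i*k*L) per path,
# with the action approximated by wavenumber * path length.
wavelength = 0.1                     # assumed, arbitrary units
k = 2 * np.pi / wavelength           # wavenumber
slits = [-0.5, 0.5]                  # assumed slit positions on the barrier
d_source, d_screen = 1.0, 1.0        # source-to-barrier, barrier-to-screen distances

screen = np.linspace(-2.0, 2.0, 401)
amplitudes = np.zeros_like(screen, dtype=complex)
for s in slits:
    # Path length: source -> slit, then slit -> each screen point x.
    path = np.hypot(d_source, s) + np.hypot(d_screen, screen - s)
    amplitudes += np.exp(1j * k * path)   # sum the amplitudes, one per path

intensity = np.abs(amplitudes) ** 2       # probability ~ |summed amplitude|^2
print(f"max/min intensity: {intensity.max():.3f} / {intensity.min():.3f}")
# Bright fringes approach 4 (constructive), dark fringes approach 0 (destructive).

Each individual path contributes a unit-modulus phasor; only the sum, squared, carries the interference pattern.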
Apologies if this isn’t apropos.
I watched this interview maybe a year ago, and the statement above jogged my memory. Check out around 8:55 (and a little before for some background). I believe what Gell-Mann is saying may be along the lines of what feynman fan is proposing. If I understand correctly, Gell-Mann is speculating on the notion that the path integral isn’t just a tool for calculating; it is, in some sense, fundamental.
I have no opinion on this, nor the intellect to rightly have one. Just thought it might be interesting to note.
https://www.youtube.com/watch?v=f-OFP5tNtMY
feynman fan and LMMI,
I don’t think the fact that you can formulate QM in a path integral formalism solves the measurement problem. If you interpret the path integral as a probability measure, one problem is that you need to do this in imaginary time; another is that it tells you about the wave function, whereas in measurement theory what governs the probability is the norm-squared of the wavefunction.
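Schematically (standard formulas, just spelling out the two problems): the real-time weight in

$\int \mathcal{D}\phi \; e^{iS[\phi]/\hbar}$

is oscillatory and not a probability measure; only after Wick rotation $t \to -i\tau$ does

$\int \mathcal{D}\phi \; e^{-S_E[\phi]/\hbar}$

become a positive weight, i.e. a genuine measure. And even then the path integral computes amplitudes $\psi$, while the Born rule assigns probabilities $P = |\psi|^2$.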
Feynman (in Fig 65 of QED) uses Euclidean space for the path integral. All he is saying is that the measurement problem of “wavefunction collapse” in QM is fake news because:
1. There is a separate wavefunction amplitude for each Møller scattering event in the interaction of the Coulomb field for a particle in QFT, whereas there is only a single wavefunction in QM because QM falsely uses a classical Coulomb field.
2. To solve the measurement problem, simply replace the classical Coulomb field potential in the Hamiltonian of QM with a proper Møller scattering QFT path integral, and you will no longer have a single wavefunction collapse paradox. Instead, you will see that you are dealing with numerous wavefunction amplitudes via the corrected (truly quantized) Coulomb field. If you try to “measure” the position of the electron, that measurement involves using a quantum field, and adds another Feynman interaction diagram (with its own wavefunction amplitude)!
feynman fan,
I took Quantum Mechanics from Feynman in the 1974-75 academic year and then took Intro to Elementary Particle Physics from him in the 1975-76 year.
He never offered the interpretation you are attributing to him.
He was, in fact, not much interested in the “measurement problem” one way or another (I tried to get him interested, to no avail).
What you have quoted is just the common-sense stuff about the path integral that all competent physicists have known about for decades. Sorry, but the “measurement problem” refers to something else. (And I am pretty sure Peter does not want me or anyone else to give you a tutorial on all this in this comments section!)
Comments by Miller and Maudlin above say that it is now realized that ‘t Hooft uses a definition of “superdeterminism” which is different from others’, the latter being that everything was fixed at genesis (I guess?).
I am mildly surprised, since the different definition seems to have been known for some time now; Hossenfelder wrote the following, and also, in a funny coincidence, I was told about this definition by an unrelated colleague last week:
“But really, this is a very misleading interpretation of superdeterminism. All that superdeterminism means is that a state cannot be prepared independently of the detector settings. That’s non-local of course, but it’s non-local in a soft way, in the sense that it’s a correlation but doesn’t necessarily imply a ‘spooky’ action at a distance because the backwards lightcones of the detector and state (in a reasonable universe) intersect anyway.”
http://backreaction.blogspot.co.uk/2013/10/testing-conspiracy-theories.html
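In equations, the assumption at stake is usually called measurement independence (standard notation; this is my gloss, not Hossenfelder’s): Bell’s derivation assumes the hidden-variable distribution does not depend on the detector settings,

$\rho(\lambda \mid a, b) = \rho(\lambda),$

and superdeterminism denies exactly this, allowing $\rho(\lambda \mid a, b) \neq \rho(\lambda)$, so the inequality can fail without any violation of the factorization condition.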
Note though that in http://backreaction.blogspot.co.uk/2016/01/free-will-is-dead-lets-bury-it.html she says that they themselves might not even agree on what they mean 🙂
Imho both cases are definitely not worth spending time and money on; I’m just mentioning it in the context of this discussion (which proves more absorbing than it ought to!).
On the multiverse front, it now seems that “relinquishing refutability” has entered the mainstream, and we are looking for the next big annoying thing to get rid of:
http://capp.ibs.re.kr/_prog/seminars/index.php?mode=V&site_dvs_cd=capp_en&menu_dvs_cd=0601&mng_no=9221