Sean Carroll’s new (available in stores early September) book, Something Deeply Hidden, is a quite good introduction to issues in the understanding of quantum mechanics, unfortunately wrapped in a book cover and promotional campaign of utter nonsense. Most people won’t read much beyond the front flap, where they’ll be told:
Most physicists haven’t even recognized the uncomfortable truth: physics has been in crisis since 1927. Quantum mechanics has always had obvious gaps—which have come to be simply ignored. Science popularizers keep telling us how weird it is, how impossible it is to understand. Academics discourage students from working on the “dead end” of quantum foundations. Putting his professional reputation on the line with this audacious yet entirely reasonable book, Carroll says that the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.
This kind of ridiculous multi-worlds woo is by now rather tired; you can find variants of it in a host of other popular books written over the past 25 years. The great thing about Carroll’s book, though, is that (at least if you buy the hardback) you can tear off the dust jacket, throw it away, and unlike earlier such books, you’ll be left with something well-written, and if not “entirely reasonable”, at least mostly reasonable.
Carroll gives an unusually lucid explanation of what the standard quantum formalism says, making clear the ways in which it gives a coherent picture of the world, but one quite a bit different than that of classical mechanics. Instead of the usual long discussions of alternatives to QM such as Bohmian mechanics or dynamical collapse, he deals with these expeditiously in a short chapter that appropriately explains the problems with such alternatives. The usual multiverse mania that has overrun particle theory (the cosmological multiverse) is relegated to a short footnote (page 122) which just explains that that is a different topic. String theory gets about half a page (discussed with loop quantum gravity on pages 274-5). While the outrageously untrue statement is made that string theory “makes finite predictions for all physical quantities”, there’s also the unusually reasonable “While string theory has been somewhat successful in dealing with the technical problems of quantum gravity, it hasn’t shed much light on the conceptual problems.” AdS/CFT gets a page or so (pages 303-4), with half of it devoted to explaining that its features are specific to AdS space, about which “Alas, it’s not the real world.” He has this characterization of the situation:
There’s an old joke about the drunk who is looking under a lamppost for his lost keys. When someone asks if he’s sure he lost them there, he replies, “Oh no, I lost them somewhere else, but the light is much better over here.” In the quantum-gravity game, AdS/CFT is the world’s brightest lamppost.
I found Carroll’s clear explanations especially useful on topics where I disagree with him, since reading him clarified for me several different issues. I wrote recently here about one of them. I’ve always been confused about whether I fall in the “Copenhagen/standard textbook interpretation” camp or the “Everett” camp, and reading this book got me to a better understanding of the difference between the two, which I now think to a large degree comes down to what one thinks about the problem of the emergence of the classical from the quantum. Is this a problem that is hopelessly hard or not? Since it seems very hard to me, but I do see that limited progress has been made, I’m sympathetic to both sides of that question. Carroll does at times stray too far into the unfortunate territory of, for instance, Adam Becker’s recent book, which tried to make a morality play out of this difference, with Everett and his followers fighting a revolutionary battle against the anti-progress conservatives Bohr and Heisenberg. But in general he’s much less tendentious than Becker, making his discussion much more useful.
The biggest problem I have with the book is the part referenced by the unfortunate material on the front flap. I’ve never understood why those favoring so-called “Multiple Worlds” start with what seems to me like a perfectly reasonable project, saying they’re trying to describe measurement and classical emergence purely using the bare quantum formalism (states + equation of motion), but then usually start talking about splitting of universes. Deciding that multiple worlds are “real” never seemed to me to be necessary (and I think I’m not the only one who feels this way; evidently Zurek also objects to this). Carroll in various places argues for a multiple-world ontology, but never gives a convincing argument. He finally ends up with this explanation (pages 234-5):
The truth is, nothing forces us to think of the wave function as describing multiple worlds, even after decoherence has occurred. We could just talk about the entire wave function as a whole. It’s just really helpful to split it up into worlds… characterizing the quantum state in terms of multiple worlds isn’t necessary – it just gives us an enormously useful handle on an incredibly complex situation… it is enormously convenient and helpful to do so, and we’re allowed to take advantage of this convenience because the individual worlds don’t interact with one another.
My problem here is that the whole splitting thing seems to me to lead to all sorts of trouble (how does the splitting occur? what counts as a separate world? what characterizes separate worlds?), so if I’m told I don’t need to invoke multiple worlds, why do so? According to Carroll, they’re “enormously convenient”, but for what (other than for papering over rather than solving a hard problem)?
In general I’d rather avoid discussions of what’s “real” and what isn’t (e.g. see here) but, if one is going to use the term, I am happy to agree with Carroll’s “physicalist” argument that our best description of physical reality is as “real” as it gets, so the quantum state is preeminently “real”. The problem with declaring “multiple worlds” to be “real” is that you’re now using the word to mean something completely different (one of these worlds is the emergent classical “reality” our brains are creating out of our sense experience). And since the problem here (classical emergence being just part of it) is that you don’t understand the relation of these two very different things, any argument about whether another “world” besides ours is “real” or not seems to me hopelessly muddled.
Finally, the last section of the book deals with attempts by Carroll to get “space from Hilbert space”, see here, which the cover flap refers to as “His [Carroll’s] reconciling of quantum mechanics with Einstein’s theory of relativity changes, well, everything.” The material in the book itself is much more reasonable, with the highly speculative nature of such ideas emphasized. Since Carroll is such a clear writer, reading these chapters helped me understand what he’s trying to do and what tools he is using. From everything I know about the deep structure of geometry and quantum theory, his project seems to me highly unlikely to give us the needed insight into the relation of these two subjects, but no reason he shouldn’t try. On the other hand, he should ask his publisher to pulp the dust jackets…
Update: Carroll today on Twitter has the following argument from his book for “Many Worlds”:
Once you admit that an electron can be in a superposition of different locations, it follows that a person can be in a superposition of having seen the electron in different locations, and indeed that reality as a whole can be in a superposition, and it becomes natural to treat every term in that superposition as a separate “world”.
“Becomes natural” isn’t much of an argument (faced with a problem, there are “natural” things to do which are just wrong and don’t solve the problem). Saying one is going to “treat every term in that superposition as a separate ‘world’” may be natural, but it doesn’t actually solve any problem; instead it creates a host of new ones.
Update: Some places to read more about these issues.
The book Many Worlds?: Everett, Quantum Theory and Reality gathers various essays, including
Simon Saunders, Introduction
David Wallace, Decoherence and Ontology
Adrian Kent, One World Versus Many
David Wallace’s book, The Emergent Multiverse.
Blog postings from Jess Riedel here and here.
This from Wojciech Zurek, especially the last section, including parts quoted here.
I may be misunderstanding the question, but I don’t think you need any complicated explanation of classical emergence from the quantum in order to extract the many-worlds picture, provided you believe that the universe is completely described by a quantum state and that Schrödinger’s equation describes how it evolves. It falls out of the linearity, right? If the environment starts in some state psi_E and the thing you’re measuring has a state that lives in some other small Hilbert space, then the state that (1/sqrt(2))((psi_1 + psi_2) tensor psi_E) evolves into has to be a linear combination of the states that (psi_1 tensor psi_E) and (psi_2 tensor psi_E) evolve into, hence the macroscopic superposition.
I can see how you might believe these assumptions more readily if they were coupled with a good description of why macroscopic stuff looks classical; if this part of the story were somehow broken, there would be more of a reason to suspect that quantum mechanics needs to be modified. (Is this all you’re saying?) But it seems pretty straightforward that the Everettian picture is what these assumptions imply.
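The linearity argument above can be checked with a small numerical sketch. This is a toy illustration only: the dimensions, the random unitary standing in for Schrödinger evolution, and all variable names are invented for the example.

```python
import numpy as np

# Toy version of the linearity argument: a 2-state "measured" system and a
# 4-state "environment". Any unitary U on the joint space stands in for
# Schrödinger evolution; nothing matters here except that U is linear.
rng = np.random.default_rng(0)

# Random unitary on the 8-dimensional joint Hilbert space (via QR decomposition).
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(A)

psi_E = np.array([1.0, 0.0, 0.0, 0.0])   # environment starts in a fixed state
psi_1 = np.array([1.0, 0.0])             # electron "here"
psi_2 = np.array([0.0, 1.0])             # electron "there"

branch_1 = U @ np.kron(psi_1, psi_E)     # evolution of each product state
branch_2 = U @ np.kron(psi_2, psi_E)

superpos = (psi_1 + psi_2) / np.sqrt(2)
evolved = U @ np.kron(superpos, psi_E)

# Linearity forces the evolved superposition to be the superposition
# of the two evolved branches -- the macroscopic superposition.
assert np.allclose(evolved, (branch_1 + branch_2) / np.sqrt(2))
```

Nothing about the dynamics beyond linearity is used: the evolved state of the superposition has no choice but to be the sum of the two branches.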
Now, the many-worlds story comes from taking a physical model that performs very well in all the domains where we’ve tested it and inferring that it must also apply to the whole universe, a domain in which we very much have not tested it (whatever that would even mean). I don’t think it’s totally unreasonable to balk at the many worlds and conclude that it’s a sign that the standard QM story must be missing something. But (a) there doesn’t seem to be any experimental evidence for such a thing, and (b) I don’t think there’s a great theoretical candidate for what missing thing we could add that would solve the problem convincingly.
The “Map of Madness” mentioned above (arXiv:1509.04711) does a pretty good job of finding what one might consider “canonical” sources for the various interpretations it tabulates.
“It seems to be a magical property of QM that for practical purposes almost all of the information about our incomplete knowledge can be ignored, and we can work entirely with mixed states (density operators). This seems like something which deserves an explanation in any account of an interpretation of quantum mechanics.”
That is an excellent point. There is, however, an explanation. If you use true randomness to prepare your ensembles, and model the preparation procedures in the Many-Worlds way, then any two preparation procedures that result in the same density matrix for subsystem A must be related by a unitary transformation on subsystem B. Then it is clear why you can’t distinguish the preparation procedures by doing measurements on A alone: that would violate relativity! You would be able to find out whether some unitary transformation was applied to subsystem B, but that application can be done at space-like separation.
For more details, see the blog post I wrote about it.
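The claim that a unitary acting only on subsystem B cannot change what is observable on A can be illustrated with a minimal two-qubit sketch (a toy example of my own, not the calculation from the blog post; names are invented):

```python
import numpy as np

def rho_A(psi):
    """Reduced density matrix of qubit A: partial trace over qubit B
    of a two-qubit pure state (basis order |00>, |01>, |10>, |11>)."""
    m = psi.reshape(2, 2)               # row index = A, column index = B
    return m @ m.conj().T

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard, applied to B only
other = np.kron(np.eye(2), H) @ bell          # a different preparation of AB

# The two global states are different...
assert not np.allclose(bell, other)
# ...but measurements on A alone cannot tell them apart:
# both reduced density matrices equal the maximally mixed state I/2.
assert np.allclose(rho_A(bell), rho_A(other))
```

Here the two preparation procedures are related by a unitary (the Hadamard) acting on B alone, and, as the comment says, the density matrix on A is identical.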
“Once you admit that an electron can be in a superposition of different locations, it follows that a person can be in a superposition of having seen the electron in different locations, and indeed that reality as a whole can be in a superposition, and it becomes natural to treat every term in that superposition as a separate “world”.”
Ummm, no. This ignores the amplification process necessary to make the behavior of the electron visible to the person. Amplifiers are inherently noisy, which means they fuzz things out and destroy quantum superpositions. There is a very nice paper about this, which is, unfortunately, probably rather hard to find: R. J. Glauber, “Amplifiers, attenuators, and Schrödinger’s cat”, New Techniques in Quantum Measurement Theory, volume 480 of the Annals of the New York Academy of Sciences (1986).
Here is a copy, online but paywalled.
”The best effort towards that I can find is the program of working to understand decoherence, the preferred basis problem and perhaps quantum Darwinism. Do the results of this program imply the ‘reality’ of many worlds?”
Surely not. In https://arxiv.org/pdf/1404.2635, Maximilian Schlosshauer explains in Section VII how little decoherence can contribute to the foundations. In particular, he writes: “Since decoherence follows directly from an application of the quantum formalism to interacting quantum systems, it is not tied to any particular interpretation of quantum mechanics, nor does it supply such an interpretation”.
”It’s not as if many worlds are postulated to explain anything, rather they are the consequence of applying the laws of quantum mechanics to the whole universe.”
The laws of quantum mechanics neither contain the notion of a world nor that of a superposition in a preferred basis (which is usually used to motivate worlds). Thus many worlds cannot be a consequence of the laws of quantum mechanics alone. One must add quite some fancy, very controversial interpretation stuff to get many worlds….
The problem with just saying it is the Hamiltonian that determines the preferred basis is, “which Hamiltonian, and how does it do this?”
For the case of a quantum particle, I see arguments for
1. configuration space basis: locality of interactions
2. momentum space basis: these are energy eigenstates for the free particle, so good basis for weak interactions.
3. coherent state basis: these have a nice relation to the classical limit.
For the case of a spin-1/2 degree of freedom, I have no idea how this is supposed to work.
The problem is how do you get from “the state is a macroscopic superposition” to “this corresponds to multiple worlds”. In particular, “superposition with respect to which basis?” is the “preferred basis” problem. That’s one problem you have to solve before the “multiple worlds” picture even begins to make any sense.
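To make the “which Hamiltonian, and how?” question concrete, here is a minimal toy model of my own (the interaction Hamiltonian H_int = g·σz⊗σx and all the parameter choices are invented for illustration) showing how an interaction Hamiltonian can single out a pointer basis: σz eigenstates of the system come through the interaction unentangled, while their superpositions decohere.

```python
import numpy as np

# Toy system qubit coupled to one environment qubit via H_int = g * sz (x) sx.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
M = np.kron(sz, sx)                     # H_int / g; note M @ M = identity

def U(theta):
    """exp(-i*theta*M), computed using M^2 = I."""
    return np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * M

def rho_system(psi):
    m = psi.reshape(2, 2)               # row index = system, column = environment
    return m @ m.conj().T

env = np.array([1.0, 0.0])                # environment starts in |0>
up = np.array([1.0, 0.0])                 # sigma_z eigenstate |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # superposition in the z basis

theta = np.pi / 4                         # coupling strength * time
rho_up = rho_system(U(theta) @ np.kron(up, env))
rho_plus = rho_system(U(theta) @ np.kron(plus, env))

# The sigma_z eigenstate stays pure: purity Tr(rho^2) = 1.
assert np.isclose(np.trace(rho_up @ rho_up).real, 1.0)
# The z-superposition has fully decohered: z-basis off-diagonals vanish.
assert abs(rho_plus[0, 1]) < 1e-8
```

In this sketch the z basis is “preferred” only because σz commutes with the chosen interaction; the open question being debated above is whether realistic Hamiltonians determine the basis this cleanly.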
“The laws of quantum mechanics neither contain the notion of a world nor that of a superposition in a preferred basis (which is usually used to motivate worlds). Thus many worlds cannot be a consequence of the laws of quantum mechanics alone.”
The worlds are part of emergent reality, not fundamental reality. The laws of quantum mechanics also do not contain the notion of trains, but this doesn’t make them any less real.
“One must add quite some fancy, very controversial interpretation stuff to get many worlds….”
On the contrary, Many-Worlds is the only interpretation that takes quantum mechanics as it is. It is to get rid of the multiple worlds that you need “fancy, very controversial interpretation stuff”. To note:
Denying an objective reality (QBism).
Postulating a classical reality irreducible to the quantum laws (Copenhagen).
Postulating hidden variables (Bohmian mechanics).
Changing the Schrödinger equation (collapse models).
“The laws of quantum mechanics also do not contain the notion of trains, but this doesn’t make them any less real.”
I thought the problem was to get the trains from QM, and I don’t see how MWI does it. I agree with Peter Woit when he writes that you have to solve the “preferred basis” problem before the “multiple worlds” picture even begins to make any sense.
There’s something else that bothers me. What is this “environment” that causes decoherence? I suppose I’m wrong, but I sense a certain circularity. Environments cause decoherence, and if something doesn’t cause decoherence in a certain system, it doesn’t count as an “environment” of that system.
”The worlds are part of emergent reality, not fundamental reality. ”
See Peter Woit’s preceding post for the extra input needed to infer something emergent.
”Many-Worlds is the only interpretation that takes quantum mechanics as it is.”
Maybe it used to be the only interpretation that claimed that, but now there is also the thermal interpretation, which is based solely on the unitary dynamics of the universe:
It works in a single world and features none of your objections to other interpretations:
”Denying an objective reality (QBism).
Postulating a classical reality irreducible to the quantum laws (Copenhagen).
Postulating hidden variables (Bohmian mechanics).
Changing the Schrödinger equation (collapse models).”
It has been pointed out by more than a few quantum foundations philosophers, including Ruth Kastner, that Quantum Darwinism, in order to be observer independent, must assume a partition into environment + pointer + system, which is the very thing it sets out to derive. Kastner posits an observer-independent collapse theory, which repairs that hole but springs another leak (non-unitary evolution of the Schrödinger equation). This is the common running theme in the foundations literature: which assumption do you deny, and how do you account for the inconsistency that arises?
@Bill: It shouldn’t be circular.
You can choose many, many partitions into environment+pointer+system, and most of them don’t behave in ways at all consistent with Quantum Darwinism. So what you really need to do is show that there exists a partition with the right properties.
And of course, you need assumptions to ensure that this partition exists.
Assuming that such a partition exists at time t, and showing that such a partition exists for all times not too long after that is a reasonable way to proceed. This partition is not going to continue existing when the universe has died a heat death, so all you can ask for is that it remains for some time.
“I agree with Peter Woit when he writes that you have to solve the “preferred basis” problem before the “multiple worlds” picture even begins to make any sense.”
The preferred basis problem has been solved, for fuck’s sake. Everybody seems to love throwing around the phrase “preferred basis problem” without having the faintest idea what they’re talking about. If you want to be taken seriously, at least describe what the problem is and why you think the canonical solution is not satisfactory. Argh.
The problem was to explain why the basis corresponding to the measurement outcomes was the one corresponding to the quasi-classical worlds. Otherwise you couldn’t derive the fact that a state of the form |0>|M_0> + |1>|M_1> corresponds to a superposition of a world with result zero and a world with result one. Everett kind of postulated that measurement outcomes do correspond to worlds, and I think that’s fine: it clearly works. He was attacking the problem from the top-down direction, guessing what the quantum dynamics must be in order to solve the measurement problem. That’s profoundly unsatisfactory from the philosophical point of view, though, because you are introducing in your theory special “measurement” devices which happen to split the world according to their outcomes. A proper reductionist theory must describe measurements in terms of quantum dynamics, and explain why they split the world according to the outcomes.
And this has been done! It’s what this story about decoherence and pointer states and quantum Darwinism is all about. We have learned, from the 70s onwards and in more and more detail, that measurement devices are those that entangle the measured system with complex quantum systems; that their dynamics are nothing special, but a generic feature of system+environment interaction, called decoherence; that the measurement outcomes correspond to the pointer states, precisely those stable under decoherence; and that the subsequent dynamics of these pointer states are effectively split from each other, again because they have decohered. That’s how you go from a superposition between two spins to a superposition of non-interacting quasi-classical worlds. Don’t take my word for it, read a fucking paper about it.
“There’s something else that bothers me. What is this “environment” that causes decoherence? I suppose I’m wrong, but I sense a certain circularity. Environments cause decoherence, and if something doesn’t cause decoherence in a certain system, it doesn’t count as an “environment” of that system.”
Yes, you are wrong. Decoherence is not about tautologically classifying systems into “environment” and “not environment”. It’s about showing how the usual interactions between some quantum systems – the prototypical example between a dust mote and air – cause them to be entangled, and thus interference in the system of interest effectively unobservable.
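The dust-mote picture can be sketched quantitatively with a standard toy model (the coupling angle and environment size here are invented choices): each weak interaction with an environment qubit records a little which-path information, and the system’s interference terms are damped by the product of the environment-state overlaps.

```python
import numpy as np

theta = 0.1  # weak per-scattering coupling angle (arbitrary small choice)

# Each environment qubit starts in |0> and is rotated one way or the other,
# depending on which branch of the system superposition it scattered off:
R_plus = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # branch |0>
R_minus = R_plus.T                                     # branch |1>: opposite angle
e0 = np.array([1.0, 0.0])
overlap = (R_minus @ e0) @ (R_plus @ e0)               # <E_1|E_0> = cos(2*theta)
assert np.isclose(overlap, np.cos(2 * theta))

def coherence(n_env):
    # Off-diagonal element of the system's reduced density matrix after
    # scattering n_env independent environment qubits: each one records a
    # little which-path information, damping the coherence by overlap^n.
    return 0.5 * abs(overlap) ** n_env

assert coherence(0) == 0.5     # no environment: full coherence
assert coherence(500) < 1e-4   # many weak interactions: effectively decohered
```

Nothing here requires deciding in advance what counts as an “environment”: any collection of degrees of freedom that ends up correlated with the system this way suppresses its interference terms.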
Yes, I understand that there are people who claim the preferred basis problem is solved, but I’m not convinced. As you note, the Everett “solution” isn’t convincing, it just assumes classical emergence solves the problem somehow. Zurek actually engages with the problem, but has he really solved it? Jess Riedel in one of the blog postings I linked to above, takes the point of view “I say that the decoherence program as led by Zeh, Zurek, and others is an improvement—a monumental advance—but not a complete solution.”
For another randomly chosen example, here commenter “Matt” (Matt Leifer??) objects to Sean Carroll’s claim that the problem is solved by “interactions are local in space”, and writes:
“The preferred-basis problem, on the other hand, is specially a problem for the many-worlds interpretation, and refers to the fact that we can trivially expand the overall state vector in any of an infinite set of different choices of basis for the overall Hilbert space, where each basis paints a very, very different picture of what those different worlds are and what are the probabilities associated to them. There is no guarantee that most of those choices of basis will involve basis states that look classical, and, on the other hand, there may well be two (or more) choices of basis that give classical-looking basis states and thus inconsistent sets of classical realities that are not related to each other in a classically understandable way.
The preferred-basis problem is unsolved, and probably unsolvable without somehow adding on more axiomatic principles to the many-worlds interpretation for choosing one basis over all the others.”
Unlike “Matt”, I’m willing to believe that the preferred-basis problem as he states it is solvable, with a better understanding of what happens in a usual sort of physical “measurement”, without new “axiomatic principles”. My suspicion is that you have to really engage with what the properties of the Hamiltonian are that give the sort of “classical world” we want (and just saying “the interaction Hamiltonian is local in space” doesn’t do it). I’ve not spent enough time understanding the details of what Zurek and others have done and have not done, but when I try and follow discussions of this, at crucial points they seem to me to be effectively assuming that an unsolved problem gets solved the way they want.
Commenter “Matt” doesn’t know what he’s talking about. As I explained above, the preferred basis problem was about explaining why measurement outcomes correspond to quasi-classical worlds, not about the trivial fact that a quantum state can be written in different bases. His assertion that “each basis paints a very, very different picture of what those different worlds are and what are the probabilities associated to them” is often repeated, but empty. Nobody has ever even tried to show that there are in fact such different worlds, and what they would be (except for the science fiction author Greg Egan in his novel Quarantine, which I highly recommend, but is not a scientific work). If one wants to claim that there is some other basis where we have worlds with blue dragons, the burden of proof is on them.
About your point of view, it’s hard to argue against wanting to have a better understanding of the measurement process, but I don’t see a concrete objection. Do you think that there could be something that we usually regard as a measurement apparatus, but whose outcomes do not correspond to pointer states? Or do you think that we can have some pointer states that give rise to quasi-classical worlds with some radically non-classical feature, such as not being well-localized in phase space or having blue dragons?
The fact that neither thing happens in all the systems we have studied is good evidence that they in fact do not happen. Moreover, it is hard to prove a negative.
I don’t see what is your problem with the “local interactions” argument. It is a great insight; Hamiltonians with interactions that are local in space will decohere spatial superpositions, and preserve states that are well-localized in phase space. Since the fundamental Hamiltonians we know are local, this does show that we won’t have quasi-classical worlds with delocalized states, so this blue dragon is dead. What other non-classical features still need excluding?
I don’t see how the decoherence problem will really be solved until you can answer questions like: given a physical system, when will the polarization of light decohere into the vertical/horizontal basis, when will it decohere into the right/left diagonal basis, and when will it decohere into the clockwise/counterclockwise basis? And how about situations where it decoheres, but in which there is no preferred basis; how do you model decoherence in these situations?
Finally, blue dragons are indeed relevant; in some sense, the theory of decoherence won’t be truly complete until you can prove that there are no quasi-classical worlds with a hidden basis that contains blue dragons. (Although making such a proof a requirement for accepting a theory of decoherence is much too demanding.)
It seems to me that pointer bases and quantum Darwinism only analyze the easy cases.
To be a bit more explicit than Mateus Araujo:
* Which Hamiltonian? Given that we are assuming here that QM is objectively correct, the one and only Hamiltonian that governs the evolution of the universal quantum state. Which, obviously, we don’t yet know exactly.
* Which basis: to get a quasi-classical large-scale world, the wavefunction must be sharply peaked (by macroscopic standards) in both configuration and momentum space, but only for coordinates representing particles bound into macroscopic objects. “Sharply peaked” meaning resolved into multiple disconnected islands, one per “world”. In this context “disconnected” allows the islands to be actually connected along one or a few dimensions, as when a position measurement is made on a delocalised particle. Owing to the very high dimensionality, O(Avogadro’s number) for a simple lab setup, such connections are FAPP impossible to utilize to demonstrate interference effects.
The states are neither momentum nor position eigenstates, both of which are unphysical, nor are they exact eigenstates of any unique universal operator; coherent states are an example. Zurek claims to have shown that einselection operates because the Hamiltonian can be expressed relatively simply in terms of position and momentum operators, and as a consequence, he says, when expressed in these spaces the wavefunction changes relatively smoothly. In contrast, whatever operator has eigenstates corresponding to non-pointer states, like, say, (live + dead) cat and (live – dead) cat, would have a wave function that changed very fast indeed. States that change relatively slowly (and are therefore short-term predictable) are necessary for the function of any life-form (or general IGUS – information gathering and utilising system), so life has evolved to operate in quasi-classical terms.
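The contrast between states “sharply peaked in both configuration and momentum space” and unphysical near-position-eigenstates can be seen in a short numerical sketch (the grid sizes and widths are arbitrary choices for illustration; hbar = 1):

```python
import numpy as np

N = 2048
x = np.linspace(-20, 20, N)
dx = x[1] - x[0]

def momentum_spread(psi):
    """Standard deviation of momentum for a wavefunction sampled on the x grid."""
    phi = np.fft.fftshift(np.fft.fft(psi))                  # momentum amplitudes
    p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    prob = np.abs(phi) ** 2
    prob /= prob.sum()
    mean = np.sum(p * prob)
    return np.sqrt(np.sum((p - mean) ** 2 * prob))

gaussian = np.exp(-x**2 / 2)              # coherent-state-like wavepacket
spike = np.exp(-x**2 / (2 * 0.1**2))      # approximate position eigenstate

# The Gaussian is localized in momentum as well (spread ~ 0.7), while the
# near-position-eigenstate is spread all over momentum space (spread ~ 7):
assert momentum_spread(gaussian) < 1.0
assert momentum_spread(spike) > 5.0
```

The Gaussian wavepacket is the kind of state that can sit in one of the “sharply peaked” islands described above; squeezing the position spread by a factor of ten blows up the momentum spread by the same factor, as the uncertainty relation demands.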
Light-matter interaction is one of the most studied problems in physics. Finding an open problem about how the polarisation degree of freedom entangles with the environment would be a feat in itself. But let’s say you do find such a situation where we don’t know whether the decoherence will happen in the vertical/horizontal or clockwise/counterclockwise basis. What is the problem then? We do have “classical” light in such polarisations; these are not blue dragons.
“And how about situations where it decoheres, but in which there is no preferred basis; how do you model decoherence in these situations? ”
What about that? Sounds like you’re just getting photons polarised in a random basis. Isn’t that pretty vanilla?
“Finally, blue dragons are indeed relevant; in some sense, the theory of decoherence won’t be truly complete until you can prove that there are no quasi-classical worlds with a hidden basis that contains blue dragons. (Although making such a proof a requirement for accepting a theory of decoherence is much too demanding.)”
I agree that if somebody did find such a basis of quasi-classical worlds with blue dragons it would be a problem, as it would amount to a prediction that blue dragons exist, and in reality they don’t (or even more fascinating, if somebody found such a basis that was different from the usual pointer states/coherent states basis, as it would imply that parallel to our reality of boring quasi-classical worlds there exists a hidden reality with blue dragons. That’s the premise of Greg Egan’s Quarantine). But absent such a discovery, proving that blue dragons are not predicted by quantum mechanics sounds like proving that Russell’s teapot is not there.
Thanks for your comments. I’m afraid I’ve reached the point where to get more out of this I’d have to spend serious time doing some more reading and thinking, and there are other projects which seem more likely to be fruitful that I should get back to instead.
Without doing such reading and thinking, I’m stuck unconvinced that invoking “many worlds” as an explanation really explains anything, and in a position I don’t like to be in, that of trying to figure out what is going on not by understanding something myself, but by seeing what experts say. From all I can tell, the best work done in this area has been that of Zurek and collaborators, and from what I can gather, Zurek comes down on the “many-worlds not necessary” side, while on the “is the preferred basis problem solved?” question I take Zurek as a yes, and Jess Riedel as “not a complete solution yet”.
The Carroll book is going to convince a lot of non-physicists that the problems of QM are resolved by the existence of multiple worlds. I hope it will have the effect that experts with a good understanding of what is really going on will find the appearance of the book a good opportunity to write something less propagandistic that gives interested non-experts a more balanced explanation of the state of the subject.
I hope so too. This is not a hope really founded on experience — my expectation is that people will continue to endorse whichever lazy answer they had already settled on, and to propagate the same dull arguments that were old by 1980 — but I shall hope anyway.
A last attempt to unblock the log-jam in your thinking about this. You have been told several times already, and so the fact that you don’t take it on board suggests to me that it may be the crux of your problem:
Invoking “many worlds” is *not* supposed, even by MWI-ers, to be an explanation of anything. The explanation is QM. Many worlds is a *logical consequence* of QM. Or so say the MWI-ers. So you shouldn’t be puzzling about what sort of thing many worlds explains, you should be puzzling about whether you can have a self-consistent, ontic, QM without getting many worlds as a by-product.
I think I was too kind to Zurek in an earlier comment. If you accept that decoherence is the explanation for the pure state to (apparent) mixed state transition, which he does, then you are committed to the mixed state being mixed only FAPP, and therefore you cannot consistently deny the existence of terms in the mixture that do not correspond to the world we experience. But Zurek did try to do that for many years, although I read something by him in the last decade or so (sorry, can’t remember where), which seemed to me to reluctantly accept the MWI deduction.
For the record, I am not fully convinced by MWI either. Probability is still a big problem, pace Wallace. QM could be wrong. It just seems to me that your criticism of MWI is surprisingly shallow.
I think it’s likely that you’re misunderstanding my point.
Recall that the context of this posting is a discussion of Sean Carroll’s book, which is devoted to making the case that invoking multiple worlds explains everything, when all that most people are going to get out of it is the news that:
“Carroll says that the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.”
This is what I’m criticizing. I don’t see how my accepting that there are many Peter Woits in the universe will end any crisis, or explain anything. What I decide to call a “real world”, my choice of “ontology” if you like, to me is more of a philosophical than scientific issue, unless you can make a good case that this choice is important to understanding how to solve a problem.
The interesting physics question I’m trying to understand is that of exactly how the usual rules we teach students about how the bare QM formalism relates to “measurements” can be derived rather than postulated. This is an attractive idea and I don’t see any evidence that this can’t be done. The decoherence/Zurek line of research has clearly made a lot of progress towards this goal. I remain confused about whether it has completely addressed certain problems, and various claims made by Wallace and others for solutions to these problems don’t seem convincing (the explanation for the Born rule lies in decision theory???). In trying to resolve these confusing issues, I still don’t see how invoking a “many worlds” ontology solves anything.
I also don’t think the approach to the probability problem taken by Wallace (and Carroll and Vaidman, for that matter) is the proper one. Sure, they can derive from reasonable assumptions that rational agents must assign subjective probabilities given by the Born rule. But who cares about subjective probabilities? The reason probabilities in quantum mechanics are interesting is precisely because they are objective, not subjective. And rational agents? Come on! Many-Worlds is a reductionist theory about an objective reality, this agent-centric nonsense belongs in Copenhagen.
I also think it is a huge missed opportunity, as making sense of objective probabilities is an age-old problem in philosophy, and the only solution I know is using many-world theories.
I’m using lower case here because the solution I know is using a classical many-world toy theory, Kent’s universe, not the quantum Many-Worlds itself. The quantum case still doesn’t have a satisfactory explanation.