I’m on vacation in Europe, not in any mood to spend more time on this than just to point out that it’s the same usual tedious string theory promotional operation from the same people who have been at this for decades now. We have

- A PRL publication that has nothing at all to do with string theory, preprint here. This is about a purely classical PDE calculation in coupled EM + gravity.
- The researcher’s university puts out a press release.
- A story then appears where the usual suspects claim this is some sort of vindication for string theory and shows their loop quantum gravity opponents are wrong. There’s a lot of quite good information in the story about the actual classical calculation involved, but no indication of why one might want to be skeptical about the effort to enlist this result in the string vs. loop war.

While traveling I’ve seen a couple very good stories about physics online:

- A summary from Dennis Overbye about the current status of energy frontier HEP.
- An excellent long article by Philip Ball about quantum mechanics and the measurement problem.

I don’t know what to think about the very impressive article by P. Ball.

On one hand, it’s refreshing to see a sharp attack on the metaphysical features some people try to ascribe to quantum mechanics (it’s 2017, and posts on stackexchange about quantum measurement quickly get hijacked by supporters of parallel-universe and even conscious-observer interpretations).

On the other, it seems to me that his enthusiasm largely amounts to rephrasing the controversial terms rather than explaining them. How does the described scheme explain the statistical nature of the outcomes? Maybe I missed that, and I will definitely look into decoherence theory now, but informed comments would be helpful.

What I wonder about is how well established decoherence theory is. Statements like the following sound pretty strong; are they considered mainstream in quantum optics or other fields? (Of course, not being mainstream doesn’t render them wrong, etc.)

“A detailed theoretical analysis of decoherence carried out by Zurek and his colleagues shows that some quantum states are better than others at producing these replicas: they leave a more robust footprint, which is to say, more copies. These robust states are the ones that we can measure, and that ultimately produce a unique classical signature from the underlying quantum morass.”

Peter, your readers might be interested in this (freely accessible) Physics Today article by Ed Witten that introduced M-theory. It appeared in May 1997, two years after Witten had conjectured the existence of such a theory as a way to unify various flavors of string theory. http://physicstoday.scitation.org/doi/abs/10.1063/1.881616

tulpocid:

If the article is correct, then before 1996, there was no way to calculate the speed of decoherence, while today we know how to do it, and the results of the calculation match experiment. This is definitely great progress in understanding the nature of quantum mechanics and how it generates a classical-looking universe.

On the other hand, it is not clear that this is a satisfactory resolution of the philosophical questions on the nature of quantum mechanics. Certainly, not everybody is satisfied.

Correction to my previous comment: the article says that 1996 was when the first experiment comparing theoretical calculations of decoherence rate with experiment was done. The theory was developed over the previous decade or so. (I should have known this; I was paying attention during part of this time.)

Peter Shor,

I’m not sure how experiments checking decoherence are even possible. Experiments of necessity involve conscious observers. Decoherence is supposed to tell you what happens in the absence of observers.

Jeff M,

Maybe this is why the people who achieved it won the Nobel prize 🙂

(Whose tremendous importance was partly lost on the general audience, because it was the same year as the Higgs announcement, and it’s more fun to whine about the latter not immediately getting the prize than to explain quantum fundamentals. But I digress.)

Jeff M:

Did you read Philip Ball’s article? I quote:

In its simplest form, decoherence is the decay of the off-diagonal elements of the density matrix in the pointer basis (called *pointer states* in Ball’s article). This is a phenomenon we believe happens with or without conscious observers (although I assume it’s only been measured in their presence).

Decoherence theory is great, but do note that the consensus of the majority of the experts in the field (i.e. everyone I ever asked) is that decoherence does *not* solve the measurement problem. The reason is that QM does not give any rule to determine the pointer basis, and specifying it ad hoc in any way is actually equivalent in power to the collapse postulate itself, which we set out to explain by decoherence.

HTH, 🙂

Marko
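Marko’s definition above can be made concrete with a small numerical sketch (a toy model of my own, not anything from Ball’s article; the decoherence time tau is an invented parameter): a qubit density matrix whose off-diagonal coherences decay exponentially while the diagonal probabilities stay put.

```python
import numpy as np

# Toy model (not from Ball's article): a qubit in an equal superposition,
# written as a density matrix in the pointer basis.
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)

def decohere(rho, t, tau=1.0):
    """Multiply the off-diagonal (coherence) elements by exp(-t/tau);
    the diagonal, i.e. the classical probabilities, is untouched."""
    out = rho.copy()
    decay = np.exp(-t / tau)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

rho_late = decohere(rho, t=10.0)
print(np.round(rho_late.real, 5))
```

The diagonal stays (0.5, 0.5) while the coherences die off: what survives is an improper mixture, not a single outcome, which is exactly the gap between decoherence and the collapse postulate that the comment describes.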

Put simply, Marko, I think of decoherence as an explanation of how all the alternative quantum realities can disappear (why we could never see Schrödinger’s cat alive and dead at the same time; the atoms in the cat quickly decohere), but it does not explain which of those realities we actually see, as you say – the pointer basis.

But I think just the fact that it explains how the alternate realities can disappear is damaging for the many-worlds multiverse.

For me, this is the kicker: “Decoherence doesn’t completely neutralise (sic) the puzzle of quantum mechanics. Most importantly, although it shows how the probabilities inherent in the quantum wave function get pared down to classical-like particulars, it does not explain the issue of uniqueness: why, out of the possible outcomes of a measurement that survive decoherence, we see only one of them.”

I suppose it’s nice that one doesn’t have to explain away an infinity of alternate realities. However, and it could very well be I’ve misunderstood something, it seems to me that if whatever process you invoke (Darwinian or otherwise) doesn’t select a unique reality that we conscious minds perceive (and always agree on), it’s not clear that progress has really been made. To my untrained eye the advantage of paring things down appears to be more psychological than anything else. Instrumentalism still seems to rule as the only truly honest way to cope.

I think it’s worth pointing out that indirect measurements of (statistically averaged) decoherence in many-body systems predate 1996 in the form of “mesoscopic” physics in metals and semiconductors. There are quantum interference corrections to electronic conduction in solids (weak localization and universal conductance fluctuations, to name two). A classic early review of weak localization is this one from 1984, and this is a broader review from 1992.

In a Feynman-Hibbs heuristic argument, in the absence of decoherence, an electron propagating from one location to another through a solid takes all possible paths. Along each of these paths, the electron wavefunction accumulates a phase proportional to the product of its wavevector (related to its kinetic energy) and the propagation distance, and with an additional “Aharonov-Bohm” phase proportional to the electronic charge and the line integral of the vector potential along the path. Scattering off static disorder along a particular trajectory can introduce additional phase shifts. The final transmission probability of the electron from initial to final location involves summing the complex amplitudes for each of these trajectories and then finding the magnitude-squared of the total amplitude. Changing a magnetic field alters the relative phases of the different trajectories, altering the probability of electron transmission and hence the electrical conduction.

With decoherence, you don’t have to worry about interfering contributions from trajectories that take longer than the decoherence timescale. That ends up setting a characteristic magnetic field scale for conductance changes that can be determined experimentally. TL;DR version: By measuring the conductance (or equivalently resistance) of a metal or doped semiconductor as a function of an externally applied magnetic field, it is possible to infer the underlying decoherence timescale associated with the motion of the charge carriers. The dominant causes of decoherence for these free charge carriers are inelastic scattering processes involving lattice vibrations (at high temperatures) and electron-electron interactions (at low temperatures, say below 4.2 K), as well as spin-based (magnetic) scattering processes (also at low temperatures). The typical coherence timescale for electrons in such a system is around 10 fs near room temperature, but can easily be nanoseconds or longer at cryogenic temperatures.
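The path-sum picture above can be caricatured in a few lines (a toy model of my own; the random phases, enclosed areas, and the `dephasing` knob are invented for illustration, not a real weak-localization calculation): summing complex amplitudes over random-phase trajectories gives a transmission that fluctuates with the field B, while suppressing the interference terms, as decoherence does for long trajectories, leaves a field-independent classical value.

```python
import numpy as np

rng = np.random.default_rng(0)

# N trajectories, each with a random static disorder phase phi_k plus an
# Aharonov-Bohm-like phase proportional to a field B and a random
# signed enclosed area a_k (all in arbitrary units).
N = 200
phi = rng.uniform(0, 2 * np.pi, N)   # disorder phases
area = rng.normal(0, 1, N)           # signed enclosed areas

def transmission(B, dephasing=0.0):
    """|sum of amplitudes|^2; `dephasing` in [0, 1] suppresses the
    interference (cross) terms, mimicking decoherence."""
    amps = np.exp(1j * (phi + B * area)) / np.sqrt(N)
    classical = np.sum(np.abs(amps) ** 2)      # = 1, no interference
    coherent = np.abs(np.sum(amps)) ** 2       # amplitudes added first
    interference = coherent - classical
    return classical + (1.0 - dephasing) * interference

B_vals = np.linspace(0, 5, 50)
coherent_T = np.array([transmission(B) for B in B_vals])
decohered_T = np.array([transmission(B, dephasing=1.0) for B in B_vals])
# The coherent transmission fluctuates with B; the fully dephased one
# sits flat at the classical value.
print(coherent_T.std(), decohered_T.std())
```

The field scale of the fluctuations in the coherent case is what real magnetoconductance experiments convert into a decoherence timescale.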

Marko, Peter Shor and Another Anon,

It is usually at this point that Peter accuses us of being uninformed and repeating tired old arguments, but I will go out on a limb and participate anyway.

Regarding the pointer basis, Nima comments on this at 55:40 in the video below. Start at 40:00 if you want the full context.

https://www.youtube.com/watch?v=3bqvAIKH2Rg

The short answer as I understand it is that the pointer basis is selected by the Hamiltonian that describes the interaction between the “system” and the environment. The interactions are local, so in his example local (position) states are selected as the pointer basis rather than superpositions of position states. I don’t know how to extend this argument to alive and dead cats, though!

It seems there are three things we need to explain:

1) Selection of a pointer basis

2) Diagonalization of the reduced density matrix for the “system” **in the pointer basis** (i.e. no macroscopic entanglement)

3) Collapse of the diagonal density matrix elements to only one element as a result of a measurement.

Decoherence explains 2) and the interaction Hamiltonian explains 1). Is there a way that interaction with the environment can also explain 3)?
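Points 1) and 2) can be illustrated with a toy sketch (my own; the qubit count and the assumption that each environment qubit perfectly records the pointer state are invented for illustration): tracing out an environment that has copied the pointer states leaves the system’s reduced density matrix diagonal in that basis, with nothing selecting a single outcome.

```python
import numpy as np

# A system qubit in superposition interacts with n environment qubits via
# a coupling diagonal in the computational basis, so that basis plays the
# role of the pointer basis (point 1).
n_env = 4
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)

# Joint state after the interaction: each pointer state has imprinted
# itself on every environment qubit ("copies", in Darwinist language).
dim = 2 ** (n_env + 1)
psi = np.zeros(dim, dtype=complex)
psi[0] = alpha          # |0>|00...0>
psi[dim - 1] = beta     # |1>|11...1>
rho = np.outer(psi, psi.conj())

# Reduced density matrix of the system: trace out the environment.
env_dim = 2 ** n_env
rho_sys = np.zeros((2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        block = rho[i * env_dim:(i + 1) * env_dim,
                    j * env_dim:(j + 1) * env_dim]
        rho_sys[i, j] = np.trace(block)

print(np.round(rho_sys.real, 3))
```

The diagonal (0.3, 0.7) survives while the off-diagonals vanish, because the two environment records are orthogonal (point 2). Nothing in the computation picks one diagonal entry over the other, which is point 3 restated.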

Wow!

The comments here make me suspect that there are more people than I ever would have believed who are deeply invested in the mystical idea that consciousness has something to do with quantum mechanical dynamics.

This probably says something about human nature.

Jan Reimers,

Nima is repeating the usual locality argument for the selection of the pointer basis, which everyone talks about, but no one is able to implement effectively. So at this point it’s just wishful thinking. Moreover, the locality argument can exist only on a classical spacetime manifold, and fails in a full quantum gravity setup, since the notion of locality cannot be defined on superpositions of manifolds.

So that argument doesn’t really work, and cannot work, as a criterion for the pointer basis.

HTH, 🙂

Marko

Jan Reimers and Marko,

What you say is not totally incompatible with Ball’s article (which I quite liked, by the way). He explicitly says that decoherence doesn’t solve the multiplicity problem (Reimers’s point 3), which is why most experts say that decoherence doesn’t solve the measurement problem, as claimed by Marko.

Marko’s reason for that is false, however. The consensus is that decoherence *does* solve the preferred basis problem, precisely as Reimers reports. If you really think that no one can effectively derive the pointer basis from the interaction Hamiltonian, you’d have a lot of comments to write on the decoherence papers.

As for the quantum gravity argument against decoherence, come on. Using a non-existent theory to argue against an existent theory is just bizarre.

Regarding decoherence, the relevant part of the FAQ is helpful.

And regarding P.B.’s words (in the article linked in the P.W.’s post) against “spooky” stuff, does it mean that P.B. (and P.W.) disagree with Bell inequalities? AFAIK Bell inequalities hold regardless of decoherence being/not-being part of measurement.

Regarding words on Higgs boson mass in the D.O.’s text (linked in the P.W.’s post, seek the second occurrence of the word “quadrillion” there), do not they mix the originally expected Higgs mass (several times greater than lately found) with the absurd value of theoretically approached CC value?

Martin S.:

I’m sure Philip Ball is referring to the nonlocal collapse of the wavefunction, this is what decoherence theory eliminates (in favour of a local effective collapse).

The violation of Bell inequalities is extremely well-established experimentally, you cannot get rid of them by playing with the theory.

@Mateus Araújo:

“The violation of Bell inequalities is extremely well-established experimentally, you cannot get rid of them by playing with the theory.” That’s what I meant; I was way too sloppy there.

“I’m sure Philip Ball is referring to the nonlocal collapse of the wavefunction, this is what decoherence theory eliminates (in favour of a local effective collapse).” Thanks; then according to my knowledge there must be some other nonlocal process, e.g. the choosing of a single diagonal element in the density matrix should be nonlocal, or we need (absolutely) nonlocal hidden variables. And that other nonlocality is just as “spooky” as the nonlocality of any other process, I would say.

Mateus Araújo,

“As for the quantum gravity argument against decoherence, come on. Using a non-existent theory to argue against an existent theory is just bizarre.”

No, it’s not bizarre. You cannot claim that you can resolve a fundamental problem (the pointer basis) by appealing to a semiclassical (i.e. approximate) theory, which completely ignores one very fundamental interaction in nature (gravity). At the very least, there is a potential danger that your solution may be an artifact of the approximation rather than a genuine property of a (so far unknown) complete theory.

And this is actually what happens — the proposed solution of the pointer basis problem rests on the notion of locality, which in turn relies on the notion of classical spacetime manifold, which may or may not exist in a full theory of quantum gravity. So this is the weakest point of the whole argument, and ignoring that point is what’s bizarre.

Even Zurek, the father of decoherence, was aware from the very beginning that gravity may introduce trouble — in his foundational paper [1], on page 1520 he wrote: (the assumption of pairwise interactions) “is customary and clear, even though it may prevent one from even an approximate treatment of the gravitational interaction beyond its Newtonian pairwise form.” In other words, he made a disclaimer that the whole decoherence programme may fail in the presence of a tripartite interaction term in the Hamiltonian, which is precisely what relativistic gravity has.

And finally, one doesn’t need a full theory of quantum gravity to make that point. It is quite enough to know that ordinary theory relies on spacetime being classical — and in almost all QG models this assumption is violated, resurrecting the measurement problem back to its full glory.

Btw, it’s not that experts in the field aren’t aware of this. They are, but they don’t know what to do about it, and there is a lack of good ideas how to deal with such a problem (in the absence of a full QG theory). That is one of the reasons why they generally agree that decoherence doesn’t really solve the measurement problem. At least the ones I’ve talked to. 😉

HTH, 🙂

Marko

[1] W. H. Zurek, Phys. Rev. D 24, 1516 (1981).

Marko, you are quoting a paper from 1981 to talk about the theory in 2017. That was 36 years ago. Decoherence has changed from being a new idea to being a well-established theory. This inverts the roles: it is not decoherence that has to adapt itself to quantum gravity to succeed; it is quantum gravity which must allow a decoherence-like mechanism to be taken seriously. Are you forgetting the experiments that need decoherence to be explained, including the one that got Haroche his Nobel prize?

And you don’t “resurrect the measurement problem” as soon as you drop the assumption that spacetime is classical. We know that classical spacetime is a very good approximation at any energy scale we can actually experiment with. If this works at all like normal science, relaxing this assumption will make an \epsilon correction to the theory.

You have clearly been talking to different experts than me. And, as someone who works in quantum foundations, I do know a lot of experts.

Martin S.

We don’t need a “nonlocal process”, what we need is to be able to reproduce the observed violation of Bell inequalities. And a purely quantum description of the experiment is perfectly local, the nonlocalities only appear when you start collapsing stuff.

Mateus,

“it is quantum gravity which must allow a decoherence-like mechanism to be taken seriously”

Sure, this is a valid point of view, and may serve as a criterion for a plausible QG model. The general drought of experimental results in QG makes all such criteria fair game. But suppose sometime in the (far far) future, we start making experiments in QG, and find out that it indeed badly violates the notion of locality. That would certainly invalidate decoherence as a solution of the measurement problem, wouldn’t it? Regardless of how unlikely that may be, it certainly is a logical possibility. So it’s fair to say that the locality-based solution of the pointer basis problem is contingent on certain properties of QG, which is so far unknown and outside the experimental realm.

“We know that classical spacetime is a very good approximation at any energy scale we can actually experiment with. If this works at all like normal science, relaxing this assumption will make an \epsilon correction to the theory.”

Well, no. All experiments we can do are in our Solar system, which so far doesn’t feature any strong superpositions of the gravitational fields. That’s why classical spacetime is a good approximation. However, one can certainly imagine scenarios (typically involving a photon, a beam splitter and a black hole) where one should discuss strong superpositions of very different geometries, which behave nothing like an \epsilon correction to a classical geometry. Theory should be able to account for these situations too.

There was an analogous mishap with the assumed universal validity of the law of energy conservation (which was overwhelmingly supported by experiments), until general relativity taught us that such a law is valid only under certain circumstances, which are satisfied in our Solar system and the Milky Way, but not in general. So one has to be very careful with the reasoning you proposed above.

“You have clearly been talking to different experts than me. And, as someone who works in quantum foundations, I do know a lot of experts.”

This may be getting slightly off-topic, but just today I came back from visiting your PhD advisor in Vienna. 🙂 I also attended the ongoing ESI workshop [2] there. Granted, I didn’t talk to him about this particular issue (it didn’t come up in the conversations; we had other stuff to cover), but I’ll make sure to do so when I visit his group again in October. So would you agree to postpone this “argument from authority” until then? 😉

Best, 🙂

Marko

[2] http://quark.itp.tuwien.ac.at/~grumil/ESI2017/

@Mateus Araújo: Trying to be less sloppy, I mean experiments where apparatuses are spacelike separated, their outputs are real, we have choices to arrange those apparatuses locally (yeah, I had read those free-will-theorem articles), and “nonlocal” is considered as “effectively nonlocal”, admitting e.g.: speeds without limits and/or going backwards in time (sort-of the transactional interpretation), some spatial bridges in other dimensions, or whatever.

Then while decoherence is important, it leaves open the *uniqueness* (as stated in the article linked in the blog post), that is, point 3 in Jan Reimers’s comment and the upper part of the FAQ. When one is honest, the effective nonlocality occurs somewhere (for such an experiment to come out as it does), and here it is apparently pushed into that *uniqueness* part.

Thus I am unhappy about that article: it says that there is nothing “spooky” (that is, effectively nonlocal) while sweeping the spookiness under the rug, that is, into the *uniqueness*. Maybe I read too much into it, though.

@Marko:

I don’t understand your argument against decoherence and localization at all.

If you believe that the Standard Model is a very good approximation to string theory under most realistic conditions, please explain why a theory of decoherence that holds for the Standard Model wouldn’t also apply to string theory.

And if you don’t believe that the Standard Model is a very good approximation to string theory under most realistic conditions, then you’ve got even more explaining to do.

Off-topic: Peter, can you mention this in some future “quick items” post?

Besides Peter Scholze, who do you think will win the Fields Medal in 2018 (choose up to three names)? Vote here: https://vote.pollcode.com/31229277

Martin S.,

I think you do have a point. While decoherence does give you a physical (and therefore local) explanation for the destruction of the off-diagonal elements, to get a unique result you need again some sort of collapse, which is by necessity nonlocal. And one does not even need Bell inequalities to see that, the good old EPR paradox is good enough. Consider as usual that Alice and Bob share a singlet |01> – |10>, and that Alice makes a measurement in a basis |A_0>,|A_1>. Then we rewrite the state in this basis to turn it into |A_0 A_1> – |A_1 A_0>, and the measurement (through decoherence) maps it into the mixture |A_0 A_1><A_0 A_1| + |A_1 A_0><A_1 A_0|, which depends on a distant change of basis.

So decoherence only gives you a way to locally kill off the off-diagonal elements; it doesn’t give you a way to locally select a single outcome. Still, it is an improvement over the collapse postulate, according to which even the disappearance of the off-diagonal elements is nonlocal.

But what I was talking about is that if you stick to the quantum mechanical formalism, you’ll have a completely local theory, as any physical Lagrangian is Lorentz covariant. You only get nonlocality when you do an ad-hoc modification of the quantum formalism, such as selecting unique outcomes.

Maybe you’ll be interested in this paper by Brown and Timpson, which goes on precisely about how you get EPR and Bell correlations locally if you stick to the quantum formalism: https://arxiv.org/abs/1501.03521

Marko,

I think I’ve understood why we’re failing to communicate. You expect decoherence to *always* work, even in exotic quantum gravity regimes where we’re in a superposition between being inside a black hole and outside. Whereas I’m claiming that quantum gravity must allow decoherence to work in the experimental regimes we can access today on Earth.

“That would certainly invalidate decoherence as a solution of the measurement problem, wouldn’t it?”

No, it wouldn’t. While it would certainly be shocking if we found out that decoherence actually fails in such exotic regimes, it wouldn’t make me shed a single tear: its job is to explain the measurements we’re actually doing, and that it does with unparalleled success (its main rivals being burying your head in the sand or radically modifying quantum theory).

Getting wildly off-topic: I did not mean “talking to different experts” as an insult. I just meant that you have probably been talking to quantum gravity people, whereas I’ve been talking to quantum foundations people. Anyway, I think I’m more familiar than you with the opinion of my former PhD supervisor =). But we could just ask him, no?

@Mateus Araújo: regarding 1501.03521;

1) Bell’s opinions: science is not about caring about opinions; e.g. QM is quite successful regardless of some dislikes by one of its early fathers, namely A. E.;

2) then that article is about the Everett interpretation: and it requires the uniqueness, that is the part that hides nonlocality inside itself;

3) an attempt to come to uniqueness without nonlocality (within E. I.) is by later (local) comparisons of results; but here (non-aligned-spins experiments) you need to get rid of some result combinations so that the proper correlations are obtained, and such pruning means that some branches get killed. Such branch killing is, first, strange by itself; second, you either have to kill such branches in their entirety (that is, nonlocally), or you need to kill anyone who tries to test it (making holes in the branches by that). Then we can do a number of such experiments together with the comparisons, and after enough such tests we reach a branch that either has to get killed entirely, that is, together with us (in the former case), or we get killed when we try to read the results (in the latter case);

4) a less naughty way of testing the outputs is via the transactional interpretation, where the future is probed before actually taking one real output, that is, without the need to kill us later. Not arguing for it, though, just mentioning it as a less terrible alternative;

PS I am not willing to argue about E. I. any more, since (according to my understanding) it solves nothing (as I described in the points above).

I remember seeing an article on arxiv explaining why the initial condition of the universe must be taken into account to explain why the universe is classical-looking (so in particular, decoherence theory by itself would not be a good enough explanation).

The argument in a nutshell: if we start from a random, highly nonclassical state of the universe, and apply any unitary transformation (representing the evolution of the universe since the origin) then we obtain another random, highly nonclassical state.

I find this argument quite compelling but I’d be curious to know the opinion of people with a deeper understanding of physics than me.
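Pascal’s argument can be given a toy numerical check (a sketch of my own, not from the arXiv article he recalls; the dimensions and the use of entanglement entropy as a proxy for “nonclassicality” are assumptions): a random bipartite pure state is nearly maximally entangled between its halves, and a random unitary “evolution” leaves it just as entangled, so nothing classical-looking emerges without a special initial condition.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim):
    """Haar-ish random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def entanglement_entropy(psi, d_a, d_b):
    """Entropy (in bits) of the reduced state of subsystem A."""
    s = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

d_a = d_b = 16
psi = random_state(d_a * d_b)
# A random "evolution" unitary from the QR decomposition of a
# complex Gaussian matrix.
G = rng.normal(size=(d_a * d_b, d_a * d_b)) \
    + 1j * rng.normal(size=(d_a * d_b, d_a * d_b))
U = np.linalg.qr(G)[0]

before = entanglement_entropy(psi, d_a, d_b)
after = entanglement_entropy(U @ psi, d_a, d_b)
# Both values sit near the Page average for a 16x16 split (about 3.3
# bits, close to the 4-bit maximum); unitarity never pushes them down.
print(round(before, 2), round(after, 2))
```

In other words, generic states stay generic under generic unitaries, which is why the initial condition has to do real work in any decoherence-based account of classicality.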

@Pascal: I won’t be able to help you, especially without a link to the text (and probably not even with a link). If it is something like this, I would suggest asking Bee.

Martin S.,

I’m afraid you have misunderstood the article. It is about how you get Bell correlations in a local way if you *don’t* have uniqueness. This is what the Everett interpretation is about, not modifying quantum mechanics and trying to interpret the resulting weirdness. There is no killing involved, either of branches or experimentalists.

Mateus Araújo:

… or cats.

Peter,

my friend Chris just showed me this:

http://www.lhc-epistemologie.uni-wuppertal.de/

It is a well-funded physics-philosophy research program on the epistemology of the LHC. I really do not know what to think about it. But it exists.

One of the projects of that physics-philosophy program:

Principal Investigators:

Robert Harlander (RWTH Aachen)

Adrian Wüthrich (TU Berlin)

Principal Collaborator:

Friedrich Steinle (TU Berlin)

Postdoctoral Researcher:

Daniel Mitchell (RWTH Aachen)

Doctoral Researcher:

Markus Ehberger (TU Berlin)

The sub-project A1 investigates the concept of a virtual particle from a historical point of view. This concept is an integral part of modern physics, mainly since it provides a means to interpret and describe complex quantum physical processes in comparatively simple terms. Despite its wide use, the notion of a virtual particle appears not to be clearly defined. One of the main goals of the project is therefore to trace the origins of the concept and to follow its evolution through time. Since it is of a fundamental quantum physical nature, it is quite clear that the roots of the virtual particle concept are tied to the beginnings of quantum mechanics and quantum field theory. However, the reasons for its introduction, its further development in the context of quantum field theory, S-matrix theory, and Feynman’s diagrammatic method have never been the focus of historical research. By taking up these issues, the present project aims to explain the genesis of today’s terminological diversity, to bring to the fore the reasons for the apparent inconsistencies, and to further our understanding of concept formation in the physical sciences.

—

The goals are somewhat unclear to me, but if the result is to bring some precision to the various uses of “virtual particle” in the physics literature, that would be a good thing. If new experimental results are in short supply, then spending effort to elucidate the foundations of physics is worthwhile, in my opinion.

Peter Woit said

“There’s a lot of quite good information in the story about the actual classical calculation involved, but no indication of why one might want to be skeptical about the effort to enlist this result in the string vs. loop war.”

Well, it’s obvious why they make that connection. Vafa argues that the naked singularity goes away exactly when you include charged matter obeying the weak gravity conjecture. String theory appears to obey the weak gravity conjecture, and in some cases one can even say why. Loop quantum gravity appears not to obey the weak gravity conjecture, because electromagnetic couplings are simply free parameters in that theory; there’s no known mechanism that can stop you from making them weaker than the conjecture allows.

In my opinion, the main reason not to bring strings vs loops into this, is that loop quantum gravity already has so many other problems: chiral fermions, logarithmic corrections to black hole entropy, the peculiarities of polymer quantization, the very existence of a classical limit. If an alternative to string theory is desired, one could reasonably ask why we are getting the loop perspective, rather than e.g. the asymptotic safety perspective, which at least has successful empirical and mathematical predictions to its credit.

Decoherence does not imply the disappearance of superposition. It implies only the disappearance of the detectability of superposition, which depends on an interference _pattern_. This is easy to see. Waves emanating from two points will produce maxima and minima at a screen. The interference is detectable. Add a few more points at random locations and this sign of interference disappears. The interference is still there; it is just no longer detectable by experiments in which the results of repetitions demonstrate the pattern. The pattern, from which we infer interference, is gone. The interference is still there. The superposition is still there. You still need a collapse.
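The washout described above can be sketched numerically (a toy model of my own; here a random relative phase kicked by the environment on each repetition stands in for the extra random sources): averaging the two-source pattern over repetitions destroys the fringes, even though every individual run is still a coherent superposition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Screen positions; two unit-amplitude sources at x = +/- 0.5, a unit
# distance behind the screen, wavenumber k (arbitrary units).
x = np.linspace(-2, 2, 400)
k = 20.0
r1 = np.sqrt((x + 0.5) ** 2 + 1.0)
r2 = np.sqrt((x - 0.5) ** 2 + 1.0)

def intensity(phase):
    """Screen intensity for a given relative phase between the sources."""
    amp = np.exp(1j * k * r1) + np.exp(1j * (k * r2 + phase))
    return np.abs(amp) ** 2

def visibility(I):
    """Standard fringe visibility (max - min) / (max + min)."""
    return (I.max() - I.min()) / (I.max() + I.min())

# Fixed relative phase: full fringes.
I_coherent = intensity(0.0)
# The environment kicks the relative phase randomly on each repetition;
# what the detector accumulates is the average over runs, and the cross
# (interference) term averages away.
I_averaged = np.mean(
    [intensity(p) for p in rng.uniform(0, 2 * np.pi, 2000)], axis=0)

print(round(visibility(I_coherent), 2), round(visibility(I_averaged), 2))
```

The single-run visibility is near 1 while the averaged visibility is near 0, which is exactly the comment’s point: the pattern, not the superposition, is what disappears.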

It is easy to visualize the replication with photons. I see it as a representation of the system by successive groups of impinging photons. However, what if the pointer apparatus uses a static electromagnetic field? I am thinking of the Stern-Gerlach apparatus. I assume photons are used somewhere, but what is the mechanism? And what encodes the replications?

Kane’s quoted comment about “realistic” searches is outrageous and should be completely discrediting.

Why would we expect or even want a solution to the measurement problem to solve the uniqueness problem? Decoherence means that anything big, complex and connected enough to be an observer will necessarily see an approximately classical universe. What more do you want? Sure, you can’t predict which classical universe you will see. But so what? In QM some things aren’t predictable.

It’s like a quantum version of the anthropic principle. If you exist then the universe around you must look approximately classical.

ppnl,

The uniqueness problem is about reconciling the multiple outcomes that appear in the equations with the unique outcomes we experience. It’s not about predicting the outcome: I would be perfectly happy with a random outcome that came out of the quantum dynamics. This is pretty much what is done by collapse models, and they do have a satisfactory answer to the uniqueness problem. The problem with collapse models is that they don’t really match reality.

I still don’t get it. Maybe I’m just not smart enough to understand the problem.

In collapse models you have an observer (what counts as an observer?) that makes a measurement (what exactly is a measurement?) and that reduces the state of the object.

With decoherence you have an observer (any complex connected object will do) that makes a measurement (any interaction will do, even if the observer is not aware of it) and it gets constant state reduction behind its back, so to speak.

Both have multiple outcomes that appear in the equations and in both cases one is chosen. There is nothing to reconcile. What more do you want? What would the answer you seek even look like?

So I just came back from the “Gravitational decoherence” workshop in Bad Honnef, and one of the most interesting points of the final discussion session was the lack of consensus among people regarding the unitarity of QM.

There were two main points of view, distributed roughly half-and-half among the people present there. The first half maintains that nature is obviously unitary, since all existing measurements may always be reinterpreted as unitary evolution plus the decoherence with the environment.

The second half maintains that experiments give us obviously nonunitary results, and that imposing unitarity to QM and jumping through various hoops to make all evolution unitary lacks justification (and represents an ideology).

This was basically a divide between pure QM folks (favoring formalism) and the objective collapse folks (favoring instrumentalism). The initial question (raised by me 😉 ) was about the experimental status and possible tests of unitarity in nature. Instead of answering the question, people split into two groups and started arguing, based on whether they interpret experimental results as saying that nature is not unitary or as saying that decoherence is too strong to avoid.

The consensus, however, was that in the end this question is extremely hard (if at all possible) to resolve experimentally, i.e. the assumption of unitarity is apparently not falsifiable. So it doesn’t really belong to physics, but to philosophy. So there you go — collapse or decoherence, take your pick, it’s a metaphysical point of view. 😉

Best, 🙂

Marko