I’ve just finished reading Sean Carroll’s forthcoming book, and will write something about it in the next few weeks. Reading the book and thinking about it did clarify various issues for me, and I thought it might be a good idea to write about one of them here. Perhaps readers more versed in the controversy and literature surrounding this issue can point me to places where it is cogently discussed.
Carroll (like many others before him, for a recent example see here), sets up two sides of a controversy:
- The traditional “Copenhagen” or “textbook” point of view on quantum mechanics: quantum systems are determined by a vector in the quantum state space, evolving unitarily according to the Schrödinger equation, until such time as we choose to do a measurement or observation. Measuring a classical observable of this physical system is a physical process which gives results that are eigenvalues of the quantum operator corresponding to the observable, with the probability of occurrence of an eigenvalue given in terms of the state vector by the Born rule.
- The “Everettian” point of view on quantum mechanics: the description given here is “The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more.” In other words, the physical process of making a measurement is just a specific example of the usual unitary evolution of the state vector, there is no need for a separate fundamental physical rule for measurements.
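For concreteness, the Born rule invoked in both bullets can be written out; this is the standard textbook statement (added here for reference, not a quote from Carroll), for an observable with a discrete nondegenerate spectrum:

```latex
% Born rule: a measurement of observable A on the normalized state
% |\psi> returns the eigenvalue a_i with probability
P(a_i) \;=\; \bigl|\langle a_i \,|\, \psi \rangle\bigr|^2,
\qquad \text{where}\quad A\,\lvert a_i\rangle = a_i\,\lvert a_i\rangle,
\quad \sum_i P(a_i) = 1 .
```

The two viewpoints agree on this formula; they differ on whether it is a fundamental postulate or something to be derived from unitary evolution.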
I don’t want to discuss here the question of whether the Everettian point of view implies a “Many Worlds” ontology; that’s something separate, which I’ll write about when I get around to writing about the new book.
What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all. If you ask a follower of Copenhagen (let’s call her “Alice”) “is the behavior of that spectrometer in your lab governed in principle by the laws of quantum mechanics” I assume that she would say “yes”. She might though go on to point out that this is practically irrelevant to its use in measuring a spectrum, where the results it produces are probability distributions in energy, which can be matched to theory using Born’s rule.
The Everettian (let’s call him “Bob”) will insist on the point that the behavior of the spectrometer, coupled to the environment and system it is measuring, is described in principle by a quantum state and evolves according to the Schrödinger equation. Bob will acknowledge though that this point of principle is useless in practice, since we don’t know what the initial state is, couldn’t write it down if we did, and couldn’t solve the relevant Schrödinger equation even if we could write down the initial state. Bob will explain that for this system, he expects “emergent” classical behavior, producing probability distributions in energy, which can be matched to theory using Born’s rule.
So, what’s the difference between the points of view of Alice and Bob here? It only seems to involve the question of how classical behavior emerges from quantum, with Alice saying she doesn’t know how this works, Bob saying he doesn’t know either, but conjectures it can be done in principle without introducing new physics beyond the usual quantum state/Schrödinger equation story. Alice likely will acknowledge that she has never seen or heard of any evidence of such new physics, so has no reason to believe it is there. They both can agree that understanding how classical emerges from quantum is a difficult problem, well worth studying, one that we are in a much better position now to work on than we were way back when Bohr, Everett and others were struggling with this.
Can you expand a bit on being in a much better position to study such things now? Is this due to new ways to measure isolated quantum systems experimentally, or due to new ways to explain the theory?
Relatedly, do you think we will see substantial progress on these “interpretation” issues, and if so where will it (likely) come from?
Note that when Everett was doing his work, even the notion of decoherence was not understood, so there have been huge improvements since then in our understanding of the theory of how classical emerges from quantum.
I’ve often here pointed to the work of Zurek and others, for instance on the notion of quantum Darwinism. For a discussion of recent experimental results related to this, see
A book describing many of the relevant modern experimental techniques is Haroche and Raimond, see
I know nothing about this, but I’ve always wondered whether these issues will come up in the design of quantum computers, where in some sense the whole problem is that of a quantum/classical interface (creating isolated qubits presumably is easy enough, but how do you then manipulate them?). I’ve been told there are serious practical problems with getting significant bandwidth when providing input to or getting output from a quantum computer. Can one learn anything from these (I have no idea)?
Ultimately the difference is that Bohr and Heisenberg (inspired by the success of observer-dependence in Einstein’s relativity) were anti-realists (they did not believe in a mind-independent objective reality) and used that to argue that quantum mechanics was complete, rather than that “we simply don’t understand the details.” For example they were OK with objective-level logical contradictions (à la Wigner’s friend), because they had abandoned realism in favor of subjectivism. More generally this has the consequence of disfavoring whole lines of research into the details, because much of what would ordinarily in the scientific tradition be seen as important issues, such as measurement outcomes not being objective, clearly defining what does and does not constitute a measurement, the contradiction of the measurement apparatus being treated classically, locality in instantaneous collapse in entangled systems, and so on, can be “waved” away by an appeal to antirealism. In this reading, indeed Alice and Bob as you describe them are really on the same side, against Copenhagen. Both can certainly be realists who just “aren’t sure” about the details of how the classical emerges from the quantum. Both can be open to various “completions” of the QM story (Bob apparently favors Unitary QM), but both are aware that there can be no local hidden variables due to Bell, so whatever completion there is (within the tradition of scientific realism) is not so mundane as to be reasonably ignored by researchers. Alice could well even be a psi-epistemicist who is an instrumentalist/positivist because she thinks that progress cannot be made on the problem without further empirical constraint, while nonetheless acknowledging that there is some mind-independent machinery that we lack information about, that could potentially be part of an “interpretation” of what is going on behind the scenes.
But such a position is generally understood as distinct from the antirealist views of Bohr and Heisenberg, murkily articulated by them though they were, because they would reject that anything more could be going on “behind the scenes” at all, just as there is no “behind the scenes, absolute velocity” in some preferred frame in relativity.
At some point in history this story sort of morphed into an association between “Copenhagen” and “rejection of philosophy” (ironically, given how universally literate and strongly informed the founders were by philosophy, logical positivistic though some of it was), rather than a more accurate association between “Copenhagen” and a rejection of realism. The rejection of a mind-independent reality has a poor reputation in virtually every other scientific discipline, including most of the rest of physics, so this reframing has historically allowed the discipline to sort of ignore what would otherwise produce significant cognitive dissonance.
I’m trying to understand the difference, if any, between “Copenhagen” and “Everett” in the senses (which are fairly conventional) in which Carroll uses the terms. Here “Copenhagen” is what he calls the supposed current majoritarian textbook point of view, and which he also characterizes as the “shut up and calculate” point of view. To the extent Bohr/Heisenberg were arguing for “shut up” on dubious idealist philosophical grounds, I don’t think those are shared by the textbook writers. Instead, the “shut up” point of view I think historically was backed by experience that no good could come of arguments about the problem of classical emergence from quantum because these tended to be empty and philosophical. This was a problem for those like Zeh trying to seriously attack the problem using physics.
If you instead define “Copenhagen” as what Bohr/Heisenberg thought, one problem is that they were confused and struggling, said different things at different times. While using “Copenhagen” to mean one or another reconstruction of what they were thinking is relevant to the history of science, I don’t think it’s particularly relevant to the question of how physicists today are struggling with the “interpretation” or “measurement” problem of quantum mechanics.
It might be useful for me to reproduce here Carroll’s discussion (page 23) of “Copenhagen”:
“In a modern university curriculum, when physics students are first exposed to quantum mechanics, they are taught some version of these five rules [i.e. state space, Schrodinger, observables, Born rule, wave function collapse]. The ideology associated with this presentation – treat measurements as fundamental, wave functions collapse when they are observed, don’t ask questions about what is going on behind the scenes – is sometimes called the Copenhagen Interpretation of quantum mechanics. But people, including the physicists from Copenhagen who purportedly invented this interpretation, disagree on precisely what that label should be taken to describe. We can just refer to it as “standard textbook quantum mechanics.”
Ah! I’ve never seen an explanation of Everett exactly like Carroll’s. After reading that, I see that what I have been expounding is exactly that. If Carroll is conventional Everett, I’m an Everettian.
Given that, I can state that the point is that Bob needs to be able to explain to Alice exactly HOW the Born rule works. We know that after a while a thermal system, the apparatus, evolves into probabilities that are Gaussian even after addition of the measured particle. We need a formalism that explains how the Gaussian probabilities of the apparatus select out the probabilities that we know from Born are correct, for the “world” that includes the pointer in the correct position. It’s different depending on whether it’s like a spin 1/2 particle (discrete) or the position of a photon on a screen after passing a double slit (continuous). It’s that formalism you should be searching out.
As far as I know, such a formalism is lacking. But I’ve found it essentially impossible even to search for the answer to that question.
Added statement: I have always been misled, if Carroll is correctly explaining Everett, by the words “split” and “worlds”. There is only one world, that of unitary evolution.
Peter, you have to understand that by “fairly conventional” you are referring to the few sentences in which the term “Copenhagen” is used in quantum mechanics textbooks, in which the aim is to teach physics students the bare bones of how to do calculations in quantum mechanics. The aim is not to get into any of these thorny philosophical details, which would take an entire course on its own, and would be hard to understand anyway without first learning the von Neumann style rules that the books teach. Further, since there is no wide consensus on which completion of QM is correct, any responsible textbook author is going to fall into a “shut up and calculate”-like position. As such, I think Carroll’s description in the quoted text is consistent with what I’ve said. It’s also relevant that to a large extent the textbook writers are not experts in quantum foundations, and so it shouldn’t be all that important what they thought (interestingly, one of my favorite quantum textbooks is by Bohm, which happens to adopt the Copenhagen viewpoint, despite him later completely rejecting it). Though again, it’s important to read textbooks in the correct context, which is that they are trying to convince the students to focus on the calculations, because there is no consensus pedagogic ontology or heuristic that will help the students learn rather than bog them down and prevent the lecturer from finishing the standard topics on time in a quantum mechanics curriculum.
So it’s the wrong perspective to pin down your definition of “Copenhagen” on the textbooks. For an in-depth discussion of interpretations, you want to use the definition relevant to an in-depth discussion, and quantum textbooks aren’t the place where in-depth philosophic positions on interpretations are expounded. One place that in-depth discussion took place was literally in Copenhagen with Bohr and Heisenberg, and it influenced a generation of physicists, hence the name “Copenhagen”. Despite Bohr and Heisenberg being opaque writers and not in total agreement, their general position was one of antirealism *and therefore* “shut up because you won’t get anywhere”, *not* “shut up because philosophy is empty.” Of course we all know that whatever the reasons, they were *wrong* about shutting up, as for example Bell showed that Einstein’s realist intuitions about locality were correct and that there were testable consequences. But again, this wouldn’t have mattered to Bohr, who was antirealist anyway.
(I should clarify that even though I have not read Sean’s book, I would assume that with his description of Copenhagen as “textbook QM”, he is less taking issue with the textbooks for not critically examining alternate interpretations, and more generally taking aim at the history and culture in physics of ignoring quantum interpretation work as “second class” or “too philosophical”, of assuming as Bohr and Heisenberg had argued that “the work is done and QM is complete”, a position that is only tenable if you are antirealist.)
I think the hope in many worlds is to prove the Born rule rather than assuming it. See for example:
That the observer cannot see why they’re different is the reason people are arguing about it…
The difference is in the part that you don’t want to discuss, which is that Everettians postulate that the other worlds are real, while Copenhagenists refuse to say anything about what cannot be observed. I am not a fan of the Copenhagen interpretation (I just recently explained on my blog why) but it’s philosophically on the safe side. Talking about the existence of non-observable things isn’t something I like to see scientists engage in.
(Not so coincidentally, I am currently reading the same book. )
If you like, just follow Carroll and replace “Copenhagen” with “standard textbook” in what I wrote. I’m referring to our understanding of the issues today, not that of Bohr and Heisenberg in their discussions in Copenhagen shortly after the discovery of QM nearly a century ago. Yes, their attitude was “shut up because our understanding is that you can’t get anywhere”, but
1. I don’t think that this attitude was just naive anti-realism. It seems likely they had considered the practical issue of what would happen if you tried to treat a spectrometer as a quantum system, and realized they had no idea of how to approach the problem of emergence of the classical from quantum, so were stuck with figuring out what you could say without an understanding of such emergence.
2. Developments since the 20s such as decoherence have shown that there are hopes for making progress on understanding such emergence. As a result I think the “standard textbook” story has evolved to take into account this possibility (my impression is that this is sometimes called “neo-Copenhagen”).
I don’t think issues about Bell, Einstein, locality and realism have anything to do with this.
Yes, the “Everettian” hope is that if you understand classical emergence you’ll derive the Born rule. I don’t think though that there is now any accepted such derivation. For instance, for a discussion of the problems with the relatively recent Carroll/Sebens attempt at a derivation, see this by Adrian Kent
The conventional “Copenhagen” view of Bohr/Heisenberg would be that you can’t hope to derive the Born rule, and the “standard textbook” view is that it’s a postulate because there is now no derivation.
I guess one way of answering the question of what the Copenhagen/Everett difference is would be to say that they differ mainly in their guess as to how difficult it will be to understand classical emergence and derive the assumed properties of classical observables, collapse and Born’s rule. Given progress on issues like decoherence I don’t think the extreme “no progress is possible” position is tenable, so the difference between the two is one of degree.
I’ll be curious to hear what you think about the book!
I do want to put off discussing the “many worlds” business, because to my mind it is about an independent set of complicated issues and I want to think more about them. I’ll just point out here that Zurek argues that the Everett point of view does not require adopting a many worlds ontology, and, even Carroll (on page 234) writes: “The truth is, nothing forces us to think of the wave-function as describing multiple worlds”. Yes that quote is out of context, and he also makes an argument on the next page that multiple worlds are “enormously convenient”. This is where I think I disagree with him, for reasons best discussed separately from the issue I wanted to bring up here.
Good old books tell us that the same issue was fiercely debated around 1926, when Schroedinger/Einstein wanted to describe everything via a deterministic local equation, getting rid of quantum jumps. Heisenberg/Bohr explained that this is not possible because we see particles as events. Decoherence and all the modern stuff allow us to understand things better but don’t change the key point: we need probabilities. So the Schroedinger equation is just a tool for computing probabilities in configuration space. Progress will be possible only if somebody understands why a particle decays at a given time, choosing a specific final state (directions etc.). But, exploring higher energies, nobody has found dice.
I find it helpful to distinguish the standard quantum formalism from any and all attempts to interpret this. It’s fair to say that the axiomatic formulation is heavily influenced by the ‘Copenhagen school’ – the wavefunction provides a complete description, observables as the expectation values of Hermitian operators, the Born rule, the unitary time-dependent Schrodinger equation – but the formalism itself is a set of axioms and mathematical relationships, not an interpretation.
Although I agree with Carroll that there is no such thing as a ‘Copenhagen interpretation’ on which there was ever any kind of consensus, I think it is reasonable to follow Bohr and Heisenberg, who (rather uneasily) agreed on Bohr’s notion of complementarity and a clear distinction between the quantum and classical worlds – sometimes referred to as ‘Heisenberg’s cut’, or what John Bell called the ‘shifty split’. Interestingly, I don’t accept that von Neumann’s projection postulate (the ‘collapse of the wavefunction’) should be included as part of this understanding of the Copenhagen interpretation. In the ‘Mathematical Foundations’, von Neumann applied his Process 2 (unitary time evolution according to the Schrodinger equation) also to classical measuring instruments, and only invoked Process 1 (the collapse) when the wavefunction encounters the experimenter’s ‘ego’. I don’t know if Bohr ever had anything to say about this, but he surely wouldn’t have liked the idea of applying quantum mechanics to classical objects.
It’s therefore wrong in my opinion to say that the Copenhagen interpretation implies the collapse of the wavefunction. Copenhagen is a fundamentally anti-realist interpretation – arguably it doesn’t take the wavefunction to represent the real physical state of a quantum system. The ‘collapse’ then just represents an updating of our experience, not a real physical event. It is only when you prefer to adopt a realist interpretation – as in the MWI – that you are obliged to interpret the collapse realistically. Of course, this hasn’t stopped people (from Everett and Bryce De Witt, to Adam Becker in his book ‘What is Real?’) from accusing the Copenhagen interpretation of all this bad stuff.
The formalism is completely inscrutable on the question of interpretation and, as we know from all the marvellous experiments that have been done in the last 40 years or so, any attempt to *extend* the formalism using local or crypto non-local hidden variables fails to predict the results correctly. Any interpretation must therefore be consistent with the formalism, as we know this is correct. This doesn’t mean to say that other realist extensions invoking spontaneous collapse mechanisms (for example) aren’t possible, it’s just that we haven’t properly tested these yet. However, I have some sense of how such experiments might turn out …
So, in terms of the question you posed in your post – no, there is absolutely no difference in the way you apply the time-dependent Schrodinger equation in the standard formalism and in many worlds. But many worlds is a realist interpretation, and as such it struggles to find a realistic explanation for the Born rule, and for the preferred basis. Decoherence (understood as a real physical process) helps to bridge the quantum and classical worlds but I’d argue that it doesn’t fix these problems.
And then you have the point that Sabine raised, above. Some ‘Everettians’ see little or no difference between the different ‘branches’ of the MWI and the ’empty waves’ of de Broglie-Bohm pilot wave theory. Others see the MWI as a useful heuristic, not to be taken too literally. But still others – such as David Deutsch and philosopher David Wallace – want to interpret many worlds literally, in terms of parallel universes or a multiverse. This is where in my book the interpretation gets completely overwhelmed by its metaphysics.
I’m just now going through the final edit of my new book (now titled: ‘Quantum Reality: The Essential Meaning of Quantum Mechanics and the Game of Theories’) which will hopefully set all this out reasonably clearly so folks can understand the nature of the game that’s being played. It will be published next year.
I took QM from Feynman in 1974-75. He was quite explicit that he was making no philosophical point (“ontological commitment,” the philosophers would say) in teaching the standard “Shut-up-and-calculate” approach (by the way, the phrase seems to be Dave Mermin’s and was not Feynman’s, contrary to folk wisdom).
Feynman told us simply that if we tried to pursue interpretation issues, we would end up finding that we had not made any progress at all: it was not that the questions were necessarily meaningless but just beyond our abilities to answer.
As to quantum computing: the British physicist David Deutsch, one of the pioneers of theoretical quantum computing, has written on this extensively and is quite convinced that the standard QM theory of quantum computing proves that many-worlds must be true. (For the record, I think he is wrong, but, then again, all I know is that I do not know the true answer!)
Further to my earlier comment, I was sufficiently intrigued to spend a few minutes this morning finding out if Bohr ever said anything about von Neumann’s quantum theory of measurement. There’s a very brief reference in Abraham Pais’ biography of Bohr, in which Pais quotes an entry from one of his old notebooks pertaining to a lecture delivered by Bohr in November 1954 which reads: ‘[Bohr] thinks that the notion “quantum theory of measurement” is wrongly put’.
This gels with my own understanding. An anti-realist interpretation which assumes that the quantum representation doesn’t apply to classical objects has no use for a quantum theory of measurement. This was never part of the Copenhagen interpretation.
In his 1970 Physics Today article (the one that really launched ‘many worlds’) Bryce De Witt claimed that von Neumann’s collapse postulate was part of the ‘conventional’ or ‘Copenhagen’ interpretation. I’d accept ‘conventional’, but he was wrong to say ‘Copenhagen’. I suspect this was the beginning of attempts to demonize the Copenhagen interpretation and accuse it of all manner of bad things, presumably to make many worlds look good by contrast.
To follow up on Jim Baggott: it makes little sense to identify “textbook quantum mechanics” with the Copenhagen interpretation, since textbooks typically just provide the Born recipe without explanation. In this sense, “textbook QM” can be seen as compatible with Everettian and Copenhagen views, whether or not the two are contradictory.
In order to meaningfully discuss the issue of compatibility of the Copenhagen and Everett pictures, one should first nail down what one means with the Copenhagen interpretation (as different people mean different things).
This article about Bohr’s vehement opposition to Everett’s proposal may be helpful in this regard.
Suppose you want to do cosmology and study the extreme situations when quantum effects become relevant. There does not seem to be any meaningful notion of a “measurement” or an “observer” since no scientist can locate his or her lab outside of the universe.
Would that not favor Everett over Copenhagen ?
There is a clear difference between Copenhagen and Everett.
The various versions of Copenhagen (or textbook) QM all require that the system studied is in fact a small subsystem of a larger universe. This is to make room for Bohr’s classical world, where live the measuring instruments, clocks and observers. This is also required to connect the “probabilities” defined by Born’s rule and the projection postulate with genuine experimentally obtained relative frequencies. The whole formalism is based on a clean distinction between states and observables, which expresses the assumption that by doing many repetitions of a measurement or preparation (which is only possible if your system is a subsystem of a larger universe) you can cleanly separate the effects of laws from those of initial conditions.
These distinctions all become problematic when one tries to apply textbook QM to the whole universe. Consequently anyone who contemplated doing so (i.e. a quantum cosmology) was aware it would require a modification of QM.
Everett QM was a deliberate attempt to make such a modification of QM that would make sense when applied to the universe as a whole. That is a big difference. Everett, Wheeler and deWitt all emphasized this point.
It was Everett’s brilliant insight that to make QM cosmological might only require dropping the projection postulate and any reference to intrinsic probabilities, thus basing the theory on universal unitary evolution. It seems to be widely believed that his version fails to recover the correct probabilities. Otherwise there would be no need for the very subtle Oxford version of Deutsch, Wallace, Saunders etc. Whether they have completely succeeded or not, and in what version, remains, to my understanding, unresolved.
ps I look forward to reading Sean’s new book. I disagree with him on several key points, but have tremendous admiration for the clarity of his thinking and writing.
People are willing to accuse Bohr of all sorts of mean, nasty, horrible things based on second-hand impressions, hearsay and a few overly-recycled quotations. His reputation for being an obscure writer is warranted, but it is also amplified by the unfortunate accident that his highest-profile writing (like his reply to EPR) is much less clear than some lesser-known works (like his contribution to the 1938 Warsaw conference). Catherine Chevalley adds that the conceptual concerns which motivated Bohr typically hailed from a tradition that is unfamiliar to modern philosophers of physics, and that the questions on Bohr’s mind were often not those which nowadays go under the name of “quantum foundations.” This has naturally led readers to be frustrated with him, simply because they expect something else from a discourse on “the interpretation of quantum mechanics.”
Yes, that’s true. The textbook/Copenhagen recipe doesn’t work for a QM cosmology. On the other hand, given the difficulties one already has with understanding how to get classical emergence in the non-gravitational case, it’s not clear that Everett will solve the much harder problem of how to get a quantum cosmology.
“If you ask a follower of Copenhagen (let’s call her “Alice”) “is the behavior of that spectrometer in your lab governed in principle by the laws of quantum mechanics” I assume that she would say “yes”.”
There is divide between old Copenhagen and new Copenhagen in this point. I think it is nicely illustrated by what happened in QUPON 2015:
Časlav Brukner, a good representative of the neo-Copenhagen point of view, gave a talk about how to understand Wigner’s friend. He argued that measurements are in fact described by the Schrödinger equation, but that in order to avoid Many-Worlds one needs to understand quantum states as only representations of knowledge of specific observers, not as ontological entities. Moreover, one cannot compare perspectives of different observers.
Right afterwards Anton Zeilinger, a good representative of the old Copenhagen point of view, and Časlav’s former PhD supervisor, gave a talk about something else. He started anyway with a critique of Časlav’s talk, and, visibly angry, he thundered: “The measurement apparatus is classical!”
I agree. Indeed, since, for me, that there is only one universe follows from the definition of the universe, plus basic principles like the identity of indiscernibles, any argument that applying QM to cosmology requires a many worlds formulation is a reductio ad absurdum.
The correct conclusion, I would argue, is that QM in its current form, cannot be applied to the whole universe, because it is structurally a description of a subsystem of a larger system. What is needed is a completion that gives a complete description of individual phenomena, such as dBB, dynamical collapse models or the real ensemble formulation. Or, something yet to be invented.
As pointed out in some comments, research on decoherence can be viewed as an attempt to recover a classical world from Everett-style QM, that is, from purely unitary evolution.
So it is maybe not surprising that Zurek and Everett had the same thesis adviser (Wheeler).
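To make the “recover a classical world from purely unitary evolution” point concrete, here is a minimal numerical sketch (an illustration added here, not from the comments; all parameter values are arbitrary). A system qubit in superposition couples unitarily to many environment qubits; tracing out the environment suppresses the off-diagonal (“coherence”) terms of the system’s density matrix by the overlap of the two environment branch states, which shrinks exponentially with the number of environment qubits.

```python
import numpy as np

# Toy decoherence model: a system qubit starts in (|0> + |1>)/sqrt(2)
# and couples to n_env environment qubits. Each environment qubit is
# rotated by angle theta only when the system is in |1>, so the two
# branches leave different "records" in the environment.

theta = 0.5    # per-qubit system-environment coupling angle (arbitrary)
n_env = 100    # number of environment qubits (arbitrary)

# Environment single-qubit states attached to each system branch
e0 = np.array([1.0, 0.0])                              # branch |0>
e1 = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # branch |1>

# Each environment qubit is independent, so the overlap of the two
# total environment branch states factorizes into a product
overlap = np.prod([e0 @ e1 for _ in range(n_env)])     # = cos(theta/2)**n_env

# Reduced density matrix of the system after tracing out the
# environment: the diagonal probabilities are untouched, while the
# off-diagonal coherence is multiplied by the environment overlap
rho = 0.5 * np.array([[1.0, overlap],
                      [overlap, 1.0]])

print("coherence |rho_01| =", abs(rho[0, 1]))
# The coherence shrinks exponentially in n_env: for all practical
# purposes the superposition looks like a classical mixture.
```

Nothing non-unitary happens at the level of system plus environment; the coherence is merely delocalized into system-environment correlations, which is why decoherence helps explain the appearance of classicality without by itself deriving the Born rule.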
I am confused now because according to Zurek’s wikipedia page his thesis advisor was William C. Schieve. But according to Wheeler’s, Zurek was a student of his.
Maybe Wheeler was a co-advisor?
I think that the desire to make quantum mechanics apply to the whole universe (as there seems to be no border between classical and quantum mechanics) is the only reason for the popularity of the Everett point of view that there is only unitary evolution of the universe and nothing else. But since the purely unitary view lacks definite concepts for the notion of objective local events and local observers Everett had to introduce other (in my view abstruse) stuff to produce (an appearance of) consequences of the universal evolution that are applicable to the lab – something that textbook quantum mechanics has no problem postulating.
In my recent work I propose a different interpretation of quantum physics, which shares with Everett’s view the advantage of having only unitary evolution for the universe. But it avoids the downsides (of ill-defined worlds, splits, events, histories, etc.). This ”thermal interpretation” is described in my paper https://arxiv.org/abs/1902.10779 and discussed in further papers accessible from my web site http://www.mat.univie.ac.at/~neum/physfaq/therm/thermalMain.html
These papers form the basis of a book featuring the thermal interpretation of quantum physics that will come out soon: