Gerard ’t Hooft in recent years has been pursuing some idiosyncratic ideas about quantum mechanics; for various versions of these, see papers like this, this, this and this. His latest version is last month’s Discreteness and Determinism in Superstrings, which starts with cellular automata in 1+1 dimensions and somehow gets a quantized superstring out of it (there are also some comments about this on his web-site here).

Personally I find it difficult to get at all interested in this (for reasons I’ll try to explain in a moment), but those who are interested might like to know that ’t Hooft has taken to explaining himself and discussing things with his critics at a couple of places on-line, including Physics StackExchange and Lubos Motl’s blog. If you want to discuss ’t Hooft’s ideas, it’s best to use one of these other venues, where you can interact with the man himself.

One of ’t Hooft’s motivations is a very common one: discomfort with the non-determinism of the conventional interpretation of quantum mechanics. The world is full of crackpots with similar feelings who produce reams of utter nonsense. ’t Hooft, though, is a scientist of the highest caliber, and as with some other people who have tried to do this sort of thing, I don’t think what he is producing is nonsense. It is, however, extremely speculative and, to my taste, starts from a very unpromising point.

Looking at the results he has, there’s very little of modern physics there, including pretty much none of the standard model (which ’t Hooft himself had a crucial role in developing). If you’re going to claim to solve open problems in modern physics with some radical new ideas, you need to first show that these ideas reproduce the successes of the established older ones. From what I can tell, ’t Hooft may be optimistic he can get there, but he’s a very long way from such a goal.

Another reason for taking very speculative ideas seriously, even if they haven’t gotten far yet, is if they seem to involve a set of powerful and promising ideas. This is very much a matter of judgement: what to me are central and deep ideas about mathematics and physics may be quite different from someone else’s list. In this case, the central mathematical structures of quantum mechanics fit so well with central, deep and powerful insights of modern mathematics (through symmetries and representation theory) that any claim these should be abandoned in favor of something very different has a big hurdle to overcome. Basing everything on cellular automata seems to me extremely unpromising: you’re throwing out deep and powerful structures for something very simple and easy to understand, but with little inherent explanatory power. That’s my take on this; those who see this differently and want to learn more about what ’t Hooft is up to should follow the links above and try discussing these matters at the venues ’t Hooft is frequenting.

I hope you’ll also comment on Alain Connes new paper. It does try to be consistent with the standard model, but apart from that it’s too hard for me.

Even though my work is here sketched as “not even wrong”, I will avoid any hint of hostility, as requested; I do think I have the right to say something here in my defense. (One positive note: “Not even wrong” sounds a little better than “Wrong wrong wrong” on another blog …)

First, I agree that cellular automata don’t sound very sexy; those who have seen Wolfram’s book will certainly be discouraged. But I want to stress as much as I can that I am striving for a sound and interesting mathematical basis for what I am doing; least of all would I be tempted to throw away any of the sound and elegant mathematics of quantum mechanics and string theory. Symmetries, representation theory, and more will continue to be central themes.

I am disappointed about the reception of my paper on string theory, as I was hoping that it would open some people’s eyes. Perhaps it will, if some of my friends are prepared to put their deeply rooted scepticism about the notion of determinism on hold.

I think the mathematics I am using is interesting and helpful. I encounter elliptic theta functions, and hit upon an elegant relation between sets of non-commuting operators p and q on the one hand and integer, commuting variables P and Q on the other. All important features of Quantum Mechanics are kept intact, as they should be.

I did not choose to side with Einstein on the issue of QM; it just came out that way, and I can’t help that. It is also not that I have an aversion of any kind to Quantum Mechanics as it stands; it is only in the interpretation that I think I have non-trivial observations.

If you like the many-worlds interpretation, or Bohm’s pilot waves, fine, but I never thought those have anything to do with the real world. My interpretation I find far superior, but I have just found out, from other blogs as well as this one, that most people are not ready for my ideas. Since the mud thrown at me is slippery, it is hard to defend my ideas, but I think I am making progress.

They could well lead to new predictions, such as a calculable string coupling constant g_s, and (an older prediction) the limitations for quantum computers. They should help investigators to understand what they are doing when they discuss “quantum cosmology”, and eventually, they should be crucial for model building.

G. ’t H

Prof. ‘t Hooft,

Thanks for writing here with your reaction to and comments on the blog posting. I hope you’ll keep in mind that I often point out that “Not Even Wrong” is where pretty much all speculative ideas start life. Some of the ideas I’m most enthusiastic about are certainly now “Not Even Wrong”, in the sense of being far, far away from something testable.

While my own enthusiasms are quite different from yours, and lead me to some skepticism about your starting point, the reason for this blog posting was not to launch a hostile attack, but to point others to what I thought was an interesting discussion, one which many of my readers might find valuable to know about.

Good luck pursuing these ideas, may you show my skepticism and that of others to be mistaken…

Prof. ‘t Hooft,

While I am not familiar with your particular work, I am familiar with previous explorations of the theme of interpretations of quantum mechanics and determinism, particularly with old things such as de Broglie–Bohm theory, Bell’s contextual ontological model, the Kochen–Specker model, and newer things such as Harrigan and Spekkens’ classification of ontological models, Lewis et al.’s psi-epistemic model, Hardy’s excess-baggage theorem, etc. But after studying them with interest for a while, I gradually developed the opinion that they have no good motivation, use uninteresting mathematics, and have been generally fruitless. Since then I have stopped paying attention to this area of research.

Since you seem to be interested in defending your work and, furthermore, in publicizing it, I would be very interested in knowing how your work differs from these previous explorations and, more importantly, what motivated you to begin work in this area (since you claim not to be motivated by an aversion to Quantum Mechanics).

I do hope you can convince me to study your work, and perhaps an answer could be useful to other researchers as well.

I think it’s great that Prof. ‘T-Hooft (hope I got the capitalizations and apostrophes right) is commenting here. This site is in many ways much less hostile than other sites we could all name easily.

I started out (well, in terms of blog years, not my physics education several decades ago) as a string critic. I didn’t find Brian Greene’s first book very persuasive (nor a book from the 1980s, I forget who wrote it). I was receptive to Lee Smolin’s “Three Roads to Quantum Gravity” and to this blog, even before the book came out.

However, partly as a result of reading this blog for about half a dozen years (I think), I’ve gradually warmed to a lot of the string stuff. Not the Kaku stuff, but the Arkani-Hamed types of stuff.

The fact that we may be 10-15 orders of magnitude away from probing the energies needed to test some of these theories is, of course, daunting. And I think both Woit and Smolin were basically right to warn of the “We are only interested in hiring string theorists” situation a while back.

However, things seem to be on a somewhat better keel today. Or so is my perception from this blog and others. And of course the stuff coming out of cosmology and observational astronomy is just plain exciting.

How it links with math is also exciting, albeit probably decades or even centuries off in terms of real experimental links.

Exciting times. And I think “Not Even Wrong” is useful for reminding its readers to doubt some popular theories. Sort of like the guy the Romans used to hire to ride behind the triumphant god-king warrior to remind him he is not really a god.

As for CAs, I’d've thunk (a technical term) that Kochen-Specker and other no-go results rule out strictly local (as CAs seem to be) theories. But I look forward to seeing Prof. ‘T-Hooft enter this arena of debate.

–Tim May

Prof. ‘t Hooft,

I think the key point in all this is in your last paragraph:

“They could well lead to new predictions, such as a calculable string coupling constant g_s, and (an older prediction) the limitations for quantum computers. They should help investigators to understand what they are doing when they discuss “quantum cosmology”, and eventually, they should be crucial for model building.”

Regardless of what people think about your ideas (most people probably just dismiss them without reading, after hearing what you are claiming), the key point with new interpretations of Quantum Mechanics is suggesting an experiment where they can predict something new. (OK, saying this to a Nobel-prize-winning professor sounds perhaps very pretentious, but I just want to express my deepest concern about your message.) A new paper based on this one, but concentrating only on the “what’s measurably different” in all of what you are saying, would (I suppose) attract attention and perhaps put skepticism on hold for a while. I confess I find it difficult to digest your paper, but if I could understand better where it could measurably matter, then this would be a different story. Otherwise it will probably remain a curiosity.

Let me finish by saying that I deeply admire your courage in working on fundamental problems in QM, and as a semi-young physicist I even more deeply envy the freedom you have to pursue your own ideas.

Prof ‘t Hooft is in exactly the right situation to play with highly speculative ideas in a way that a young postdoc cannot afford to.

If anyone has “earned the right” to engage the profession with speculative ideas, it is Professor ‘t Hooft with his track record of brilliant theoretical insights. I look forward to learning more about this.

‘t Hooft,

Could you explain why you think the Bohm theory is rubbish? It seems to me that it has already done exactly what you are trying to do — provide a perfectly consistent alternative to the standard approach in quantum mechanics.

I also don’t think that you are siding with Einstein on quantum mechanics. Einstein made clear that he did NOT think something like Bell’s inequality could exist, but it DOES! He was very clear that his thinking would change if something like a Bell inequality was discovered, in particular it would lead to something along the lines of a Bohm interpretation.

At first, I did not believe it was GT – but doing the (’t) correctly (twice even) gave it away. Thanks for posting.

I posted this before and I will post it again!

Ray Streater on Bohmian Mechanics:

http://www.mth.kcl.ac.uk/~streater/lostcauses.html#XI

Unless Prof. ’t Hooft himself wants to answer and discuss the topic, enough about Bohmian mechanics, please.

`t Hooft is one of the good guys, an incarnation of Dirk Foster and a physicist of the highest calibre. This deterministic programme seems Just Possibly True, and should probably serve as inspiration for the sort of paradigm-shifting research everyone seems to advocate (but no-one pursues). My humble opinion.

I think the biggest issue here is that ‘t Hooft (clearly a brilliant physicist) ignores Conway/Kochen, who clearly prove, given three axioms, that the universe *can’t* be deterministic. Actually, they prove it isn’t “random” in the usual sense either. And the axioms one needs to assume are essentially completely straight QM and relativity, together with the inability to influence past events.

Though I don’t necessarily accept ‘t Hooft’s scenario, I am a little surprised by some of the criticism. The no-go theorems mentioned above are not relevant.

There is no contradiction for quantum evolution to occur in a deterministic system. A cellular automaton, for example, can be regarded as a transfer matrix with a special property. This property is that given any initial (basis) state, the matrix element is not zero for a unique choice of final (basis) state. Such transfer matrices can be unitary, hence identified with quantum evolution.
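A toy sketch of this property (my own illustration, not from ‘t Hooft’s paper, assuming numpy): any deterministic update rule that is a bijection on basis states has a permutation matrix as its transfer matrix, and a permutation matrix is unitary, hence a valid quantum evolution operator.

```python
import numpy as np

def transfer_matrix(update, n):
    """Transfer matrix of a deterministic update rule: column j has a
    single 1 in row update(j), i.e. basis state j evolves to basis
    state update(j).  For unitarity, update must be a bijection."""
    U = np.zeros((n, n))
    for j in range(n):
        U[update(j), j] = 1.0
    return U

# Example: a cyclic shift on 5 "cells", a toy deterministic rule.
n = 5
U = transfer_matrix(lambda j: (j + 1) % n, n)

# The transfer matrix is unitary: U^dagger U = identity.
assert np.allclose(U.conj().T @ U, np.eye(n))
```

(The names `transfer_matrix` and `update` are mine, chosen for illustration; if the rule were not one-to-one, two columns would share a row and unitarity would fail.)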

I don’t know whether the scenario is useful or not. It is certainly not, however, ill-conceived.

There are two problems with Jeff’s dismissal of ‘t H.’s approach; Peter Orland has touched on one. For the other, see

http://philsci-archive.pitt.edu/8437/1/WuthrichChristian2010PhilSci_IndeterministicWorld.pdf

Wüthrich’s observation that even Conway and Kochen’s scenario depends on access to information about—in fact is a *function* of—prior events (which, NB, has nothing to do with influence over prior events) seems like a huge analytic challenge to C&K’s core thesis. Obviously, the game is still in play, and meanwhile, you can’t appeal to C&K as any kind of decisive refutation of what ‘t H. is proposing.

… and it is not the same as DeBroglie/Bohm mechanics. Or Nelson’s for that matter.

Peter O,

I’m a bit confused about why you feel Conway/Kochen isn’t relevant. Prof. ‘t Hooft is more than a good enough mathematician to understand that his system somehow violates one of their 3 axioms, but I can’t figure out from his writeup which one it would be. Then again, if you read his paper, he seems to be ducking the issue: “The philosophy used here is often attacked by using Bell’s inequalities[1]—[3] applied to some Gedanken experiment, or some similar “quantum” arguments. In this paper we will not attempt to counter those (see Ref. [16]), but we restrict ourselves to the mathematical expressions.” Since the discussion in [3] is a valid mathematical proof, I can’t understand ignoring it.

Jeff,

‘t Hooft’s model does not specify values of observables. It has deterministic evolution of basis states. That is why the theorems you quote do not apply.

Peter o

Thanks, been too long since I thought like a physicist. :-). I’m reading as a mathematician, and assumed “deterministic” had the usual meaning.

Just another link, which may be helpful in this context:

http://www.youtube.com/watch?v=lGB3oVxivhE

“Though I don’t necessarily accept ‘t Hooft’s scenario, I am a little surprised by some of the criticism. The no-go theorems mentioned above are not relevant.

There is no contradiction for quantum evolution to occur in a deterministic system. A cellular automaton, for example, can be regarded as a transfer matrix with a special property. This property is that given any initial (basis) state, the matrix element is not zero for a unique choice of final (basis) state.”

This (cellular automaton as a transfer matrix) may or may not be true but it’s not interesting. The problem of interest here is not alternative computational models of QFT (i.e., alternative ways to solve PDEs or calculate path integrals). The problem of interest is WHY DOES SOMETHING LIKE COLLAPSE OF THE WAVE FUNCTION OCCUR?

The deterministic answer, in all its forms, whether Bohm or many universes or whatever, is to claim that this “collapse” is a misunderstanding, that if you define the problem properly, then some deterministic combination of initial conditions plus minor perturbations along the way lead you inevitably to the specific state that you measured.

The nice thing about this world view is that it’s compatible with relativity — it doesn’t bring up any (so far completely unresolved IMHO) questions about “when” does this collapse occur, bearing in mind that “when” is a relative concept and so does the collapse “propagate outward” at the speed of light from some initiating point (very problematic) or does it happen “simultaneously” (hmm, that’s not a very relativistic word) (perhaps simultaneously over some especially blessed world surface?).

OK, so determinism is nice. Only problem is that it appears to be wrong wrong wrong. Bell’s inequality is one version of why it’s wrong, but the deeper reason it is wrong is that it uses a broken mental model of the relationship between probability and QM.

Probability is based on the idea of an underlying space \Omega, a sigma-field of events, and a measure on that sigma-field. On top of this we construct random variables which (and this is the important part) are all COMPATIBLE. That is, for any random variables X and Y, the concept of a joint distribution, say F(X, Y), is well defined. At root, this is because the sigma-fields generated by X and Y are subfields of the underlying sigma-field on \Omega, and we can construct their intersection.

Even more fundamentally, this is because the “building up” operator we’re working with is set union, and set intersection plays nicely with set union.

Now we switch to the world of operators and vector spaces.

Given one particular operator O, this has a set of eigenvalues.

Associated with the set of eigenvalues in (-\infty, \lambda] is a vector space, call it V_\lambda.

IF in addition we are given a vector \psi, we can now associate a real number with V_\lambda, namely the squared length of the projection of \psi onto V_\lambda.

This in other words gives us a monotonic increasing function (a measure) associated with increasing \lambda.

This may look unfamiliar, but it’s really not scary; it’s the usual stuff you are familiar with: an eigenvalue, with an associated “probability” for that eigenvalue given by the squared length of the projection of \psi onto its eigenspace, only made a little more rigorous and described in the language of probability.

This monotonic increasing function associated with increasing \lambda is just like the cumulative distribution function associated with a random variable, and because of that, people have for almost a hundred years been slipping informally from operator language (eigenvalues, eigenspaces, “probability associated with an eigenvalue”) to an implicit assumption that we are dealing with full-blown probability theory and random variables. We are NOT.
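A small numerical sketch of that analogy (my own illustration, assuming numpy): for a Hermitian operator and a unit state vector, the squared projection onto eigenspaces with eigenvalue at most \lambda is monotone non-decreasing and runs from 0 to 1, exactly like a cumulative distribution function.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian operator O and a random unit state vector psi.
A = rng.normal(size=(4, 4))
O = (A + A.T) / 2
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)

evals, evecs = np.linalg.eigh(O)

def F(lam):
    """Squared norm of the projection of psi onto the eigenspaces of O
    with eigenvalue <= lam: the spectral 'CDF' of O in the state psi."""
    mask = evals <= lam
    amps = evecs.T @ psi  # components of psi in the eigenbasis of O
    return float(np.sum(amps[mask] ** 2))

grid = np.linspace(evals.min() - 1, evals.max() + 1, 50)
values = [F(x) for x in grid]

# Monotone non-decreasing, starting at 0 and ending at 1, like a CDF.
assert all(a <= b + 1e-12 for a, b in zip(values, values[1:]))
assert abs(F(evals.max() + 1) - 1.0) < 1e-12
```

(The function name `F` and the random operator are mine, chosen to mirror the CDF notation in the comment.)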

Things fall apart if we now consider a second operator, call it Q, which does not commute with O. Whereas two random variables ARE always “compatible” in the sense I stated earlier, specifically that they have a joint distribution (and an associated underlying set of points \xi \in \Omega, each of which represents “initial conditions” that might lead to a particular joint outcome X=x and Y=y), this sort of thing is no longer true for non-commuting operators O and Q.

Q (and the vector \psi) generates another spectrum of eigenvalues, each with an associated weight, and so can also, apparently, be thought of as a random variable.

But there is no “compatibility” between these two random variables. More specifically, there IS NO finer sigma-algebra which contains both the sigma-algebra generated by the O-“random variable” and the one generated by the Q-“random variable”.

At root, this is because the fundamental “points” we are dealing with when we treat O as having an associated random variable are not simple points; they are vector spaces, and the building-up operation as we aggregate them is not a union of sets of “simple points” but a Cartesian product of vector spaces. And the Cartesian product does NOT play nicely with intersection the way union does (we don’t have the full set of De Morgan laws).

Or to put it slightly differently: given a set \Omega, the set which is actually relevant to QM is C^\Omega, and the measures defined by operators apply to this set C^\Omega “sliced by Cartesian product” along different angles for different operators. This is different from standard measures, which derive from slicing \Omega “by union”. The union slices, when intersected, still give useful sets. Cartesian-product slices, when intersected, simply give the set {0}, not useful structure.

I know this sounds like a whole lot of weirdness, but there’s nothing unorthodox here — it’s just standard probability theory, and standard Hilbert space theory interpreted as measure theory. But, IMHO, this conceptualization is, once you understand it, extremely powerful in revealing where the true weirdness of QM lies. In particular, again IMHO, it’s as powerful a mathematical argument as we’re ever going to get that the underlying idea of many-worlds theories cannot work. I’ve never seen a real mathematical formulation of a many-worlds theory, so I’ve no idea what the proponents actually mean; but as far as I can tell, what they mean is essentially a probabilistic model: the multiverse consists of some unfathomably large set \Omega of points \xi, each \xi corresponding to a universe, with some sort of measure tied to subsets of \Omega, and our universe is the deterministic unfolding of one of these \xi. As I’ve tried to explain, you just can’t get this to work, because the model only works when you “aggregate points by union”, and QM doesn’t do that.
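The incompatibility described above can be made concrete in two lines of linear algebra (my own illustration, assuming numpy): each of two non-commuting Hermitian operators defines a perfectly good probability distribution for a given state, yet the operators share no common eigenbasis, so there is no joint sample space refining both.

```python
import numpy as np

# Two non-commuting observables: the Pauli matrices sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)  # a sigma_z eigenstate ("up")

def probs(O, psi):
    """Born-rule distribution over the eigenvalues of O in state psi."""
    evals, evecs = np.linalg.eigh(O)
    return {round(float(v), 6): float(abs(evecs[:, i].conj() @ psi) ** 2)
            for i, v in enumerate(evals)}

# Each operator separately gives a bona fide probability distribution.
assert abs(sum(probs(sx, psi).values()) - 1.0) < 1e-12
assert abs(sum(probs(sz, psi).values()) - 1.0) < 1e-12

# But the two "random variables" are not compatible: the operators do
# not commute, so there is no common eigenbasis to serve as a joint
# sample space.
assert not np.allclose(sx @ sz - sz @ sx, 0)
```

(The helper `probs` is my own name for the Born-rule distribution; nothing here is specific to ‘t Hooft’s construction.)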

Maynard,

Bell’s inequality is not violated in ‘t Hooft’s model. It is just a way to formulate quantum mechanics. It is deterministic in the sense that one basis vector is sent to the next during some discrete time interval. This is a linear unitary map (both vectors are normalized), hence just quantum mechanics. Nothing is any different concerning collapse or no-collapse of wave functions.

A trivial example of this kind of model (much simpler than what ‘t Hooft considers) can be done with two spin states, s1 and s2. The transfer matrix sends s1 to s2 and s2 to s1. It therefore has the representation of the Pauli matrix sigma1. Well, that is a unitary evolution operator. A slightly less trivial example is a permutation of N objects, with N>2 (which can have complex eigenvalues, with unit norm). Any permutation can be represented as a unitary N X N matrix. Hence it can be written as the exponential of minus i (the square root of minus 1) times the time interval times a Hamiltonian (also an N X N matrix). The Schroedinger equation is satisfied, in the sense that its solution is the state vector. The eigenvalues of the transfer matrix do not have to be real numbers (though its components are real).
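The permutation example can be checked numerically; here is a sketch (my own, assuming numpy and scipy): a cyclic permutation of three basis states is unitary with unit-norm complex eigenvalues, and i times its matrix logarithm (per unit time interval) is a Hermitian Hamiltonian whose Schroedinger evolution reproduces the permutation exactly.

```python
import numpy as np
from scipy.linalg import expm, logm

# Cyclic permutation of N=3 basis states: a deterministic update rule.
U = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)

# U is unitary, with complex eigenvalues of unit norm (cube roots of 1).
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)

# Writing U = exp(-i * dt * H) defines a Hamiltonian H = (i/dt) log U ...
dt = 1.0
H = 1j * logm(U) / dt

# ... which is Hermitian, so this is ordinary Schroedinger evolution.
assert np.allclose(H, H.conj().T)
assert np.allclose(expm(-1j * dt * H), U)
```

(The choice dt = 1 is arbitrary; any discrete time step gives the same Hermitian H up to rescaling.)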

This kind of model is not a traditional hidden-variable theory in the sense of De Broglie, Bohm, Nelson or anyone else. Unlike those theories, you cannot specify all observables simultaneously. You can specify certain observables at discrete time intervals, but the uncertainty principle is intact.

I am not saying you should accept the idea, just that you need to see it for what it is.

Anyway, I would prefer not spending time defending this. I wish I did not get incensed when I see people’s work criticized for the wrong reasons.

I meant to say Bell’s inequality is violated. It is just QM.

Thanks, Peter.

I don’t want to thread jack, but how does what you say fit in with Peter’s statement about “One of ’t Hooft’s motivations is a very common one, discomfort with the non-determinism of the conventional interpretation of quantum mechanics.”

I will admit I have not yet read this particular paper of ‘t Hooft’s (though I have pretty much always been pleased when I have read something by him), but I assumed, from that, that this WAS the point at issue. Hence my long attempt to summarize the issues in play and my views on them.

If all we have is “deterministic evolution of the wave function” then why is Peter saying what he is saying?

Maynard,

If you read my posting, you’d notice that its point is to explain why I’m not trying to seriously understand exactly what ‘t Hooft is doing. So, I’m not in any position to participate in a discussion of this sort. Actually, neither are you, since you admit you haven’t read his paper. Enough about this.

Here’s a more concise rebuttal of Streater: http://www.ilja-schmelzer.de/realism/BMarguments.php

No more Bohmian mechanics. At all, ever.

Prof ‘t Hooft, don’t get discouraged by the opposition to your cellular automata approach to QM. Computer science is still in its infancy, and its impact on mathematics and physics has only just begun.

I think your approach via minimalistic cellular automata is very promising, and the recent anti-determinism bias is just a fad. QM has equally expressive deterministic and non-deterministic formulations, and an exploration of the limits of deterministic formal systems given the current empirical evidence is essential. There seems little reason to posit non-determinism if it is not necessary to do so, as deterministic explanations invariably have more explanatory power.