Something Deeply Hidden

Sean Carroll’s new book, Something Deeply Hidden (available in stores in early September), is a quite good introduction to issues in the understanding of quantum mechanics, unfortunately wrapped in a book cover and promotional campaign of utter nonsense. Most people won’t read much beyond the front flap, where they’ll be told:

Most physicists haven’t even recognized the uncomfortable truth: physics has been in crisis since 1927. Quantum mechanics has always had obvious gaps—which have come to be simply ignored. Science popularizers keep telling us how weird it is, how impossible it is to understand. Academics discourage students from working on the “dead end” of quantum foundations. Putting his professional reputation on the line with this audacious yet entirely reasonable book, Carroll says that the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.

This kind of ridiculous multi-worlds woo is by now rather tired; you can find variants of it in a host of other popular books written over the past 25 years. The great thing about Carroll’s book though is that (at least if you buy the hardback) you can tear off the dust jacket, throw it away, and unlike earlier such books, you’ll be left with something well-written, and if not “entirely reasonable”, at least mostly reasonable.

Carroll gives an unusually lucid explanation of what the standard quantum formalism says, making clear the ways in which it gives a coherent picture of the world, but one quite a bit different from that of classical mechanics. Instead of the usual long discussions of alternatives to QM such as Bohmian mechanics or dynamical collapse, he deals with these expeditiously in a short chapter that appropriately explains the problems with such alternatives. The usual multiverse mania that has overrun particle theory (the cosmological multiverse) is relegated to a short footnote (page 122) which just explains that that is a different topic. String theory gets about half a page (discussed with loop quantum gravity on pages 274-5). While the outrageously untrue statement is made that string theory “makes finite predictions for all physical quantities”, there’s also the unusually reasonable “While string theory has been somewhat successful in dealing with the technical problems of quantum gravity, it hasn’t shed much light on the conceptual problems.” AdS/CFT gets a page or so (pages 303-4), with half of it devoted to explaining that its features are specific to AdS space, about which Carroll writes, “Alas, it’s not the real world.” He has this characterization of the situation:

There’s an old joke about the drunk who is looking under a lamppost for his lost keys. When someone asks if he’s sure he lost them there, he replies, “Oh no, I lost them somewhere else, but the light is much better over here.” In the quantum-gravity game, AdS/CFT is the world’s brightest lamppost.

I found Carroll’s clear explanations especially useful on topics where I disagree with him, since reading him clarified several different issues for me. I wrote recently here about one of them. I’ve always been confused about whether I fall in the “Copenhagen/standard textbook interpretation” camp or the “Everett” camp, and reading this book gave me a better understanding of the difference between the two, which I now think to a large degree comes down to what one thinks about the problem of the emergence of the classical from the quantum. Is this a problem that is hopelessly hard or not? Since it seems very hard to me, but I do see that limited progress has been made, I’m sympathetic to both sides of that question. Carroll does at times stray too much into the unfortunate territory of, for instance, Adam Becker’s recent book, which tried to make a morality play out of this difference, with Everett and his followers fighting a revolutionary battle against the anti-progress conservatives Bohr and Heisenberg. But in general he’s much less tendentious than Becker, making his discussion much more useful.

The biggest problem I have with the book is the part referenced by the unfortunate material on the front flap. I’ve never understood why those favoring so-called “Many Worlds” start with what seems to me like a perfectly reasonable project, saying they’re trying to describe measurement and classical emergence from quantum purely using the bare quantum formalism (states + equation of motion), but then usually start talking about splitting of universes. Deciding that multiple worlds are “real” never seemed to me to be necessary (and I think I’m not the only one who feels this way; evidently Zurek also objects to this). Carroll in various places argues for a multiple world ontology, but never gives a convincing argument. He finally ends up with this explanation (pages 234-5):

The truth is, nothing forces us to think of the wave function as describing multiple worlds, even after decoherence has occurred. We could just talk about the entire wave function as a whole. It’s just really helpful to split it up into worlds… characterizing the quantum state in terms of multiple worlds isn’t necessary – it just gives us an enormously useful handle on an incredibly complex situation… it is enormously convenient and helpful to do so, and we’re allowed to take advantage of this convenience because the individual worlds don’t interact with one another.

My problem here is that the whole splitting thing seems to me to lead to all sorts of trouble (how does the splitting occur? what counts as a separate world? what characterizes separate worlds?), so if I’m told I don’t need to invoke multiple worlds, why do so? According to Carroll, they’re “enormously convenient”, but for what (other than for papering over rather than solving a hard problem)?

In general I’d rather avoid discussions of what’s “real” and what isn’t (e.g. see here) but, if one is going to use the term, I am happy to agree with Carroll’s “physicalist” argument that our best description of physical reality is as “real” as it gets, so the quantum state is preeminently “real”. The problem with declaring “multiple worlds” to be “real” is that you’re now using the word to mean something completely different (one of these worlds is the emergent classical “reality” our brains are creating out of our sense experience). And since the problem here (classical emergence being just part of it) is that you don’t understand the relation of these two very different things, any argument about whether another “world” besides ours is “real” or not seems to me hopelessly muddled.

Finally, the last section of the book deals with attempts by Carroll to get “space from Hilbert space”, see here, which the cover flap refers to as “His [Carroll’s] reconciling of quantum mechanics with Einstein’s theory of relativity changes, well, everything.” The material in the book itself is much more reasonable, with the highly speculative nature of such ideas emphasized. Since Carroll is such a clear writer, reading these chapters helped me understand what he’s trying to do and what tools he is using. From everything I know about the deep structure of geometry and quantum theory, his project seems to me highly unlikely to give us the needed insight into the relation of these two subjects, but there’s no reason he shouldn’t try. On the other hand, he should ask his publisher to pulp the dust jackets…

Update: Carroll today on Twitter has the following argument from his book for “Many Worlds”:

Once you admit that an electron can be in a superposition of different locations, it follows that a person can be in a superposition of having seen the electron in different locations, and indeed that reality as a whole can be in a superposition, and it becomes natural to treat every term in that superposition as a separate “world”.

“Becomes natural” isn’t much of an argument (faced with a problem, there are “natural” things to do which are just wrong and don’t solve the problem). Deciding to “treat every term in that superposition as a separate ‘world’” may be natural, but it doesn’t actually solve any problem; instead it creates a host of new ones.
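
Schematically, the argument in the quoted tweet rests on nothing more than the standard textbook account of a measurement interaction (this sketch is my own gloss, not something quoted from Carroll’s book): an electron in a superposition of two locations, coupled to an observer initially in a “ready” state, evolves unitarily into an entangled superposition:

```latex
\left(\alpha\,|x_1\rangle + \beta\,|x_2\rangle\right)\otimes|\text{ready}\rangle
\;\longrightarrow\;
\alpha\,|x_1\rangle\,|\text{saw }x_1\rangle \;+\; \beta\,|x_2\rangle\,|\text{saw }x_2\rangle
```

The Everettian move is to call each term on the right a “world”. Nothing in the formalism itself forces that reading, which is exactly the point at issue.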

Update: Some places to read more about these issues.

The book Many Worlds?: Everett, Quantum Theory and Reality gathers various essays, including:

  • Simon Saunders, Introduction
  • David Wallace, Decoherence and Ontology
  • Adrian Kent, One World Versus Many

David Wallace’s book, The Emergent Multiverse.

Blog postings from Jess Riedel here and here.

This from Wojciech Zurek, especially the last section, including parts quoted here.

Last Updated on

Posted in Book Reviews, Multiverse Mania | 21 Comments

What’s the difference between Copenhagen and Everett?

I’ve just finished reading Sean Carroll’s forthcoming new book, will write something about it in the next few weeks. Reading the book and thinking about it did clarify various issues for me, and I thought it might be a good idea to write about one of them here. Perhaps readers more versed in the controversy and literature surrounding this issue can point me to places where it is cogently discussed.

Carroll (like many others before him, for a recent example see here), sets up two sides of a controversy:

  • The traditional “Copenhagen” or “textbook” point of view on quantum mechanics: quantum systems are described by a vector in the quantum state space, evolving unitarily according to the Schrödinger equation, until such time as we choose to do a measurement or observation. Measuring a classical observable of this physical system is a physical process which gives results that are eigenvalues of the quantum operator corresponding to the observable, with the probability of occurrence of an eigenvalue given in terms of the state vector by the Born rule.
  • The “Everettian” point of view on quantum mechanics: the description given here is “The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more.” In other words, the physical process of making a measurement is just a specific example of the usual unitary evolution of the state vector; there is no need for a separate fundamental physical rule for measurements.
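
For reference, the two pieces of formalism at issue can each be written in one line (these are the standard textbook statements, with the |a_i⟩ denoting eigenvectors of the measured observable):

```latex
i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = H\,|\psi(t)\rangle
\quad\text{(unitary evolution)},
\qquad
P(a_i) = \left|\langle a_i|\psi\rangle\right|^2
\quad\text{(Born rule)}.
```

The dispute is over the status of the second formula: is it a separate fundamental rule governing measurements, or something to be derived, in principle, from the first?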

I don’t want to discuss here the question of whether the Everettian point of view implies a “Many Worlds” ontology, that’s something separate which I’ll write about when I get around to writing about the new book.

What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all. If you ask a follower of Copenhagen (let’s call her “Alice”) “is the behavior of that spectrometer in your lab governed in principle by the laws of quantum mechanics?”, I assume that she would say “yes.” She might, though, go on to point out that this is practically irrelevant to its use in measuring a spectrum, where the results it produces are probability distributions in energy, which can be matched to theory using Born’s rule.

The Everettian (let’s call him “Bob”) will insist on the point that the behavior of the spectrometer, coupled to the environment and system it is measuring, is described in principle by a quantum state and evolves according to the Schrödinger equation. Bob will acknowledge though that this point of principle is useless in practice, since we don’t know what the initial state is, couldn’t write it down if we did, and couldn’t solve the relevant Schrödinger equation even if we could write down the initial state. Bob will explain that for this system, he expects “emergent” classical behavior, producing probability distributions in energy, which can be matched to theory using Born’s rule.

So, what’s the difference between the points of view of Alice and Bob here? It only seems to involve the question of how classical behavior emerges from quantum, with Alice saying she doesn’t know how this works, and Bob saying he doesn’t know either but conjecturing that it can be done in principle without introducing new physics beyond the usual quantum state/Schrödinger equation story. Alice likely will acknowledge that she has never seen or heard of any evidence of such new physics, so has no reason to believe it is there. They both can agree that understanding how classical emerges from quantum is a difficult problem, well worth studying, one that we are in a much better position now to work on than we were way back when Bohr, Everett and others were struggling with this.


Posted in Quantum Mechanics | 26 Comments

Where We Are Now

For much of the last 25 years, a huge question hanging over the field of fundamental physics has been what judgment the results from the LHC would provide about supersymmetry, which underpins the most popular speculative ideas in the subject. These results are now in, and conclusively negative. In principle one could still hope for the HL-LHC (operating in 2026-35) to find superpartners, but there is no serious reason to expect this. Going farther out in the future, there are proposals for an extremely expensive 100km larger version of the LHC, but this is at best decades away, and there again is no serious reason to believe that superpartners exist at the masses such a machine could probe.

The reaction of some parts of the field to this falsification of hopes for supersymmetry has not been the abandonment of the idea that one would expect. For example, today brings the bizarre news that failure has been rewarded with a $3 million Special Breakthrough Prize in Fundamental Physics for supergravity. For uncritical media coverage, see for instance here, here, and here.

Some media outlets do better. I first heard about this from Ryan Mandelbaum, who writes here. Ian Sample at the Guardian does note that negative LHC results are “leading many physicists to go off the theory” and quotes one of the awardees as saying:

We’re going through a very tough time… I’m not optimistic. I no longer encourage students to go into theoretical particle physics.

At Nature, the sub-headline is “Three physicists honoured for theory that has been hugely influential — but might not be a good description of reality” and Sabine Hossenfelder is quoted. At her blog, she ends with the following excellent commentary:

Awarding a scientific prize, especially one accompanied by so much publicity, for an idea that has no evidence speaking for it, sends the message that in the foundations of physics contact to observation is no longer relevant. If you want to be successful in my research area, it seems, what matters is that a large number of people follow your footsteps, not that your work is useful to explain natural phenomena. This Special Prize doesn’t only signal to the public that the foundations of physics are no longer part of science, it also discourages people in the field from taking on the hard questions. Congratulations.

In related news, yesterday I watched this video of a recent discussion between Brian Greene and others which, together with a lot of promotional material about string theory, included significant discussion of the implications of the negative LHC results. A summary of what they had to say would be:

  • Marcelo Gleiser has for many years been writing about the limits of scientific knowledge, and sees this as one more example.
  • Michael Dine has since 2003 been promoting the string theory landscape/multiverse, with the idea that one could make statistical predictions using it. Back then we were told that “it is likely that this leads to a prediction of low energy supersymmetry breaking” (although Dine soon realized this wasn’t working out; see here). In 2007 Physics Today published his String theory in the era of the Large Hadron Collider (discussed here), which complained about how “weblogs” had it wrong that string theory had no relation to experiment. That piece claimed that

    A few years ago, there seemed little hope that string theory could make definitive statements about the physics of the LHC. The development of the landscape has radically altered that situation.

    and that

    The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.

    Confronted by Brian with the issue of LHC results, Dine looks rather uncomfortable, but claims that there still is hope for string theory and the landscape, that now big data and machine learning can be applied to the problem (for commentary on this, see here). He doesn’t though expect to see success in his lifetime.

  • Andy Strominger doesn’t discuss supersymmetry in particular but, regarding the larger superstring theory unification idea, tries to make the case that it hasn’t been a failure at all, but a success way beyond what was expected. The argument is basically that the search for a unified string theory was like Columbus’s search for a new sea route to China. He didn’t find it, but found something much more exciting, the New World. In this analogy, instead of finding some tedious reductionist new layer of reality as hoped, string theorists have found some revolutionary new insight about the emergent nature of gravity:

    I think that the idea that people were excited about back in 1985 was really a small thing, you know, to kind of complete that table that you put down in the beginning of the spectrum of particles…

    We didn’t do that, we didn’t predict new things that were going to be measured at the Large Hadron Collider, but what has happened is so much more exciting than our original vision… we’re getting little hints of a radical new view of the nature of space and time, in which it really just is an approximate concept, emergent from something deeper. That is really, really more exciting, I mean it’s as exciting as quantum mechanics or general relativity, probably even more so.

    The lesson Strominger seems to have learned from the failure of the 1985 hopes is that when you’ve lost your bet on one piece of hype, the thing to do is double down, go for twice the hype…

Update: The Breakthrough Prize campaign to explain why supergravity is important despite having no known relation to reality has led to various nonsense making its way to the public, as reporters desperately try to make sense of the misleading information they have been fed. For instance, you can read (maybe after first reading this comment) here that

Witten showed in 1981 that the theory could be used to simplify the proof for general relativity, initiating the integration of the theory into string theory.

You could learn here that

When the theory of supersymmetry was developed in 1973, it solved some key problems in particle physics, such as unifying three forces of nature (electromagnetism, the weak nuclear force, and the strong nuclear force)

Update: On the idea that machine learning will solve the problems of string theory, see this yesterday from the Northeastern press office, which explains that the goal is to “unify string theory with experimental findings”:

Using data science to learn more about the large set of possibilities in string theory could ultimately help scientists better understand how theoretical physics fits into findings from experimental physics. Halverson says one of the ongoing questions in the field is how to unify string theory with experimental findings from particle physics and cosmology…

Update: Physics World has a story about this that emphasizes the sort of criticism I’ve been making here.

As mentioned in the comments, I took a closer look at the citation for the prize. The section on supersymmetry is really outrageous, using “supersymmetry stabilizes the weak scale” as an argument for SUSY, despite the fact that this has been falsified by LHC results.

Update: Jim Baggott writes about this story and post-empirical science here.

Noah Smith here gets the most remarkable aspect of this right. String theory has always had the feature that the strings were not supposed to be visible at accessible energies, so not directly testable. Supersymmetry is quite different: it has always been advertised as a directly testable idea, with superpartners supposed to appear at the electroweak scale and be seen at the latest at the LHC. Giving a huge prize to a theoretical idea that has just been conclusively shown to not work is something both new and outrageous.

Update: Tommaso Dorigo’s take is here, which I’d characterize as basically “any publicity is good publicity, but it’s pretty annoying the cash is going to theorists for failed theories instead of experimentalists”(he does say he wanted to entitle the piece “Billionaire Awards Prizes To Failed Theories”):

[Rant mode on] An exception to the above is, of course, the effect that this not insignificant influx of cash and 23rd-hour recognition has on theoretical physicists. For they seem to be the preferred recipients of the breakthrough prize as of late, not unsurprisingly. Apparently, building detectors and developing new methods to study subnuclear reactions, which are our only way to directly fathom the unknown properties of elementary particles, is not considered enough of a breakthrough by Milner’s jury as it is to concoct elegant, albeit wrong, theories of nature. [Rant mode off]

Going back to the effect on laypersons: this is of course positive. Already the sheer idea that you may earn enough cash to buy a Ferrari and a villa in Malibu beach in one shot by writing smart formulas on a sheet of paper is suggestive, in a world dominated by the equation “is paid very well, so it is important”. But even more important is the echo that the prize – somewhere by now dubbed “the Oscar of Physics” – is having on the media. Whatever works to bring science to the fore is welcome in my book.


Posted in Uncategorized | 48 Comments

Quick Links

A few quick links:

  • Philip Ball at Quanta has a nice article on “Quantum Darwinism” and experiments designed to exhibit actual toy examples of the idea in action (I don’t think “testing” the idea is quite the right language in this context). What’s at issue is the difficult problem of how to understand the way in which classical behavior emerges from an underlying quantum system. For a recent survey article discussing the ideas surrounding Quantum Darwinism, see this from Wojciech Zurek.

    Jess Riedel at his blog has a new FAQ About Experimental Quantum Darwinism which gives more detail about what is actually going on here.

  • This year’s TASI summer school made the excellent choice of concentrating on issues in quantum field theory. Videos, mostly well worth watching, are available here.
  • This month’s Notices of the AMS has a fascinating article about Grothendieck, by Paulo Ribenboim. It comes with a mysterious “Excerpt from” title and editor’s note:

    Ribenboim’s original piece contains some additional facts that are not included in this excerpt. Readers interested in the full text should contact the author.

  • I’ve finally located a valuable Twitter account, this one.


Posted in Uncategorized | 12 Comments

Prospects for contact of string theory with experiments

Nima Arkani-Hamed today gave a “vision talk” at Strings 2019, entitled Prospects for contact of string theory with experiments which essentially admitted there are no such prospects. He started by joking that he had been assigned this talk topic by someone who wanted to see him give a short talk for a change, or perhaps someone who wanted to “throw him to the wolves”.

The way he dealt with the challenge was by dropping “string theory”, entitling his talk “Connecting Fundamental Theory to the Real World” and only discussing the question of SUSY (he’s still for Split SUSY, negative LHC results are irrelevant since if SUSY were natural it would have been seen at LEP, and maybe a 100km pp machine will see something, or ACME will see an electron edm).

He did discuss the string theory landscape, and explained it was one reason that about 15 years ago he mostly stopped working on phenomenological HEP theory and started doing the more mathematical physics amplitudes stuff. David Gross used to argue that the danger of the multiverse was that it would convince people to give up on trying to understand fundamental issues about HEP theory (where does the Standard Model come from?). It’s now clear that this is no longer a danger for the future but a reality of the present.

In order to go over time, Arkani-Hamed dropped the topic of his title and turned to discussing his hopes for his amplitudes work. The “long shot fantasy” is that a formulation of QFT will be found in which amplitudes are given by integrating some abstract geometrical quantities.

The conference ended with a “vision” panel discussion. Others may see things differently, but what most struck me about this was the absence of any sort of plausible vision.

Update: Taking a look at the slides from the ongoing EPS-HEP 2019 conference, Ooguri seems to strongly disagree with Arkani-Hamed, claiming in his last slide here that a CMB polarization experiment (LiteBIRD) to fly in 8 years, “provides an unprecedented opportunity for String Theory to be falsified.” I find this extremely hard to believe. Does anyone else other than Ooguri believe that detection/non-detection of CMB B-modes can falsify string theory?


Posted in Strings 2XXX | 20 Comments

Against Symmetry

One of the great lessons of twentieth century science is that our most fundamental physical laws are built on symmetry principles. Poincaré space-time symmetry, gauge symmetries, and the symmetries of canonical quantization largely determine the structure of the Standard Model, and local Poincaré symmetry that of general relativity. For the details of what I mean by the first part of this, see this book. Recently though there has been a bit of an “Against Symmetry” publicity campaign, with two recent examples to be discussed here.

Quanta Magazine last month published K.C. Cole’s The Simple Idea Behind Einstein’s Greatest Discoveries, with summary

Lurking behind Einstein’s theory of gravity and our modern understanding of particle physics is the deceptively simple idea of symmetry. But physicists are beginning to question whether focusing on symmetry is still as productive as it once was.

It includes the following:

“There has been, in particle physics, this prejudice that symmetry is at the root of our description of nature,” said the physicist Justin Khoury of the University of Pennsylvania. “That idea has been extremely powerful. But who knows? Maybe we really have to give up on these beautiful and cherished principles that have worked so well. So it’s a very interesting time right now.”

After spending some time trying to figure out how to write something sensible here about Cole’s confused account of the role of symmetry in physics and encountering mystifying claims such as

the Higgs boson that was detected was far too light to fit into any known symmetrical scheme…
symmetry told physicists where to look for both the Higgs boson and gravitational waves

I finally hit the following

“naturalness” — the idea that the universe has to be exactly the way it is for a reason, the furniture arranged so impeccably that you couldn’t imagine it any other way.

At that point I remembered that Cole is the most incompetent science writer I’ve run across (for more about this, see here), and realized best to stop trying to make sense of this. Quanta really should do better (and usually does).

For a second example, the Kavli IPMU recently put out a press release claiming Researchers find quantum gravity has no symmetry. This was based on the paper Constraints on symmetry from holography, by Harlow and Ooguri. The usually reliable Ethan Siegel was taken in, writing a long piece about the significance of this work, Ask Ethan: What Does It Mean That Quantum Gravity Has No Symmetry?

To his credit, one of the authors (Daniel Harlow) wrote to Siegel to explain to him some things he had wrong:

I wanted to point out that there is one technical problem in your description… our theorem does not apply to any of the symmetries you mention here! …

It isn’t widely appreciated, but in the standard model of particle physics coupled to gravity there is actually only one global symmetry: the one described by the conservation of B-L (baryon number minus lepton number). So this is the only known symmetry we are actually saying must be violated!

What Harlow doesn’t mention is that this is a result about AdS gravity, and we live in dS, not AdS space, so it doesn’t apply to our world at all. Even if it did apply, and thus would have the single application of telling us B-L is violated, it says nothing about how B-L is violated or what the scale of B-L violation is, so would be pretty much meaningless.

By the way, I’m thoroughly confused by the Kavli IPMU press release, which claims:

Their result has several important consequences. In particular, it predicts that the protons are stable against decaying into other elementary particles, and that magnetic monopoles exist.

Why does Harlow-Ooguri imply (if it applied to the real world, which it doesn’t…) that protons are stable?

What is driving a lot of this “Against Symmetry” fashion is “it from qubit” hopes that gravity can be understood as some sort of emergent phenomenon, with its symmetries not fundamental. I’ve yet, though, to see anything like a real (i.e., consistent with what we know about the real world, not AdS space in some other dimension) theory that embodies these hopes. Maybe this will change, but for now, symmetry principles remain our most powerful tools for understanding fundamental physical reality, and “Against Symmetry” has yet to get off the ground.

Update: Quanta seems to be trying to make up for the KC Cole article by today publishing a good piece about space-time symmetries, Natalie Wolchover’s How (Relatively) Simple Symmetries Underlie Our Expanding Universe. It makes the argument that, just as the Poincaré group can be thought of as a “better” space-time symmetry group than the Galilean group, the deSitter group is “better” than Poincaré.

In terms of quantization, the question becomes that of understanding the irreducible unitary representations of these groups. I do think the story of the representations of Poincaré group (see for instance my book about QM and representation theory) is in some sense “simpler” than the Galilean group story (no central extensions needed). The deSitter group is a simple Lie group, and comparing its representation theory to that of Poincaré raises various interesting issues. A couple minutes of Googling turned up this nice Master’s thesis that has a lot of background.
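
The “no central extensions needed” point can be made concrete (this is the standard fact from Bargmann’s analysis of projective representations, not anything from Wolchover’s article): projective unitary representations of the Galilean group require extending the Lie algebra by a central mass parameter M, so that boosts K_j and momenta P_j no longer commute,

```latex
[K_j, P_k] = i\,M\,\delta_{jk}\,\mathbb{1}
\qquad\text{(centrally extended Galilean algebra, } \hbar = 1\text{)},
```

whereas for the Poincaré and deSitter groups all projective unitary representations come from true unitary representations of the (double cover of the) group, with no central extension required.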


Posted in Uncategorized | 18 Comments

What happens when we can’t test scientific theories?

Just got back from a wonderful trip to Chile, where the weather was perfect for watching the solar eclipse from the beach at La Serena.

While I was away, the Guardian Science Weekly podcast I participated in before leaving for Chile went online and is available here. Thanks to Ian Sample, Graihagh Jackson, and the others at Science Weekly who put this together, I think they did a great job.

The issues David Berman, Eleanor Knox and I discussed in the podcast will be familiar to readers of this blog. Comparing to the arguments over string theory that took place 10-15 years ago, one thing that strikes me is that we’re no longer hearing any claims of near term tests of the theory. Instead the argument is now often made, by Berman and others, that it may take centuries to understand and test string theory. This brings into focus the crucial question here: how do you evaluate a highly speculative and very technical research program like this one? Given the all too human nature of researchers, those invested in it cannot be relied upon to provide an unbiased evaluation of progress. So, absent experimental results providing some sort of definitive judgment, where will such an evaluation come from?


Posted in Uncategorized | 12 Comments


First something really important: chalk. If you care about chalk, you should watch this video and read this story.

Next, something slightly less important: money. The Simons Foundation in recent years has been having a huge (positive, if you ask me…) effect on research in mathematics and physics. Their 2018 financial report is available here. Note that not only are they spending $300 million/year or so funding research, but at the same time they’re making even more ($400 million or so) on their investments (presumably RenTech funds). So, they’re running a huge profit (OK, they’re a non-profit…), as well as taking in $220 million each year in new contributions.

Various particle physics-related news:

  • The people promoting the FCC-ee proposal have put out FCC-ee: Your Questions Answered, which I think does a good job of making the physics case for this as the most promising energy-frontier path forward. I don’t want to start up again the same general discussion that went on here and elsewhere, but I do wonder about one specific aspect of this proposal (money) and would be interested to hear from anyone well informed about it.

    The FCC-ee FAQ document lists the cost (in Swiss francs or dollars, worth exactly the same today) as 11.6 billion (7.6 billion for tunnel/infrastructure, 4 billion for machine/injectors). The timeline has construction starting a couple years after the HL-LHC start (2026) and going on in parallel with HL-LHC operation over a decade or so. This means that CERN will have to come up with nearly 1.2 billion/year for FCC-ee construction, roughly the size of the current CERN budget. I have no idea what fraction of the current budget could be redirected to new collider construction, while still running the lab (and the HL-LHC). It is hard to see how this can work, without a source of new money, and I have no idea what prospects are for getting a large budget increase from the member states. Non-member states might be willing to contribute, but at least in the case of US, any budget commitments for future spending are probably not worth the paper they might be printed on.

    Then again, Jim Simons has a net worth of 21.5 billion, and maybe he’ll just buy the thing for us…

  • Stacy McGaugh has an interesting blog post about the sociology of physics and astronomy. His description of his experience with physicists at Princeton sounds all too accurate (if he’d been there a couple years earlier, I would have been one of the arrogant, hard-to-take young particle theorists he had to put up with).

    McGaugh’s specialty is dark matter and he has some comments about that. If you want some more discouragement about prospects for detecting dark matter, today you have your choice of Sabine Hossenfelder, Matt Buckley, or Will Kinney. I don’t want to start a discussion of everyone’s favorite ideas about dark matter, but wouldn’t mind hearing from an expert whether my suspicion is well-founded that some relatively simple right-handed neutrino model might both solve the problem and be essentially impossible to test.

  • Lattice 2019 is going on this week. Slides here, streaming video here.
  • Strings 2019 talk titles are starting to appear here. I’ll be very curious to hear what Arkani-Hamed has to say. His talk title is “Prospects for contact of string theory with experiments (vision talk)” and while he’s known for giving very long talks, I don’t see at all how this one could not be extremely short.

On a more personal front, yesterday I did a recording for a podcast from my office, with the exciting feature of an unannounced fire drill happening towards the end. Presumably this will get edited out, and I’ll post something here when the result is available.

Next week I’ll be heading out for a two week trip to Chile, with one goal to see the total solar eclipse there on July 2. Will start out up in the Atacama desert.

Update: John Horgan has an interview with Peter Shor. I very much agree with Shor’s take on the problems of HEP theory:

High-energy physicists are now trying to produce new physics without either experiment or proof to guide them, and I don’t believe that they have adequate tools in their toolbox to let them navigate this territory.

My impression, although I may be wrong about this, is that in the past, one way that physicists made advances is by coming up with all kinds of totally crazy ideas, and keeping only the ones that agreed with experiment. Now, in high energy physics, they’re still coming up with all kinds of totally crazy ideas, but they can no longer compare them with experiments, so which of their ideas get accepted depends on some complicated sociological process, which results in theories of physics that may not bear any resemblance to the real world. This complicated sociological process certainly takes beauty into account, but I don’t think that’s what is fundamentally leading physicists astray. I think a more important problem is this sociological process leads high-energy physicists to collectively accept ideas prematurely, when there is still very little evidence in favor of them. Then the peer review process leads the funding agencies to mainly fund people who believe in these ideas when there is no guarantee that it is correct, and any alternatives to these ideas are for the most part neglected.

Update: I think John Preskill and Urs Schreiber miss the point in their response here to Peter Shor. Shor is not calling for an end to research on quantum gravity or saying it can’t be done without experimental input. The problem he’s pointing to is a “sociological process”, and so potentially fixable. This problem, “collectively accept[ing] ideas prematurely” without recognizing the difference between a solid foundation you can build on and a speculative framework that may be seriously flawed, is one that those exposed to the sociological culture of the math community are much more aware of. Absent experimental checks, mathematicians understand the need to pay close attention to what is solid (there’s a “proof”), and what isn’t.


Posted in Uncategorized | 17 Comments

Not So Spooky Action at a Distance

I’ve recently read another new popular book about quantum mechanics, Quantum Strangeness by George Greenstein. Before getting to saying something about the book, I need to get something off my chest: what’s all this nonsense about Bell’s theorem and supposed non-locality?

If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:

Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.

but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try to replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
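For readers who want to see the “extremely simple quantum mechanical description” in action, here is a minimal NumPy sketch (my own illustration, not anything from the Scholarpedia entry or the book) of the standard CHSH computation for a spin-1/2 singlet pair: the quantum correlations reach 2√2, above the bound of 2 that any local hidden-variable model must satisfy.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(theta):
    """Spin measurement along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2), a vector in C^2 tensor C^2
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) tensor B(b) |psi> of the two local measurements."""
    return np.real(psi.conj() @ np.kron(spin_along(a), spin_along(b)) @ psi)

# Standard CHSH choice of measurement angles
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, violating the local-hidden-variable bound of 2
```

Each `E(a, b)` here is just -cos(a - b), the singlet correlation; the content of Bell’s theorem is that no assignment of pre-existing local values can reproduce all four correlations at once.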

The Greenstein book is short, the author’s very personal take on the usual Bell’s inequality story, which you can read about many other places in great detail. What I like about the book though is the last part, in which the author has, at 11 am on Friday, July 10, 2015, an “Epiphany”. He realizes that his problem is that he had not been keeping separate two distinct things: the quantum mechanical description of a system, and the every-day description of physical objects in terms of approximate classical notions.

“How can a thing be in two places at once?” I had asked – but buried within that question is an assumption, the assumption that a thing can be in one place at once. That is an example of doublethink, of importing into the world of quantum mechanics our normal conception of reality – for the location of an object is a hidden variable, a property of the object … and the new science of experimental metaphysics has taught us that hidden variables do not exist.

I think here Greenstein does an excellent job of pointing to the main source of confusion in “interpretations” of quantum mechanics. Given a simple QM system (say a fixed spin 1/2 degree of freedom, a vector in C2), people want to argue about the relation of the QM state of the system to measurement results which can be expressed in classical terms (does the system move one way or the other in a classical magnetic field?). But there is no relation at all between the two things until you couple your simple QM system to another (hugely complicated) system (the measurement device + environment). You will only get non-locality if the system you couple to is itself non-local. The interesting discussion generated by an earlier posting left me increasingly suspicious that the mystery of how probability comes into things is much like the “mystery” of non-locality in the Bell’s inequality experiment. Probability comes in because you only have a probabilistic (density matrix) description of the measurement device + environment.
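The “probability from a density matrix” point can be illustrated in a few lines of NumPy (again my own sketch, not from the book): start with a pair of spins in a pure entangled state, trace out the second factor, and the first subsystem is left with only a probabilistic, maximally mixed description, even though the pair as a whole is in a definite pure state.

```python
import numpy as np

# Pure singlet state of the pair, a vector in C^2 tensor C^2
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())        # pure-state density matrix of the pair

# Partial trace over the second subsystem: reshape the 4x4 matrix to
# indices (iA, iB, kA, kB) and contract the two B indices
rho_A = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(rho_A.real)                      # I/2: the maximally mixed state
print(np.trace(rho_A @ rho_A).real)    # purity 0.5 < 1: no pure-state description
```

The subsystem’s purity Tr(ρ²) drops from 1 to 1/2: all predictions about the first spin alone are now irreducibly probabilistic.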

For some other QM related links:

  • Arnold Neumaier has posted a newer article about his “thermal interpretation” of quantum mechanics. He also has another interesting preprint, relating quantum mechanics to what he calls “coherent spaces”.
  • Philip Ball at Quanta magazine explains a recent experiment that demonstrates some of the subtleties that occur in the quantum mechanical description of a transition between energy eigenstates (as opposed to the unrealistic cartoon of a “quantum jump”).
  • There’s a relatively new John Bell Institute for the Foundations of Physics. I fear though that the kinds of “foundations” of interest to the organizers seem rather orthogonal to the “foundations” that most interest me.
  • If you are really sympathetic to Einstein’s objections to quantum mechanics, and you have a lot of excess cash, you could bid tomorrow at Christie’s for some of Einstein’s letters on the topic, for instance this one.


Posted in Book Reviews, Quantum Mechanics | 62 Comments

Various News Items

For physicists:

  • For the latest news on US HEP funding, see presentations at this recent HEPAP meeting. It is rarely publicly acknowledged by scientists, but during the Trump years funding for a lot of scientific research has increased, often dramatically. This has been due not to Trump administration policy initiatives, but instead to the Republican party’s embrace of fiscal irresponsibility whenever there’s a Republican in the White House. After bitter complaints about the size of the budget deficit and demands for reduction in domestic spending during the Obama years, after Trump’s election the congressional Republicans turned on a dime and every year have voted for huge across-the-board spending increases, tax decreases, and corresponding deficit increases. Each year the Trump administration produces a budget document calling for unrealistically large budget decreases which is completely ignored, with Congress passing large increases and Trump signing them into law.

    For specific numbers, see for instance page 20 of this presentation, which shows numbers for the DOE HEP budget in recent years. The pattern for FY2020 looks the same: a huge proposed decrease, and a huge likely increase (see the number for the House Mark).

    The result of all this is that far greater funds are available than expected during the last P5 planning exercise, so instead of having to make the difficult decisions P5 expected, a wider list of projects can be funded.

For mathematicians:

  • Michael Harris has a new article in Quanta magazine, mentioning suggestions by two logicians that the Wiles proof of Fermat’s Last Theorem should be formalized and checked by a computer. He explains why most number theorists think this sort of project is beside the point:

    Wiles and the number theorists who refined and extended his ideas undoubtedly didn’t anticipate the recent suggestions from the two logicians. But — unlike many who follow number theory at a distance — they were certainly aware that a proof like the one Wiles published is not meant to be treated as a self-contained artifact. On the contrary, Wiles’ proof is the point of departure for an open-ended dialogue that is too elusive and alive to be limited by foundational constraints that are alien to the subject matter.

    I don’t know who the “two logicians” Harris is referring to are, or what the nature of their concerns about the Wiles proof might be. I had thought this might have something to do with number theorist Kevin Buzzard’s Xena Project, but in a comment Buzzard describes such a formalization as currently impractical, with no clear motivation.

    Taking a look at the page describing the motivation for the Xena Project, I confess to finding it unconvincing. The idea of revamping the undergraduate math curriculum to make it based on computer checkable proofs seems misguided, since I don’t see at all why this is a good way to teach mathematical concepts or motivate undergraduate students. The complaints about holes in the math literature (e.g. details of the classification of finite simple groups) don’t seem to me to be something that can be remedied by a computer.

  • For some cutting-edge number theory, with no computers in sight, see the lecture notes from a recent workshop on geometrization of local Langlands.
  • Finally, congratulations to this year’s Shaw Prize winner, Michel Talagrand. Talagrand in recent years has been working on writing up a book on quantum field theory for mathematicians, and I see that Sourav Chatterjee last fall taught a course based on it, producing lecture notes available here.

    For a wonderful recent interview with Talagrand, see here.
    I first got to know Michel when he started sending me very helpful comments and corrections on my QM book when it was a work in progress. He’s single-handedly responsible for a lot of significant improvements in the quality of the book.

    I’ve recently received significant help from someone else, Lasse Schmieding, who has sent me a very helpful list of mistakes and typos in the published version of the book. I’ve now fixed just about all of them. Note that the version of the book available on my website has all typos/mistakes fixed. For the published version, there’s a list of errata.

Update: For more about the Michael Harris vs. Kevin Buzzard argument, see here, or plan on attending their face-off in Paris next week.


Posted in Uncategorized | 18 Comments