I noticed recently that Nima Arkani-Hamed was giving a talk at Cornell, with the title Three Cheers For “Shut Up And Calculate!” In Fundamental Physics. No idea whether video is available now or will become so.
From the abstract one can more or less guess what sort of argument he likely was making, and it’s one I’m mostly in agreement with. “Shut Up and Calculate!” is pretty much my unspoken reaction to almost everything I read purporting to be about foundational issues in quantum mechanics. I have in mind in particular discussions of the measurement problem, which often consist of endless natural language text where one struggles to figure out exactly what the author is claiming. An actual calculation showing what happens in a precise mathematical model of a “measurement” would be extremely helpful and likely make much clearer exactly what the problem is (or, sometimes, whether or not there even is a problem…). Such calculations are all too few in a huge literature.
Over the last few years, while teaching and writing a book about the mathematics of quantum mechanics, the tedious exercise of trying to get all the signs right in calculations has sometimes turned out to be quite illuminating, with tracking down a mysterious minus-sign inconsistency leading me to realize that I wasn’t thinking correctly about what I was doing. I’m all too aware that this kind of calculational effort is something I too often avoid out of laziness, in favor of trying to see my way through a problem in some way that avoids calculation.
On the other hand, I’m not quite ready to sign up for “Three Cheers”; I might just stick to “Two Cheers”. For a perfect example of what’s wrong with the “Shut Up and Calculate!” philosophy, one can take a look at the forthcoming Workshop on Data Science and String Theory planned for Northeastern in a month or so. They have a Goals and Vision statement which tells us that they plan to:
treat the landscape as what it clearly is: a big data problem. In fact, the data that arise in string theory may be some of the largest in science.
About being the “largest”, I think they’re right. The traditional number of 10^500 string theory vacua has now been replaced by 10^272,000 (and I think this is per geometry. With 10^755 geometries the number should be 10^272,755). It’s also the case that “big data” is now about the trendiest topic around, and surely there are lots of new calculational techniques available.
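The arithmetic behind that parenthetical is just that multiplying counts adds exponents: 10^272,000 vacua for each of 10^755 geometries gives 10^(272,000 + 755) total. A trivial sketch (the specific figures are the ones quoted above, not anything I’ve verified independently):

```python
# Counting vacua across geometries: totals multiply, so exponents add.
vacua_per_geometry_exp = 272_000   # 10^272,000 vacua per geometry (quoted figure)
geometries_exp = 755               # 10^755 geometries (quoted figure)

total_exp = vacua_per_geometry_exp + geometries_exp
print(f"total vacua ~ 10^{total_exp:,}")  # total vacua ~ 10^272,755
```

Which is a number so large that “big data” techniques enumerating vacua one at a time can only ever touch a vanishingly small fraction of it.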
The problem with all this is pretty obvious: what if your “data set” is huge but meaningless, with nothing in it of any significance for the problem you are interested in (explaining the Standard Model)? This is not a new project; it’s an outgrowth of the String Vacuum Project, which I wrote about here, here and here. That project started with a 2005 funding proposal and ended up getting funded by the NSF during 2010–2014. From the beginning there were obvious reasons this sort of calculational activity couldn’t lead to anything interesting, and as far as I can tell, nothing of any value came out of it.
For an opposite take to mine on all this, see the paper Big Numbers in String Theory, by Bert Schellekens. It contains an odd June 2017 preface explaining that it was supposed to be part of a special issue of Advances in High Energy Physics devoted to “Big Data” in particle and string phenomenology (“all the ways we use high performance computing in addressing issues in high energy physics, and (in particular) the construction of databases of string vacua”). This issue was cancelled “as requested by the Guest Editors”. I wonder what the reason for this cancellation was, in particular whether it had anything to do with part of the topic of the special issue being considered by some to be obvious nonsense.