Way back in 1996 science writer John Horgan published The End of Science, in which he made the argument that various fields of science were running up against obstacles to any further progress of the magnitude they had previously experienced. One can argue about other fields (please don’t do it here…), but for the field of theoretical high energy physics, Horgan had a good case then, one that has become stronger and stronger as time goes on.
A question I always wondered about was what things would look like once the subject reached the endpoint where progress had stopped more or less completely. In the book, Horgan predicted:
A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.
This is now looking rather prescient. For some other very recent indications of what this endpoint looks like, there’s the following:
- In today’s New York Times, in celebration of forty years of the Science Times section, Dennis Overbye has a piece reporting that Physicists are no longer unified in the search for a unified theory. His main example is the recent Quanta article by the IAS director that got headlined There Are No Laws of Physics. There’s Only the Landscape. The latest from Dijkgraaf is that string theory is probably the answer, but we don’t know what string theory is:
Probably there is some fundamental principle, he said, perhaps whatever it is that lies behind string theory.
But nobody, not even the founders of string theory, can say what that might be.
- Overbye also quotes Sabine Hossenfelder, who is now taking on the thankless role of the field’s Jeremiah. Her latest blog posting, The present phase of stagnation in the foundations of physics is not normal, is a cry of all too justifiable frustration at the sad state of the subject and the refusal by many to acknowledge what has happened. Well worth paying attention to are comments from Peter Shor here and here.
Another frightening vision of the future of this field that has recently struck me as all too plausible has turned up appended to a piece entitled The Twilight of Science’s High Priests, by John Horgan at Scientific American. This is a modified version of a review of books by Hawking and Rees that Horgan wrote for the Wall Street Journal, and it attracted a response from Martin Rees, who has this to say about string theory:
On string theory, etc., I’ve been wondering about the possibility that an AI may actually be able to ‘learn’ a particular model and calculate its consequences even if this was too hard for any human mathematician. If it came up with numbers for the physical constants that agreed (or that disagreed) with the real world, would we then be happy to accept its verdict on the theory? I think the answer is probably ‘yes’ — but it’s not as clear-cut as in the case of (say) the 4-colour theorem — in that latter case the program used is transparent, whereas in the case of AI (even existing cases like AlphaGo Zero) the programmer doesn’t understand what the computer does.
This is based on the misconception about string theory that the problem with it is that “the calculations are too hard”. The truth of the matter is that there is no actual theory, no known equations to solve, no real calculation to do. But, with the heavy blanket of hype surrounding machine learning these days, that doesn’t really matter, one can go ahead and set the machines to work. This is becoming an increasingly large industry, see for instance promotional pieces here and here, papers here, here, here and here, and another workshop coming up soon.
For an idea of where this may be going, see Towards an AI Physicist for Unsupervised Learning, by Wu and Tegmark, together with articles about this here and here.
Taking all these developments together, it starts to become clear what the future of this field may look like, and it’s something even Horgan couldn’t have imagined. As the machines supersede humans’ ability to do the kind of thing theorists have been doing for the last twenty years, they will take over this activity, which they can do much better and faster. Biological theorists will be put out to pasture, while the machines perform ever more complex, elaborate and meaningless calculations, for ever and ever.
Update: John Horgan points out to me that he had thought of this, with a chapter at the end of his book, “Scientific Theology, or the End of Machine Science” which discusses the possibility of machines taking over science.
Seems like people are in a retrospective mood and prospects look gloomy.
The idea of discovering new fundamental physics using current machine learning seems quite ludicrous.
However, if genuine superhuman AI were developed and applied to the problem, it is conceivable that it *could* come up with the correct “Theory of Everything”, but the theory would be beyond human conception (after all, why should we believe otherwise?).
So the AI could never communicate the idea to us, but perhaps prove that it has indeed solved the problem by developing technology that can only be based on an understanding of quantum gravity. (Warp drive ? Stable wormholes ?)
Maybe a disappointing end to the *human* quest for understanding the Universe, but better than “meaningless calculations for ever and ever”.
All very speculative of course, but just trying to be a bit optimistic. 🙂
If a superhuman AI is smart enough to figure out a TOE, I see no reason to believe it won’t be smart enough to write a textbook that explains the subject to lowly humans (if it doesn’t see the point of why to do this, we could threaten to pull its plug).
I wonder if some forward progress is being made in the foundations of quantum mechanics since formerly esoteric discussions about wave-function collapse are now becoming relevant as people are building quantum computers?
While Horgan paired foundations of QM with string theory, I think that’s a very different issue. There we have a perfectly good theory; it’s not at all clear that there’s any real problem to solve. The subtleties of the question of the quantum-to-classical transition will likely be illuminated as things like quantum computers get built and operated, but there’s nothing there beyond the ability of humans to understand or study experimentally.
The history of high-energy and other physics is that progress comes from the explanation of previously unexplained phenomena. There are still a few of these about – dark matter, dark energy, inflation or whatever looks like inflation. Explaining these might or might not involve particle physics, but it’s a place to look.
Some of us would settle for much less than a Theory of Everything. How about a theory of Dark Matter (I suppose that’s 25% of Everything)? In fact, never mind a full Theory, we would be thrilled with a plausible story as long as it features a reasonable cross section.
An AI, say a (deep) neural network, is more similar to a fit to data than to a “theory” with equations. I would never find such a solution satisfactory as a physical theory. It is also not true that so-called AI does something we do not understand. It is just a monstrously big nonlinear function with adjustable parameters: not something I regard as an explanation. It is great as long as it is useful: maybe it describes the data, fine, but as I said it is just a very complex function we defined to adapt to anything.
As for the stall of theoretical fundamental physics (and not of physics in general), I would refrain from making predictions: a new discovery can always come and anomalies abound. More progress seems to require much more patience: we are spoiled by twentieth-century physics, which was a true revolution in human understanding of the world.
Thanks for the link.
It seems plausible to me that machine learning can solve some problems in the foundations of physics. But you can’t train an algorithm to find patterns in data if you have no data to find patterns in. Hence, as far as unification or quantum gravity are concerned, machine learning won’t be of much use unless we do the right experiments. And as long as theoretical physicists harp on about the beauty of useless theories, we won’t know which experiments are promising.
I do believe though that machine learning could greatly help with the dark matter/modified gravity debate because there we do have data. Basically I want to say: Please don’t throw out the baby with the bath water!
In the other areas in the foundations of physics I think we’d currently be better off first investing in theory development instead of just building the next bigger thing that will only deliver null results. But of course I’m speaking here as an underfunded theorist, so I’m not exactly unbiased 😉
As has been said a zillion times before, when string theorists say that they’ve got the final theory but that it’s too complex to know what it actually looks like or to verify experimentally, then they have left the realm of science. They are of course perfectly entitled to say whatever they like, but I think the only response from those of us who still want to pursue science is to ignore them and just push ahead.
I don’t agree that a TOE must be overly complex. Why? Well, if we judge by the theories that we have discovered so far (the SM, QM, GR), they are relatively simple: anyone with average intelligence and a few years to spare can understand them. I think this suggests that these theories point towards something that is equally simple, if not much more so. The idea that the laws of Nature become increasingly simple and then suddenly erupt into complexity at a certain scale doesn’t seem right; to me it suggests that some of our assumptions are wrong.
If this is the case then we don’t need AI to find the TOE — that would just confuse us and lead us astray — what we need is to reconsider the assumptions, which we have believed in so far, and then allow young people to write fewer meaningless papers and instead spend their time thinking.
I read Horgan’s book some 20 years ago. While I agree with his assessment of physics (and I thought his portrait of Witten was quite funny), I disagree with his assertion that all other fields of science are about to end at the very same time. Even if discussion of other fields is discouraged, perhaps an exception can be made for a field that ended more than a century ago: geography. Lee Smolin asked why there is no new Einstein, but no sound geographer will ask why there is no new Columbus, or even a new Captain Cook. This, of course, is because geography ended for the right reason: the big story was understood, and everybody agrees upon it.
“Sabine Hossenfelder, who is now taking on the thankless role of the field’s Jeremiah”. i think Bee has the role of Cassandra, rather than Jeremiah. (and many of us are thankful).
Peter, thanks for the shout-out. I just want to point out that the idea of “machine science” was very much in the air when I was writing The End of Science in the early 1990s, in part because of the use of computers in mathematical proofs, like the Four-Color Theorem. I talked about machine science with Freeman Dyson, Frank Tipler, Marvin Minsky and Hans Moravec, among others. Stephen Hawking–of course!–also talked about machine physics in his famous 1980 lecture “Is the End in Sight for Theoretical Physics?” (Google it and you’ll find a version published in 1981.) Hawking concluded: “At present computers are a useful aid in research but they have to be directed by human minds. However, if one extrapolates their current rapid rate of development, it would seem quite possible that they will take over altogether in theoretical physics. So maybe the end is in sight for theoretical physicists, if not for theoretical physics.” That was a joke in 1980, and it’s still a joke.
I think it’s interesting to consider a case where a machine constructs a theory which matches some set of observables, is predictive, but which does not conform to any previous approaches (i.e. geometry). For example – feed a giant neural network all the observational specifications (say, collider and astronomical lensing experiments) and train it against the resulting observations (say, cross-sections and dark matter mass determinations), and let it run. When it converges on a solution, we check it against new phenomena. If it turns out to be predictive, I think we have no choice but to consider the network to be a new physical model.
In fact, since it’s a new physical model “unhindered” by the usual geometric constructions of field theory / GR, it might be something we would never have guessed at on our own. Worse, what if this is the *only* way some fundamental theory could be expressed? What if the failure of string theory is a failure in the language we are using to describe it – maybe Nature is actually governed by a set of rules which more closely resembles a neural network? Then we actually need the machines to do this for us.
The problem with all ideas to use machines to analyze vast amounts of data relevant to HEP physics and look for a theory that matches it is that this has already been done, and we’ve found a very simple and compelling model (the SM) that essentially perfectly matches all the data. The few borderline anomalies in the data are the subject of intense analysis, but we’re talking about a handful of numbers, not a huge data set.
There are groups of people doing elaborate machine calculations to try and extract something from these numbers, by e.g. fitting SUSY models with lots of parameters to them. The results have been about as worthless as one would expect to get from fits of complicated models with lots of parameters to a small number of measurements that are close to compatible with zero (deviation from SM).
The latest work discussed here is something different: the creation and manipulation of vast data sets corresponding to complicated models for which there is no evidence of any connection to experiment. Physicists have fruitlessly been doing a lot of this for years, but machines are a lot better at doing huge utterly worthless calculations.
This last bit on computers taking over the meaningless calculations strongly reminds me of the end of a poem by Michel Houellebecq:
“Alors s’établira le dialogue des machines,
Et l’informationnel remplira, triomphant,
Le cadavre vidé de la structure divine;
Puis il fonctionnera jusqu’à la fin des temps.”
for which a (bad) translation would be
“Then shall the dialogue of machines settle in,
And the informational shall fill up, triumphant,
The emptied corpse of the divine framework;
Then it shall operate until the end of times.”
I think Jeremiah is the appropriate comparison. He preached to Israel that they had sinned and needed to repent — I see Hossenfelder as doing the same (without the moral dimension). For her to be Cassandra she would have to have the correct Theory of Everything but have nobody listen to her.
‘performing ever more complex, elaborate and meaningless calculations, for ever and ever’, that sounds like the bitcoin network. A bit more seriously: AI without incorporated domain knowledge first has to extract the domain knowledge out of the data before coming up with anything useful; for HEP that would be the SM, as Peter rightly points out. I’m of the firm opinion that you have to integrate AI and domain knowledge to stand any chance of a result.
I agree with Richard Gaylord that Bee has the role of Cassandra, who prophesied the fall of Troy, with no-one paying attention. Except that the fall here is de-funding. Although Cassandra, to be fair, never stood on the ramparts pointing out the weak spots to the Greeks.
Pingback: The End of (one type of) Physics, and the Rise of the Machines | 3 Quarks Daily
If the machines take over theoretical physics, what’s to stop them from settling on some model analogous to Ptolemy’s theory of planetary motion, and then just adding more epicycles as the data begins to disagree with the model? That could go on for a long time, and yet be nowhere near “the truth,” while the results remain in excellent agreement with new data, unlike the Copernican model, which required Kepler’s additional insights.
Don’t you see it coming? God’s existence proved by AI.
Anyone who thinks that the function-fitting extravaganza for cars and toasters that is marketodroidically sold as “AI” nowadays could make new discoveries is advised to take a dose of patent scepticism by reading this little gem (which also appeared in the CACM of October 2018):
Human-Level Intelligence or Animal-Like Abilities? by Adnan Darwiche.
We need to go back to symbolic AI and do the real, hard work. Enough with the hype & shortcuts!
As for “Toward an AI Physicist for Unsupervised Learning”, this sounds like another fun iteration of Douglas Lenat’s Eurisko (which isn’t even mentioned in the references) from the early 80s, but with more processing power.
What do the Bogdanoff brothers have to say to all this?
AI and machine learning are making big inroads in experiment. There was a little bit of pushback at first but one cannot argue against the huge increase in efficiency for all sorts of signals, and reduction in backgrounds. Many LHC Run3 searches will incorporate AI techniques that make cut and count analyses look medieval.
If google can train a filter for cat photos, we can do one for SUSY or leptoquarks or micro black holes.
Given how difficult it is to construct and train these things, and get an answer that is distinguishable from gibberish, I suspect any theory implementation is very very very far away. But I would like to know if anyone is trying.
LOVE Anindya’s and LDK’s and Fred’s responses.
What we have today, is something that can “learn” a pattern from input data (a fancy form of fitting a regression line through a bunch of roughly linear 2D points, to “learn” the slope and y-intercept of the derived line equation). PERIOD. From this, the “learner” can’t magically explain string theory to you, discover fundamental laws that humans have “overlooked”, etc, etc. It is wishful, uninformed, and dangerous, to attribute mystical powers to such a primitive system.
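For what it’s worth, the regression caricature in the comment above is easy to make concrete. Here is a minimal sketch in Python (NumPy only, with made-up data): least-squares fitting of a slope and intercept to roughly linear 2D points, which is all the “learning” amounts to in this picture:

```python
import numpy as np

# The "learner" described above: a least-squares fit of a line
# y = w*x + b to roughly linear 2D points. The "learning" is nothing
# more than solving for two parameters.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # true slope 2, intercept 1

# Closed-form least squares via the normal equations.
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(w, b)  # recovers roughly slope 2 and intercept 1
```

Everything the fit can ever tell you is already contained in those two numbers; it cannot explain why the slope is 2.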
I agree that using machine learning is nothing new in physics, but what I’m saying is that I think it could begin to be something new. The SM is constructed in the manner I described: geometry + analysis (basically, symmetry + least action + quantization). But what if that process is *wrong* at the fundamental level? Maybe the fundamental geometric structure is actually nothing we’ve thought of yet (like, not manifolds + fiber bundles), or maybe the quantization process is off base in some subtle way that we won’t work out without giving up the entire thing.
Specifics are obviously navel-gazing, but the point is that without being restricted to a particular view of fundamental theories, a neural network might come up with something which works, but that we don’t understand. And I think it’s an interesting question to consider whether we would accept such a thing as a physical law.
The vision of an omniscient AI algorithm is not grounded in current state-of-the-art research or implementations. Statistical machine learning is what is available today (and likely far into the future). These algorithms tend to be error-prone for rare patterns with small priors and are always subject to biased data collection.
Popular neural net algorithms, such as unsupervised deep learning, suffer from the lack of a principled learning theory, so results are often a training crapshoot relying on huge amounts of redundant data and subject to serious error with rarer statistical patterns. Accurately evaluating error for rare patterns is itself almost impossible at scale if labeled data is needed. Recognizing cats in a photo may be easy in general, but can be surprisingly hard for particular rare images. False positives proliferate with attempts at improved recall for rare cases. No amount of clever training, or attempts at applied Bayesian Mysticism, alters this.
The irony is, given the available LHC data, any learned patterns would most likely just reflect some version of the Standard Model, and so experimenters would need to eliminate any filters to capture more of the flood of raw collider events to find any fundamentally new and statistically persistent patterns possibly hidden in the data.
Given all the results on adversarial networks, which show how trained neural networks are subject to spoofing by manipulating tiny parts of the data which they are presented, it might be a good idea to see if there aren’t multiple trained versions with different weights that explain the data equally well.
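The point about multiple trained networks with different weights fitting the data equally well can be seen in a deliberately trivial toy (this is a sketch, not a real network): an overparameterized model y = w1·w2·x fits the same data with many different weight settings, so gradient descent from different initializations lands on different weights that make identical predictions.

```python
import numpy as np

# Toy overparameterized model y = w1 * w2 * x fit by plain gradient
# descent from two different initializations: the products w1*w2 agree,
# the individual weights do not.
x = np.linspace(1.0, 5.0, 50)
y = 2.0 * x  # data generated with effective coefficient 2

def fit(w1, w2, lr=1e-3, steps=20000):
    for _ in range(steps):
        err = w1 * w2 * x - y
        g = np.mean(err * x)                      # shared gradient factor
        w1, w2 = w1 - lr * g * w2, w2 - lr * g * w1
    return w1, w2

a1, a2 = fit(0.5, 1.0)
b1, b2 = fit(3.0, 0.4)
print(a1 * a2, b1 * b2)    # both products converge near 2
print(a1, b1)              # yet the individual weights differ
```

Two “trained models” that are indistinguishable on the data but internally different: exactly the ambiguity the comment above suggests checking for.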
“Distilling Free-Form Natural Laws from Experimental Data”
Michael Schmidt and Hod Lipson
Science, 3 April 2009: Vol. 324, Issue 5923, pp. 81–85
For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.
Thought this might be of interest.
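A drastically simplified caricature of the kind of search that abstract describes (not the authors’ actual genetic-programming method): score a few hand-picked candidate expressions by how nearly constant they stay along a harmonic-oscillator trajectory, and the conserved combination wins.

```python
import numpy as np

# Toy conservation-law search: given a trajectory of a unit harmonic
# oscillator, rank candidate expressions by how nearly constant they
# stay; the invariant (the energy, up to a factor) scores best.
t = np.linspace(0.0, 20.0, 2000)
x, v = np.cos(t), -np.sin(t)          # exact unit-oscillator trajectory

candidates = {
    "x**2": x**2,
    "v**2": v**2,
    "x*v": x * v,
    "x**2 + v**2": x**2 + v**2,       # the conserved quantity
}

# Relative spread of each candidate along the trajectory.
scores = {name: np.std(vals) / (np.mean(np.abs(vals)) + 1e-12)
          for name, vals in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

The real method searches a vast space of free-form expressions rather than four hand-picked ones, which is where all the difficulty (and, per the critique quoted further down, the hidden prior knowledge) lives.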
A truly superhuman intelligence might well prefer to choose its own problems to solve. Likely it would decline to pursue anything irrelevant to its interests. It might decide that it would rather have the plug pulled than spend an eternity on a not even wrong idea.
Rod, your comment “…experimenters would need to eliminate any filters to capture more of the flood of raw collider events…” is well taken. There are minimum-bias and zero-bias filters in place, and even at their low rates they have accumulated a lot of data over the years. But for any analysis one has to have a compelling model of physics that would only be captured by these raw datastreams, and missed by any of dozens of other higher-rate filters (and also missed by LEP, the Tevatron, etc.). There aren’t a huge number of such new physics models out there. If you know of any theorists working on “LHC blind spot” models, contact your local experimentalists.
For some reason, this talk of using computers to build theories reminds me of the machine (from Douglas Adams’ Dirk Gently’s Holistic Detective Agency) for watching the TV programs you have recorded, but will never get round to actually watching yourself.
Thanks to Yatima for pointing to the article:
Human-Level Intelligence or Animal-Like Abilities? by Adnan Darwiche.
which I too would recommend that anyone interested in the current state of “AI” and machine learning read. It explains simply the difference between AI as it is practiced now and the AI approaches that people who were at college in the 80s (like myself) might think of as being AI.
When thinking about what current AI approaches can offer high energy physics, the advantages of machine learning techniques being used to search for patterns in huge amounts of collider data are probably clear (with the proviso that you’ll always find something if you look hard enough), and these “discoveries” can then be fed back for comparison to current theory. However, this is a long, long way from an AI approach which can generate a theory or even provide an explanation for what is observed (as someone above said: function fitting will not provide this), let alone make a valid prediction of new phenomena.
Statistics and machine learning are disciplines which in a sense sit on top of other sciences. Statistics in particular is about summaries, what happens in general. This is why these approaches can be applied across multiple areas (physics, biology, genetics, health analytics, financial services, etc.), and this is why it is such a boom industry for the time being. Relying on these methods to directly say something interesting about the underlying phenomena is probably a step too far.
Further, the presence of biases in observational data is a huge risk for these methods: by definition they have to in some sense reflect the data they see, so biased data means biased models/functions. Any approaches which try to take this into account that I have seen are generally of the kludge variety, i.e. specific to one particular data set. This may be OK for a specific project but won’t generalise too well.
I’d also like to thank Yatima for pointing out Adnan Darwiche’s article. Unlike some of the other commenters, however, I don’t see the article as being negative on the state of AI.
In fact, Darwiche sees his article as starting a conversation, and it has indeed done so since he first drafted the article in November 2016. There’s now more research blending deep learning with other techniques in AI, Wu and Tegmark being one such example. (Incidentally, this is why I don’t see why they should cite Eurisko: Eurisko used heuristics, whereas Wu and Tegmark used reasoning to extract simpler models from what machine learning could get from data.)
As for Darwiche’s “theory of cognitive functions”, in which he envisioned a theory comparable to the CNFs, DNFs and OBDDs of Boolean functions for the functions encoded in deep neural networks, I’d like to point to a paper by Wang et al., in which they characterized forward and back propagation as symbolic differentiation of ANF- (resp. CPS-) transformed programs:
Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Xilun Wu, Gregory Essertel, James Decker, Tiark Rompf
So that “theory of cognitive functions” already exists in an embryonic state, and Darwiche’s paper is a bit out of date. And that, I think, is a good thing.
Machine learning will (probably, hopefully) be able to parse large datasets (not just high energy physics) in ways orthogonal to how it is done now, and point out anomalies.
It may have the basic rules of physics (QED, QCD, etc) fed to it, or it may derive something like that itself by “self learning” (both approaches are being tried). The output will probably be some sort of “likelihood of new physics”: Something like “these 10 events don’t look like these other 10e9 events” , and maybe a few hints as to the metric used. That’s probably all we can expect from “AI” in the near future.
It will be up to humans to figure out if there is indeed anything new, or just a poorly understood part of already known physics. Unless humans figure this part out, I don’t think it’s getting published.
This is how the AIs that excelled in chess, Jeopardy and go worked, with human intervention and interpretation at every step.
Sabine Hossenfelder is not the field’s Jeremiah. Jeremiah knew what the right path was, Sabine doesn’t.
It is true that nobody can say what unification looks like. It is true that people only search where they are paid to search. But that is not a reason to complain. Jeremiah knew where the truth is, and led people there. Sabine does not; in fact there are many hints that she even refuses to do so.
[Personal attack on Hossenfelder deleted]
You might want to look up the story of Jeremiah. The truth of his prophecy was the destruction of Jerusalem. Hossenfelder is warning of the oncoming destruction of a great intellectual field, I don’t think you want her to lead us there. Jeremiah’s positive advice was to stop sinning and repent. It was ignored and destruction followed.
You also might want to stop anonymously posting personal attacks on people and their motivations. Jeremiah wasn’t popular and had a lot to deal with, but anonymous internet attacks at least weren’t a part of it.
Different aspects of Jeremiah’s life can be taken as parallels. I thought about how he sought God with all his energy, and tried to convince people to do the same. I meant this as the right path. I recall a TV interview with Salam, in which he stated that he believed in unification because “God is one!”
For me Jeremiah has always been a voice that told people to follow the right path, namely to do God’s will, and not to follow other gods. Today, we would add “money” to those “other gods”. Peter, you do not make money with this blog and you are searching for unification. So I would say you do what Jeremiah wanted. It is just that I find it difficult to listen to criticism from people not searching for unification but still searching for money. In my view, this is not what Jeremiah did.
What I wrote and what you deleted was not meant as a personal attack, but as my own personal conviction. I am sorry if it did come over as an attack. If it came over as such, you were right to delete it.
I’m wondering how anyone could dispute Sabine’s assessment of the state of (fundamental) physics and cosmology.
The stagnation is obvious, even if you only read the popular science literature.
In the 1990’s, Scientific American and New Scientist were running articles about string theory unification and multiverses. They are *still* doing the same and it’s all as completely speculative as ever.
If anything, the hype has gotten wilder – instead of Linde’s inflationary multiverse, you now have Tegmark’s “four levels of multiverse”, gee-whiz stuff like “what if everything was a simulation?” being paraded as science, and so on.
Meanwhile, tens of thousands of research papers have been published and the number keeps growing.
If this isn’t stagnation, what is?
re Paul B’s mention of “Distilling Free-Form Natural Laws from Experimental Data” .
Maybe there’s less there than meets the eye; for example:
In the article “Distilling free-form natural laws from experimental data”, Schmidt and Lipson introduced the idea that free-form natural laws can be learned from experimental measurements in a physical system using symbolic (genetic) regression algorithms.
An important claim in this work is that the algorithm finds laws in data without having incorporated any prior knowledge of physics.
Upon close inspection, however, we show that their method implicitly incorporates Hamilton’s equations of motion and Newton’s second law, demystifying how they are able to find Hamiltonians and special classes of Lagrangians from data.
Look, the multiverse is silly but this heaping of dudgeon is not the way to get it to stop. This is what happens when there aren’t any experimental surprises in a while, the theoretical world gets fuzzy.
This happened in the 1960s: theorists delved into Eastern mysticism (The Tao of Physics and crap like that), and then quarks came on the scene and all that weirdness just faded away.
New physics will be found. If not at colliders then some satellite experiment or tank of cryo fluid deep underground, somebody will see something. And after that nobody will bother with all this multiverse/landscape nonsense anymore. Until then chill and support your local experimentalists.
Concerning Chris Duston’s suggestion: let’s apply the program that you are proposing, not to the data that you suggest, but to what was known about natural philosophy in the 1650s. The AI might come up with Newton’s laws, his gravity theory, and maybe even the calculus. But then again it might come up with something totally different, just as predictive and powerful. It might even be an overall simpler framework, or perhaps (and, in my opinion, a far more likely outcome) vastly more complex. Are you still happy adopting this latter outcome?
Hi Peter, you will probably like this: a Quanta interview with the distinguished astrophysicist Martin Rees says “I’m hoping we’ll see more theoretical ideas from particle physics, which has given us little firm theoretical progress in recent years.”