For presentations from a couple of days ago at the latest HEPAP meeting, see here. One piece of news, from this presentation, is that there likely will be a delay in the scheduled startup of the HL-LHC, with the next LHC run (Run 3) extended for an additional year (through 2024), and the next shutdown (LS3) extended by a half year. The HL-LHC would then start physics in 2028.
Most of the HEPAP discussions have to do with funding. The pattern of recent years has been one of huge decreases in funding proposed by the Trump administration. These are completely ignored by both the Democrats and Republicans in Congress, which passes large increases in funding (then signed into law by Trump). For FY2020 this continues: at DOE the HEP budget for FY2019 was \$980 million, while for FY2020 the White House budget request was a massive cut to \$768 million. This was taken no more seriously by anyone than the last few of these, with the FY2020 House Mark at \$1,045 million and the Senate Mark at \$1,065 million. The FY2020 budget remains to be finalized and passed; in the meantime the federal government has been operating under a sequence of continuing resolutions.
Specifically on theory funding, JoAnne Hewett has a presentation on The State of Theory. It has no numbers in it, but the DOE numbers given here show an increase from \$60 million in FY2017 to \$90 million in FY2019 for Theoretical, Computational and Interdisciplinary Physics. But within this category, pure theoretical HEP is pretty flat, with big increases for Computational HEP and a huge new investment in Quantum Information Science (\$27.5 million in FY2019). There does seem to have been some sort of decision to de-prioritize conventional theoretical HEP in favor of newer trendy areas.
Hewett describes the general consensus on current problems with theory funding as
- Universal concern over ever-decreasing levels of funding for university groups: concern that university programs are dying.
- Private institutions attempt to offset cuts with non-federal funding sources.
- Cuts to the program further accumulated in 2019. Many postdocs learned in May 2019 that their contracts would not be renewed for the fall. It was then too late to apply for new positions.
- Lab theory programs are also losing researchers.
- Even distribution of cuts across the U.S. theory program has a disproportionate effect on small programs.
- Large fluctuations cycle-to-cycle are making groups less cohesive and more inclined to opt for “safer” research projects.
- There is the perception that the recent emphasis on QIS comes at a cost to more traditional HEP theory research.
- Summer salary has been capped or reduced to 1 month in many cases. Removal of summer salary across the board is demoralizing.
and ends with
The situation is becoming increasingly unstable.
University-based theory is suffering its most serious crisis in decades.
Its future is in jeopardy.
It would be interesting to see some numbers on the size of new private research funding going to HEP theory (for instance funding from the Simons Foundation or the private funding of the CMSA at Harvard). I don’t know of such numbers but I’m curious whether what is happening is that the total funding level has seen reasonable growth, but increases in funding are going to a small number of elite institutions, with the rest of the field in decline.
On the question of caps or reductions in summer salary, I doubt that any significant number of researchers is reacting to only getting 1 month of summer salary by signing up for another job (e.g. teaching summer school) and not doing research during the other two months of the summer. There has been another huge influx of money to the field that in some sense replaces grant-funded salary supplements: the multi-million dollar Breakthrough Prizes. A sizable number of HEP theorists have now received all or part of one of the \$3 million prizes. If you add in this money, on average HEP theorists may have been seeing significant increases in income, though with almost all of it going to a small number of people (at the same elite institutions that are doing well). What we’re seeing may just be the same trend as in the rest of the US economy: a move to a star system with ever larger increases in inequality.
Another problem for the field of HEP theory may be that funding is stagnating because the DOE and NSF are skeptical about its intellectual health. Hewett notes that “Formal theory resides solely in university environment and has undergone significant funding cuts.” Trying to make the positive case for this part of the field, she lists three areas of advances, but oddly, the first two are identical. The two areas of advances in formal theory she describes are:
Advances in strongly coupled quantum field theory (gravity/field theory duality, bootstrap program, amplitudes) has implications for particle physics, cosmology and beyond.
Geometric advances in particle physics constructions from String/F-theory has implications for the “swampland program”.
For the second of these, it’s quite possible that most physicists don’t see this as an advance at all.
Update: Physics World has more about the delay here. It is supposed to be announced on Tuesday. The cause evidently is a budget gap caused by some planned contributions from non-member countries now not happening. The story doesn’t explain which non-member countries are involved or why their planned contributions are now not expected.
As I have tried to tell particle physicists for 5 years or so, even the dumbest politician will eventually see that they don’t live up to the promises they’ve been making (basically since the 1980s), which will result in funding cuts unless they make drastic changes.
The problem is that people right now are responding all too well to the incentive structure of the grant system. If you want to get your grant renewed, the worst possible thing you could do is acknowledge that what you have been doing with previous grant funding was a failure. The incentives are to deny the obvious (that what you have been doing doesn’t work), make some dubious claim of success, and try and reorient what you are doing to fit the latest hot topic popular at funding agencies.
For example, instead of admitting that finding new “string vacua” has been a waste of time, the incentive is to keep working on this, but claiming a machine-learning/AI/big data aspect to your research. Instead of admitting that your quantum gravity research has gone nowhere, claim a connection to quantum information theory.
Actually, a bibliometric analysis shows a decline at most elite institutions. A few decades ago one elite institution contributed up to ≈7% of the worldwide output, while now it’s ≈2%. Only the IAS has held up better.
Hmmm… seems to me like accelerators and experimentalists have lived up pretty well to the promises they made… resolutions/triggers/data throughput rates/recorded integrated luminosities at the LHC detectors; Daya Bay nailed theta_13, T2K is plugging along, IceCube has good data. All sorts of experimental activities… a whole new program of (g-2), mu2e, DUNE is making its way through milestones.
Gosh, excoriating accelerator and detector builders over a delay of a year or two or three, given the complexity of the projects… particularly the LHC… including the incredible financial gymnastics of dealing with many world currencies and distinct science bureaucracies… is not at all broad-minded. Marie Curie, a strong supporter of internationalism and collaboration, would be shaking her head.
Lots of rockets blew up on the pads prior to successful rockets… gosh they still blow up sometimes. An awful lot of mistakes were made in motorcycles, cars, and planes (hey, ask Boeing about their 737)… theorists sitting on the sidelines and acidly criticizing the folks fighting the real world to get their experiments built quicker and cheaper is not a winning style.
No one is excoriating anyone for the potential HL-LHC delay. It may be just as well, allowing a longer Run 3 so more data is available to analyze during LS3. The only thing of longer-term significance that this pushes back is the availability of the LHC tunnel for a next generation machine (assuming the length of the HL-LHC run is kept constant). Given that any next generation machine looks like it will take a long time to agree on and get funding for, having the LHC tunnel in use for a year or two longer doesn’t matter at all.
I see that Physics World now has something about this, says announcement Tuesday. Evidently the delay is being caused by a budget gap of 100 million pounds or so, caused by expected contributions from non-member countries not happening. I’ll add something to the posting about this.
Running the LHC for a few years more does not matter, because it seems unrealistic anyhow that high energy physics would survive until 204X. The main hope is that China decides it wants to become the new world center of high energy physics by building the next collider while the LHC runs ad nauseam. If China starts, most Europeans would likely join.
Yes, the existing incentive structure of the academic system encourages scientists to claim that what they have previously done was successful and continues to be exciting and promising. But that in and by itself is not the problem. The real problem is that we all know that this leads to research bubbles, have known it for a long time, and yet don’t do anything about it.
And that is despite the problem being (1) patently obvious, (2) much talked about, and (3) not difficult to solve.
The current structure of the system reinforces, rather than alleviates, cognitive biases. In this concrete example that’s loss aversion — if you’ve been working on something for a long time, it’s hard to admit (to yourself as well as to other people) that this time was wasted. Any sensible organization of the scientific system should therefore have incentives to *prevent* such cognitive biases instead of reinforcing them.
Of course this is not specific to particle physics, but that’s not an excuse to accept it.
For the people who always falsely claim that I do not know how to improve the situation, let me remind you that I have a list with things you can do to help here.
It’s a peculiar situation. Other than the stimulus year, the last couple of years have seen the strongest increases for DOE HEP in a long time (basically, since the demise of the SSC). The budget is over a billion dollars, with \$90M (if enacted, I think) going to theory. The puzzle is why more of it is not trickling down to university-based theory. I suspect the answer does not have much to do with feelings at the DOE (or by politicians in Congress) about strings, the swampland, or the multiverse. DOE is a mission-driven agency, and when new money comes in they want to spend it on projects, like support for experimental programs at the labs, computation, or QIS. At the same time there has been all this philanthropic money for fundamental theory, so that in the end the only people left out are theory groups at public universities who do not get much private support. I actually wonder whether part of the thinking at DOE is that they see no need to subsidize private spending on fundamental theory with public money (although agencies like NSF have a long history of supporting the IAS, for example). I also wonder what happens to all these postdocs at the IAS, Harvard, etc. that are supported by private funds. Historically, the majority of them found jobs, many at public universities.
Are Breakthrough Prize winners generally distributing their prize money to their groups and/or departments? That was my impression, though I’m too lazy to generate any evidence.
I haven’t heard of many examples of that, although I had heard that Kontsevich did distribute prize money (he got two Breakthrough Prizes, \$6 million).
One group that I haven’t heard of getting any prize money are the theorists at non-elite institutions who are now finding DOE/NSF grant money harder to come by.
David Brahm, Peter Woit,
A single data point in favour of David’s claim is Professor Bell Burnell donating her Breakthrough Prize (£2.3m) for a PhD scholarship award.
Where can I find the bibliometric data which you referred to in your first comment? Is it publicly available, or published anywhere?
I’m guessing that what he is referring to is the data discussed in section 5.1 of
The unnamed “elite institution” is CERN.
Dear Jackiw,
I compute InSpire data in the way described in the paper mentioned by Peter.
The plot below shows how much some main institutions contributed to the total bibliometric output from 1970 to ≈2017
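The share-of-output calculation described above amounts to dividing an institution’s yearly paper count by the worldwide total for that year. Here is a minimal sketch of that arithmetic; the per-year counts below are hypothetical placeholders chosen only to reproduce the ≈7% → ≈2% trend mentioned above, not real InSpire data.

```python
# Sketch of an institution's share of worldwide bibliometric output.
# All counts are invented for illustration, not actual InSpire numbers.

def institution_share(inst_counts, world_counts):
    """Fraction of worldwide output contributed by one institution, per year."""
    return {year: inst_counts[year] / world_counts[year]
            for year in inst_counts if year in world_counts}

# Hypothetical papers-per-year, purely illustrative:
world = {1975: 4000, 1995: 9000, 2015: 14000}
inst  = {1975: 280,  1995: 450,  2015: 280}

shares = institution_share(inst, world)
for year in sorted(shares):
    print(f"{year}: {100 * shares[year]:.1f}%")
# prints 1975: 7.0%, 1995: 5.0%, 2015: 2.0%
```

Weighting by the number of authors on each paper (as fractional credit) would refine this, but the basic ratio is the quantity plotted.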
It is a mistake to ever think that SUSY or string theory or whatever theoretical fashion of the moment was a dominant motivating factor for the LHC or the SSC or the Tevatron or LEP or the SLC or the SppbarS or whatever expensive accelerator that required a vast team you’d care to list.
The motivation is that particle physics discoveries most convincingly appear with increased beam energy. Theorists grab a lot of bandwidth in the discussion, but hardly one hardbitten machine builder or experimentalist believes a-la-mode theory: the exploration into the unknown is the point.
Yes, many if not most experimental particle physicists must plead guilty to having a cognitive bias in favor of clear and compelling empirical evidence. And thank goodness they do.
Alternate particle physics techniques… that eventually quantum loops gave estimates of the charm quark mass, the top mass, and the Higgs mass… that neutrino oscillation and mass is pretty much the only existing clear and compelling beyond-the-SM hard data… are useful but their interpretations always were ambiguous… the superweak description of K0-K0bar CP violation suggested new physics at 10^5 TeV, but turned out that that CP violation was milliweak and the details were filled in slowly and surely by the (at that time) high-energy frontier.
Theorists on the sidelines really have no idea of all the team dynamics that go into actually getting big accelerators and big experimental teams to function successfully. It is hardly surprising that those teams have a persistence time… it is easy to understand that the LHC did not pass a new amazing threshold, but thinking that the LHC should be turned off in an instant because some au courant theory wasn’t verified is immature.
The teams and institutions will persist for a while: and maybe something amazing will come out of it, like precision Higgs couplings not lining up with expectation.
And if someday we do get a 100 TeV machine at some reasonable price tag, it will almost surely be young and creative LHC team members who are the motivators and achievers. They won’t really care more than a pfennig about string theory or SUSY.
That the course of particle physics has been some sort of psychological hallucination brought about by the sociology of cognition is an idea that has come and gone multiple times. See “Constructing Quarks” by Pickering, for example.
In fact, shocks from unexpected and sociologically sterile (prior to discovery) empirical results are far more common… results which the theoretical particle physics never predicted. Hence the famous phrase, “Who Ordered That?”… the neutron, the muon, the kaons, CP violation, the J/Psi, the long b-life, neutrino oscillation and mass, the rather large t-quark mass, etc.
I don’t really significantly disagree with you. Especially at this point, where speculative theory has been such a failure in recent decades, experimentalists should mostly ignore theorists, do what they can to study what happens at the energy frontier, and hope for something unexpected. The next 15 years or so, for better or worse, the path is pretty clear: get the HL-LHC working and get as much out of the LHC as you can. While doing this, best to ignore theorists who want to tell you about ever more obscure ways to look for SUSY or some other dubious speculative model.
Speculative theorists are sometimes like politicians… you might find them phony and self-aggrandizing, but progress in fundamental physics or government without them would be even more meager.
Unfortunately both speculative theorists and politicians can become untethered… in California the politicians clearly had no idea how to keep the high voltage electric distribution system maintained through our PUC, for example, to suppress wildfires… echoing their cluelessness with Enron two decades ago… or their innumeracy on public pensions at about that same time. We can all notice the shambles politicians have recently made of our foreign service.
While speculative theorists chase their tails, Fermilab is pressing forward in (g-2), mu to e, and DUNE. Thank goodness the theorists are ignoring those programs, they probably would impede them.
The theorists are, however, paying attention to dark matter searches, and not helping… they seem to forget that heavy DM interactions with nucleons might be any of the 5 fundamental Lorentz structures – S, V, T, A, P… back in the 1950s, their antecedents thought for sure the weak interactions were S and T, until the tau-theta puzzle (Who ordered that?) induced the fantastic experimentation of Chien-Shiung Wu and a shift to V–A took place. Ironic that no experimentalists got a Nobel Prize for that work… only theorists Lee and Yang (who did do crucial work).
So today speculative theorists want to declare WIMPs dead based on S alone, and ignore the other 4 Lorentz structures, based essentially on ignorance.
However, lots of great energy for other types of dark matter searches… boson-like, low-mass, etc… has come from speculative theorists lately. Bully for all that.
That MOND has been experimentally addressed (on Earth), and was rejected (by other theorists, not experimentalists) as a portion of the LISA project, has been ignored by speculative theorists. They seem to think experimentalists weren’t paying attention, or form a false narrative that experimentalists are ignoring MOND.
Sabine & Peter,
Another way to *solve* the problem is to let it play out precisely as it currently is! Thanks in part to both of you, it is becoming ever more obvious that funding *should* be cut to theoretical HEP. As you yourselves continuously note, nothing good is coming from it and the return on investment for the public is in many ways *worse* than nil.
Theoretical HEP is in failure mode and deserves its funding cut. Period. Full stop.
My point of view (and I think Sabine’s is similar) is not that cutting theoretical hep funding is a good idea or the solution to anything, but that what is needed is to change the reward structure so that the incentives provided by grant funding stop being incentives to keep working on the same failed ideas.
In the case of the DOE HEP budget, the part going to theory is relatively small. Cuts in it seem to have been redirected to small increases in the funding of a few large experimental projects, with likely marginal impact on the science. I don’t think this is a net plus, nothing is being done about the underlying problem. Far better would be if DOE would take seriously the problems Sabine and others have been pointing out and make some changes in how hep theory grant proposals are evaluated.
“make some changes in how hep theory grant proposals are evaluated” – do you actually have any concrete suggestions for what changes would be beneficial? Or is this just a gripe?
One simple change: panels should start rejecting grant proposals that propose to continue a failed research program. If you’re proposing to work on much the same idea that hasn’t worked for thirty years, your proposal should include a serious discussion of why this hasn’t worked in the past (and what your proposal is to solve the problem).
If people find they can’t get funded to pursue a program that isn’t working, they’ll have to find a new idea to work on (or drop out of the competition for grants, freeing space for someone who does have a new idea).
Well, yes, but this is just apple pie. You need a change to the structure of the system, not just some vague advice to the people making the decisions. If the panel is composed of people who don’t agree with your definition of “failed”, then this proposal doesn’t work. The whole system of peer review rewards people who conform to the prejudices of the senior people in the field. Exhorting these senior people to ignore their prejudices is experimentally proven to fail.
No one said this would be easy. The reason many senior people in the field won’t publicly admit failure of a research program is precisely because they are aware of the likely cost of that admission (the defunding of this research program).
While politics may make it not practically relevant, the fact remains that all that is needed is intellectual honesty about failure and its implications.
Peter and no idea,
You are making the assumption that panels are made of senior people. In fact, many panels in theory have quite a few young people (including assistant professors). But your points are well-taken. How does one tell panels to reject grant proposals that propose to continue a failed research program? I have noted that there have been relatively few funded proposals to study MSSM phenomenology, so maybe things are changing, but too slowly. Still, do you have a specific suggestion?
I do have a specific suggestion, but Peter always censors it. My suggestion is that grant proposals should be funded at random. Or at least some significant proportion of them. My point is that no-one can predict the good ideas in advance, and it is foolish to try. Of course, one has to do a little work to devise a system that discourages the freeloaders, but one needs to break the feedback loop that stifles diversity and originality.
Interesting idea. Check out Nature, this week. An article is subtitled “A growing number of research agencies are assigning money randomly”. This is being done in Switzerland and New Zealand. The problem for theory, however, is that you might find very important grants (like Stanford or Princeton theory) unfunded, which would be catastrophic. And there are also real crackpot-type grants. Maybe the very best and very worst proposals would not be randomly evaluated, but the others could be (maybe with a weighting system). Also grants have widely different funding levels (and the requests are usually much bigger than they eventually get), so I’m not sure how it could work. But it is obviously not crazy, with serious funding agencies considering it.
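The tiered lottery sketched in this exchange can be made concrete: fund the very best proposals outright, reject the very worst, and draw the middle tier at random until the slots run out. The sketch below is a minimal illustration under invented assumptions; the score thresholds, slot count, and proposal names are all hypothetical, not anyone’s actual policy.

```python
import random

# Sketch of the tiered grant lottery discussed above.
# Thresholds, slot count, and scores are invented for illustration.

def tiered_lottery(proposals, fund_above=8.5, reject_below=3.0, slots=4, seed=0):
    """proposals: list of (name, panel_score) pairs. Returns funded names.

    Proposals scoring >= fund_above are funded outright; those scoring
    < reject_below are rejected; the rest enter a random draw for the
    remaining slots.
    """
    funded = [name for name, score in proposals if score >= fund_above]
    pool   = [name for name, score in proposals
              if reject_below <= score < fund_above]
    rng = random.Random(seed)       # fixed seed so the draw is reproducible
    rng.shuffle(pool)
    funded += pool[:max(0, slots - len(funded))]
    return funded

props = [("A", 9.1), ("B", 7.0), ("C", 5.5), ("D", 2.1), ("E", 6.8)]
print(tiered_lottery(props))  # "A" always funded, "D" always rejected
```

A weighting scheme (drawing higher-scored middle-tier proposals with higher probability) would be a natural refinement, but even this flat version breaks the feedback loop the commenters describe.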