Over at Dynamics of Cats, Steinn Sigurdsson has a posting on Yarn Theory, to some extent a review of Lee Smolin’s The Trouble With Physics. Steinn is an astrophysicist, but started out life as a string theorist before getting disillusioned with the subject. His account of his early career is well worth reading, doing a good job of capturing the atmosphere around 1990 at Caltech, a major center of string theory research. He was working on orbifold heterotic string compactifications, discouraged by how many there were. Not much has changed…
His general point of view on string theory is that it has led to the discovery of a family of unified theories, but that it is missing some basic principle that would tell us which is the true theory and why. He is sympathetic to Smolin’s criticisms of the dysfunctional way theoretical research is being pursued: efforts are focused on one very speculative idea, making it unlikely that people who choose to work on other ideas will be able to make careers for themselves:
… a lot of people will not only not believe that this is a real problem, they will make sure nobody else believes you either. Someone out there quite likely is already on the right track to the true theory, but their odds of survival in the current academic system are not wonderful. We may just have to wait a generation or two for a good approach to be rediscovered, which is a shame, cause some of us want to know! Now!!
… [Smolin’s] points on groupthink, and the systematic bias which discourages innovation and risk taking by young researchers hit painfully home – it is all too true, and yet it self-perpetuates because the mechanisms which reinforce conservatism in science are there for reasons. The system is flawed, and possibly broken, but the fix is not as simple as Smolin suggests – funding agencies are terrified of funding bad science, since there is so much pretty good science it is safe to fund, and as a community scientists are very harsh when bad science is mistakenly given precious resources.
It is the same market flaw that gives us beautiful flawless large red apples in supermarkets – with no taste. To get the old intense flavour varieties that everyone loves when they taste, we would have to choose small bruised discoloured apples when we shop, and leave the flawless big red apples with no taste in the bins. But collectively we do not, and the market responds. All for the fear of being the one department head consumer to go home with an occasional rotten apple. The real shame is that the big red shiny tasteless apples are rotten just as often, they just look so good sitting there, waxed and sprayed, in the bin. ‘Course if you only get to buy one apple every three years you learn to be very conservative in your choice; don’t want a rotten or even tart apple this decade.
I think Steinn gets to the rotten core of the problem here. There are very good reasons for conservatism in deciding what kinds of research to encourage, but with the very difficult situation that particle physics now finds itself in, the standard mechanisms for making these decisions have led to a seriously problematic situation. There are various things one could imagine doing to help get out of this, but even getting started on a discussion of them first requires that the powers-that-be in the field acknowledge the existence of a problem, and that has yet to really happen.
On a completely different subject, there’s a new preprint by Michael McGuigan which manages to cite both Not Even Wrong (the book), and a Lubos Motl blog entry. The citation of my book seems unnecessary; surely there are other sources critical of string-based unification that have priority. The article is about the “see-saw” mechanism for getting the right magnitude of the cosmological constant, and it is for this that Lubos’s blog gets a citation. This does seem to be a more promising idea about the CC than many. I for one think it will be a wonderful development if the field of particle theory turns around, stops promoting pseudo-science justified by the Weinberg anthropic CC “prediction”, and heads instead in a more promising direction, all based on an entry in Lubos’s blog…
I fear that is where string theory went. It discovered a family of theoretical approaches and solutions which clearly encompass an essential truth of quantum gravity and its union with the standard model and post-standard-model physics, but we missed the deep guiding principle which tells us how to select the true theory (such as it is) and where the exact theory came from and leads to.
A comment like this is simply mysticism. You don’t discover “essential truths” in physics. There are no “inessential truths”. If something is right in context, then it is right. String theory has never been right in any context. In fact, to coin a phrase, it’s not even wrong.
The current blog is essentially unique. It is empty of both mysticism and shameless egotism. It is wonderful that we have at least one physics blog with these properties.
Oh Peter, I saw your book in Barnes and Noble. It made me smile. Well done!
We’re having a discussion relating to ‘groupthink’ over at the Café. I agree that its unacknowledged presence is at the root of the problem. You may enjoy the Thom quotation in this comment.
Delightfully tongue-in-cheek take on string “theory”:
The citation of my book seems unnecessary, surely there are other sources critical of string-based unification that have priority.
This is an unwarranted condescension towards the author, who probably knows best what has “priority” for him. Your “modesty asides” are becoming annoying. This is about the last thing one wants to hear from someone who put himself in the public sphere, after all. The fact that he cites you does not mean that he grants you the authority to tell him whom to cite. There are good reasons for preferring to cite you rather than those “other sources”, and I don’t think I have to spell them out. You’ve been overrating these sources all along.
I think there is a good point in not putting oneself at the forefront – you are not the first one to fall if things turn awry, while in case you are running for the prize you can always make that extra step later.
I do not think that is the reason of Peter’s acknowledgement that others have been doing what his book does before him or more authoritatively. But it is a good reason nonetheless…
Ralf and Tommaso,
It just seems a bit peculiar for me to be getting credit for the fairly obvious idea that string theory based unification is an idea that hasn’t worked out. Many particle theorists have always thought this, and some have expressed themselves publicly on the topic over the years. I am happy to take some credit for successfully drawing a lot more attention to this issue, but that’s a bit different. If the LHC finds supersymmetry, with a pattern that identifies a particular compactification of string theory, producing real successful predictions, that will certainly make me look like a fool. I’m willing to take that chance…
Yes, but here is an example of string theory based unification that has worked out.
One modest proposal to attack the groupthink problem, and the red apple problem, not just in physics but in the rest of the US scientific community: Establish a source of money (say a private foundation) that makes grants to research proposals that have been rejected by the NSF (or NIH) BUT have extreme variance in ratings from the panelists. The benefits of this idea are:
1) Breakthrough research (as well as plausible but wrong ideas) is likely to polarize expert opinion.
2) Disagreements about the prospects of a research proposal suggest that there might be high information content in the outcome of the research.
3) At least some qualified people consider the research interesting and feasible.
4) Nobody has to rewrite their proposals! Anything that gets scientists to spend more time doing science and less time writing proposals to do science makes me a happier citizen.
A colleague once said to me that if two academics have the same opinions, one of them is redundant. I wouldn’t go that far, but the spirit of the idea applies to this proposal. Maybe Templeton should shift some of its resources in this direction…
The groupthink problem is inherently unsolvable. Any foundation or organization which is designed to encourage ideas outside the norm will have its own set of preferences (or prejudices). A foundation which rejects NSF proposals will have its own criteria, which will either quickly conform to groupthink or fade into obscurity.
The situation isn’t new. There is always resistance to new ideas. Organizations which attempt to change the situation disappear or merge into the mainstream in order to survive (the Nobel Foundation is a good example: Nobel prizes were originally supposed to encourage junior people who hadn’t yet made their most significant contributions).
The only way to deal with groupthink is to make an effort to change what the group thinks. That is a realizable goal, but only after a lot of yelling and screaming.
Peter: I’m sorry if my suggestion wasn’t presented clearly.
I’m not proposing a foundation to “reject NSF proposals.” I am proposing that someone fund NSF-rejected proposals that had high variance in the ratings they received from the NSF review panels. This agency wouldn’t have groupthink–they wouldn’t have to have any-think–because they would work off the scores already generated by the NSF evaluators. And since they would fund proposals where the reviewers for the NSF DISAGREED with one another, groupthink in evaluation would be ruled out from the start.
This idea may be impractical for a number of reasons, but I don’t think an ex cathedra statement that groupthink is inevitable, without showing specifically why this exact mechanism would fail, is very persuasive. And “changing what the group thinks” isn’t a big help if it just creates a new set of unsupported orthodoxies.
srp, it sounds like a good idea. Maybe bring it to the attention of the Bill and Melinda Gates foundation?
Arun: very funny. George Soros, perhaps?
How appropriate that someone would move from string theory to astrophysics. Astrophysics is in much the same state, today, except in lacking eloquent voices like Smolin’s and Woit’s to puncture it.
I would disagree strongly that astrophysics experiences anything like the degeneration we observe in string “theory” (how can it be called a scientific theory when it fails to make any experimentally falsifiable predictions?).
Astrophysics remains vibrant and fertile because _new experimental data keep coming in_. This is the real Achilles’ heel of HEP right now. The data at energy ranges and luminosities and cross sections required to disconfirm even low-range estimates on the superpartners, let alone the Higgs, has simply not been there. We reached the limits of current accelerators and hit a wall. You can only do so much, and then you have to build a whole new machine — which nowadays takes decades. Calibrating the detectors on these accelerators is a gigantic undertaking. The computer technology involved is simply staggering. You’re talking about events so rare, and amounts of data so large, that it boggles the mind.
By contrast, with astrophysics you just send up a new satellite. Or sometimes not even that — dump some cleaning fluid in a tank underground and set up photodetectors. That’s cheap! It’s quick. It’s easy compared to building an ILC or LHC. So astrophysics has been awash in new experimental results, while HEP has been starved for ’em. That to me is the core of the problem in HEP right now.
Nathan, I recall that Peter answered this point before. The small positive cosmological constant, or some alternative idea which does the same job, is required to fit GR to the observations of supernovae redshifts, explaining why gravity isn’t slowing down the recession.
String theory by contrast provides speculative explanations for things which are not observed, such as unification at the Planck scale (inventing supersymmetry to explain that speculation), and inventing gravitons (inventing supergravity to explain gravitons). String theory thus ‘explains’ speculations, not observations.
The point Peter made before about cosmology is that at least it is being led by observations, unlike string theory.
Mainstream models for observations might turn out wrong, but that’s better than being not even wrong. Approximations like flat earth theory, caloric, and phlogiston could later be disproved by evidence. Epicycles could provide an endlessly complex mathematical way of representing observed data, but were eventually replaced by Kepler’s simpler laws for convenience, which in turn could be explained by a simple inverse square force law.
String theory doesn’t even model or duplicate any existing observations, so it is worse than Ptolemy’s epicycles: Feynman’s criticism of string theory was largely that it doesn’t address the parameters of the standard model. It’s not even wrong because it doesn’t even model anything known to exist.
You took my misprint too seriously. I garbled the message earlier.
Yes, I know you didn’t mean another round of rejections.
I also have no objection to your idea of some private foundation funding proposals rejected by the NSF (or DOE). The more funding, the better.
My point was that such measures would not cure the groupthink sickness, whatever their other virtues might be.
Just to add. The reason I don’t think groupthink would be cured is that any such private funding would quickly succumb to the same pressures the NSF and DOE have to deal with. Maybe not in the first year or two, but before the funding could alter the course of the field.
To add my voice to those above…
Experimental cosmology is thriving. The theorists have not caught up with the data yet. Explaining dark matter will take more investigation (maybe particle physics experiments), and nobody has a clue about dark energy. This is a healthy field, even if theorists haven’t satisfactorily explained what is going on.
>This is a healthy field, even if theorists haven’t satisfactorily explained what is going on.
I’d change “even if” to “because”.
John Baez has just released his Week 246, writing about your book and Smolin’s.
Peter: Still not reading what I wrote. The proposed innovation isn’t really a new funding SOURCE. It is a new funding RULE, a rule which explicitly prevents groupthink by funding ideas which draw high-variance, divergent responses from reviewers. If you have a theory as to how this RULE could be corrupted by groupthink, I would be interested.
I tossed out the idea that a private foundation might implement this rule only because I see significant political obstacles to its adoption by the standard funding agencies. The essence of the proposal is not the funding source, however.
The idea of a rule that forces a group of people not to make decisions based on “groupthink” is sort of interesting… in a paradoxical sort of way. As described by “srp”, it seems to rely on at least some pre-existing non-groupthinkers, because he says to award money to proposals that meet with very divergent responses. In a pure “groupthink” environment, there wouldn’t BE any divergent responses (more or less by definition), so the scheme would only have a chance if we suppose there are some non-group thinkers in the system with votes that matter.
But this raises the interesting question of whether a scheme could be devised that would actually enable a pure groupthink environment to free itself from its own groupthink. Suppose everyone agrees on what they should do… but they are worried that they might all be suffering from groupthink. How could they address this worry? I’m reminded of a Seinfeld episode in which George decided that, from now on, he was going to do the opposite of whatever he thought he should do in every situation.
But seriously, I think the folks at google and other search engine designers have considered how groupthink can cause their search criteria to go unstable, because they feed back to themselves. (Give high ranks to pages that have lots of links, and those pages then tend to get even more links, which gives them even higher rankings…) If you notice, they sometimes “dither” their rankings, to “shake things up” a bit, and see if other pages would rise to the top after a re-shuffle. (Dithering is a technique carried over from control theory.) Maybe what the funding agencies need is a little dithering. (Off topic, but I suspect a healthy human mind must have some degree of “dithering” (not to be confused with blithering), or it would just become fixated on a small number of specific things.)
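A minimal sketch of that dithering idea, assuming nothing about how real search engines implement it — the page names, scores, and noise scale below are all made up for illustration:

```python
import random

def dithered_ranking(scores, noise=0.05, seed=None):
    """Rank items by score plus a small random perturbation.

    Adding noise before sorting occasionally lets lower-ranked items
    surface, breaking the rich-get-richer feedback loop described above.
    """
    rng = random.Random(seed)
    jittered = {item: s + rng.gauss(0, noise) for item, s in scores.items()}
    return sorted(jittered, key=jittered.get, reverse=True)

# Hypothetical page scores: the near-tied pair may swap order from run to run.
scores = {"page_a": 0.90, "page_b": 0.89, "page_c": 0.40}
print(dithered_ranking(scores, noise=0.05, seed=1))
```

With `noise=0` this reduces to a plain ranking; turning the noise up is the "shake things up" step, letting the system probe whether other pages would attract links if given exposure.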
Why is it that nobody is taking down string theory from its very inception and input? Why bother with any of its “content” in the first place?
Doesn’t “supersymmetry” really mean not understanding the nature of fermions and bosons? Doesn’t “GUT” really mean not understanding the nature of the observed gauge interactions? Isn’t the idea that virtually any observed structure is due to some kind of spontaneous symmetry breaking (and the “anthropic principle”) transparently the cynical nihilism of intellectual bankruptcy?
Up to now progress in the understanding of fundamental structures has always been marked by an increase in simplicity and efficiency and explanatory prowess.
The principles associated with string theory are not only “not even wrong”, they betray a fundamental stupidity and lack of sophistication. The undeserving heirs squandering the wealth the fathers worked so hard to gain. To truly believe in string theory one must never have understood any physics in the first place. It’s so 19th century!
Another way to describe that healthy tendency to dither might be a tendency to get bored and frustrated with whatever rut one has gotten into, due to social influences, financial inducements, or one’s own overripe enthusiasm.
I think that allegedly tongue-in-cheek take on string “theory” (hep-th/0011065) was intended fairly seriously. What the hell—it beats someone’s posting yet another godawful braneworld scenario.
Amos: My understanding, not based on systematic data but anecdote, is that in real life there are many grant proposals to the NSF and NIH that receive high-variance scores–some judges love them and some hate them. Supposedly, these usually get rejected with only the proposals that get high scores from all evaluators funded (and not always all of those, depending on budgets). So whether or not it can happen in theory, divergent evaluations appear to be an empirical phenomenon. Maybe a moderate psychological level of groupthink is institutionally amplified by the current rules of only funding proposals that everyone agrees on.
Dithering, simulated annealing, and other ideas for randomly jiggling things to get away from being trapped on local peaks may be good analogies for what I’m suggesting. One common approach to avoiding groupthink is to use dialectical processes, such as an adversary system, to surface as many points as possible for and against something. But that only works if the judges of the dispute aren’t subject to groupthink. My proposed rule finesses that problem.
Another approach is to cultivate multiple funding sources drawn from sociologically separate communities. The military and ARPA used to function this way to a degree, but I’m not sure that still holds true for basic science. There are lots of people in Congress and the bureaucracy who argue for “rationalizing” the system by centralizing decision-making in a body like the NSF.
I am still skeptical. How does one re-evaluate proposals with diverging scores to encourage diversity of ideas? One institution which does essentially such a re-evaluation is the Physical Review, as you probably know. I don’t think their refereeing policy has specifically encouraged more directions of research (nor do I think it was intended to). Though re-evaluating proposals might not be a bad idea, I can’t see it altering the dominant culture.
John Baez says something on NEW and TTWP.
Peter O: Here’s one way to do it: A scientist whose grant proposal got rejected at the NSF submits the same proposal to the hypothetical new funding source, along with the reports and scores of the NSF review panel that rejected him. The new funding agency sorts these proposals according to the within-panel variance in NSF ratings. It then funds the highest-variance proposals first and works its way down the list until it exhausts its budget (or the proposals start to look too gamy).
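For concreteness, that rule might be sketched like this — a toy model, where the proposal ids, panel scores, costs, and budget are all hypothetical:

```python
from statistics import pvariance

def fund_high_variance(proposals, budget):
    """Fund rejected proposals in order of within-panel score variance.

    `proposals` maps a proposal id to (panel_scores, cost). The
    highest-variance (most divisive) proposals are funded first,
    working down the list until the budget is exhausted.
    """
    ranked = sorted(proposals.items(),
                    key=lambda kv: pvariance(kv[1][0]),
                    reverse=True)
    funded, remaining = [], budget
    for pid, (scores, cost) in ranked:
        if cost <= remaining:
            funded.append(pid)
            remaining -= cost
    return funded

# Hypothetical panels: P1 polarized its reviewers, P2 got uniform lukewarm scores.
proposals = {
    "P1": ([1, 9, 2, 10], 50),   # loved and hated
    "P2": ([5, 5, 6, 5], 50),    # uniformly lukewarm
    "P3": ([3, 8, 4, 9], 60),    # moderately divisive
}
print(fund_high_variance(proposals, budget=100))
```

Note that the agency never re-reads the proposals at all: it sorts purely on the NSF panel’s own disagreement, which is what keeps its own groupthink out of the loop.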
I am not aware of the Physical Review process (I’m not a natural scientist), so if they do something similar I would be interested to hear how it works (in both senses).
How do you write a proposal if what you have is an idea that might lead to a breakthrough, but you can’t possibly put the idea itself in the proposal because that would effectively mean giving it away? Doesn’t the funding process virtually exclude anything that doesn’t fall under “normal science”?
Ralf: Wow! Are these panelists really that unethical that you can’t report your ideas without them ripping you off? That would be a lot more serious problem than groupthink.
In any case, I’m not saying that my proposal would guarantee funding to all breakthrough ideas, merely that current proposals that have more breakthrough potential than average could be funded.
Here is the Physical Review refereeing process (as far as I can tell from my experience submitting and refereeing papers). Most papers are sent to two referees. If there is one rejection and one acceptance it goes to a third referee. If that referee rejects it, it usually goes back to the author with a rejection from the editors. The author can (and often does) challenge this assessment. Sometimes the editors can make a clear judgement of rejection or acceptance – but if they can’t, it goes to another referee. And so on. If the author can get two acceptances, the paper is usually published. So at some level, the process resembles your suggestion.
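A toy sketch of that loop, under the simplifying assumption (mine, not Physical Review’s official policy) that the editors just keep soliciting referees until one side reaches two verdicts:

```python
def physical_review_decision(verdicts):
    """Walk through referee verdicts ('accept'/'reject') in arrival order.

    Returns 'accept' once two referees have accepted, 'reject' once two
    have rejected; otherwise 'undecided', meaning the editors would
    solicit yet another referee (or the author would appeal).
    """
    accepts = rejects = 0
    for v in verdicts:
        if v == "accept":
            accepts += 1
        else:
            rejects += 1
        if accepts == 2:
            return "accept"
        if rejects == 2:
            return "reject"
    return "undecided"

# A split panel broken by a third referee, as in the process described above.
print(physical_review_decision(["accept", "reject", "accept"]))  # → accept
```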
While in principle I like your idea, I am not sure it would work effectively if real money is involved. Financial considerations will eventually make the people running your system unwilling to take too many risks.
An insight into peer review, as practised in the UK, circa 1980, long before science became as venal as it is today. (And this is tawdry applied physics/engineering, in which the stakes are not that high, and one is thus less likely to compromise one’s mortal soul.) A barely post-doctoral physicist puts together a paper, relating to a finer point of detail relating to laser operation, and submits it to a respectable journal. Nothing, beyond the usual politesse of receipt, is heard for several months; thereafter discreet enquiries elicit only the reassurance that due process is in hand. Some nine months on, the second referee’s report arrives, which grudgingly accepts that the substance of the MS is sound, but would benefit from some experimental verification (contact with reality, hep-th dudes) which the referee would be happy to provide, in a joint publication. Honour amongst thieves or what? And if a trip to Stockholm was up for grabs? Oh dear.
The private sector is much better. If my own experience is anything to go by, then I cannot say that people behave any more honourably, but at least they are scrabbling for something more tangible than glory. It is also a thing that can be exchanged for goods or services.
Peter O: You’re right that any attempt to institutionalize behavior at a non-profit foundation is quite difficult. Random personnel turnover eventually leads to a critical mass in management and among trustees who want to circumvent or change the deceased founder’s original vision. John MacArthur probably wouldn’t be too thrilled with today’s MacArthur Foundation, and I wouldn’t even want to try to estimate the rpms in Henry Ford’s grave over the antics of the Ford Foundation.
On the other hand, if a founder’s mandate is fairly narrow and specific, it can only be overturned by the trustees going to court and proving that changing conditions have made that mandate non-practicable or absurd. That doesn’t happen very often. So, for example, a foundation that was tasked to give out college scholarships to people residing in a given town would be hard to mess with unless the town became depopulated or colleges became obsolete and disappeared.
Anyway, I am off this thread. Thanks for the feedback.
“There are very good reasons for conservatism in deciding what kinds of research to encourage, but with the very difficult situation that particle physics now finds itself in, the standard mechanisms for making these decisions have led to a seriously problematic situation.”
Yep. As best I can tell, the main criterion for determining who gets jobs is: “Which influential senior physicists are you buddies with, and what have you done to help advance their research programs?” This might work fine when the way forward is clear, e.g. during the development of the Standard Model. After all, senior influential physicists usually get to be that way for a reason, and young people could do much worse than to follow their lead. But in a situation where, to paraphrase Polchinski, there is “remarkable convergence on an unproven idea”, it might be good for physics if there were a possibility for people to work on other things and still have the chance of a career. Btw, “other things” doesn’t just mean the QG alternatives of Smolin & co but also the various topics in formal theory that Susskind refers to as “near-coast exploration”. Physics advanced quite far by near-coast exploration in the past, so it might be a good idea to allow it to continue in the present rather than forcing everyone who wants a career to go off on some or other epic voyage.
Actually there is an easy way to fix this situation if people really want to: Take away the NSF/DOE funding which currently goes to individual faculty members or research groups to hire postdocs and use it to fund postdocs directly via an open competition. Schemes like this already exist, e.g. the EU’s Marie Curie fellowships. A crucial feature of the scheme would be that the applications are assessed by a broad committee with representatives from all areas of theoretical physics. Then the string theory member of the committee will have to explain to colleagues from condensed matter and atomic physics why string theory applicant X, who has never done anything on his own but whose famous advisor has declared him to be “brilliant”, is more deserving of support than non-string applicant Y who has published single-author papers in PRL. Many, perhaps most, of the people supported under such a scheme will be the same ones who find support under the current system, but there will also be others following more independent paths who currently don’t have a chance. For it to be worthwhile the scheme would have to be supplemented with a similar one at the junior faculty level, e.g. along the lines of the UK’s PPARC Advanced Fellowships, but with assessments again being made by a broad committee with representatives from all areas. If the reps are all from particle theory it will just be buddies rubbing each other’s backs and the outcomes will be no different than under the current system.