According to a press release from UCSB, three theoretical physicists have proposed “the most viable test to date for determining whether string theory is on the right track”. This is based on a paper about cosmic strings where the authors manage to cook up a highly unlikely scenario where large strings exist and produce gravitational radiation observable by LIGO in the next couple of years.
Normally in the English language, calling something a “test” of a scientific theory would indicate that if it doesn’t work the theory is wrong. When LIGO doesn’t see this effect in the next two years I kind of doubt that there will be wholesale abandonment of string theory.
That’s an interesting-looking article, and the older article by Feynman they reference is probably worth reading. I don’t know much about this stuff, but have often seen how tricky gauge theories are precisely because there is no simple set of gauge-invariant variables to work with.
Just noticed this paper this morning, which reviews the variational principle in nonperturbative QCD:
“Variational techniques in non-perturbative QCD”
by Kovner and Milhano, hep-ph/0406165
Progress in finding a reliable variational setup appears to have been relatively slow over the years, once gauge-invariance issues are taken into account.
That’s fine. If I put something on the weblog, it’s for public consumption and you quoted me accurately and fully. I just wanted to explain why I wasn’t even trying to answer your further question, and to reiterate that I’m no expert on this and definitely not speaking for d’Hoker or Phong.
Re: superstring finiteness
Peter, I apologize if you don’t like to see your summary on s.p.s. I thought that since you made it publicly available here at your weblog you were content with its public distribution. I took care to fully indicate your qualification that you just reported, possibly incompletely, the opinion of third person(s). I hope nobody is harmed by the fact that Phong and D’Hoker see a subtlety in Berkovits’ approach.
Since I’m not an expert, I really don’t want to get involved in a discussion of the details of this issue. If Berkovits, D’Hoker and Phong want to discuss this issue in a public forum that would be great, but I’ve already perhaps gone too far in reporting a perhaps garbled version of private conversations.
As far as I can tell though, everyone involved agrees that there is no proof of higher-loop finiteness, with one difficulty being understanding what Berkovits calls the “unphysical divergences” in moduli space.
Re: superstring finiteness
Peter paraphrased Phong and D’Hoker as saying:
Many thanks for this information about your discussion with Phong and D’Hoker. I have taken the liberty to quote your comment over at sci.physics.strings. Maybe we get the chance to see Nathan Berkovits’ opinion on this issue.
For instance, is D’Hoker&Phong’s criticism dealt with by the remarks on p. 14 of Berkovits’ hep-th/0406055?
Has anyone ever successfully applied the variational technique to examining “nonperturbative” or strong coupling phenomena in any quantum field theories? The variational technique for getting approximate solutions appears in almost every quantum mechanics textbook, yet it seems to be a rarity in most quantum field theory books.
As far as I can tell, the only “trial functional” that appears to be easily integrable is something that resembles a Gaussian. I can’t think of any other obvious “trial functional” that could be easily dealt with analytically in the variational setup.
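For what it’s worth, the quantum-mechanical version of the Gaussian trial-function idea is a one-liner to check numerically. Here is a minimal Python sketch (my own toy example, not from the Kovner–Milhano paper) for the pure quartic oscillator H = p^2/2 + x^4:

```python
import numpy as np

# Gaussian variational estimate for the ground state of the pure quartic
# oscillator H = p^2/2 + x^4 (hbar = m = 1), with trial wavefunction
#   psi(x) ~ exp(-x^2 / (4 sigma^2)).
# For this trial state <p^2>/2 = 1/(8 sigma^2) and <x^4> = 3 sigma^4, so
#   E(sigma) = 1/(8 sigma^2) + 3 sigma^4.

def gaussian_energy(sigma):
    return 1.0 / (8.0 * sigma**2) + 3.0 * sigma**4

# Minimize over a grid of widths (crude but transparent).
sigmas = np.linspace(0.1, 2.0, 20001)
energies = gaussian_energy(sigmas)
i = int(np.argmin(energies))
print(f"best sigma ~ {sigmas[i]:.4f}, variational E0 ~ {energies[i]:.4f}")
# Analytic minimum: sigma^6 = 1/48, giving E0 ~ 0.6814, an upper bound
# on the exact ground-state energy ~0.6680 (from numerical diagonalization).
```

The Gaussian overshoots the exact answer by only about 2% here; the field-theory analogue of this Gaussian ansatz is exactly where the gauge-invariance troubles mentioned above begin.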
You can’t claim to have a proof based on an unproven assumption.
I don’t want to speak for D’Hoker and Phong, but this is my probably somewhat garbled understanding from having discussed this with both of them:
Berkovits’s claim of finiteness explicitly assumes “there are no unphysical divergences in the interior of moduli space”. This assumption (that these divergences cancel) is exactly what is hard to prove in the two-loop case and no one knows how to do for higher loops. In conformal gauge Berkovits argues that “there are no obvious potential sources for these unphysical divergences in the interior of moduli space since the amplitudes are independent (up to surface terms) of the locations of picture-changing operators.” D’Hoker and Phong have found that the correct definition of picture-changing operators is quite subtle here. These are operator products at a point and their definition is ambiguous. To make them well-defined in a way that is gauge invariant requires understanding some global terms. Unless you do this you don’t have well-defined picture-changing operators and can get whatever answer you want.
Again, I’m obviously not an expert at this, but that is my understanding of what the experts told me. While I don’t want to speak for them, from what they told me I am under the strong impression that these experts don’t believe that Berkovits has a proof.
Concerning the higher-loop finiteness of string theory:
A couple of days ago Nathan Berkovits claimed to have a proof of the finiteness of the superstring at every order.
See his message on s.p.s.: apparently one unproven assumption enters the proof, which is argued to be true in covariant contexts at least.
I was thinking of something outside of the various lattice models, such as a Thirring or Sine-Gordon type of model beyond 2 dimensions.
Lately I’ve been reading Kleinert’s book on path integrals, where he presents a path integral way of doing many of the traditional quantum mechanics problems like the hydrogen atom, infinite square well, etc …. and other interesting looking cases I’ve never seen before. I was thinking more along the lines of whether somebody has ever come up with an exact analytical expression for the path integral of an “interacting” quantum field theory in 3 or more dimensions, without resorting to any approximations.
On a slightly different track, a while ago I was reading some book (I don’t recall the author’s name offhand) about various “quasi-exact” solutions to the Schrödinger equation for various potentials with terms like x^4 and/or x^6 with particular coefficients that make it easier to get an analytical solution of some sort. (On the surface, it appears to be a very sophisticated way of doing a WKB approximation.) Though still an approximation, I wondered whether the same “quasi-exact” tricks can be applied to some interacting quantum field theories to get a non-trivial path integral which doesn’t use perturbation theory. The upshot of the “quasi-exact” method seems to be finding coefficients for the x^4, x^6, etc … terms such that the expression under the square root sign inside the WKB integral takes on a “nice” form. Usually this “nice” form looks like a factorization of the expression under the square root sign, in the less horrible cases.
Naively I tried to see whether it could be applied directly to something like phi^4 or QED, but so far I’ve been stumped. I went searching through the literature for path integral treatments of “quasi-exact” types of problems in quantum mechanics, but so far they haven’t led to much further insight. After a while I just dropped the problem and moved on to something else.
If “quasi-exact” types of solutions can be found for something like phi^4 theory, it would be interesting to see what a semiclassical calculation around these “quasi-exact” solutions would produce.
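As a concrete quantum-mechanical example of quasi-exact solvability (my own illustration, not necessarily the one from that book): in units where H = -d^2/dx^2 + V(x), the sextic potential V = x^6 - 3x^2 has the exact zero-energy ground state psi = exp(-x^4/4), while the rest of its spectrum has no closed form. A quick finite-difference check:

```python
import numpy as np

# Quasi-exactly solvable sextic oscillator (units hbar = 2m = 1):
#   H = -d^2/dx^2 + x^6 - 3 x^2.
# Since psi = exp(-x^4/4) obeys psi'' = (x^6 - 3x^2) psi, it is an exact
# zero-energy ground state; verify by finite-difference diagonalization.

N, L = 1200, 4.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

V = x**6 - 3.0 * x**2
H = (np.diag(2.0 / h**2 + V)                # kinetic diagonal + potential
     - np.diag(np.ones(N - 1) / h**2, 1)    # -d^2/dx^2 off-diagonals
     - np.diag(np.ones(N - 1) / h**2, -1))

E = np.linalg.eigvalsh(H)
print("lowest eigenvalues:", E[:3])
# E[0] comes out ~0 up to O(h^2) discretization error; the excited levels
# of this same potential are NOT quasi-exactly solvable.
```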
“Has anyone ever found any exact analytical solutions to any “interacting” quantum field theories in 3 or more dimensions, without using any approximations at all?”
Would exact solutions to integrable lattice models count? In 2D, a sufficient (and in practice necessary) condition is that one has a solution to the Yang-Baxter equation,
R_12 R_13 R_23 = R_23 R_13 R_12,
where R_12 acts on the tensor product of three spaces, trivially on the last. At criticality, such a lattice model is described by a conformal field theory.
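If it helps, here is a quick numerical sanity check of the Yang-Baxter equation, in its spectral-parameter form, for the standard rational solution R(u) = u·Id + P on C^2 ⊗ C^2 (the XXX spin-chain R-matrix; my own illustration, not tied to any particular model discussed here):

```python
import numpy as np

# Numerical check of the Yang-Baxter equation in its spectral-parameter
# form,  R_12(u-v) R_13(u) R_23(v) = R_23(v) R_13(u) R_12(u-v),
# for the standard rational solution R(u) = u*Id + P on C^2 (x) C^2,
# where P is the permutation operator (the XXX spin-chain R-matrix).

I2, I4 = np.eye(2), np.eye(4)
P = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        P[a * 2 + b, b * 2 + a] = 1.0   # P |b,a> = |a,b>

def R(u):
    return u * I4 + P

# Embeddings into the 3-site space C^2 (x) C^2 (x) C^2.
S23 = np.kron(I2, P)                    # swap of sites 2 and 3
R12 = lambda M: np.kron(M, I2)
R23 = lambda M: np.kron(I2, M)
R13 = lambda M: S23 @ np.kron(M, I2) @ S23

u, v = 0.7, -1.3
lhs = R12(R(u - v)) @ R13(R(u)) @ R23(R(v))
rhs = R23(R(v)) @ R13(R(u)) @ R12(R(u - v))
print("max deviation:", np.max(np.abs(lhs - rhs)))  # zero up to roundoff
```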
There is an analogous condition in 3D, known as the tetrahedron equation,
R_123 R_145 R_246 R_356 = R_356 R_246 R_145 R_123,
which acts on the tensor product of six spaces. A solution to this equation can be translated into a solution of some 3D lattice model, which would be described by a field theory at criticality.
Unfortunately, I am not aware of any good solutions to the tetrahedron equation. Zamolodchikov found one solution in 1981 (and introduced the tetrahedron equation in the same paper), but his model lacks unitarity; Baxter later translated his work into a lattice model and showed that some of the Boltzmann weights are negative. Nevertheless, the Zamolodchikov model is believed to be at a critical point and should therefore correspond to a field theory.
Has anyone ever found any exact analytical solutions to any “interacting” quantum field theories in 3 or more dimensions, without using any approximations at all?
In the context of string theory, where the perturbation expansion is the only thing that is well-defined, “non-perturbative” is to some degree a synonym for “things we don’t understand but would like to exist”.
In many quantum field theories, the theory is well-defined outside of a perturbation expansion and there can be lots of different ways of trying to do “non-perturbative calculations”, e.g.
1. lattice Monte-Carlo
2. 1/N expansion
3. semi-classical methods: take a non-zero solution to the equations of motion and do your perturbation theory about that instead of about the zero field. This is where instantons, solitons, etc. come in.
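To make point 1 concrete, here is a bare-bones Metropolis Monte Carlo for the 2D Ising model, the standard toy version of what lattice QCD codes do on a much bigger scale (my own sketch, with illustrative parameter choices):

```python
import numpy as np

# Bare-bones Metropolis Monte Carlo for the 2D Ising model,
#   H = -sum_<ij> s_i s_j,
# the toy-model ancestor of lattice QCD calculations: the Euclidean path
# integral becomes a statistical average estimated by importance sampling.

rng = np.random.default_rng(0)
L, T = 16, 1.5                       # 16x16 lattice, T below T_c ~ 2.269
beta = 1.0 / T
spins = np.ones((L, L), dtype=int)   # cold start (all spins up)

def sweep(s):
    """One Metropolis sweep: L*L single-spin-flip attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        # Energy cost of flipping s[i, j], with periodic boundaries.
        nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2.0 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for _ in range(200):                 # thermalization
    sweep(spins)
m = []
for _ in range(200):                 # measurement
    sweep(spins)
    m.append(abs(spins.mean()))
print(f"<|m|> ~ {np.mean(m):.3f}")   # ordered phase: close to 1
```

Starting from a cold (all-up) lattice avoids getting stuck in striped metastable states; at T = 1.5 the measured magnetization should sit near the infinite-volume Onsager value of about 0.986.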
I get the sense folks like to throw around the word “nonperturbative” as if it were a “fudge factor” that solves all the “diseases” and problems in their theories.
I see that the popular choices of nonperturbative things that many folks like to invoke, seem to be objects like monopoles, instantons, or some other “soliton” type of object. Are there other “nonperturbative” effects and/or objects than these ones, which are not too horrible to deal with analytically and algebraically?
Over the years, whenever I saw the word “nonperturbative” in many papers, I started to become very skeptical. Many seem to be more “hot air” and hype than anything concrete. Though one case that looked impressive on the surface was the Seiberg-Witten stuff from a decade ago, calculating “nonperturbative” instanton corrections in N=2 SUSY Yang-Mills. To a lesser extent, in the early 1990’s the mirror symmetry stuff on instanton calculations in string theory also looked impressive on the surface at the time.
You can set up the perturbation series for any choice of Calabi-Yau, with any choice of moduli parameters fixing its size and shape. If the series is only asymptotic you can hope that unknown non-perturbative effects choose the Calabi-Yau and fix the moduli. If the series is finite, you have a consistent theory for each Calabi-Yau and each value of the moduli.
What’s the argument behind point 3, where there would be an infinity of consistent string theories if the full perturbative series is finite?
I was thinking of writing a blog entry related to this sometime, but here are the facts:
1. One loop and two loops are finite; the latter is a recent result of Phong and D’Hoker.
2. Higher than two loops are conjectured to be finite, but this has not been shown.
3. The full series is conjectured to be asymptotic. No reason to believe it is finite (and if it were there would be an infinity of consistent string theories).
Has anyone ever shown that the perturbative expansion of superstring theory is renormalizable or finite, to all orders in perturbation theory? Is it even a convergent series, or at best only an asymptotic series? Can this even be done at all in a rigorous manner without much “hand waving”?
No, I’m not happy. You still don’t seem to understand the difference between having a well-defined theory and not having one. QCD has a precise and simple non-perturbative definition (lattice QCD) and well-understood calculational methods that give controlled approximations to the exact theory. For some physical variables (the low-lying spectrum) you can do reasonably accurate calculations with control of the errors and, within the errors, you get results that agree with experiment. For others (S-matrix elements), you have a well-defined theory, but no good calculational methods. If string theory is ever going to be useful it probably would be in attacking this problem, rather than as a TOE.
People can do calculations in 11d supergravity and, when asked about the divergences, can say “oh, I’m really doing M-theory, the magical, mystical ultraviolet completion of supergravity” if they want. But they should be a lot more honest about what is wishful thinking and what is real science. You’re welcome to tell us that supergravity is just an “effective theory”, but normally when people work with an effective theory it is one that can be compared to experiment. In QCD the sigma-model is an effective theory for pions; you can calculate things about pions and compare them to experimental results. The “effective theory” you are working with predicts absolutely nothing, it is a complicated setup for making excuses for not being able to predict anything, not real physics.
For QCD, read “QCD at strong coupling”. Happy now? Then perhaps you would give the reference of the paper which predicts the proton-proton scattering cross-section at low energies from the QCD action. Maybe the lattice people will get round to it about the same time as they do matrix theory.
Supergravity is an *effective theory*. Just like the theory of pions at low energies. It works if corrections to it are sufficiently small, which is true in the small curvature and low energy regimes.
Or it can be thought of as like electromagnetism. You don’t need to know the short-distance structure of an electron to use Maxwell’s equations. However, the fact that Maxwell’s equations admit a solution with a pointlike charged object is a strong indication that a more complete theory should include something that looks like such an object at long distances. In supergravity (mutatis mutandis) this is a p-brane.
Polchinski showed that the supergravity calculation of the force between two p-branes and the perturbative string theory calculation of the force between two D-branes exactly corresponded. This calculation, and the fact that the electric and magnetic charges behave correctly, is regarded as a “good enough” reason to admit branes as objects in string theory. They have not led to any fatal internal contradiction so far.
Indeed string theorists are very commonly doing calculations of a similar sort, finding correspondences between different theories that lead many people to believe that (unless there is some stupendous conspiracy of coincidences) the objects of apparently different string theories can be translated from one into another, and into the objects of 11d supergravity including solitonic (magnetic) and singular (electric) charged solutions.
Some people call this “M-theory”: of course this is just a name. If you want to attack string theorists for saying the words “M-theory”, you are free to do so, but it is not a strong scientific argument. Such an argument would begin to focus on specific claims about specific theories, e.g. whether 11d supergravity on R^10 X S^1 does really correspond to the IIA superstring.
Actually many of the “M-theory” things I see are just classical supergravity. There’s often no quantum theory, much less higher loops. As far as I can tell, absolutely no one has the slightest clue about how to make progress on finding the new magical theory that will do what they want. The existence of this now ossified ideology of the magical eleven-dimensional theory is part of what is killing off the field. Everyone runs around repeating to each other the same misguided wishful thinking, and people end up spending their whole careers wrapped up in trying to make sense of an idea which just doesn’t work.
When folks are doing calculations of supergravity and/or Yang-Mills theory in 4 or more dimensions, what exactly are they trying to find if the theory is nonrenormalizable at higher loops? Are they searching for some “miracle” that will cancel out the nonrenormalizable stuff? I remember a number of older supergravity papers from the 1980’s which attempted to find “miracles” in higher order loop calculations. In the end many papers had a conclusion of the sort “there are no miracles in field theory”.
Every time I came across papers which attempted to deal with nonrenormalizable theories via “nonperturbative” means, I always got the impression that many of these papers were more “hot air” and hype than anything concrete. Many of the “nonperturbative” calculations weren’t entirely convincing.
The only thing at all like even a conjectural definition of M-theory is certain versions of Matrix theory, which only work on special backgrounds (e.g. flat 11d). With seven compactified dimensions, there isn’t even a conjecture of what might work, just wishful thinking that something will. Whatever M-theory is, it is supposed to reduce to 11d supergravity at low energies while somehow getting around the non-renormalizability problems at high energies. In practice when people say they are calculating something in “M-theory”, they almost always mean they are doing an 11d supergravity calculation.
Has anyone ever come up with a better “definition” of M-theory, besides just saying that it has a low energy limit of 11 dimensional supergravity and/or it reproduces the other string theories as some limit in 10 dimensions? Everybody I’ve asked this question almost always gave those two limits as a “definition”. A few folks stated “M-theory” as being the limit of some matrix theory like BFSS a number of years ago, which doesn’t seem to be as popular these days. I haven’t heard of any better “definitions” which do NOT use the statement “in the limit of …*blah blah blah*”.
At times I wonder whether this is just pure wishful thinking on the part of string folks, by being purposely vague about what M-theory is really all about.
So, if the conjecture that there is an underlying M-theory is just wrong, and no such theory exists, string theorists will never, ever give up?
Comparing a theory that doesn’t exist (M-theory) to QCD is really just silly. There are two huge differences:
1. QCD is a beautiful, well-defined theory with no free parameters (1 if you count the theta angle), which makes an infinite number of detailed, specific predictions. The reason a mathematics foundation has a prize for its solution is that it’s a well-defined problem to rigorously prove things about QCD. Many things can be reliably calculated to fairly high precision, using perturbation theory or lattice methods.
2. Not only does the theory make predictions, but they all work. To within the accuracy that one can calculate, you get complete agreement with experiment. These predictions cover a huge range of particle physics phenomena: e+ e- annihilation, deep inelastic scattering, phenomena at hadron colliders, properties of charm and bottom bound states, etc. The theory has been tested and tested and tested again and has passed every test. For an example, see the first figure in Wilczek’s hep-ph/0212128.
Compare this to M-theory, where there is no theory at all, just a bunch of people’s wishful thinking that a theory might exist with properties that they would like. This non-existent theory makes zero physical predictions, so it can never be tested, allowing some people to spend twenty years going on about how wonderful their “theory” is and now doing really silly things like claim that its status is similar to that of QCD.
Susskind’s baby technicolor was very popular at one point, until it was realized that one can’t really calculate much in it, due to strong coupling, and that, so far as one could calculate, the electroweak loop corrections probably went the wrong way compared to data.
Nowadays strongly coupled gauge theory is understood slightly better – thanks mainly to SUSY and string theory! – but it’s still difficult to get technicolor off the ground in the sense of agreeing with LEP precision data. It’s a very elegant idea, more so even than SUSY, but somehow Nature doesn’t appear to be sold on it.
As to what would cause string theory to be abandoned: if a real underlying M-theory were formulated and solved and none of the solutions were anything like the real world.
In other words, the theory isn’t sufficiently well understood that it can be discarded.
As for renormalizable gauge theories providing precise testable predictions, this hasn’t really happened with QCD. It’s taken several decades of lattice computations to come up with a reasonable (to within a few percent) baryon and meson spectrum. Chiral fermions were only recently latticized. And this hypertrophy of computation, although impressive, is about as far from a simple and elegant explanation of observed facts as can be. The million dollar prize for explaining confinement in QCD goes unclaimed.
For decades, QCD has also satisfied the description “not sufficiently well understood that it can be discarded”…
The cases of physics folks being denied tenure I’m familiar with, usually fit into one of two categories:
1 – folks who changed from a trendy hot area of research to another field that wasn’t as trendy or hot
2 – folks who changed from one trendy hot area of research to another field that was also trendy and hot
The first category of folks were frequently denied tenure already at the department level. Their papers were not getting many citations, if any citations at all, besides citing their own papers. These particular cases seemed to be pretty clear cut as to why they were denied tenure.
The second category of folks weren’t quite as clear cut. They were commonly cranking out average to below-average papers which resembled “resume padding” stuff in their new field, compared to their papers in their previous field which got more citations and were somewhat better than “average”. Perhaps their tenure committees used the “decline” of the quality of their papers, as an excuse to deny them tenure? A few cases even used teaching evaluations as an excuse to deny tenure, just to get rid of a person who they didn’t like.
The exceptional cases of folks changing fields who eventually got tenure, usually were producing better papers in their new field and were able to attract many more citations. To top it off, some were even invited to lecture at summer schools such as TASI or Les Houches. Being denied tenure would have been surprising for these particular exceptional cases, unless it had to do with nonresearch reasons like politics or misconduct.
It’s pretty premature to speculate about what will happen if SUSY and string theory are no longer perceived as promising things to work on. But many people in this business are very used to shifting gears and if something else replaces SUSY/string theory they’ll jump on that pretty quickly.
What do you think will happen to all those grad students, postdocs, and assistant professors up for tenure review, who invested many years in SUSY and/or string theory, if both SUSY and string theory die and become “illegitimate” fields of physics research? (ie. “illegitimate” in the sense of folks still doing research on something like bootstrap analytic S-matrix theory after the wholesale abandonment of it, for example). Arguably for folks who already have tenure, their careers could die away like Geoff Chew’s career (ie. a painful decline into irrelevance), if they don’t change into another field.
I suppose the grad students and early postdocs (ie. folks on their first postdoc) could always change fields more easily. I wonder if the later postdocs and assistant professors (without tenure or on tenure track) can easily change fields without much “career disruption”. Without mentioning any names, I knew of a number of cases of assistant professors in physics who attempted to change fields about a year or two after getting their faculty jobs. By the time they were up for tenure review, almost every single one of them was denied tenure, regardless of how many citations their papers were getting, except in a few very exceptional cases where the person in question became a “superstar” overnight.
Yes, in the technicolor idea the Higgs would be something like a Cooper pair.
The vague idea about chiral gauge theories is more something like this: when there’s an anomaly, one way of thinking about what happens is that the gauge degrees of freedom acquire non-trivial dynamics. Then there are problems with non-renormalizability and/or unitarity, which is why people think such theories are inconsistent. In the standard model, the quarks and leptons each separately have an anomaly, but they cancel against each other. Perhaps there’s more to this story than one sees in perturbation theory, and it might have something to do with the Higgs.
That is very nice, I’ve been thinking a lot about directional time as well 🙂
You need a natural idea of orthogonality in time.
So the technicolor theory is basically Cooper pairs for the Higgs?
As for “non-perturbative quantization of chiral gauge theories”, do you mean something like the Dirac monopole argument? It always seems that gamma_5 is the center of the mystery.
The standard speculation is some new strong dynamics for which the Higgs is a bound state (technicolor).
Two completely ill-defined speculations I’ve always found attractive, but have no idea how to turn into anything real are:
1. The non-perturbative quantization of chiral gauge theories is more subtle than we think and if we understood it better we’d find that the Higgs appeared naturally.
2. Despite appearances, we really live in Euclidean 4-space, with group of frame rotations Spin(4)=SU(2)xSU(2). One of the SU(2)s is spatial rotations, the other is the electroweak gauge group. A choice of time direction is what breaks the electroweak SU(2).
You really shouldn’t take any of the above seriously unless I can someday figure out how to turn them into a well-defined proposal of a new theory.
“something more interesting than a scalar Higgs”
Care to speculate? 🙂
I think it was Max Planck who commented that the way science progresses is by people dying, not by them changing their minds. There will be plenty of supersymmetry die-hards, but I get the impression that Witten and others are already starting to get used to the idea that the whole low-energy supersymmetry scenario might be wrong.
Optimistically, what will happen in 2008 is that the LHC will find evidence for what is really causing electroweak symmetry breaking and it will be something more interesting than an elementary scalar Higgs. If so, most people will drop low energy supersymmetry immediately, pretending the whole thing never happened.
Any bets on whether string theory will die if a light Higgs is NOT found at LHC? If I didn’t know any better, I would think many string folks will just keep on pushing up the SUSY breaking scale and attempt to justify SUSY only as a symmetry at higher and higher energies such as at the GUT or Planck scales. It seems like a neverending game of “suspending disbelief” where the masses of SUSY particles keep on getting heavier and heavier with time.
Perhaps it will take another generation or so for SUSY and/or string theory to completely disappear from the thinking of physicists, if LHC rules out SUSY at the low energy scales.
It would be interesting to look back into physics history more than a century ago, and examine how long it took to shake off the idea of an “ether” in many physicists’ thinking. I wouldn’t be surprised if many physicists of the generation directly preceding Einstein were still “true believers” in an “ether” to the day they died (e.g. Lorentz, etc …).
Actually at this point the only thing I can think of that would cause a wholesale abandonment of string theory is Witten publicly giving up on the idea. And I don’t think that’s going to happen until 2008. He and others who are completely invested in the idea will hang on until results from the LHC come out, hoping that superpartners will appear, thus validating part of the whole scenario.
Typical string-theoretic hype aimed again at convincing the public, one thing the theory has been very highly successful at. Well, at least they are trying to connect with experiment and real observations of the universe. However, I would agree that a theory is supposed to be discarded once its experimental predictions simply don’t happen. That is the central crux of what science is about, but this won’t ever happen in string theory, and that is my main problem with it and the people who do it (not so much the theory itself). There is more chance of a UFO landing in your back yard and Elvis getting out of it than there is of this cosmic string scenario and its predictions actually coming about. I would confidently bet money (a fair bit) that these signals will never appear.

Such is the current state of theoretical physics: these guys are tied up in cosmic strings, anthropic Lenny S is lost and wandering in the “Landscape”, Ed has exhumed twistors, and the Brothers Bogdanov are again obsessing over their fluctuating metric signature and quantum groups! No disrespect to any of these guys actually (and none intended), but my point is that the field is getting crazier each year and people are really in danger of getting lost in, and obsessed with, their own mathematical fantasy scenarios. The line between crackpotdom and science is now very blurred indeed. Also, some seem so totally convinced they are right that they feel obliged to bring out popular books: the Bogdanovs have one out now, and I hear LS is putting one out on the “Landscape”.

I don’t know what the way forward is, and maybe (hopefully) things will be much clearer near the end of the decade. But in the meantime, some of these guys could check themselves into the “Rest Home for Deranged Scientists” as was featured in the Thomas Dolby video of the 80s hit “She Blinded Me With Science”.
Not that I can talk–this jacket is too tight, forcing me to type one letter at a time with a pencil in my mouth, and they won’t give me a sharpener for my crayons…
There must be something in what you say, as Lubos Motl, the most in-your-face String theorist around, is also an Astrologer.
What exactly do you think will cause a wholesale abandonment of string theory?
It seems like having no viable physical predictions isn’t going to stop people from being true believers in string theory. Historically, was the first demise of string theory and Geoff Chew’s analytic S-matrix theory in the 1970’s caused by renormalizable gauge theories producing precise physical predictions that were testable in accelerator experiments, while string theory hardly produced anything other than Regge trajectories at the time?
Offhand I can’t think of any obvious compelling reasons that would cause a wholesale abandonment of string theory as if it was like a sinking ship. Even Geoff Chew was still cranking out papers on the bootstrap principle well into the 80’s, despite everybody else moving on to gauge theories many years before.
The only naive reason I can think of for a wholesale abandonment of string theory is if somebody ever found an easy way around the Goroff-Sagnotti nonrenormalizable 2-loop divergence result in 4 dimensional pure quantum gravity, and where the additional divergences that appear when fermions, gauge bosons, etc … are added in can also be renormalized. This naive scenario would probably require a HUGE miracle to be pulled off, and I’m not particularly optimistic about it happening.
At times I wonder if areas like string theory, loop quantum gravity, SUSY, twistor theory, etc … are resembling how economics research is done. Various schools of economics don’t appear to be much more than “normative” prescriptions imposed by decree, especially when it agrees with a particular brand of political ideology. Some schools of economics seem to have diehard “true believers” regardless of any empirical data, such as the “supply siders” behind Reaganomics, the Keynesians which managed the economy in the 1970’s to 20% inflation in America (or worse, such as 300% inflation in Israel in the early 1980’s), the Monetarists which ran the monetary policy in America, Germany, England, etc … in the 70’s and 80’s by managing monetary aggregates, etc ….. There’s some folks who don’t even believe in “supply and demand”.
It’s amusing that about a year or so ago, the economics Nobel laureate Milton Friedman finally admitted that his “monetarist” theories of managing monetary aggregates were a total failure in the end, despite being popular policies in government central banks over the last 30+ years. It took millions of lives and decades of economic mismanagement for economists to finally figure out that the Marxist/communist, Nazi/fascist, and other totalitarian types of policies run in these sorts of regimes were total economic basketcases and failures in the end.
Perhaps there’s a lot of truth in what economist John Kenneth Galbraith said:
“The only function of economic forecasting is to make astrology look respectable.”
Can the same be said about the various schools of “quantum gravity” type of research?