- Resonaances has an excellent posting about the latest WMAP9 CMB measurements and the implied value of Neff, the effective number of light degrees of freedom. When the WMAP numbers were released late last year, they quoted
Neff = 3.89 ± 0.67, 3.26 ± 0.35, 2.83 ± 0.38
for the results of fits to their data and others (see section 4.3.2). Jester described this as “like finding a lump of coal under the Christmas tree”: the value Neff=3 implies no new light degrees of freedom beyond the known 3 light neutrinos. A rumor soon appeared on his blog that this result was in error and would be corrected.
The corrected version is now out, with new results
Neff = 3.89 ± 0.67, 3.84 ± 0.40, 3.55 ± 0.49
and a note about the correction: “slight correction to Neff for case with BAO”, which seems reasonable if you regard the difference between finding no unknown degrees of freedom and discovering a new one as “slight”.
- Martin Perl has an interesting blog entry entitled What Me Worry About The Future of High Energy Physics? He describes his views on the problems facing HEP, gives his opinion of the Fundamental Physics Prize, and offers some comments on the history of physics (as well as some kind words about this blog).
- On the Beauty front, you can watch a video of Enrico Bombieri’s lecture at the IAS on Beauty in Mathematics. On February 15 in Boston the big AAAS annual meeting will include a session on Is Beauty Truth? Mathematics in Physics from Dirac to the Higgs Boson and Beyond.
- viXra log has a posting about a video released by CMS of the session on June 15th where their convincing evidence for the Higgs in gamma-gamma decays was first unveiled to the larger collaboration. It was at this point that most of the 3000 or so physicists in CMS knew for sure they had a Higgs discovery. One can speculate about what a graph of the number of people in the world aware of this would look like as a function of time, but I’m sure that by June 17th, when I first heard about it, the number was already much more than 3000 and growing exponentially.
This was about three weeks before the public announcement on July 4. Of course now what we all want to know is what the full 2012 CMS dataset says about gamma-gamma, and whether or not it agrees with the SM. The general assumption is that this will be made public at the March 2-9 Moriond conference. So, based on last time’s timetable, one can guess that within the next week or two such results will be disclosed to the full CMS collaboration.
- As in past years, one can follow the latest trends in US particle theory hiring at the tenure-track level here. Lubos Motl describes the current situation as one of hep-th being subjected to terrorism, I guess by hep-ph.
You can hardly say the CMB measurements together with the BAO have “discovered” an unknown degree of freedom. Neff of 3.046 is inconsistent with the data by at most 2 sigma. That’s not much evidence on which to base any sort of claim for discovery, so WMAP are quite right not to do so. Especially given all the other parameters that could be varied, and the fact that ACT and SPT appear not to be very consistent with each other, and that some of the BAO data looks a little fishy too.
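The “at most 2 sigma” figure is easy to check against the corrected numbers quoted in the post. A minimal sketch in Python, assuming the quoted errors can be treated as symmetric Gaussian uncertainties (a simplification of the actual likelihood analysis):

```python
# Quick check of the "at most 2 sigma" claim: how far is the Standard
# Model value N_eff = 3.046 from each corrected WMAP9 fit?
SM_NEFF = 3.046

# (central value, 1-sigma error) for the three corrected fit combinations
fits = [(3.89, 0.67), (3.84, 0.40), (3.55, 0.49)]

for central, sigma in fits:
    tension = abs(central - SM_NEFF) / sigma
    print(f"N_eff = {central} +/- {sigma}: {tension:.1f} sigma from SM")
```

This gives tensions of roughly 1.3, 2.0 and 1.0 sigma for the three fit combinations, consistent with the “at most 2 sigma” statement.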
> Lubos Motl describes the current situation as one of hep-th being subjected to terrorism, I guess by hep-ph.
Send in the hep-drones for Great Justice!
Seriously though, no mention of the Proton Size Discrepancy?
But according to Lubos, everything is terrorism — or at least, dangerous post-modern nihilism — especially opposition to the Holy and Beautiful Theory of Extended One-Dimensional Thingies.
Looks like Neff = 4 is not impossible, BTW. It’s a new “slight” degree of freedom!
Speaking of beauty, I’m curious about your reaction to this article, which mentions you. It seems to me that it somewhat mischaracterizes your position.
I’ve understood you more to think that people are doing the wrong theorizing, not that theorizing is dead (and similarly, that string theory is inelegant, not that elegance is useless). But perhaps I’m wrong.
About the Orrell book: I think it’s a well done history of the “beauty” issue in physics, and right now this is a big issue because a lot of people are interpreting the failure of string theory and supersymmetry as a failure of the idea that you should pursue such beauty. I definitely disagree with Orrell’s conclusions, but he represents a view which I think is going to be getting more traction.
I talked to the Chronicle writer who wrote the review, and I think he decided not to quote me because my point of view complicates the issue and it’s complicated enough to explain to a general readership already. So, he added me in with Gleiser and Orrell as claiming the “predictive power of theorizing in physics waning”, which in some sense is true, although my reasons for thinking this are rather different from those of Gleiser and Orrell (crudely, I’d say that the problem is a sociological one of people pursuing failed ideas, claiming them to be beautiful even if they aren’t).
I’d rather guess by nucl-th…
I think most of these people are hep-ph.
It’s interesting to note that, while most of the jobs are going to ph people, there still aren’t that many jobs! At top U.S. institutions over the last five years, for example, I’d estimate there have been at most O(10-20) ph hires. Who, in the LHC era, can blame the community for wanting to hire at least 10-20 excellent young professors to carry model building into the next generation? This will saturate in the next few years and the hires will swing back towards formal theory a bit.
From Dr. Perl’s blog about the Fundamental Physics award: “The time scale for physics progress is a century not a decade. There are no decade scale solutions to worries about the rate of progress of fundamental physics knowledge.”
This raises the question of how long one must wait for experimental validation before one rejects a theory. I am sympathetic to remarks that string theory has been a three-decade-long rabbit hole in a quest for a unified theory. But what makes three decades special, if the “time scale for progress…is a century…” as Dr. Perl suggests? Does one wait only a year (no), a decade (no), a few decades (?), a century (?), or even longer to abandon a theory? How do you decide? I suspect that up until this point in modern physics, theories with no experimental support were eventually replaced by theories that did garner such support – thereby rendering the question moot, since a clear alternative **with** support existed. But today, such a trajectory may not occur, since as Dr. Perl points out, it is becoming increasingly difficult to even build machines capable of testing theories at the scale where we want to answer the TOE questions.
In short, if no TOE rival to string theory emerges with experimental support, then how does one decide how long to give string theory before calling it quits? If an elegant theory exists without experimental support or falsification, should it be abandoned when no alternative exists to replace it?
If data falsifies string theory (and I am not competent to evaluate this question), then by all means let us abandon the falsified theory. But, if I understand some of the arguments in this forum correctly, the gripe against string theory is less that it has been falsified, than that it cannot be falsified – a bold claim indeed, given that (i) unforeseeable progress in technology could occur in the future conducive to such testing; or (ii) unforeseeable progress in string theory could occur giving rise to testable claims.
Speaking as an outsider, I wonder out loud if one of the dilemmas faced by string theorists is that, even if they were to acknowledge string theory’s failings, they still believe it to be far more promising than any other TOE in existence (or that they can conjure themselves). Until that changes, they have no other TOE to work on, and will therefore stick with strings.
Actually a theory is abandoned rather than rejected, and that happens when an alternative comes along which does a better job of explaining things. So it was with the luminiferous aether; it was abandoned when Einstein produced the theory of relativity, but there was no explicit “rejection” of the aether theory per se. So also with the phlogiston theory of earlier years. It was eventually abandoned because other theories could do a better job. When it was claimed that phlogiston must have negative mass, people just lost interest in it. But there is no real time scale for these events (changes of paradigm?). String theory will likely persist until something else comes along which does a better job of explaining things. There is no time scale for that.
Gleiser’s comment in the Chronicle article that RH amino acids don’t occur in nature is technically false: they do occur – even in humans – although they certainly aren’t common. His claim about the beauty inherent in asymmetry – and the example he gives – also seems wrong. Personally, I’m sympathetic to the notion that beauty and truth are highly correlated, but I’m not sure why this issue is brought up in connection with the standard model or any extension of it, since the standard model has always struck me as rather ugly. Are there really people who think that CKM mixing is elegant? It looks, to me, like an ugly kludge. And using singlets for RH states but doublets for LH states also strikes me as profoundly ugly. I don’t really understand how Gleiser – or anyone else – could regard that as an elegant or beautiful piece of mathematics.
Link to Arkani-Hamed talk at STScI (about 80 minutes long)
Does somebody know if this paper means that Planck will announce a non-flat universe with OmegaK ≈ 10^-3?
If all experiments confirm the standard model (with a Higgs and nonzero neutrino masses), then the arguments from the hep-th community stating that the standard model is incomplete must be wrong.
It looks as if 30 years of a so-called “incomplete standard model” are coming to an end. Would it be possible to have an overview of all these arguments and to check them one by one?
It seems very bizarre to me that for 30 years a complete physics community has been convinced that the standard model is incomplete or even wrong, but that this conviction now turns out to be mistaken. What have they been telling students for the past 30 years, and why? Where are the mistakes? Why were they made?
There are solid arguments that in principle the SM is incomplete (no quantum gravity, bad short distance behavior of scalar and U(1) gauge fields, a large number of parameters whose origin is mysterious, perhaps dark matter, if it is not astrophysical in origin). But the arguments that the SM is incomplete at the TeV scale have always been quite weak and highly speculative. I hope that the news from the LHC will have the effect of causing people to rethink some things often claimed to be strong arguments, but which really aren’t. First amongst these is the “hierarchy” or “fine-tuning” problem (which I’ve often gotten into arguments with people about here, not interested in repeating now…)
Unfortunately, so far what I’ve been seeing from prominent theorists is no desire to re-examine their assumptions, but instead things like “maybe X that is needed to complete the SM is at a little higher mass than we thought, let’s wait for 2015-6 before doing anything”, or “OK, if there is no X, then we give up on science and go for anthropics as the explanation”.
thank you. The distinction between “incomplete at TeV scale” and “incomplete at Planck scale” is a good one.
Then the present experiments can be summarized by saying that the standard model is (probably) complete at the TeV scale.
At the Planck scale, quantum gravity and parameter explanation must kick in. Yes, dark matter is still open – if it is an issue at all – and even the short distance behavior might be a non-issue. Even certain Czech bloggers admit that the Planck length is the smallest physical distance.
So we could arrive at the situation that only two issues are open: quantum gravity, which we will probably never confirm experimentally, and the parameter calculations.
If that is true, our beloved “theory of everything” will be extremely narrow in scope. In practice, it will only explain the parameters! Are we all prepared to live with such a “basic” TOE? It will not fascinate, not excite passions, not change our world view, not produce new technologies, and not provide any power.
Could the TOE really be a footnote at the end of textbooks on the standard model? Are we all prepared for this anti-climax?
“Are we all prepared for this anti-climax?”
The progress of physics has to end one day, either in the discovery of the final truth, or in coming up against barriers to knowledge that we cannot surpass.
It could also come to a fake conclusion for a while, where a certain set of barriers are treated as unsurpassable even though they are not, and so the philosophical culture of physics might settle into vague affirmations of a favored cosmology, such as one already sees among some proponents of a “landscape multiverse”, and John-Horgan-esque musings about the limits of knowledge.
However, I do not expect to see that really happen, not in a big way. Of course certain opinion leaders and their followers may settle for such opinions. But in the big bad world outside narrow schools of thought, the situation is still discord, confusion, clash of ideas, a trickle of genuine empirical discovery, and enormous new vistas of thought still waiting to be explored.
If we actually had a theoretical framework that could predict all the parameters of the standard model, that would be an intellectual revolution that wouldn’t just stay within physics. The cultural impact would be as big as relativity. So if you want to understand how having a true theory of everything would feel, consider the cultural significance of relativity.
We don’t go around talking about relativity every day, but it’s hardly a technical curiosity buried at the ends of textbooks either. Relativity is one of the fixtures of modern intellectual culture. It’s a beloved and challenging set of ideas for people who like science – and a constant spur to the invention of crackpot alternatives for people who don’t like it – and the source of new philosophies like the idea of a “block universe” – as well as a reference point in cultural discussions.
Human culture is diverse, contradictory, always reconfiguring itself, contains a thousand elements, and changes in response to an ever-changing world. The place of a verified final theory in human culture would not remain fixed, but it would be a very heavy presence in a number of areas of thought and life.
M: It’s possible of course, but I don’t see why you should especially think that. As far as I can tell, Marc Kamionkowski is not on the Planck team and Phil Bull definitely is not, so they shouldn’t have any special insight into what Planck are seeing – the confidentiality restrictions on this data are pretty tight (until March).
There we go again about CMS collaborators eager to spill “secrets”… When will the lack of actual spilling be properly acknowledged as an unusual, by humankind’s standards, demonstration of respect for one’s own work and for one’s colleagues’ work?
Do you want a more pragmatic point? Here it is: what will anyone* on earth gain from the “secret” being given away a few days before its announcement? The only thing you’ll manage to do is spoil the ending.
* Not talking about the bloggers who will host the giveaway of course.