Short Items

A few short items:

  • The New Yorker has its own coverage here of the NSA GenCyber summer camp program for children, which was discussed here previously.
  • The LHC is about to start doing physics again at 13 TeV, with beam intensity slowly ramping up in coming days and weeks. You can follow what’s happening here.
  • Some filmmakers are planning an IMAX film about the LHC, more information available here.
  • Online media stories with skepticism about the multiverse continue to appear. The latest one is by Shannon Hall at Nautilus, with the title Is it Time to Embrace Unverified Theories? (I think it’s a general rule that the answer to all questions in titles is No). I like one of the comments on the piece, arguing that some speculative physics is best thought of not as science or religion, but as a game.
  • It’s behind a paywall and I haven’t seen the full story, but this week’s New Scientist has a piece entitled What if… Most of reality is hidden? A large amount of theoretical activity in recent decades has gone into figuring out how to hide new physics from any possible interaction with experiment. This seems to be another way of characterizing the problem of unfalsifiable theories discussed in the Nautilus article. Again, since it’s the title of an article, the answer should be No.
This entry was posted in Multiverse Mania. Bookmark the permalink.

21 Responses to Short Items

  1. Stephen Olsen says:

    “(I think it’s a general rule that the answer to all questions in titles is No.)”

    see: http://ccdb5fs.kek.jp/cgi-bin/img/allpdf?199007095

  2. Ben R says:

    The existence of that paper (“Is Hinchliffe’s rule true?” by Boris Peon) does mean that Hinchliffe’s rule (as defined in the abstract) can’t be true, but the abstract is wrong to say “Hinchliffe’s assertion is false […] only if it is true”. It’s false whether or not it’s true.

  3. unification says:

    What about theories that could be verified in principle, but where the technology to verify them does not exist, or is too expensive or too difficult? e.g. GUT-scale physics.

  4. Peter Woit says:

    unification,
    For the multiverse and string theory, the problem is not being falsifiable even in principle. For things like GUTs, which are much better defined, there may be legitimate predictions that are in principle testable (SUSY GUTs typically make proton decay predictions that could feasibly be tested).

    There is an obvious problem though with theories that claim some new physics at untestably high energy scales, when such theories don’t explain anything about observable physics. My theory that the universe is made up of very small green turtles, turtles which appear to be pointlike at distance scales below the Planck scale, is in principle testable. That doesn’t mean you should take it seriously as science…

  5. Michael says:

    In media, the principle is known as Betteridge’s law:

    https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines

    and a few amusing Twitter accounts apply it to the day’s news:

    https://twitter.com/yourtitlesucks
    https://twitter.com/betteridgeslaw

  6. Jesper says:

    Perhaps I don’t get it, but is it not a problem to assign a color to your turtles, when they are that small?

  7. Low Math, Meekly Interacting says:

    It’s a metaphor for a new force, quantum chromophorodynamics.

  8. bb says:

    I was under the impression that GR received experimental validation well before the 60s (Mercury’s perihelion, Eddington’s measurement of the bending of light, …).

  9. Carl says:

    The New Scientist piece is a mix of stuff. The reference to string theory is around extra dimensions wrapped too tightly to be probed with foreseeable technologies, rather than any “landscape” stuff.

    There’s also some talk about bubble universes beyond our horizon, and the usual quantum Many Worlds stuff. It’s all very muddled (as, in fact, is most of the larger article that this is a part of).

  10. CU Phil says:

    Jesper,

    The color is confined at longer distances for “in-shell” final states.

  11. Peter Woit says:

    Jesper,
    The physical laws at the turtle scale are quite different, and if you create a macroscopic state of many turtles, it should interact with visible light such as to have a greenish appearance. Note that this theory is quite falsifiable: if you get a lot of these turtles together, shine light on them, and it looks blue, the theory is falsified.

    Your comment does make clear that there remains lots of research to be done on turtle-TOE theory, it’s a wide-open field…

  12. Nobody says:

    “turtle-TOE theory”

    Call the media!

  13. Jon Lennox says:

    If a headline asks a question that isn’t yes-or-no, the answer is ¯\_(ツ)_/¯.

  14. Scott Church says:

    This and other threads of late have got me thinking… It’s no secret that we cannot test string/multiverse theory and will not be able to anytime soon. It’s doubtful we’ll be able to in our lifetimes. But… what if we could? Would anything be different? Suppose we actually did have a working solar-system-sized collider or similar device that allowed us to probe Planck-scale distances and energies. This is all speculative of course, but it’s reasonable to assume we’d find new particles and/or symmetries, a viable inflaton candidate perhaps, and more. What would we do with these discoveries? It seems to me that even if all this comes to pass, we’ll still be faced with a fundamental problem.

    Historically, successful theories have been able to make predictions because they offered fundamental paradigm shifts that led to them via their formalism. General relativity predicted things like gravitational lensing and the perihelion shift of Mercury’s orbit because it postulated a fundamentally different sort of space-time than classical mechanics did, and new field dynamics to go with it. The SM proposed actual underlying symmetries in the universe unlike those of its predecessors, and it was those that led to predictions of otherwise unexpected new particles, including the Higgs. All viable theories come in two parts: 1) A paradigm that proposes some new physical entity, state, or behavior; and 2) A mathematical framework that describes it. Successful theories are validated not by their ability to mathematically describe observation, but by their ability to verify their underlying paradigms within the frameworks they propose.

    So here we are… our amazing super-duper-Planck-collider has scraped the universe right up to Planck-scale energies and filled our databases with new particles, fields, etc. I suspect that regardless of what we find, all of that data will fit quite nicely with at least one possible string vacuum state. We will only have discovered which of the 10^500 theoretically possible string vacua describes our universe. This raises a dilemma. For string/M-theory, the new paradigm is string/brane objects embedded in extra compactified dimensions. The mathematical framework that formalizes it has proven to be powerful but, as we’ve seen, fluid enough to describe anything, including virtually anything our super-duper-Planck-collider manages to find. So the question is… how does the paradigm get verified here? How do we know that our “theory” of reality is an explanation of new physics in our universe and not simply an arcane but beautiful mathematical description of it?

    The only way around this dilemma I can imagine would be to verify a multiverse… that is, to somehow directly observe other regions with different string vacuum states. But that is impossible, in principle as well as in practice. Unless I’m missing something, it seems to me that regardless of how elegant or workable the string/M-theory formalism is or how well it describes our observations, if it does not provide a way to verify its underlying paradigm it is nothing more than an arcane but lovely mathematical framework. It describes everything, but explains nothing… even if we do have Planck-scale observations.

    Thoughts?

  15. David Metzler says:

    Peter, regarding the LHC, do you know of an easily accessible primer and/or glossary for the bewildering variety of acronyms and specialized accelerator physics notions that are tossed around, say at the morning meetings? I’ve become a bit of an LHC junky but there’s a lot that goes on that’s still over my head.

  16. Peter Woit says:

    David Metzler,
    Sorry, maybe someone else can help. Some of this one can figure out with a bit of googling, but for some of it you probably need an expert to explain the significance of what they’re doing. At least the bottom line is clear: they’re back in business as of last night, now doing physics with 86 bunches in the beam. They will be increasing the number of bunches in the beam, trying to get up to something like 2800 bunches.

  17. Bill K says:

    “Peter, regarding the LHC, do you know of an easily accessible primer and/or glossary for the bewildering variety of acronyms and specialized accelerator physics notions that are tossed around, say at the morning meetings?”

    Try http://lhccwg.web.cern.ch/lhccwg/Bibliography/UsefulAcronyms.htm

  18. Chris W. says:

    Scott Church,

    That sounds like a very clear characterization of what Peter has always considered the problem to be—at least, leaving aside the practical computational difficulties of matching one or more vacuum states with the data.

    The vacuum states become surrogates or “fall guys” for the underlying theory. This or that vacuum state may fail to match the data, but another will eventually work. The theory is never threatened, i.e., by construction, no data can contradict it.

    Some people may thoughtlessly consider that to be a virtue—the ultimate hope for any theory…

  19. srp says:

    Now for a thought experiment: A string theorist, before the Ultra-Hyper-Mega accelerator goes online, comes up with a particular vacuum configuration (call it Turtle Conformal Theory) that produces the SM and also makes predictions for Planck-scale phenomena. So long as that configuration were a) provably unique in producing the SM and b) unique in its predictions at the Planck scale, then the UHM would provide a real test and would tend to confirm belief in TCT if its new particles and fields matched what the theory predicted. But if either a) or b) were not true, then the theory would probably still have lots of doubters.

  20. Chris W. says:

    srp,
    The UHM would provide a real test of TCT, i.e., the particular vacuum configuration. It wouldn’t provide anything close to a test of the underlying theory, unless that theory implied that TCT was the only configuration that might work, i.e., that wasn’t already ruled out by established tests of the Standard Model.

    If the latter were the case, one could view TCT as string theory combined with a correspondence principle, which eliminates the other (10^500?) conceivable vacuum configurations. On that basis one could consider TCT as the physical theory being proposed, with string theory relegated to the status of a mathematical framework used in identifying this viable candidate. Of course, given the uniqueness of TCT, if it fails, the mathematical framework fails as the basis for a fundamental theory. (It may still have many other uses in mathematical physics.)

    Of course we’re nowhere close to arriving at this milestone. Also note that the idea of a multiverse becomes basically irrelevant—just so much untestable metaphysical window dressing.

    Aside from this, note that the initial arbitrariness of the vacuum configuration—before applying the correspondence principle—sounds a lot like the empirically determined and otherwise freely adjustable elements of the Standard Model we were hoping to eliminate. Achieving the milestone and passing the subsequent tests could be taken as evidence that the given arbitrariness is unavoidable, and reflects the choice of vacuum configuration.

    That would be very interesting, but again, we’re nowhere close to that point, and we don’t really have a good idea how to get there.

  21. Pingback: Pushback vs. non-evidence-based science? | Uncommon Descent

Comments are closed.