Various News

  • Reader Chris W. pointed me to this story about what Cédric Villani, aka the Lady Gaga of French mathematics, has been up to. I see that the report of the “Mission Villani” is now available (in French or English), and it’s front-page news at Le Monde. There’s also an AI for Humanity website now up, and plans for all sorts of events tomorrow (video here) involving Villani and French President Macron.

    For insight into what this means, you’ll need an AI expert. I’m curious to hear if there’s anything really surprising in the report.

  • Neil Turok and collaborators have a new proposal for how to understand the Big Bang, with the headline version “The universe before the bang and the universe after the bang may be viewed as a universe/anti-universe pair, created from nothing.” There’s a short summary here, a longer paper with details here.

    The papers make various claims about predictions; I’m curious to hear what cosmologists think of these. Much of what’s in the papers looks like fairly straightforward QFT calculation, which I’ll try to look at more carefully when I find time.

  • The LHC is now in a machine checkout phase, ready for resurrection around Easter Sunday, with the start of beam commissioning for the 2018 run.

Update: A wealth of analyses of physics papers is available at this website and this preprint.


14 Responses to Various News

  1. Anon says:

    I had noticed the paper by Turok and collaborators. I do HEP, not cosmology. However, I have a question/comment: what is the CKM phase in the “anti-universe”?

    There is something I am confused about. In the anti-universe, if the physicists there take their time “t” as positive, like we do, and say their atoms are made of “particles,” like we do, and write their Standard Model Lagrangian for sub-TeV-scale particle physics, will they end up with exactly the same Lagrangian we have, with the same CP-violating phase in the CKM matrix? From the way the introductory passages of the paper are written, I think they would.

    Then isn’t this just a duplication of the universe (rather than an anti-universe)? Or is it that their Lagrangian would have \delta_CKM reversed in sign? In that case their particle physics would be very different, and the following statement they make would probably not be true: “In other words, the density of particles of species j with momentum p and helicity h at time t after the bang equals the density of the corresponding anti-particle species with momentum p and helicity −h at time −t before the bang.”
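
    In symbols (my own transcription of that statement, not notation taken from the paper):

        n_j(p, h, t) = n_{\bar{j}}(p, -h, -t)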

    It looks to me like these are two duplicated universes rather than a universe/anti-universe pair, but I could be totally wrong.

  2. Fred P says:

    “For insight into what this means, you’ll need an AI expert.” I was trained as an AI expert, although I’ve rarely worked as one. That said, assuming that you’re referring to https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf , I don’t think knowing much about AI is required; it’s a policy document, with a lot of wishlist items.

    I find their description of “exposed” jobs (jobs that may be replaced – pages 83-84) interesting. As someone who works primarily in medical robotics, I think that they are missing the costs and benefits in their analysis. As an example, both waitstaff in restaurants and chemists in blood labs have been highly automatable for well over a decade. The former is rarely automated, whereas the latter is ubiquitous. The reason is that automation of the former doesn’t save a lot of money – and has significant costs (space, maintenance, less social interaction between the customers and the restaurant, etc.) – whereas automation of the latter saves large amounts of money and space.

  3. Tim says:

    It’s not two positive universes, although it appears that way to people who are limited to each universe. It’s a subtle and interesting point about the paper.

    The paper, if I understand it correctly, makes the point that in order to understand the Big Bang, you need to use a consistent set of characterizations (metric, wavefunctions, etc.) that handles BOTH the universe and the anti-universe, not just the late-time situation, where the “other” universe can be ignored. In particular, this involves a particular choice of the CPT-invariant vacuum, which is halfway between the asymptotic vacua of the universe and antiverse. All of the CPT things are actually reversed in the antiverse, including which direction time flows, what is a particle and what is an antiparticle, CP violation, etc. But the universe + antiverse combination is CPT invariant. This only matters when considering the Big Bang, but the consequences for both universes are huge, including eliminating the need for inflation and predicting that there is an undetectable (except by gravity) background of stable right-handed neutrinos that could explain dark matter.

    People who actually understand this stuff can correct me, but it seemed like a very interesting paper to me when I read it.

  4. GreatDoofus says:

    Anon & Tim,

    I think the term ‘anti-universe’ is misleading. The analysis is still at the level of QFT on a classical spacetime background. What they’re doing is ‘enhancing’ the symmetry of the FRW metric (which is already invariant under parity, spatial rotations, and translations) by making it invariant under reversal of conformal time. To do this you just need to extend the geometry of the universe backwards, past the Big Bang, as sketched below.
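
    Concretely, here is a sketch for the radiation-dominated case the paper treats (my notation, not necessarily theirs). In conformal time the FRW metric is

        ds^2 = a(\tau)^2 ( -d\tau^2 + \delta_{ij} dx^i dx^j ),    a(\tau) \propto \tau,

    so extending past the bang gives a(-\tau) = -a(\tau); the metric depends only on a^2 \propto \tau^2, and \tau \to -\tau becomes an isometry of the extended spacetime.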

    The extra symmetry simplifies the structure of QFTs and imposes some new constraints on neutrinos and dark matter. Also, it can explain the origin of scalar perturbations without the need for inflation.

    You’ll need to ask a cosmologist for more details, but I think testing this model should be straightforward.

  5. Pingback: A striking idea: a universe/anti-universe pair created from nothing at the Big Bang

  6. ay says:

    The predictions seem rather weak to me. What can we do with a non-interacting neutrino of such high mass? There are some restrictions on neutrino properties that, if verified, would not really single out the theory. That’s not to say I have any reason to think it’s wrong, just that deriving revolutionary new theories with a likelihood of testability seems difficult these days.

  7. Narad says:

    “I don’t think knowing much about AI is required; it’s a policy document, with a lot of wishlist items.”

    I’m with Fred P; I only did two years on a Ph.D. with one of Roger Schank’s recent grads (yes, case-based reasoning basically gave the world irritating voice-based phone trees at the end of the day), but two standouts were that the authors didn’t have enough AI to hand to use hyphenation and the line “[d]efining artificial intelligence is no easy matter” (re Schank, see his entry for the 2014 Edge ideas-to-be-retired question).

    How “explaining machine-learning algorithms has become a very urgent matter” is also a head-scratcher. It seems about as urgent as explaining how to interpret oscilloscope readings at the test points of 1970s color TVs once was. The table that Fred P pointed out also seems to posit Rosie the Robot.

  8. Pascal says:

    > How “explaining machine-learning algorithms has become a very urgent matter”

    Well, assume for instance that a computer has decided that someone must undergo a certain medical treatment. It would be nice to be able to explain this decision to the patient or to (human) doctors! But many modern AI algorithms cannot explain in a human-comprehensible way how they reached a given decision. That’s because the “learning” often takes place by tuning a large number of parameters of a complex model (think “deep learning”) on big data sets. Et voilà! We hope (and sometimes can prove) that because the model fits the training data, it will also perform well on future unseen examples.

    The “old AI” of the 1970s, of the “expert system” type, followed a more explicit rule-based approach, and was probably better at giving explanations. But it was outperformed by more recent statistical approaches. I suppose Villani would like to have the best of both worlds.
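
    To make “tuning a large number of parameters” concrete, here is a toy sketch (illustrative only, nothing like a real medical system): a tiny network fit to four data points by gradient descent. It ends up predicting correctly, but good luck reading a human-comprehensible explanation out of the learned weights.

        # Toy sketch only: a tiny neural net fit to four points (XOR) by
        # gradient descent. It learns the function, but the 33 learned
        # numbers below explain nothing to a patient or a doctor.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = np.array([[0.], [1.], [1.], [0.]])

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(5000):
            h = np.tanh(X @ W1 + b1)        # forward pass
            p = sigmoid(h @ W2 + b2)
            g = (p - y) / len(X)            # cross-entropy gradient at the output
            gh = (g @ W2.T) * (1 - h**2)    # backpropagate through tanh
            W2 -= h.T @ g;  b2 -= g.sum(0)  # gradient-descent step (rate 1)
            W1 -= X.T @ gh; b1 -= gh.sum(0)

        print(p.round(3).ravel())    # close to [0, 1, 1, 0]: it "works"
        print(W1, W2, sep="\n")      # ...and here is the "explanation"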

  9. srp says:

    Pascal is correct; DARPA, for example, has projects explicitly aimed at trying to make intelligible the predictions of these regression equations (that’s what they really are, high-dimensional non-parametric regressions extrapolating from training data). Another unpleasant discovery has been the remarkable vulnerability of these data-mined regressions to various types of spoofing, with an arms race taking place over the last few months between the spoofers and those trying to make the systems more robust.
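
    The spoofing vulnerability is visible even in the simplest possible case. A toy sketch (my own illustration, not any of the systems actually under attack): for logistic regression the gradient of the score with respect to the input is just the weight vector, so a tiny worst-case nudge to the input flips the answer.

        # Toy sketch, not a real attacked system: for logistic regression the
        # gradient of the class-1 score w.r.t. the input is the weight vector
        # w, so the worst-case bounded nudge is x - eps * sign(w) (FGSM-style).
        import numpy as np

        rng = np.random.default_rng(1)
        w = rng.normal(size=50)       # stand-in for a trained model's weights
        x = 0.1 * np.sign(w)          # an input the model is very confident about

        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        print("clean score:  ", sigmoid(w @ x))       # close to 1 (class 1)

        eps = 0.25                    # no coordinate moves by more than 0.25
        x_adv = x - eps * np.sign(w)
        print("spoofed score:", sigmoid(w @ x_adv))   # close to 0: answer flipped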

  10. Tim says:

    So there is a flaw in Pascal’s comment.

    Is there a need to explain the decision to a human, or simply a need to verify that it is right?

    Remember, checking that an answer is right is usually far easier than finding the answer (that asymmetry is the whole intuition behind P vs. NP).

    For safety issues like medical diagnoses, you actually only need to verify that the answer is right. You do *not* have to actually understand how it was found.
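
    A toy illustration of the asymmetry (just a sketch): verifying a proposed assignment for a Boolean formula is a single linear-time evaluation, while the obvious way of finding one enumerates exponentially many candidates.

        # Checking vs. finding, with 3-SAT as the example. Literal k > 0 means
        # "variable k is True"; k < 0 means "variable |k| is False".
        from itertools import product

        clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (-2, -3, 1)]

        def verify(assignment, clauses):
            # Checking: one linear-time pass over the formula.
            return all(any(assignment[abs(k)] == (k > 0) for k in clause)
                       for clause in clauses)

        def solve(clauses, n_vars):
            # Finding: brute force over all 2**n_vars candidate assignments.
            for bits in product([False, True], repeat=n_vars):
                candidate = dict(enumerate(bits, start=1))
                if verify(candidate, clauses):
                    return candidate
            return None

        answer = solve(clauses, 3)              # exponential in general
        print(answer, verify(answer, clauses))  # the check itself is instant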

  11. srp says:

    I haven’t had a chance to go over this yet, but the abstract is promising and the authors are very sharp:
    http://www.nber.org/papers/w24449

  12. Narad says:

    “DARPA, for example, has projects explicitly aimed at trying to make intelligible the predictions of these regression equations (that’s what they really are, high-dimensional non-parametric regressions extrapolating from training data)”

    Could you please give me a pointer? We had one fellow in the lab (who actually got the lone Ph.D.) who was working on CBR for, IIRC, targeted radiotherapy applications.

  13. srp says:

    Here’s a quote:

    “The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

    David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

    This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.”

    https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
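
    The keyword-highlighting idea can be caricatured in a few lines. Here is a crude occlusion test of my own, with a made-up stand-in classifier (the published method, LIME, is far more careful): delete one word at a time and see how much the score moves.

        # Toy caricature of explanation-by-highlighting: score each word by how
        # much the classifier's output drops when that word is removed.

        def classify(text):
            # Made-up stand-in for a trained black-box classifier; returns a
            # "suspiciousness" score in [0, 1].
            suspicious = {"urgent", "transfer", "wire"}
            words = text.lower().split()
            return sum(w in suspicious for w in words) / max(len(words), 1)

        def word_importance(text):
            words = text.split()
            base = classify(text)
            # Importance of word i = score drop when word i is occluded.
            return {w: base - classify(" ".join(words[:i] + words[i + 1:]))
                    for i, w in enumerate(words)}

        msg = "urgent please wire the transfer funds tonight"
        for word, drop in word_importance(msg).items():
            print(f"{word:10s} {drop:+.3f}")  # biggest drops get highlighted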

  14. David J. Littleboy says:

    Another Schank disciple here: I have an all-but-thesis from him from the mid-1980s.

    Re: srp’s link. The gestalt in AI in the ’70s was that gradient-descent-style search in large spaces could, in principle, only find local minima, so the idea that big data and “machine learning” are going to solve all mankind’s problems seems unlikely in the extreme. A joke illustrating this (if memory serves, in a Newell and/or Simon AI book at the time) had a picture of a robot climbing a tree, pointing to the sky, and screaming “I’m getting closer to the moon!” IMHO, this describes the current state of AI way too well.

Comments are closed.