Various Links

A random assortment of possibly interesting links:

  • New videos from the IHES include new interviews, and talks from the recent Ofer Gabber conference. If you want to know more about prisms than what you can get from the video here, I hear rumors that Bhargav Bhatt will be our Eilenberg lecturer this fall at Columbia.
  • Another talk at the Gabber conference was the latest on local (quantum) geometric Langlands from Gaitsgory. Also on this topic, there’s a July 4 preprint from Arakawa and Frenkel (advertised here).
  • John Horgan has an interview with Jim Holt, headlined Why Does Jim Holt Exist?. For reviews of Holt’s two most recent books, see here and here.
  • For two interesting blog posts from HEP experimenters about news from their field, see Jonathan Link at SciAm on neutrinos, and Tommaso Dorigo on the Higgs self-coupling. Dorigo’s posting is the more technical one, explaining a new CMS result bounding the Higgs self-coupling.

    The LHC experiments still seem to be a long ways away from actually measuring the Higgs self-coupling, but may be able to do so in future higher-luminosity stages of the LHC program. The Higgs remains the least understood part of the SM, responsible for most of the undetermined parameters of the theory. Any measurement of its self-interactions is an important goal.

    While it often seems that experimental results relevant to going beyond the Standard Model are inaccessible due to the necessity of higher energies, these two blog posts point to important open questions about the SM that are hard to study not because of fundamental limits on collision energies, but because of small event rates and high backgrounds.

  • The question recently came up here (see this posting) of how good the SUSY GUT coupling constant unification prediction is. At a recent summer school lecture, Ben Allanach says the prediction is off by 5 sigma, i.e. that if you try to predict the strong coupling at the Z mass this way, you get 0.129 +/- 0.002, whereas the measured value is 0.119 +/- 0.002. Someone should tell Frank Wilczek…
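    For orientation, here is the textbook one-loop version of that prediction, a rough sketch only (it puts all superpartners right at the weak scale and ignores two-loop running and GUT threshold corrections). Demanding that the three couplings meet at a single scale and eliminating $M_{GUT}$ and $\alpha_{GUT}$ gives

    $$
    \alpha_3^{-1}(M_Z)\;=\;\frac{b_1-b_3}{b_1-b_2}\,\alpha_2^{-1}(M_Z)\;-\;\frac{b_2-b_3}{b_1-b_2}\,\alpha_1^{-1}(M_Z),
    $$

    and with the one-loop MSSM coefficients $(b_1,b_2,b_3)=(33/5,\,1,\,-3)$ this is $\tfrac{12}{7}\,\alpha_2^{-1}-\tfrac{5}{7}\,\alpha_1^{-1}\approx 8.6$ for $\alpha_1^{-1}(M_Z)\approx 59.0$, $\alpha_2^{-1}(M_Z)\approx 29.6$, i.e. $\alpha_s(M_Z)\approx 0.116$. At this crude level the agreement looks fine, which is why the one-loop plots were so widely advertised; the 0.129 +/- 0.002 above comes from a more complete calculation (two-loop running plus a realistic superpartner spectrum).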

Update: For more on the dS vacua issue, see this blog posting by Ulf Danielsson. Danielsson refers to criticism of the landscape at my blog, but ignores the point I’ve often made that string theory is in just as much trouble if the advertised dS vacua don’t exist, since then it has no known way to connect to the real world and its positive CC. Somehow he sees this as a virtue, that “These are exciting times”, which I find mystifying.

He also claims that all is well, since:

By studying the mathematics of the theory we will find out what it predicts, and by comparing with observations we will learn whether it has anything to do with reality.

But the underlying problem here is that the mathematics of the theory is unknown. Read the responses above from two experts to the question of why this issue has not been conclusively determined. Neither of them presents any possibility of a conclusive determination. For both of them, this is about weighing the plausibility of various conjectures about possible solutions to unknown conjectural equations. At this point I seriously doubt it is possible for this to be resolved one way or the other.

Update: ICHEP 2018 is winding up, with talks available here. There was one plenary summary talk on “Formal Theory Developments”, by Tadashi Takayanagi. It begins by promoting string theory, then goes on to address the problem of how it relates to fundamental physics with:

However, please do not ask me questions like:
How to derive Standard Model from string theory?
Why do we live in 4 dimensions?
How to realize de-Sitter spacetimes in a well-reliable way?

String theory is still too infant to give complete answers to them.

The “complete” is intentionally misleading, since string theory now gives no answers at all to these questions. While celebrating the 50th anniversary of string theory this year, it seems that “string theory is too new an idea to evaluate” is the standard answer to anyone who points out its failures.


30 Responses to Various Links

  1. Anony says:

    You’re misreading the slides. Those numbers are for non-susy GUTs.

  2. Deane says:

    I like the face Ofer makes when the Langlands Program is mentioned, as well as what he says about Gaitsgory.

  3. Peter Woit says:

    Anony,
    No I’m not, you are. Interesting that the current SUSY error in gauge unification is of a size that you mistake for the non-SUSY error.

    The numbers I gave are from page 40, which explicitly says “in the MSSM”. If you look at the plot on page 58 you see the 5 sigma size problem, and that the numbers were generated by the SOFTSUSY package, described here https://softsusy.hepforge.org/
    From the plot the calculation uses benchmark point 40.2.5, from this paper
    https://arxiv.org/abs/1109.3859
    The numbers for this benchmark point look to be around current SUSY limits.

    It looks like anyone who wants to can run the software and generate their own numbers. I’m somewhat curious about how sensitive the numbers are to the assumed superpartner masses. Has the way the LHC has pushed up superpartner mass bounds also pushed up the error in the SUSY gauge unification prediction?
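    For what it’s worth, a back-of-the-envelope way to get a feel for that sensitivity is the textbook one-loop exercise below (my own crude sketch, with approximate inputs and a single common superpartner threshold M_SUSY; SOFTSUSY does the real two-loop calculation with a full spectrum, so don’t expect these numbers to match the slides):

    import math

    M_Z = 91.19            # GeV
    alpha_em_inv = 127.95  # 1/alpha_em(M_Z), approximate
    sin2_thetaW = 0.2312   # sin^2(theta_W) at M_Z, approximate

    # GUT-normalized inverse couplings at M_Z
    a1_inv = (3.0 / 5.0) * (1.0 - sin2_thetaW) * alpha_em_inv
    a2_inv = sin2_thetaW * alpha_em_inv

    # one-loop beta coefficients (b1, b2, b3)
    b_SM   = (41.0 / 10.0, -19.0 / 6.0, -7.0)
    b_MSSM = (33.0 / 5.0, 1.0, -3.0)

    def alpha_s_prediction(M_S):
        """Impose alpha_1 = alpha_2 at M_GUT, running with SM betas below the
        common superpartner scale M_S and MSSM betas above, then run alpha_3
        back down to M_Z to get the 'predicted' strong coupling."""
        tS = math.log(M_S / M_Z)
        a1_S = a1_inv - b_SM[0] / (2 * math.pi) * tS
        a2_S = a2_inv - b_SM[1] / (2 * math.pi) * tS
        # alpha_1(M_GUT) = alpha_2(M_GUT) fixes ln(M_GUT / M_S)
        tG = 2 * math.pi * (a1_S - a2_S) / (b_MSSM[0] - b_MSSM[1])
        aGUT_inv = a1_S - b_MSSM[0] / (2 * math.pi) * tG
        # run alpha_3 back down from the unification point to M_Z
        a3_inv = aGUT_inv + b_MSSM[2] / (2 * math.pi) * tG + b_SM[2] / (2 * math.pi) * tS
        return 1.0 / a3_inv, M_S * math.exp(tG)

    for M_S in (91.19, 1000.0, 2000.0, 10000.0):
        alpha_s, M_GUT = alpha_s_prediction(M_S)
        print(f"M_SUSY = {M_S:7.0f} GeV -> alpha_s(M_Z) = {alpha_s:.3f}, M_GUT = {M_GUT:.1e} GeV")

    The point is just to watch how the predicted alpha_s(M_Z) and M_GUT drift as the assumed superpartner scale is pushed up.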

  4. Sabine says:

    I’m pretty sure that Wilczek knows this (the problem in general, maybe not the exact numbers) because he points out in the interview that’s in my book that gauge coupling unification gets worse the higher the susy breaking scale has to be. And we’ve meanwhile passed the point beyond which it no longer works well.

    Of course the running might be more complicated, so that it all still fits; see e.g. this paper. So if you want to believe, you can still believe.

    Also, you could simply shrug it off because GUT and SUSY are two separate things, so who says that a supersymmetric extension of the standard model actually must have gauge coupling unification at one point?

  5. James Wells says:

    The current LHC limits on supersymmetry in no way have the slightest impact on the attractiveness or viability of gauge coupling unification in a supersymmetric context.

    All the talk of a 5 sigma error is decidedly irrelevant, unless you speak of a very special approach called “precision unification”, where the high-scale corrections are assumed to be very small for some non-generic reason. Zero GUT threshold corrections are not expected if you have a real GUT gauge group in field theory, as high-scale threshold corrections from the required high-mass rep remnants are formally of the same order in QFT as the weak-scale corrections.

    The whole question is how big you expect those corrections to be. In SUSY the corrections required for unification are, and continue to be, surprisingly tiny. (Perhaps too small even if you like lots of GUT reps.) In the SM, the corrections are huge, but not altogether impossible if there are lots of high-dimension GUT states hanging around.

    Lack of superpartners at the LHC is not qualitatively changing the discussion of gauge coupling unification by experts. The reason is that the dependences are logarithmic rather than, say, quadratic (as in “finetuning” discussions, which have qualitatively changed). One order of magnitude change in mass limits is not much to a logarithm.
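    For a rough sense of scale, with $M_{GUT}\sim 10^{16}$ GeV the full running spans $\ln(M_{GUT}/M_Z)\approx 32$, so pushing the superpartner masses up by an order of magnitude changes the relevant logarithm by only

    $$
    \frac{\ln(10\,m_{\rm SUSY})-\ln(m_{\rm SUSY})}{\ln(M_{GUT}/M_Z)}\;=\;\frac{\ln 10}{\ln(M_{GUT}/M_Z)}\;\approx\;\frac{2.3}{32}\;\approx\;7\%.
    $$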

    A rather complete discussion can be found at https://arxiv.org/abs/1502.01362
    where Sebastian Ellis and I put susy and non-susy gauge coupling unification expectations in context, and tried to give a visual picture of the issues.

  6. Peter Woit says:

    James Wells,
    Thanks! Perhaps you should be the one to tell Wilczek about this. In Sabine’s book he tells her that “the chances of unification start getting worse when susy partners haven’t shown up around 2 TeV”, giving this as an argument for why the LHC might still see superpartners, just beyond current bounds.

    There’s a long history of these gauge unification calculations being sold as a strong argument for SUSY, rarely making clear the weaknesses of such an argument.

  7. Chethan Krishnan says:

    Dear James Wells,

    “The current LHC limits on supersymmetry in no way have the slightest impact on the attractiveness or viability of gauge coupling unification in a supersymmetric context.”

    Is the following a fair summary?

    Non-SUSY GUTs need high threshold corrections for unification to happen, while SUSY GUTs were generally expected to need only tiny threshold corrections. This is generally presented as a good thing, for reasons which are not very clear to me.

    (Of course, if I am not mistaken, Allanach’s slide 40 now says even SUSY requires bigger threshold corrections, possibly in light of more precise measurements of low scale couplings).

    You seem to be saying that the smallness of the threshold correction is more or less an arbitrary benchmark for the attractiveness of a GUT.

    If that is the case, according to you, shouldn’t the SM be as good or bad as the MSSM as a candidate for unification? In fact, from the paper you linked, it looks like one has to (at least for some choices of GUT group) compensate for the threshold corrections in supersymmetric theories, potentially making them less attractive than the SM?

    On a general note-

    Do you know a reason why people (like Wilczek, or your refs [1-7]) seem to be implying that small threshold corrections in the MSSM are a good thing? (Please correct me if I am wrong.)

  8. From [Ellis-Wells 17, p. 23-25]:

    We have discussed how gauge coupling unification generally only improves in a supersymmetric theory, even at very high scales, with respect to the SM.

    And before:

    Thus, even the Standard Model up to the high scale is compatible with gauge coupling unification from this perspective, although the corrections becomes quite large in that case, and one has to ask whether nature would rather have large corrections at the GUT scale for a SM GUT or very small corrections for a low-scale SUSY GUT.

    […] the introduction of supersymmetry both reduces the needed threshold corrections at the high-scale and increases the GUT scale […]. This latter element is helpful since one generally requires that the GUT scale be above about 10^{15} GeV so that the X,Y GUT gauge bosons do not induce too large dimension-six operators that cause the proton to decay faster than current limits allow.

    Thus, exact gauge coupling unification is viable for intermediate values of supersymmetry breaking, which are also compatible with the Higgs boson mass constraint.

    […]
    One important experimental prediction of minimal supersymmetry, even for superpartners at very high scales, is the existence of a relatively light Higgs boson. This has been seen by the LHC. We have shown in this paper that even arbitrary high scales of super-partners allow the light Higgs boson, due to the required matching of the SM effective theory Higgs self-interaction coupling to gauge couplings in the supersymmetric theory.
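    (For orientation on where the $10^{15}$ GeV figure comes from: taking $\alpha_{GUT}\sim 1/25$, the usual dimensional estimate for proton decay mediated by the $X,Y$ gauge bosons is

    $$
    \tau_p\;\sim\;\frac{M_X^4}{\alpha_{GUT}^2\,m_p^5}\;\sim\;10^{31\pm 1}\ {\rm yr}\times\left(\frac{M_X}{10^{15}\ {\rm GeV}}\right)^{4},
    $$

    to be compared with the Super-Kamiokande limit of order $10^{34}$ yr on $p\to e^+\pi^0$; since the lifetime scales as the fourth power of $M_X$, each decade in $M_X$ buys four orders of magnitude.)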

  9. paddy says:

    A question from someone who hasn’t been in hep-th since the mid 70s. I have seen 3 things used to support SUSY over the years since: (1) naturalness, (2) gauge coupling unification at the GUT scale, and (3) providing dark matter WIMP candidates. Are these pillars not crumbling one by one?

  10. Anon says:

    In GUTs and SUSY GUTs there is also tuning/finetuning needed now to get the fermion masses (including light neutrino masses) and mixing right. It would be interesting to have a table of the tuning needed for popular SU(5), flipped SU(5) and SO(10) models from the fermion mass and coupling unification points of view. This is apart from the usual finetuning needed for the gauge hierarchy problem.

    Paddy — naturalness was the main reason to expect SUSY at the weak scale. Now it is clear that the MSSM needs to be finetuned to at least the 1% level for the gauge hierarchy problem. And so that pillar has crumbled. A 100 TeV collider could potentially probe finetuning to the 0.01% level.

    GUTs and WIMPs were more secondary reasons (though very remarkable ones), and there is no evidence for them either. But there is also no evidence for non-SUSY GUTs or WIMPs, or for a dark matter particle at all (other than through its gravitational effects).

  11. GoletaBeach says:

    “…. these two blog posts point to important open questions about the SM that are hard to study not because of fundamental limits on collision energies, but because of small event rates and high backgrounds.”

    A significant experimental thrust, which, well, “re”-started in the late 1970s with the IMB proton decay effort, has been devoted to pushing the envelope on small event rates and backgrounds.

    Naturally there was even earlier work by Reines in South Africa, a group of Indian physicists working at the Kolar Gold Fields, a Cornell/U Utah group that worked in mines in the Wasatch, and, most notably, Ray Davis and his group in South Dakota. There was even a low background group in the Manhattan Project, because initial samples of most of the interesting transactinides were at the nanogram level, meaning decay rates were low.

    In the US, though, the low background effort has struggled… follow-ons to Ray Davis have had a rocky road, with the NSF rejecting proposals and DUNE/Fermilab/DOE finally coming to the rescue; IMB was shut down and the effort transferred to Kamioka in Japan; a large 1980s effort was shunted to Gran Sasso in Italy (MACRO and later Borexino); and SNO, conceived at UC Irvine, took place in Canada. Daya Bay in China had a huge US contingent as well. And of course at the South Pole the US has supported a grand high-energy neutrino experiment, IceCube.

    I doubt that the science program of any nation on earth has been as internationally oriented as that of the US in low-background physics. Comparatively, the US-based low background program has grown weaker, and the consequence is that much of the expertise at the nitty-gritty of low background experimentation lives in Germany, Italy, Japan, and Canada. At this point, frankly, the US abilities aren’t as good.

    I’m not arguing for US nationalism, but when it comes to conceiving and implementing new low background experiments, the US can feel like the old Soviet Union in accelerator-based physics: lots of ideas, but modest ability to execute. Perhaps the US low background program is a bit out of balance, and needs a tuneup.

  12. Amitabh Lath says:

    The majority of LHC limits on SUSY assume R-parity conservation (RPC). Models with RPC give you large missing momentum in the detector, which gives CMS and ATLAS great sensitivity.

    SUSY limits for R-parity violating (RPV) signatures are much softer. There is no honking big missing momentum signature to trigger on. So maybe it’s R-parity conservation that is in trouble, not all of SUSY?

    Full disclosure: I’ve worked on searches for RPV SUSY.

  13. Anon says:

    Amitabh,

    Generic R-parity violating terms at the TeV scale in the MSSM will lead to rapid proton decay. So generally RPV SUSY should be at a much higher scale, such as around the GUT scale.

    Only a restricted set of RPV terms that do not lead to proton decay are permitted at the TeV/multi-TeV scale. Ruling these out is probably also important from the hierarchy problem point of view.
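    For reference, in the usual notation the renormalizable R-parity violating superpotential terms are

    $$
    W_{\not R_p}\;=\;\tfrac{1}{2}\lambda_{ijk}\,L_iL_j\bar e_k\;+\;\lambda'_{ijk}\,L_iQ_j\bar d_k\;+\;\tfrac{1}{2}\lambda''_{ijk}\,\bar u_i\bar d_j\bar d_k\;+\;\mu_i\,L_iH_u\,.
    $$

    Tree-level squark exchange connecting a $\lambda'$ vertex to a $\lambda''$ vertex mediates rapid proton decay (e.g. $p\to e^+\pi^0$), which is why products like $\lambda'\lambda''$ are bounded to be extremely small, while lepton-number violation alone ($\lambda$, $\lambda'$) or baryon-number violation alone ($\lambda''$) does not by itself destabilize the proton.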

  14. Amitabh Lath says:

    Anon: yes, proton decay does put limits on RPV, but you need both leptonic and baryonic RPV to make the proton decay. Nature could make just one of those non-zero, and make a large number of LHC searches blind to the SUSY signal. Also, the limit on the proton lifetime is 10^34 yrs, so both leptonic and baryonic RPV could be nonzero but small enough to evade that limit.

    Look, my point is that experimentalists put limits on specific models, with many caveats (requirements on leptons, heavy flavors, jets, missing momenta…). This then somehow goes from “they have ruled out some very basic minimal SUSY models” to “ruled out MSSM” which quickly morphs into “killed SUSY”.

    We know our blind spots too well to endorse a statement like that. Maybe some mickey-mouse version of SUSY is dead, and maybe SUSY as a source of Dark Matter is in jeopardy, but overall I would say LHC has done a thorough preliminary survey for the easiest signatures and is working on the next level of complexity.

  15. Paul says:

    Peter,

    I have to say, I love your blog. You do a great job keeping us informed and up-to-date. I apologize for this off-topic comment. No need to have this posted. I see you’ve been to Paris many times. I am planning to visit the city and would love to visit sites in Paris that have a physics theme. I believe there’s a Marie Curie museum, as well as (I think) a museum dedicated to Becquerel. What other places should I visit that have a physics theme or otherwise have some historical significance to physics?

    Also, have you been to other European cities of historical significance to physics? I’m sure there are many, as Europe was at the center of science for so many years. I was thinking Berlin, Germany, and Bern, Switzerland because of Einstein. Geneva, for obvious reasons, but I’m not sure what others.

    Again, sorry for the off-topic comment. You can delete the comment, but if you could shoot me a brief email whenever you have time, that would be great!

  16. Low Math, Meekly Interacting says:

    “Killed SUSY” is of course too expansive because it’s impossible. That’s the problem, right? What’s dead is the idea that SUSY solves what was perceived to be a mystery in need of a solution. Now we know the Higgs mass is “unnatural”. The “WIMP miracle” looks to be a hoax. There may not be a GUT scale for all anyone really knows, so trumpeting the idea that couplings unify (when squinting at a log-log plot) may be an attractive fantasy. What motivates the effort to falsify SUSY moving forward? As pointed out on this blog many, many times, the fact that there are literally an infinite number of models that have not been ruled out yet is not in itself a reason to go about crossing them off a comprehensive list, one by one. However, some very well promoted ideas have been thoroughly debunked. In fact, the volume of hype that has been deflated, stretching back literally decades, is itself quite staggering. Experimentalists have done a fantastic job, and you deserve far, far more lionization in the popular science press than you’re getting. You’ve hopefully saved a lot of people from further wasting their careers. I wouldn’t be so forgiving of those who are still openly tempting people to eschew all reasonable hope of empirical support in their lifetimes.

  17. Peter Woit says:

    Paul,
    That’s pretty off-topic, except for the fact that next week I’m leaving for a vacation trip that will start with a couple days in Paris (and then head off for the far, far North).

    Others may have better suggestions, but as a kid I was fascinated by the Palais de la Decouverte science museum. As an adult, I’d point to the Musée des Arts et Métiers, a museum more devoted to engineering than science, with a Foucault pendulum there and another at the Pantheon
    https://en.wikipedia.org/wiki/Conservatoire_national_des_arts_et_m%C3%A9tiers

    The small Curie museum is part of a university complex worth visiting. I vaguely remember something about Becquerel’s old lab in or next to the Jardin des Plantes.

    Sorry, but never been to Bern, and during the couple short visits I’ve made to Berlin, there were all sorts of things to see that seemed of more interest than tracking down what’s left of where Einstein would have been.

  18. Peter Woit says:

    LMMI,
    I think the best motivation for continued SUSY searches is just that they provide well-studied examples of things to look for that are not already ruled out. Up to experimentalists to use their judgment for whether these searches are worth the effort, or other ideas are more promising. Would be great if theorists could provide better ideas of what is promising, that’s very difficult right now.

  19. Anon says:

    Thanks Amitabh for your note. I guess like experimentalists’ blind spots there are also the theorists’ cover ups…. What I mean is: if you supersymmetrize the standard model, the most general model that emerges is R-parity violating and leads to proton decay. So it is already ruled out in most of the parameter space (unless some couplings are very small, which theorists don’t like). To save the situation, R-parity conservation is introduced, and that is not a happy situation, as it is an additional symmetry. But miraculously this provides a WIMP dark matter candidate — in other words, R-parity conservation can be motivated from the dark matter point of view rather than from saving SUSY from proton decay issues. So theorists are happy. But now experimentalists have ruled out R-parity conserving SUSY at the TeV scale. So now some R-parity violating terms can be considered, though not all terms together. But we lose the dark matter — so we are really saving SUSY at this point. And that is not really something that is theoretically pretty.

    In any case, the LHC has enough energy to also rule out the R-parity violating MSSM as a solution to the hierarchy problem, right? I guess that is what you are working on from the experimental end, and that is much needed.

    There are some models, such as the minimal supersymmetric left-right model, that automatically conserve R-parity, but where it has been shown that R-parity must necessarily be spontaneously broken, with the sneutrino picking up a vev.

  20. CWJ says:

    I just got back from Paris and second the Musée des Arts et Métiers. Somehow I missed it on previous visits; I don’t know how. The top floor in particular is devoted to scientific instruments.

    Another lovely museum with scientific instruments is the Museo Galileo in Florence, not far from the Uffizi. Excellent displays. Also includes Galileo’s finger in a jar. The rest of him is in the Basilica di Santa Croce across the city.

  21. lun says:

    Regarding the ICHEP 2018 talk, the speaker (Takayanagi) claims the Ryu-Takayanagi formula was proven by Lewkowycz and Maldacena in 2013, presumably referring to
    https://arxiv.org/abs/1304.4926
    Is this really counted as a “proof”? It reads (and is claimed by the authors) as a plausibility argument based on a worked example. Or am I misunderstanding it?

  22. Thomas Van Riet says:

    Dear Peter,

    You say ” the point I’ve often made that string theory is in just as much trouble if the advertised dS vacua don’t exist, since then it has no known way to connect to the real world and its positive CC”

    The point made in Danielsson’s blog (& my paper with him and the papers of Vafa and friends) is that this is wrong. Cosmological observations do not necessarily imply a positive cc. The bounds on the equation of state parameter allow the dark energy to be time-dependent, exactly as it would be for quintessence, for instance. Secondly, it could already be at the quantum field theory level that de Sitter isometries (i.e. the constancy of dark energy) are broken. This has been ongoing work of Polyakov, Mottola, Woodard, … and many more, since the eighties even. If that is correct then string theory had better not give us dS vacua. [There is even the possibility that departures from the FLRW Ansatz for the universe, i.e. spatial inhomogeneities, take away the need for dark energy to explain observations. But I have no good understanding of that.]

    Secondly, you say “But the underlying problem here is that the mathematics of the theory is unknown” and then a bit later you criticize people who actually point out that lots of the theory still needs to be developed to know things for sure. My point being that your criticism seems to lack some consistency. Either way string theory always loses for you. If we have a landscape of dS, then string theory is not even wrong; if we do not have a landscape, then string theory is wrong. You complain about the theory not being finished, but then you criticize those who say “give us more time”.

    Finally, it could be true that string theory is not sufficiently developed yet to think of phenomenology. However, there are corners of the theory where we do understand it and can study aspects of it. This is what drives string phenomenologists. Now some of us are claiming that those corners might show signs of a conspiracy against dS space, which, if correct, is certainly a deep insight.

  23. Lee Smolin says:

    Dear Thomas,

    Your point is well taken. But, it seems all of us who are curious about how nature unifies gravity and the quantum nonetheless face a very peculiar situation in which there are several ways to get partway to a quantum theory of gravity. Each has its defining results, which give its enthusiasts reason to work on it. And each has its stubborn obstacles, which justify its critics’ skepticism. It seems to me if we want to demand consistency and an objective, scientific attitude, we should all put our cards on the table and answer two questions:

    -What negative result would cause me to give up my currently favoured approach?

    -What positive result within another, possibly new, approach would motivate me to switch my efforts to developing it?

    In addition, I would argue that no matter which of the approaches we currently work on, we should all be able to answer questions like: Name five approaches to quantum gravity, and describe the key results, strong points and weak points of each.

    Thanks,

    Lee

  24. Peter Woit says:

    Thomas van Riet,

    To elaborate a bit on the “point I’ve often made” about metastable dS vacua: those pushing that program have a proposal for how they are going to get testable predictions. They will study the space of such vacua (nowadays, using “Big Data” techniques), find a way to put a measure on it, and make statistical predictions. I’ve often argued that this clearly can’t work (with a main reason being that they don’t have a theory well-defined enough to know what the space of vacua is, much less what the measure on it is), but that’s their proposal for how they are going to turn string theory into legitimate science that makes testable predictions.

    You can conjecture that these dS vacua don’t exist, so their program falls apart, but then what is your proposal for how you are going to get testable predictions out of string theory? The most straightforward implication of a conjecture that string theory is inconsistent with a positive CC is that it’s inconsistent with our well-tested standard cosmological model, so it’s wrong. But, you don’t like that implication, so you do what theorists always do when their model fails a test: try to evade this failure by making the model more complicated. Sure, you can add “quintessence” fields, then postulate that for unknown reasons they have exactly the properties needed to evade the very strong bounds that usually rule them out. This is the opposite of making a prediction; it’s just evading making one.

    This kind of thing is exactly what philosophers of science describe as a “degenerating research program”, it’s the way scientific ideas typically fail. While their idea is failing this way, the usual behavior of those trying to evade the failure is to argue “we just don’t understand the implications of our idea well enough, maybe someday we will find some new understanding that will turn failure into success”. This is what is going on with the idea of string theory as a unified theory.

    I don’t believe string theorists have any serious answer to Lee Smolin’s “What negative result would cause me to give up my currently favoured approach?” question above. You’ve shown that “no dS vacua in string theory” is a negative result that won’t cause you to give up; instead you move to quintessence fields. There are already very significant negative observational results about quintessence, and such results will very likely continue to be negative, but that clearly is going to have no effect on getting string theory researchers to give up on string theory unification.

  25. It’s not like outside of string cosmology there’d be no handwaving in cosmological model building.

    For instance, according to Buchert et al. 15 (CQG+) there are mistaken assumptions in Green-Wald 11, so that it remains open whether the cosmological constant is an artifact of neglecting backreaction of spatial inhomogeneities in the concordance model (as in Buchert 07).

    (Of course Danielsson-VanRiet 18 do mention this as one possibility, p. 27.)

  26. Shantanu says:

    Thomas: Just so that I understand, are you saying that if future precision observations are consistent with a cosmological constant, then string theory is ruled out?

  27. Thomas Van Riet says:

    @ Lee Smolin:

    Thanks for engaging.

    -What negative result would cause me to give up my currently favoured approach?

    A reason to give up string theory would be if it simply did not pass internal consistency checks. Say it said something wrong and inconsistent about black hole entropy (for instance, the coefficients of the log A contribution to the entropy).

    For all theories beyond the standard model we can say that whenever a prediction is not verified by experiment we can give it up as a model of the universe. It does not mean one should give it up as a model (I will still study Ising models although they do not describe real magnets). However, as with all theories of physics beyond the standard model, and especially ALL approaches to quantum gravity, it seems direct verification requires energy scales we cannot yet access. Nonetheless there could be effects from the highest energy scales manifesting themselves at low energies. This is why hierarchy puzzles are outstanding. They are a failure of decoupling UV physics from IR physics. So let’s see what our favorite UV complete theories of gravity and other forces can say about the cc problem. What already strikes me as amazing is that string theory is a framework where I can, in some corners, compute the vacuum energy (agreed, we are still fighting about the details of the computations, but in some corners we certainly agree). For quantum field theorists, that is a dream that cannot really be met.

    -What positive result within another, possibly new, approach would motivate me to switch my efforts to developing it?

    If the other approaches provide a framework to address questions in QFT that need UV completion. I already gave an example: computing vacuum energies in certain field theories. I could write a whole essay here, but I will not do it (maybe I will start blogging myself soon). For instance, one of the amazing features of the string theory framework is our ability to understand strongly coupled field theories better, whether through holography or some D-brane picture of the field theories. In any case it is clear that those people who call themselves “string theorists” are typically versatile and willing to look into various directions. A clear manifestation of that is, for instance, the whole “entanglement&geometry” program. As far as I can tell that program is not really string theory. Perhaps “string theory inspired” a bit.

  28. Thomas Van Riet says:

    @ Shantanu,

    Since we have not yet settled the debate about the existence of dS vacua, I cannot tell. Just to be clear: I think the vast majority of string phenomenologists thinks the evidence for dS vacua is there. I do not agree, but at the moment I am part of a minority. I guess we just need to work more…

  29. Lee Smolin says:

    Dear Thomas,

    Thanks very much. I am sympathetic to much of what you say. For example, I agree that the recent revival of the idea that space emerges from quantum entanglement is highly promising. But are you aware that this idea was raised by Penrose in the 1960’s and that it motivated his invention of spin networks? Or that it has been explored in a number of loop quantum gravity papers? (If you are interested, look at recent papers by Etera Livine, or papers by Daniele Oriti et al or myself, which derive forms of the Ryu-Takayanagi relation within LQG.) Indeed I would suggest that this growing area offers an opportunity for people exploring diverse approaches to learn about other approaches and perhaps even appreciate commonalities among them.

    I also appreciate your answer to my second question: “If the other approaches provide a framework to address questions in QFT that need UV completion” seems a clear criterion. Let me suggest a few papers from different approaches that might then interest you.

    -There is a claim that the asymptotic safety approach to QG forces a uv completion of the standard model, which relates its coupling constants and, in particular, determines the value of the top quark mass: https://arxiv.org/abs/1707.01107.

    -Computations in several approaches, including asymptotic safety, causal dynamical triangulations and LQG, suggest the uv completion is reached through a dimensional reduction, in which correlation functions in the uv limit scale as in a reduced spacetime dimension.

    Thanks,

    Lee

  30. tulpoeid says:

    Paul,

    Just noting that if you find yourself close to Bern (e.g. in Geneva) and you’re interested in such tourist attractions, the Einsteinhaus is definitely worth a small day trip.
