During the past year Erik Verlinde has made a splash (most recently in the New York Times) with his claim that the reason we don’t understand gravity is that it is an emergent phenomenon, an “entropic force”. Now he and Peter Freund are taking this further, with a claim that the Standard Model is also emergent. Freund has a new paper out on the arXiv entitled “Emergent Gauge Fields”, with this abstract:
Erik Verlinde’s proposal of the emergence of the gravitational force as an entropic force is extended to abelian and non-abelian gauge fields and to matter fields. This suggests a picture with no fundamental forces or forms of matter whatsoever.
Freund thanks Verlinde, who evidently has much the same idea:
I wish to thank Erik Verlinde for very helpful correspondence from which it is clear that he independently has also arrived at the conclusion that not only gravity, but all gauge fields should be emergent.
He remarks that this new theoretical idea is reminiscent of Geoffrey Chew’s failed “bootstrap program” of the sixties:
It is as if assuming certain forces and forms of matter to be fundamental is tantamount (in the sense of an effective theory) to assuming that there are no fundamental forces or forms of matter whatsoever, and everything is emergent. This latter picture in which nothing is fundamental is reminiscent of Chew’s bootstrap approach, the original breeding ground of string theory. Could it be that after all its mathematically and physically exquisite developments, string theory has returned to its birthplace?
It’s very unclear to me why this is supposed to be a good thing. In his Nobel prize lecture, David Gross, a student of Chew’s, explains:
I can remember the precise moment at which I was disillusioned with the bootstrap program. This was at the 1966 Rochester meeting, held at Berkeley. Francis Low, in the session following his talk, remarked that the bootstrap was less of a theory than a tautology…
How can everything be emergent? To the best of my understanding, emergent laws of nature are derived by considering the statistical mechanics of the fundamental laws of nature. If there aren’t any fundamental laws, you can’t have any other laws emerge from them.
So, they seem to be saying that the fundamental forces can be reduced to some other kind of thing from which they emerge. Ok … so what’s new? Isn’t this exactly what string theory has been hypothesising all along?
The preprint is just words. Compared to that, even Christoph Schiller’s approach is better. He claims to deduce quantum theory and the three gauge groups from emergence, in four dimensions…
Mockery by reference to that strange sometimes-very-funny Jonathan Foer book “Everything is Illuminated”.
full circle back to bootstrap theories?
oh my, this one is completing a much larger full circle: full circle back to Aristotle, where physical theories are suggested in vague analogies without the inhibiting rigour of mathematics.
Okay, so there is a deeper underlying reality for which our current understanding of the universe is simply a higher-level abstraction. That’s good to know, I guess. Are there any hints in the paper as to what the deeper, simpler reality could be? Or is this simply one of those mathematician anecdotes with the punchline “a solution exists!”?
Looks like philosophy isn’t far away now. I do have to chuckle a bit. Seems to me the whole physics community has no clue right now as to how to move on in understanding that silly thing called reality.
If you’re trying to find a more fundamental explanation for features of reality, isn’t it mandatory to declare the features you previously deemed fundamental to be “emergent”?
Verlinde’s proposal is somehow quite compelling. To say that the gravitational equations are emergent is not so different from saying that the Navier-Stokes equations are emergent. What one means is that Navier-Stokes are equations derived from the macroscopic behavior of ensembles of things described microscopically in quite a different way, subject to constraints like conservation of energy. The gravitational equations are not so different … at least this is what Verlinde is arguing.
The philosophizing ones are those who speak of ‘fundamental forces’ while not expressing clearly or precisely what that means. Whatever it means, the equations describing the motion of a fluid certainly do not describe fundamental forces/laws, and the suggestion at the bottom is that we need to think of and derive the gravitational equations from a similar point of view. What’s so terribly unreasonable about that?
Good news for Fritjof Capra and his Tao of Physics baloney.
Er, the Standard Model IS emergent. That’s what it means to be a renormalizable quantum field theory, and that’s the basic lesson of the renormalization group.
I am a bit fuzzy on what the term “emergent” means in a mathematical or physics context. There are some good clues in the previous responses but is there a precise definition?
The standard meaning of “emergence” in the physics context comes from Phil Anderson’s 1972 article “More is Different”. In the context of the Standard Model, “emergence” refers to the statement that the Standard Model is an effective field theory. This is not a free-floating claim; it arises from the renormalization group’s insights about what renormalization really means.
After reading the paper Plato’s theory of forms came to my mind, it very much reminded me of the long, involved arguments you had to read in high school philosophy class to determine whether they were valid or invalid.
Leaving the physics apart, I can’t even begin to see whether the paper is logically correct, but the thing that really bothers me about these emergent “models” is that I fail to see what the gain is: what do they have to offer besides what we already have from supersymmetry, strings, etc.?
The idea that gravity and gauge fields could be low-energy emergent degrees of freedom is FAR from new, and lots of work has already been done on this. See for example the work of Xiao-Gang Wen (http://dao.mit.edu/~wen/) or the book “The Universe in a Helium Droplet” by Grigory E. Volovik (where he shows that many features of the Standard Model can emerge as an effective low-energy model of superfluid 3He; it is of course just a toy model).
In condensed matter physics emergent gauge fields and relativistic symmetries are nothing new, and have been measured in many different systems. The mechanism Verlinde proposes might of course be new (although it’s very hand-waving right now), but the idea of an emergent Standard Model is not his.
@Paul Murray: No, emergence is not what string theory is about. You are confusing the fact that string theory (or another microscopic theory) should reduce to SM at long distances, with the concept of emergence (which is very different and very subtle).
Quote: “the thing that really bothers me about these emergent “models” is that I fail to see what is the gain, what do they have to offer besides what we already have from supersymmetry, strings, etc,?”
You have to understand the difference between emergence and reductionism. The prevalent idea in high energy physics is that everything (up to a limit) can be split apart and reduced into more and more fundamental particles (or strings). For example the proton is built out of three quarks, which again might be built out of more fundamental things.
The idea of emergence is VERY different. If a particle is not fundamental, it is not necessarily built out of smaller particles; it can be a COLLECTIVE degree of freedom. There are many examples of collective phenomena in physics (especially pioneered in condensed matter physics), and these “particles” cannot be described in a reductionist way. Quantum solids and liquids are microscopically non-relativistic and built out of many particles interacting electromagnetically. But they can have low energy degrees of freedom which are much more symmetric, i.e. have Lorentz invariance and emergent non-Abelian gauge bosons.
Thus the answer to your question is that what emergent models can offer is to describe emergent phenomena! IF gauge symmetries and gravity are not fundamental, but emergent collective degrees of freedom, they cannot be described by, say, strings. Whether the content of the Standard Model is fundamental or emergent is not known (though most people tend to prefer non-emergence).
So we had emergent gravity, we knew about Kaluza-Klein compactification of gravity….connect the dots r serious business rite? Why exactly did he have to communicate with Verlinde about this trivial observation?
Ok, thanks for the clarification, but my point is this: as I understand it, Verlinde’s proposal is based on the holographic principle, which is based on strings, so if you use that to show that the emergent degrees of freedom are not strings, isn’t that self-contradictory, or circular reasoning at best?
Oh I’m sorry, I misunderstood your question.
Well, I don’t think that the holographic principle is necessarily based on string theory; the original proposal by ‘t Hooft wasn’t, as far as I know (http://arxiv.org/abs/gr-qc/9310026). But it is, apparently, mostly implemented in stringy environments (I must say that I know very little to nothing about this).
With that said, it seems like Verlinde wants to, as you say, put string theory into his mechanism. This also sounds odd to me, but I need to study his papers more carefully before commenting on this.
Quoting H M:
> what emergent models can offer is to describe emergent phenomena!
Yes, that would be useful, but much of the literature does not address the issue of constructing workable emergent models for gravity. It seems to me that those that go furthest in this direction are people that do not usually talk much about emergence. For instance, you could argue that causal dynamical triangulations is currently the most successful way of actually calculating emergent properties of gravity.
FWIW, Erik Verlinde’s talk on his paper is online at
The word ’emergent’ is one of those horribly loaded words in physics, that means about ten different things depending on the context.
For instance, the way I interpret the word would be that the author of that paper claims that the gauge fields of the standard model and gravity (e.g. the W, Z, gluons, graviton etc.) are in fact not fundamental but instead collective degrees of freedom.
The classical reply to that is to point out that the Weinberg-Witten theorem precludes exactly this scenario.
Having said that, I don’t think that’s what Peter Freund is saying at all, and I have no idea what to make of his paper. I think perhaps I will now associate the word ’emergent’ with ‘opaque, excessively verbose papers’ and be satisfied to leave it at that.
It’s not clear that Weinberg-Witten applies here; certainly it doesn’t apply any more than it would to the original emergent gravity idea. We know that both gravity and gauge fields in AdS can emerge from the boundary theory, which has a stress tensor and conserved currents but in one lower dimension, so that the assumptions of Weinberg-Witten are evaded. (The gauge fields can either live on branes, or arise in the Kaluza-Klein manner exactly as Freund suggests.) What isn’t clear is how any of this concretely works in any setting other than AdS. Verlinde proposes that gravity in flat space emerges from lower-dimensional physics (I think), in a mysterious unspecified way. If this is true, it would seem almost mundane to have gauge fields also emerge in this way.
In the UCSB talk, Verlinde takes some time to argue that gravity and gauge theories should emerge from a new kind of matrix theory that reflects the entropic screen information. This talk is not as good as the Perimeter one (although it has slightly more substance) because he tries too hard to cater to the stringers in the audience.
In agreement with onymous, it seems to me that the Weinberg-Witten theorem, “Limits On Massless Particles,” Phys. Lett. B 96 (1980) 59-62, does not exclude the possibility that the graviton could be a collective degree of freedom in a combinatorial system such as causal dynamical triangulations.
The assertion in H M’s first two comments above, that non-relativistic condensed matter systems in the laboratory can have low energy collective degrees of freedom that behave like non-Abelian gauge bosons with Lorentz-invariant dynamics, and that such behaviour has been measured in many different systems, is completely new to me. It would be very interesting to see some references on this.
Mmm, terminology and loaded vocabulary! A system where dualities are present is a different type of ’emergence’. You wouldn’t really say that the gravitons in AdS/CFT were composite, would you? But you could say that the bulk was “emergent” from dynamics that lived on the boundary.
Likewise it’s kind of odd to call a lattice approach ’emergent’. For instance I consider lattice gauge theory the very definition of what QFT means. But then it’s not particularly standard to say that Lorentz invariance is ’emergent’, even though that’s explicitly what happens when you take the continuum limit of a typical QFT.
In any event, playing word games doesn’t increase one’s knowledge of the physics, instead they’re supposed to be defined by the context and the associated mathematics… Unfortunately it seems such a specific realization is considerably more difficult to produce, than currently in vogue publishing standards….
> its kind of odd to call a lattice approach ‘emergent’.
maybe so, but it’s not at all obvious that by throwing together a lot of little triangles and stirring you get something that looks on average like a smooth manifold. That’s why I think it is correct to use the word here.
> for instance I consider lattice gauge theory the very definition of what qft means.
As “cyd” pointed out above, the form of the lagrangian of the standard model could be said to be an emergent property: whatever nonrenormalizable stuff you started with at high energy would be enormously suppressed by the time you arrive at low energy. This is exactly the behavior that occurs in statistical systems with a second order phase transition, which is undeniably an example of emergence. And by the way, something similar could happen in the case of gravity, if it had a suitable fixed point…..
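The suppression cyd and I are talking about is just dimensional analysis at tree level. A toy numerical sketch (Python; the Planck-ish cutoff and the neglect of anomalous dimensions are illustrative assumptions, not a calculation from any particular model): the dimensionless coefficient of a dimension-Δ operator picks up a factor (E/Λ)^(Δ−4) when you flow from the cutoff Λ down to energy E.

```python
# Tree-level scaling of operator coefficients under the renormalization
# group: flowing from a cutoff Lambda down to energy E, the dimensionless
# coefficient of a dimension-Delta operator is rescaled by (E/Lambda)**(Delta-4).
Lambda_cutoff = 1e19   # GeV, a Planck-ish cutoff (illustrative choice)
E = 1e3                # GeV, roughly collider energies

for delta in (4, 6, 8):
    suppression = (E / Lambda_cutoff) ** (delta - 4)
    print(delta, suppression)
# dimension-4 (renormalizable) terms survive with factor 1.0;
# dimension-6 terms are suppressed by ~1e-32, dimension-8 by ~1e-64.
```

This is why whatever nonrenormalizable stuff you start with at high energy becomes invisible at low energy: only the renormalizable terms survive.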
Concerning the Weinberg-Witten theorem, it does not forbid “induced gravity”, which is another concrete example of “emergent gravity” (you integrate out matter fields in a background metric, and in so doing you generate an action for the metric). WW explicitly point this out in a footnote.
Dear Dr. Shor,
Wheeler was the first to state Freund and Verlinde’s idea clearly:
“The only Law, is the Law that there IS no Law”.
I liked Percacci’s two comments here, so I started a discussion thread at Physicsforums to follow up with perhaps a bit more detail:
What I find annoying about the use of the term emergence is that it dilutes its original meaning in both the natural and social sciences. In both, emergence means there are some laws which are fundamental, but that the resultant ‘sub’-laws come about as an interplay/relationship between the given fundamental laws. This is especially true when considering sociological studies of extralegal institutions and some such (I’m not sure what would be the analog in physics or chemistry… Friction perhaps?).
It’s like emergence is the new synergy or whatever. X_X
The claim that the gauge field is emergent is not new, I think. X. G. Wen has argued that gauge field degrees of freedom can be emergent from a string-net liquid. And I think that Verlinde’s entropic gravity is only classical gravity, until we get to know what the fundamental degrees of freedom are and how they behave, since we don’t have a notion of quantum thermodynamics, but rather quantum statistics.
Can anybody explain (in a nutshell) what bootstrap theory was ?
These two great physicists, although they did brilliant work in their youth, are rather past it when generating creative ideas that have some resemblance to reality.
Erik Verlinde is 48
Peter Freund is… 73
Let’s wait and see what the young post docs have to say.
That is an awful bit of ageism right there. Do we really want to ignore the contributions of anyone over the age of 40?
With reference to attempts to view an Abelian gauge field as emergent from something else, perhaps it is fair to note that in 1861 James Clerk Maxwell developed a theory of magnetic lines of force as rotating vortices subject to Newtonian dynamics, with tiny particles like ball bearings separating neighbouring vortices to eliminate friction, and that this mechanical analogy apparently helped him to develop his field equations published four years later. See
With reference to the one sentence in which Freund mentions supersymmetry, perhaps a pointer to the way forward might be the papers of Cioroianu, Diaconu, and Sararu, e.g. 0903.0259, in which they use the Batalin-Vilkovisky apparatus to show that any theory in 11 dimensions that nontrivially couples a massless symmetric tensor, Rarita-Schwinger field, and 3-form gauge field, without altering their degree of freedom counts, must automatically be supersymmetric and coincide with the CJS theory after field redefinitions, and thus also have vanishing cosmological constant in 11 dimensions, by Bautier, Deser, Henneaux, and Seminara.
In other words, if you could adapt the Neuberger Dirac operator to a Majorana spinor living on the links of causal dynamical triangulations in 11 dimensions, and also add a 3-form gauge field, perhaps simply as a number living on the 3-simplexes, and ensure that the system is nontrivially coupled in the infra-red, then the Cremmer-Julia-Scherk theory should automatically emerge at long distances.
If combining Neuberger and CDT turned out to require a combinatorial tangent space or vielbein, perhaps the appearance of combinatorial structures called oriented matroids in the Gelfand-MacPherson construction of combinatorial Pontrjagin classes, math/9204231, and MacPherson’s subsequent theory of combinatorial differential manifolds, might provide a clue, the connection being that Pontrjagin classes are involved in the subtleties of discretized spinor fields via the Atiyah-Singer index theorem. Various types of connection between oriented matroids and M-theory have already been considered by J. A. Nieto, e.g. hep-th/0603139.
I am not sure that I am understanding “emergence”. I find the discussion in this thread opaque. The group & topology theory discussion goes over my head but statistical mechanics is something that I am familiar with. So, is emergence like the stable vortices observed in turbulent flow – do they arise out of statistical averaging of complex multi-body interactions? If so, like turbulence, does this imply that our universe is in a quasi-equilibrium state? Are phase transitions possible?
I am not sure how any reporter can comment on this stuff. I have an ancient masters degree in Statistical Mechanics plus I worked my whole life in science related topics yet this discussion comes across as voodoo science to me. So what is emergence – in layman’s terms? Can someone tell me?
To a condensed matter physicist, “emergent” refers to properties of large collections of atoms that have no analogue for one or a few atoms.
To a high energy physicist, “emergent” refers to the large distance or low energy properties of a model system whose microscopic definition has a simple mathematical structure. For example the microscopic degrees of freedom of lattice gauge theory, which is used in computer simulations of quantum chromodynamics, consist of a 3 x 3 unitary matrix on each link of a hypercubic lattice, together with a spinor for each type of quark on each vertex of the lattice.
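To make that microscopic definition concrete, here is a minimal sketch in Python with numpy (the 4^4 lattice size, the dictionary representation, and the Haar-ish sampling recipe are my own illustrative choices, not taken from any actual simulation code): one SU(3) matrix per link of a hypercubic lattice.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    """Draw a roughly Haar-distributed SU(3) matrix: QR-decompose a complex
    Gaussian matrix, fix the column phases, then divide out the determinant."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    q = q * (d / np.abs(d))               # make the decomposition unique
    return q / np.linalg.det(q) ** (1 / 3)  # unit determinant

# A tiny 4^4 hypercubic lattice: one SU(3) matrix for each (site, direction) link.
L = 4
links = {
    (t, x, y, z, mu): random_su3()
    for t in range(L) for x in range(L) for y in range(L) for z in range(L)
    for mu in range(4)
}

U = links[(0, 0, 0, 0, 0)]
print(np.allclose(U.conj().T @ U, np.eye(3)))  # unitary: True
print(abs(np.linalg.det(U) - 1.0) < 1e-10)     # special: True
```

(Quark spinors on the vertices are omitted; the point is just that the microscopic degrees of freedom are this simple and this far removed from protons and pions.)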
As Roberto Percacci pointed out above, currently the most successful emergent model of gravity is the causal dynamical triangulations of Ambjorn, Jurkiewicz, and Loll. This is a completely combinatorial system, that consists simply of a list of a finite number of vertices, together with a list of selected quintuples of these vertices, each of which represents the four-dimensional analogue of a tetrahedron. If two selected quintuples have four vertices in common, they are the four-dimensional analogue of two tetrahedra that share a common triangular face. The list of selected quintuples is required to be such that for each quintuple in the list, and each set of four of the five vertices of that quintuple, there is exactly one other quintuple in the list that also has those four vertices. The list of selected quintuples then represents a triangulation of a four-dimensional curved space-time.
To see how the properties of a four-dimensional curved space-time can be coded into such a list of selected quintuples, consider the example of a two-dimensional curved surface. Randomly select a finite number of points sprinkled on the surface: these are your vertices. Associate to each vertex its Voronoi cell, which is the set of all the points of the surface that are closer to that vertex than to any other vertex. Then define a triple of vertices to be selected if and only if their Voronoi cells meet at a corner.
The list of the triples of vertices selected in this way gives a triangulation of the curved surface, and if you sprinkled your vertices finely enough, you can approximately study the curved surface simply by counting vertices, edges, and triangles. For example, the distance between two vertices is proportional to the number of edges in the shortest path of edges between those two vertices. And if you count the number of vertices up to some distance from a fixed vertex, and the number is smaller than you would find for a flat surface, then the surface is positively curved at that vertex, and conversely it is negatively curved if the number is larger.
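The flat case of this counting is easy to try in code. A sketch (pure Python; the axial-coordinate representation of the triangular lattice is my own choice): on a flat triangulated plane, the number of vertices within k edge-steps of a fixed vertex grows quadratically, as the centered hexagonal numbers 1 + 3k(k+1).

```python
from collections import deque

# Axial coordinates for a flat triangular lattice: every vertex has 6 neighbours.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def vertices_within(k):
    """Count the vertices within k edge-steps of the origin (BFS)."""
    seen = {(0, 0)}
    frontier = deque([((0, 0), 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        if d == k:
            continue
        for dx, dy in NEIGHBOURS:
            v = (x + dx, y + dy)
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return len(seen)

for k in range(1, 5):
    print(k, vertices_within(k))   # 1 7, 2 19, 3 37, 4 61: quadratic growth
```

A curved surface would show a count systematically below (positive curvature) or above (negative curvature) this flat benchmark.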
Ambjorn, Jurkiewicz, and Loll found Monte-Carlo rules for randomly modifying the list of selected quintuples in their simulations such that on average, the lists of selected quintuples they generate look at large distances something like triangulations of smoothly curved four-dimensional “spacetimes”. Thus these four-dimensional curved “spacetimes”, governed at large distances by something like Einstein’s classical action for the gravitational field, emerge from simple combinatorial rules for combining the four-dimensional analogue of tetrahedra in a semi-random manner.
In their latest article with Goerlich, they appear to hint at a connection with the “entropic gravity” ideas of Verlinde which, with the subsequent suggestion by Freund, are the topic of this posting, although they don’t actually cite Verlinde’s paper.
That response was a bit higher level than I hoped but I think I get the reference to Statistical Mechanics: you are averaging constrained micro/local topology paths in some sense to infer macro characteristics for the surface? Also, this makes me think of things like the Ising model but not sure if that is a valid comparison.
I feel somewhat uneasy about this discourse though; it seems to me inferring macro characteristics from the bottom up is really sensitive to initial conditions, boundary conditions, number of dimensions, grid size, etc., but this is several notches above what I usually deal with so I do not feel qualified to really debate this.
I still wonder how any reporter could understand this stuff – I suspect they would only react to the word “emergent” & they would interpret the term as it is used in common language.
I forgot to say: thanks for taking the time to respond – I do appreciate the effort even if it challenged me a little more than I anticipated. Also, I think that I sort of get the gist of the argument.
As Jack Lothian points out above, the meaning of the word “emergent” seems to be ambiguous.
What do you think this word means in physics?
I understand that, in some limit, the triangulated space-time looks like a smooth manifold. If I understand well, the list of selected quintuples gives you the curvature, right?
But where does the Einstein equation come from? Do you interpret the “randomness” as a stress-energy tensor (i.e. one side of the Einstein equation) that allows you to compute the curvature (i.e., roughly speaking, the other side of the Einstein equation)? If so, does this mean that various kinds of “randomness” (I am not very familiar with random objects but I believe the randomness is given by a probability law?) correspond, in the classical GR picture, to various matter distributions?
Martibal: “I understand that, in some limit, the triangulated space-time looks like a smooth manifold.”
No. Why should it? The CDT approach (which Chris was discussing) uses triangulation and lets the size of the triangles go to zero.
But it does not result in a smooth manifold model of spacetime.
There is no logical or mathematical reason that it should, and it does not.
The CDT authors (Loll Ambjorn et al) describe the spacetime model they get as “fractal-like” at small scale. The dimensionality, which is 4D at large, goes down continuously to around 1.9 or 2.1 at small scale. A similar spontaneous reduction of dimensionality occurs in spatial slices. Since they generate universes by computer simulation, they are able to study what happens by looking at concrete realizations. An approximately smooth deSitter spacetime emerges at large scale as a kind of path-integral “average” of many non-smooth cases.
Marcus: sorry I am not so familiar with the subject so my formulation may be not correct.
Chris Austin said: “Ambjorn, Jurkiewicz, and Loll found Monte-Carlo rules for randomly modifying the list of selected quintuples in their simulations such that on average, the lists of selected quintuples they generate look at large distances something like triangulations of smoothly curved four-dimensional “spacetimes”.”
This is what I meant by saying that in some limit (I should better say: at large scales) the triangulated manifold (my formulation was misleading; I should say: the fractal spacetime obtained as the limit of a triangulated spacetime with triangles of size zero) looks like a smooth manifold.
So let me try again :
I understand that at large scale, the fractal spacetimes look like a smooth manifold (e.g. de Sitter). In other terms, the smooth structure emerges from an appropriate average on fractals. And so does the curvature.
What I do not understand is: where does the Einstein equation come from? I mean, the Einstein equation determines the metric, but not the topology. So if you average in a different way over the same set of fractal spaces, can you obtain – at large scales – various metrics on the same topological manifold?
Here is a seminal paper by Tullio Regge (1961) “General Relativity without Coordinates”:
Here are Loll’s papers
Loll and friends don’t use all identical building blocks, but they almost do. They use two types. Morally it’s like all identical—in the 2D case imagine identical equilateral triangles.
You can tell the curvature at a point by counting how many triangles meet there. 6 ==> flat. 5 ==> positive curvature. 7 ==> negative curvature.
The Einstein-Hilbert action is essentially an integral of curvature. Regge inspires us to evaluate it by counting identical triangles and the points where they meet. If there is just the right proportion of triangles to points, then on average it is flat (every point is surrounded on average by 6).
The upshot is that we can evaluate the E-H action, or the Regge version of it, just by counting. That extends to higher dimensions, like 4D.
Loll team *could* do it with all identical equilateral 4-simplices. But for technical reasons they use two classes of identical non-equilateral 4-simplices.
It only makes the counting slightly more complicated. Main thing is that the E-H or Regge action is evaluated simply by counting different types of simplex—a linear combination of census-data. Comparing how many big ones versus how many little ones where the big ones meet. (I’m oversimplifying but it’s sort of like that).
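The 2D counting can be checked on the simplest closed example. A sketch (pure Python; using the icosahedron as a test surface is my own choice): where n equilateral triangles meet at a vertex, the Regge curvature is the deficit angle 2π − n·(π/3), and summing it over a closed surface must give 2π times the Euler characteristic, i.e. 4π for a sphere.

```python
import math

# Icosahedron: 12 vertices, 20 equilateral triangles, 5 meeting at each vertex.
triangles_at_vertex = [5] * 12

def deficit(n):
    """Regge/deficit-angle curvature at a vertex where n equilateral triangles meet."""
    return 2 * math.pi - n * (math.pi / 3)

total = sum(deficit(n) for n in triangles_at_vertex)
print(total / math.pi)   # ~4.0: total curvature 4*pi, i.e. 2*pi*chi for a sphere
```

Note that deficit(6) is zero, which is the “6 ==> flat” rule, and the whole curvature integral reduced to census data: a count of vertices weighted by how many triangles surround each.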
Then they introduce different shuffling moves where you have this huge assemblage of simplices (“triangles”) and at each point the computer looks to see if it wants to perform a re-arrangement, and *tosses a coin*. The decision to make the move or not is random.
Different probabilities for different moves. Re-arrangement of simplices can change the census data, change the average number meeting around whatever they meet around. Change the overall action.
So the probabilities of the various moves (some of which insert or remove simplices) are adjusted so it is a path integral Monte Carlo scheme.
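The accept/reject step is ordinary Metropolis sampling. A toy sketch (Python; a single real variable with a quadratic “action” stands in for the huge assemblage of simplices, and the proposal width and sample counts are arbitrary choices): propose a move, compute the change in action ΔS, accept with probability min(1, e^(−ΔS)).

```python
import math
import random

random.seed(1)

def action(x):
    return 0.5 * x * x          # toy stand-in for the Regge action

x, samples = 0.0, []
for step in range(200_000):
    x_new = x + random.uniform(-1.0, 1.0)           # propose a "move"
    dS = action(x_new) - action(x)
    if dS <= 0 or random.random() < math.exp(-dS):  # Metropolis accept/reject
        x = x_new
    if step >= 10_000:                              # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
print(round(mean, 1), round(var, 1))   # near 0.0 and 1.0: samples from exp(-S)
```

In CDT the “moves” rearrange simplices instead of nudging a number, and the action is the Regge census count, but the coin-tossing logic is the same.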
Individual spacetimes result from millions and millions of shuffles. The individual spacetimes are not deSitter. Only the average of many individual spacetimes is deSitter. An individual spacetime (after say a million shuffles at every location) can be studied. It will have “fractal-like” features at small scale. They are not strictly speaking fractals, people call them “fractal-like” at small scale. The dimensionality goes down at small scale.
Think of the radius to volume relation of a wad of crumpled paper.
At large scale the mass of paper within a certain radius goes up as the cube. But at small scale the mass of paper goes up as the square of the radius. The exponent of the radius-volume relation is one measure of dimension. But it is not exactly like crumpled paper; it is only remotely analogous.
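That radius-volume measure of dimension is easy to make concrete. A sketch (pure Python; the cubic lattice is just a stand-in for the simplicial complexes in the actual simulations): count points within radius r, and read the dimension off from how the count scales when r doubles.

```python
import math

def ball_count(r, dim):
    """Count integer lattice points within Euclidean distance r of the origin."""
    R = int(r)
    pts = range(-R, R + 1)
    if dim == 2:
        return sum(1 for x in pts for y in pts if x * x + y * y <= r * r)
    return sum(1 for x in pts for y in pts for z in pts
               if x * x + y * y + z * z <= r * r)

def dimension_estimate(r, dim):
    # V(r) ~ r^d  =>  d ~ log(V(2r) / V(r)) / log 2
    return math.log(ball_count(2 * r, dim) / ball_count(r, dim)) / math.log(2)

print(round(dimension_estimate(8, 2)))   # 2
print(round(dimension_estimate(8, 3)))   # 3
```

The CDT papers apply essentially this diagnostic (a spectral/Hausdorff dimension from volume scaling) to their generated spacetimes, and find it sliding from 4 at large scale down toward 2 at small scale.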
Here is the paper I found most helpful
For lots of intuitive analogies and color illustrations there is a Sci Am by Loll and friends
Here is a recent review or status report:
Reading Loll papers is better than reading what I say. They are very clear writers. But maybe this (imperfect and dashed off) will get you started.
I strongly disagree with your proposed meaning of emergence as applied to high energy physics. you say that the predominant picture in HEP is that of reductionism: if this is so, then what about e.g. QCD? how can reductionism be the guiding principle, when quarks are unobservable? is the effective theory of protons and pions not also “emergent”? you could argue that these degrees of freedom are not ‘collective’, but just look at the high temperature phases: emergent phenomena wherever you look. and this is hardcore, partially experimentally verified high energy physics.
and the specific criticism that string theory is in contradiction with emergence: the current main line of string research – as far as i know – is duality stuff. the boundary QFT of a bulk string theory – this satisfies every single criterion you set up for calling a phenomenon “emergent”.
in the end, without any concrete theory behind it it is just more words.
Marcus: thanks for the detailed answer !
I am a mathematician, with only a bit of training in general relativity, but I wanted to make a few comments/questions. I believe one of the first people to suggest that gravity is an emergent phenomenon was Sakharov, although he called it induced gravity: http://en.wikipedia.org/wiki/Induced_gravity. How does this relate to Verlinde’s ideas? Why is this not mentioned or discussed?
Of course the description of induced gravity in Wikipedia is unintelligible: are we starting with a pseudo-Riemannian 4-manifold or a 3-manifold? I am guessing a 3-manifold, since in a 4-manifold the gravitational field is already implicit and there is nothing to induce. (I suppose that’s why they say Riemannian.) I am also guessing that for Sakharov the universe needs to be globally hyperbolic.
Looking at the Freund paper– the way he generalizes entropic-gravity to entropic-everything is just using the old Kaluza-Klein dimensions-as-forces thing, right? But people don’t actually use the Kaluza-Klein dimensions in normal mainstream physics, right? Weren’t there problems with the theory, like stabilization? Wouldn’t an attempt to explain forces as Kaluza-Klein dimensions for entropic gravity run into the same problems that explaining forces as Kaluza-Klein dimensions for ordinary gravity did? Am I missing something?
This problem is treated from a Kaluza-Klein perspective in Chapter 10.3 “Flux compactifications: Moduli stabilization” of String Theory and M-Theory: A Modern Introduction by Becker, Becker, and Schwarz (that is, THE John Schwarz).
Whether you are a fan of string theory or not, this chapter is a good read.