The Defense Department has awarded a $7.5 million grant to Steve Awodey of CMU, Vladimir Voevodsky of the IAS and others to support research in Homotopy Type Theory and the foundations of mathematics. I had thought that getting DARPA 10 years ago to spend a few million on Geometric Langlands research was an impressive feat of redirection of military spending to abstract math, but this is even more so.

On some kind of opposite end of the spectrum of government spending on mathematics, there’s the story of the NSA, the largest employer of mathematicians in the US. Tom Leinster has an article in New Scientist about the ethical issues involved. More at the n-Category Café.

Seven years after Microsoft researchers discovered that one of the NIST random number generator standards had an NSA backdoor, and seven months after Snowden documents confirmed this (see here), the NIST has now removed the backdoored algorithm from its random number generator standards. As far as I know there has never been an explanation from the NIST of how the backdoored algorithm was made a standard, or of why anyone should trust any of the rest of their cryptographic standards at this point. Earlier in the year they issued a Draft report on their standards development process which explained nothing about what had happened. The language about the NSA in the report is:

NIST works closely with the NSA in the development of cryptographic standards. This is done because of the NSA’s vast expertise in cryptography and because NIST, under the Federal Information Security Management Act of 2002, is statutorily required to consult with the NSA on standards.

which seems to indicate they have no intention of doing anything about the problem of NSA backdoors.

On the Langlands front, for those who don’t read French, Vincent Lafforgue has produced an English translation of the summary version of his recent work on global Langlands for function fields (a result already proved for GL_n by his brother, but Vincent has a way of doing things without using the trace formula).

Langlands continues to add material to his web-site at the IAS. See for instance his long commentary on some history at the end of this section and his recent letter to Sarnak with commentary at the end of this section, where he gives his point of view on the state of the understanding of functoriality and reciprocity.

Sabine Hossenfelder has some interesting commentary on her experiences in the academic theoretical physics environment here.

Mark Hannam has some related commentary on academia at his new blog here.

I’m still trying to finish a first draft of notes about quantum mechanics and representation theory (available here). I recently came across some similar notes by Bernard, Laszlo and Renard which are quite good.

David Renard also has here some valuable notes on Dirac operators and representation theory.

Last Friday and Saturday at the University of South Carolina there was a Philosophy of the LHC Workshop, with talks here. Many of the talks were about the nature of the evidence for the Higgs and its statistical significance. James Wells talked about the supposed Higgs naturalness problem. He argues (see paper here) that you can’t base the problem on the Planck scale and quantum gravity since you don’t know what quantum gravity is (I strongly agree…). Where he loses me is with an argument that there must be lots more scalars out there than the Higgs (because string theory says so, or it just doesn’t seem right for there to only be one), and these cause a naturalness problem. Of course, once you have the naturalness problem, SUSY is invoked as the only known good way to solve it.

As for Falkowski’s suggestion in his blog that the BICEP team has admitted to making a mistake, Pryke says that “is totally false.” The BICEP team will not be revising or retracting its work, which it posted to the arXiv preprint server, Pryke says: “We stand by our paper.”

A quick addendum: Vincent Lafforgue claims a proof of the “automorphic to Galois” direction of the global Langlands correspondence – attaching Galois representations to automorphic representations, the “easy” direction – for any reductive group over a function field, while his brother Laurent has established both directions, but only for GL_n.

Re Dual_EC_DRBG:

* Designing cryptographic standards for the US government has always been half of the NSA’s mandate, and they do that work in collaboration with NIST. This has never been a secret. Usually it’s a good thing because they do employ a lot of talented people, and the US government does have an interest in using cryptography that can’t be broken by, say, China. If you kick out the NSA, someone still has to design crypto and no one can be trusted to do it. The only solution is to publish the algorithms and let hordes of academic cryptographers try to break them in the hope of getting a paper out of it. That works against NSA subterfuge too.

* There was something obviously screwy about Dual_EC_DRBG even before Shumow and Ferguson worked out the details. Cryptographic primitives that need quasi-random initialization constants normally use so-called “nothing-up-my-sleeve” numbers, such as digits of pi. Dual_EC_DRBG instead uses mysterious constants that were provided by the NSA without explanation. Given that it’s also based on elliptic curves (the EC in the name), it’s easy to guess that the mysterious constants are some sort of public key.

* I don’t think published Snowden documents confirmed that Dual_EC_DRBG was backdoored; they just said the NSA was trying to insert back doors into cryptographic primitives, and everyone (or just The Guardian?) assumed it was Dual_EC_DRBG because that was already under suspicion. I’m not completely sure about that, though. Regardless, the Shumow and Ferguson paper was not what caused the initial suspicion, later confirmed by Snowden—the paper was itself a confirmation of what anyone would suspect from looking at the standard.

* Dual_EC_DRBG is only breakable by whoever has the private key. It’s not breakable by, say, China. This was obviously a design goal. The thing is that it seriously constrains the design, which is why Dual_EC_DRBG is so obviously suspicious. Public keys are big (200+ bits for ECC, thousands or millions of bits for other public key systems). If an algorithm only uses nothing-up-my-sleeve numbers, there isn’t enough room in the design for a public key. You could still design in a subtle flaw not based on public-key cryptography, but anyone who noticed that flaw could exploit it, and there’s no way to predict when, say, China would notice it. You can sort of see how NSA decision-makers could justify Dual_EC_DRBG as within the NSA’s mandate, but a deliberate flaw that might leave US communications totally exposed to anyone is a different matter. So I’m not personally too worried about other NIST cryptographic standards.
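The contrast with nothing-up-my-sleeve constants is something anyone can check directly. As one real example, SHA-256’s initial hash words are defined as the first 32 bits of the fractional parts of the square roots of the first eight primes, so the constants can be rederived from a recipe that leaves the designer no room to hide anything:

```python
import math

# Rederive SHA-256's initial hash values: the first 32 bits of the
# fractional parts of sqrt(2), sqrt(3), sqrt(5), ..., sqrt(19).
# Because the recipe is public and rigid, the designer has no freedom
# to smuggle structure into the constants.
def nums_word(n: int) -> int:
    frac = math.sqrt(n) % 1.0
    return int(frac * 2**32)

primes = [2, 3, 5, 7, 11, 13, 17, 19]
h = [nums_word(q) for q in primes]
assert h[0] == 0x6A09E667 and h[1] == 0xBB67AE85  # as published in FIPS 180-4
```

Dual_EC_DRBG’s P and Q admit no derivation like this, which is exactly what made them suspicious.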
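The design constraint described above can be made concrete with a toy model. The sketch below uses the multiplicative group mod a prime instead of an elliptic curve, and every constant in it is invented for illustration; the real Dual_EC_DRBG also truncates its outputs, which this ignores. It shows why publishing a constant Q whose secret relation to P is known only to the designer gives that designer, and only that designer, a way to recover the generator’s internal state:

```python
import math

# Toy Dual_EC_DRBG-style generator. All constants are made up for
# illustration; this is not the real standard, which works over an
# elliptic curve and truncates the emitted x-coordinates.
p = 2**61 - 1            # a Mersenne prime; the group has order p - 1
P = 3                    # public base (analogue of the curve point P)
d = 0x1234567            # the designer's secret trapdoor exponent
assert math.gcd(d, p - 1) == 1   # needed so d is invertible mod p - 1
Q = pow(P, d, p)         # published constant Q = P^d; unlike a
                         # nothing-up-my-sleeve number, its relation
                         # to P is known only to whoever chose d

def step(state):
    """One round: emit an output, advance the internal state."""
    out = pow(Q, state, p)    # what the generator reveals
    nxt = pow(P, state, p)    # the next internal state
    return out, nxt

def backdoor(out):
    """Whoever knows d recovers the next state from a single output."""
    d_inv = pow(d, -1, p - 1)     # invert d modulo the group order
    return pow(out, d_inv, p)     # out = nxt^d, so out^(1/d) = nxt

state = 0xCAFEBABE
out, nxt = step(state)
assert backdoor(out) == nxt   # trapdoor holder now predicts all
                              # future output; nobody without d can
```

Without d, getting from the output back to the state is a discrete-logarithm-type problem; with d it is a single modular exponentiation. This is the sense in which the backdoor only fits into the design if the standard’s constants are allowed to go unexplained.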

Ben R.,

I don’t see any reason at all to have faith that NSA-introduced backdoors will only ever be usable by the NSA. Do you really think that everyone who works for or has worked for the NSA except for Edward Snowden is completely devoted to keeping their secrets secret and will never slip up on this?

The argument seems to be that it was obvious from the beginning that this was bad crypto, which makes the main concern here even stronger: how and why did the NIST introduce a bad crypto standard? They have done nothing at all to address this, and from all appearances the reason is that they feel they have a legal mandate to allow the NSA to use them for this purpose. That the specific mechanism for this involves mathematics and mathematicians makes this an issue that should be of concern to the math community.

I do think that the NSA can keep a private key secret. Private keys, unlike PowerPoint slides, never have to be seen by anyone. They can be stored in tamper-resistant hardware decryption modules kept under physical guard, and probably detached from any network most of the time.

I agree it’s worrying that this actually became a standard, and that it took so long to become a scandal despite the signs being there from the beginning. It means the open cryptographic community is not paying enough attention. I just don’t think it means that the NSA could be secretly subverting other standards without the open community noticing. You can look at existing standardized primitives and see that they aren’t suspicious in the way that Dual_EC_DRBG is, and there certainly won’t be another one like it in the future after this fiasco.

Blaming NIST is a little strange. It has released dozens of security standards, and its track record is extremely good. Nobody’s private information was compromised by Dual_EC_DRBG, as far as anyone knows. On the other hand, Google, Facebook, and Yahoo have exposed hundreds of millions of users via the Heartbleed bug. There is more reason to trust NIST for crypto than those companies.

Peter – As Simon Pepin Lehalleur points out, Vincent Lafforgue’s work goes far beyond reproving his brother’s results without the trace formula. His work applies to arbitrary reductive groups, which is a huge breakthrough since representation theory works in profoundly different ways outside of GL_n. In particular the “other direction” (Galois to automorphic) doesn’t even make sense in the same way as for GL_n, since representations now come clustered together in “L-packets” labeled by Langlands parameters (a labeling we finally have thanks to this work of V. Lafforgue). Together with the giant breakthroughs of Gaitsgory and collaborators in the geometric setting this is a really exciting time for representation theory over function fields!

(…not to mention the advances in the number field setting, which are far too numerous and fast-paced for me to keep up with even at a hearsay level).

Simon and David,

Thanks for the comments about the V. Lafforgue work, that’s very helpful.

Peter, I’m having trouble with the links for James Wells’ talks:

Your request could not be completed

Authorisation – The access to this page has been restricted by its owner and you are not authorised to view it

Are they publicly available?

Roger,

not everyone seems to agree.

“I had thought that getting DARPA 10 years ago to spend a few million on Geometric Langlands research was an impressive feat of redirection of military spending to abstract math, but this is even more so.”

I’d say this is backwards, with much more potential for applications for homotopy type theory, as compared to Langlands.

“I’d say this is backwards, with much more potential for applications for homotopy type theory, as compared to Langlands.”

I feel the same. Homotopy type theory seems (to me) to be the most exciting idea anyone has proposed in many years.

Apart from formal theorem proving, type theory has been used in the past to verify the correctness of protocols in large, complex, mission-critical systems where you really don’t want bugs, race conditions or infinite loops. I know of at least one very large DARPA grant of this nature which was awarded a few years ago. As far as I understand (I’m not an expert), it’s still not clear how constructive (hence useful) the new univalent approach is, but one can certainly see why DARPA would be interested in exploring this.

In applications, this approach to protocol verification needs highly qualified programmers who also know the theory, which might explain the large sum. Geometric Langlands is really interesting, but I don’t see the DARPA connection; then again, who knows, higher math is sometimes useful in surprising ways.
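A toy Lean sketch (Lean’s logic is itself a dependent type theory) gives the flavor of the guarantee at stake: a head function whose type demands a proof that the list is nonempty, so the empty-list bug cannot even be written down, let alone shipped.

```lean
-- A head function that requires a proof of nonemptiness: the type
-- checker rejects any call site that cannot supply one, so the
-- empty-list bug is ruled out at compile time rather than at runtime.
def safeHead {α : Type} : (xs : List α) → xs ≠ [] → α
  | [],     h => absurd rfl h
  | x :: _, _ => x

#eval safeHead [1, 2, 3] (by decide)  -- 1
```

Verifying a protocol works the same way in principle: the desired property (no deadlock, no double-spend, no buffer overrun) is encoded as a type, and a program only compiles together with a proof that it satisfies the property.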

Axion search in IEEE Spectrum

Thanks, Yatima. That’s a great article—kudos due to the author, Rachel Courtland.

Here’s a quick link of possible importance related to Bicep2, which you discussed a while ago:

http://resonaances.blogspot.co.uk/2014/05/is-bicep-wrong.html

Thanks George,

That looks like big news, but I know nothing beyond what Jester has. I’ll repeat my standard advice that Resonaances is a blog anyone seriously interested in HEP physics should be following. If the BICEP2 result does fall apart, it will be interesting to see if that gets anything like the often-fevered coverage of the initial claim. It’s certainly not going to be advertised as evidence against the multiverse…

See this instead: