During my recent vacation I visited my old friend Nathan Myhrvold, and got a tour of his company’s lab near Bellevue. At that time he told me about what he had been working on recently, which has now appeared on the arXiv here, and is the subject of news stories today at the New York Times and Science magazine.
I confess I’ve never worried much about killer asteroids, but am glad that someone is doing this. Nathan has always pursued a wide range of different interests, and killer asteroids have evidently been one of them. I first heard from him a year or two ago about how he had gotten interested in the question of how to model the observability of such objects. Such modeling affects choices to be made about how to optimally search for these things (space-based or earth-based telescopes? what kind?). He wrote a paper last year about this, which was published in March.
What Nathan told me when I saw him was that he had found significant problems with the modeling done by the NEOWISE/WISE group at NASA, and you can now judge for yourself by reading his paper. I’m very far from being able to understand the details of this story well enough to judge who’s right here. I do know Nathan well enough to know that his work on this deserves to be taken very seriously, and would bet that he has identified real problems. As noted in the comments there, the reaction from one of the NASA WISE people quoted at the end of the Science article wasn’t exactly confidence inspiring.
Update: There’s a press release about this out from NASA today, pretty much devoted to attacking Nathan’s work.
Update: For some specific criticisms of Nathan’s work, see the comment thread here. For a response to some of this from Nathan, see here.
Update: Scientific American has an article about this here.
Update: As pointed out by Wayt Gibbs in a comment, those interested in some discussion of the main point at issue might want to read the exchange here.
For what it’s worth, this was published today in Slate: http://www.slate.com/blogs/bad_astronomy/2016/05/27/nathan_myhrvold_claims_nasa_scientists_asteroid_calculations_are_all_wrong.html. Haven’t had a chance to read it closely yet, but at first blush it seems thorough, and presents both sides of the debate.
From the Slate article:
“This is the model Myhrvold claims is wrong. However, the asteroid diameters found by the NEOWISE team agree very well with previous satellite measurements. NEOWISE looked at many of the same asteroids as an earlier mission called IRAS—a couple of thousand of the same asteroids—and found that the diameters calculated for those asteroids matched the measurements using IRAS to about 10 percent. Not only that, measurements using a Japanese satellite called Akari also yielded similar results, and all three agree well with the radar and occultation measurements.”
They cite this paper in support of that claim:
“The mean values of the relative differences are 2.8%, 1.7%, and 7.5% for IRAS, AKARI, and WISE, respectively, and the standard deviations for each are 12–13%. We found that the size derived by AKARI is closer to that derived by IRAS or WISE. This is not a surprising result, as the beaming parameter adopted in the thermal model calculation in the AKARI catalog is calibrated with well-studied main belt asteroids larger than 90 km, whose size, shape, rotational properties, and albedo are known from different measurements, as mentioned above. In this respect, the diameters obtained by radiometric measurements based on I–A–W are reliable in a statistical sense, which are smoothed out and averaged over a limited number of observations, even though the sizes obtained by radiometric and other measurements can be discrepant by up to 30%.”
So the 1-sigma uncertainty is >10%, and from Figure 7 it looks like the (more usual to report) 2-sigma uncertainty is ~25%. I don’t see how Myhrvold’s point, that claiming “±10% accuracy” is way overconfident, is at all controversial given this data.
Also, this is apparently for the same asteroids that were used to calibrate this thermal model… Am I understanding that last part correctly? Don’t they need to estimate out-of-sample error in order to assess the skill of the model?
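The back-of-envelope reasoning above can be made explicit with a short script, using only the relative-difference statistics quoted from the I–A–W comparison paper (means of 2.8%, 1.7%, 7.5% and standard deviations of roughly 12–13%). The common 12.5% sigma and the assumption of roughly Gaussian scatter are mine, for illustration only:

```python
# Illustrative arithmetic: what a 2-sigma band looks like given the quoted
# mean offsets and a ~12.5% standard deviation (Gaussian scatter assumed).

stats = {
    "IRAS":  {"mean_offset": 0.028, "sigma": 0.125},
    "AKARI": {"mean_offset": 0.017, "sigma": 0.125},
    "WISE":  {"mean_offset": 0.075, "sigma": 0.125},
}

intervals = {}
for survey, s in stats.items():
    half_width = 2 * s["sigma"]  # 2-sigma half-width
    intervals[survey] = (s["mean_offset"] - half_width,
                         s["mean_offset"] + half_width)
    lo, hi = intervals[survey]
    print(f"{survey}: ~95% of relative differences within [{lo:+.1%}, {hi:+.1%}]")
```

Under these assumptions the WISE band comes out to roughly −17% to +32%, which is why a blanket “±10%” claim looks hard to square with the quoted scatter.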
“There’s a lot of heat here, clearly.”
Heat, for sure; clarity, not so much.
Since LSST and Ivezic have been brought into the mix by the SciAm article, the “NASA” team have this from April 12: http://arxiv.org/abs/1604.03444 Modeling the Performance of the LSST in Surveying the Near-Earth Object Population
which is from before this controversy hit the airwaves.
“We have performed a detailed survey simulation of the LSST performance with regards to near-Earth objects (NEOs) using the project’s current baseline cadence. The survey shows that if the project is able to reliably generate linked sets of positions and times (a so-called “tracklet”) using two detections of a given object per night and can link these tracklets into a track with a minimum of 3 tracklets covering more than a ~12 day length-of-arc, they would be able to discover 62% of the potentially hazardous asteroids (PHAs) larger than 140 m in its projected 10 year survey lifetime. This completeness would be reduced to 58% if the project is unable to implement a pipeline using the two detection cadence and has to adopt the four detection cadence more commonly used by existing NEO surveys. When including the estimated performance from the current operating surveys, assuming these would continue running until the start of LSST and perhaps beyond, the completeness fraction for PHAs larger than 140 m would be 73% for the baseline cadence and 71% for the four detection cadence. This result is lower than the estimate of Ivezic et al. (2007, 2014); however it is comparable to that of Jones et al. (2016) who show completeness ~70%. We also show that the traditional method of using absolute magnitude H < 22 mag as a proxy for the population with diameters larger than 140 m results in completeness values that are too high by ~5%. Our simulation makes use of the most recent models of the physical and orbital properties of the NEO populations, as well as simulated cadences and telescope performance estimates provided by LSST. We further show that while neither LSST nor a space-based IR platform like NEOCam individually can complete the survey for 140 m diameter NEOs, the combination of these systems can achieve that goal after a decade of observation.”
One thing that is quite clear: Myhrvold has gone very publicly on the record as stating that the NEOWISE team, et al. should perhaps someday be discussed in classrooms the way we discuss the Piltdown Man. Only the fraud is more obvious. Chutzpah doesn’t begin to describe it. If he’s wrong he’s going to find out what being an 8.4 Tesla shit magnet feels like, and may even deserve it.
The record of space telescopes operating in the infra-red for detecting asteroids seems unsurpassed. LSST, if it operates per specs, will also be awesome. If I were worried about asteroids that might collide with Earth, I’d want both instruments. If I had to choose one or the other — that is a very hard decision. I hope this goes back to being science and engineering and less the media-circus that it has become. I hope that Myhrvold gets some collaborators and he makes the breakthrough in NEO asteroid detection that he is looking for. Bye-bye for this thread.
I wasn’t aware he was on record to have made such a (perhaps) reckless pronouncement. But I can’t imagine that he was unaware of the consequences (if he was wrong). So he is either confident that he is not wrong, or he may not care about the consequences. I hear that ~650 mega-bucks can buy a lot of protection against all sorts of consequences.
I think that mega-bucks are the keys to clarification for these sorts of controversies. Chercher les mega-bucks if you have a dog in this fight. I don’t.
I have looked at three of the papers — Myhrvold’s arxiv preprint, and (as referenced by Myhrvold) Masiero et al., 2011, and Mainzer et al., 2011c — with some care, and I must say that I find Myhrvold’s criticism of the portrayal in Masiero et al. of “direct” diameter measurements (what Myhrvold calls ROS) as the result of the thermal model to be fully legitimate. (Myhrvold reiterates this point in his note, “A Simple Guide to NEOWISE Data Problems,” that Peter links to in one of his updates.) (Note, to avoid confusion, that Masiero et al. reference as Mainzer et al., 2011b that which Myhrvold references as Mainzer et al., 2011c.)
I haven’t read any of these papers in complete detail, but I have done more than skim them, and I have read through some sections quite closely. Myhrvold complains that 123 of the thermal-model diameter estimates (117 in Masiero et al., 2011) are actually ROS diameters from other sources. On the surface, including such “foreign” diameter measurements in a table of thermal-model estimates is bad science. It would be simply wrong of them to have replaced their model estimates with others’ measurements. It would be at a minimum questionable to have somehow baked these benchmark values into their analysis so that their thermal model reproduced those exact values.
Note that Mainzer et al., 2011c analyze thermal models for 50 objects specifically for which ROS diameters are known, and make explicit that they use and report these ROS diameters. Masiero et al. cite this paper, as I discuss below.
Masiero et al. do cite various ROS papers, but only in the introduction. I find no mention in their paper that they have used these ROS values in any way in their analysis. I find no mention that any of the diameter values in their Table 1 (“Example of Electronic Table of the Thermal Model Fits”), or in the electronic table of which it is an extract, are produced in any way differently than others of the same class. The ONLY clue I see anywhere in the paper itself that those which Myhrvold has identified as ROS values are different is that they are round kilometer values.
Masiero et al. cite a number of other papers, and I have only looked at the few of them that seemed potentially relevant to this issue. Of these, only Mainzer et al., 2011c proved relevant, discussing the explicit use of ROS diameters. Masiero et al. cite Mainzer et al. a number of times in different contexts. The most relevant citation is:
“We obtained our data used for fitting in a method identical to the one described in Mainzer et al. (2011b) and A. K. Mainzer et al. (2011, in preparation), though tuned for MBAs. Specifically …”
In their elaboration of how they collected their data (the “Specifically …” part), Masiero et al. say nothing about requiring any of the objects they analyze to have independent ROS diameters, nor do they mention collecting any ROS diameters. Although the citation of Mainzer et al. could be viewed as a hint that Masiero et al. use ROS diameters, nothing in my reading of Masiero et al. itself states or implies that they do, and in my reading of Masiero et al. and Mainzer et al. together, only the coincidence of the values of a subset of the diameters in Masiero et al. with those in Mainzer et al. indicates that Masiero et al. use ROS diameters.
There are a number of places in the paper, particularly in Sections 3 and 10, where the authors describe differing treatment of different classes of object. But, to reiterate what I said above, in no place do I see a statement, or even a hint, that objects for which ROS diameters are available were treated differently.
Table 1 gives thermal-model diameter estimates together with error bars. From text elsewhere in the paper, it would seem that these error bars have been generated with a model-driven Monte Carlo process (but I have not found a description of this process in the paper). However, comparing the error bars for the ROS diameters with, for example, those in Mainzer et al., we see that those error bars are, as are the diameters themselves, input values, rather than results of the model. Again, there is no suggestion in Masiero et al. that these error bars have been put in by hand.
(Myhrvold also complains that the right approach would have been to run the analysis completely blind to the ROS values, and then use the ROS values to measure the errors due to the thermal modelling. I think he’s absolutely right about this. Not to have done so is, in my view, poor science.)
Let me note that I have no dog in this fight. Also, you don’t have to trust that I’m not making stuff up here or spinning things in some artful way. Look at the papers yourselves. For these ROS values to be presented as results of the thermal modelling with no discussion — or even hint — of where they actually came from is either willfully misleading or incredibly sloppy.
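The blind-validation procedure endorsed above (fit the thermal model without looking at the ROS diameters, then use the held-out ROS values to measure the model’s error) can be sketched as follows. All object labels and diameter values here are invented for illustration; none come from the papers:

```python
# A minimal sketch of out-of-sample validation against held-out ROS diameters.
# Every number below is synthetic.
import statistics

# Held-out "direct" (ROS) diameters in km -- hypothetical benchmark values.
ros_diameters = {"obj_a": 138.0, "obj_b": 52.0, "obj_c": 210.0}

# Diameters the thermal model would produce for the same objects,
# computed blind to the ROS values -- also hypothetical.
model_diameters = {"obj_a": 151.0, "obj_b": 47.5, "obj_c": 198.0}

# Relative errors of the model, measured against the held-out benchmarks.
rel_errors = [
    (model_diameters[k] - ros_diameters[k]) / ros_diameters[k]
    for k in ros_diameters
]

mean_err = statistics.mean(rel_errors)
sd_err = statistics.stdev(rel_errors)
print(f"mean relative error: {mean_err:+.1%}, scatter: {sd_err:.1%}")
```

The point of the procedure is that the benchmark values never enter the fit, so the resulting mean error and scatter are honest estimates of the model’s skill; mixing the benchmarks into the reported results, as alleged above, makes that estimate impossible.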
Sorry, I intrude again. I have to disagree with Astronomical Cowardice.
Table 1 has the caption: “Table 1. Spherical NEATM models were created for 50 objects ranging from NEOs to irregular satellites in order to characterize the accuracy of diameter and albedo errors derived from NEOWISE data. The diameters and H values used to fit each object from the respective source data (either radar, spacecraft imaging, or occultation) are given….”
Column #1 of the table is the object ID.
Column #2 of the table is the diameter in km and Column #3 is the H value from the (radar, spacecraft imaging or occultation data).
Column #8 is a pointer to the reference to the source of column #2 and #3.
Thus Object #47, with a diameter of 138 +/- 13 km and H of 7.8, comes from reference d, which is Shevchenko, V., & Tedesco, E., 2006, Icarus, 184, 211. Via scholar.google.com one can find this on researchgate.net, where you find that Object #47 is the asteroid Aglaja, and it is ascribed a diameter of 138.0 km. You can read the paper and see the details of how that result of 138.0 km was reached (starting with an occultation measurement in 1984).
It is very clear from the text of the preprint that it describes a calibration of NEOWISE, and that after calibration, with Monte Carlo modeling of the measurement uncertainties, NEOWISE gives a diameter of 138 +/- 13 km.
Perhaps the preprint authors should have had two columns, one with the 138.0 km from reference d, and the second with the NEOWISE answer of 138 +/- 13 km. On the other hand, presumably anyone intimately familiar with the literature (or anyone who bothered to read the preprint carefully enough) would know what the condensed table meant.
Yes, the format the authors used will trip up the unwary. On the other hand, it allows their table to neatly fit the width of the page.
My apologies for breaking my promise to exit this thread. I have no stake in these games; I am not a professional scientist, nor affiliated with any science research institution (I am an employee of a telecom company).
@Anonyrat, your commentary has been very insightful and helpful. Thank you. It seems that the main obstacle to resolving the controversy here is the communication barrier between the scientists who share a common knowledge base related to asteroid astronomy and those who don’t, despite being technically competent in other ways. Anyone who has worked in a narrow field would probably agree that such communication barriers are ubiquitous. It’s often simply impossible to include all the details of this background information when writing an isolated paper. In fact, including such details can sometimes be seen as tedious repetition by the intended expert audience.
@ Igor Khavkine
You make the excellent relevant point “… that the main obstacle to resolving the controversy here is the communication barrier between the scientists who share a common knowledge base related to asteroid astronomy and those who don’t, despite being technically competent in other ways.” I belong to the latter category, my competency being nuclear physics (LANL, retired). This is why, I believe, such a heated controversy here (in the comments section of the personal blog of an expert in “Quantum Theory, Groups and Representations”) seems out of place.
In response to Anonyrat’s comment, Anonyrat has mixed up two of the papers I discussed.
I criticize Masiero et al.; Anonyrat defends Mainzer et al., the paper I did not criticize.
I apologize for leaving the door open to confusion. I should have been more detailed with paper titles and links in my original comment. Let me correct that now.
I discuss three papers (whose lead authors all begin with the letter ‘M’):
The first I refer to as “Myhrvold’s arxiv preprint.” Its title is:
“Asteroid thermal modeling in the presence of reflected sunlight with an application to WISE/NEOWISE observational data”
and it can be found here:
The second I refer to as “Masiero et al., 2011” (and in abbreviated form as “Masiero et al.”). Its title is:
“MAIN BELT ASTEROIDS WITH WISE/NEOWISE. I. PRELIMINARY ALBEDOS AND DIAMETERS”
and it can be found here:
This is the paper I criticize. For completeness, it is cited in Myhrvold’s arxiv preprint as:
“Masiero, J.R., Mainzer, A., Grav, T., Bauer, J.M., Cutri, R.M., Dailey, J., Eisenhardt, P.R.M., McMillan, R.S., Spahr, T.B., Skrutskie, M.F., Tholen, D., Walker, R.G., Wright, E.L., DeBaun, E., Elsbury, D., Gautier IV, T., Gomillion, S., Wilkins, A., 2011. Main Belt Asteroids with WISE/NEOWISE. I. Preliminary Albedos and Diameters. Astrophys. J. 741, 68. doi:10.1088/0004-637X/741/2/68”
The third I refer to as “Mainzer et al., 2011c” (and in abbreviated form as “Mainzer et al.”). Its title is:
“THERMAL MODEL CALIBRATION FOR MINOR PLANETS OBSERVED WITH WIDE-FIELD INFRARED SURVEY EXPLORER/NEOWISE”
and it can be found here:
I did not criticize this paper. Anonyrat defends its arxiv preprint version (see below). For completeness, this paper is cited in Myhrvold’s arxiv preprint as:
“Mainzer, A., Grav, T., Masiero, J., Bauer, J.M., Wright, E.L., Cutri, R.M., McMillan, R.S., Cohen, M., Ressler, M., Eisenhardt, P.R.M., 2011c. Thermal Model Calibration for Minor Planets Observed With Wide-Field Infrared Survey Explorer/Neowise. Astrophys. J. 736, 100. doi:10.1088/0004-637X/736/2/100”
and in Masiero et al. as:
“Mainzer, A. K., Grav, T., Masiero, J., et al. 2011b, ApJ, 736, 100”
Anonyrat discusses in his comment:
This paper is titled
“Thermal Model Calibration for Minor Planets Observed with WISE/NEOWISE”
and, as indicated above, is the arxiv preprint version of Mainzer et al.
In my original comment I say of this paper:
“Note that Mainzer et al., 2011c analyze thermal models for 50 objects specifically for which ROS diameters are known, and make explicit that they use and report these ROS diameters. Masiero et al. cite this paper, as I discuss below.”
That is, I make a point of acknowledging that I find nothing misleading about the use and reporting of ROS diameters in Mainzer et al.
Sorry to be so pedantic about all of these references, but once burnt, twice shy.
To recapitulate, I criticize Masiero et al., and not Mainzer et al. Anonyrat has mixed the two up. I stand by my original conclusion that Masiero et al. “is either willfully misleading or incredibly sloppy.”
But they cite the Mainzer articles; this is such a non-issue. They were not hiding that these values were from other sources: the values are identical to those in the other sources. All it takes is asking the authors, and they replied that yes, the values are from other sources.
My apologies if I’ve confused the papers.
IMO, having seen this kind of thing before, the problem is the lack of trust on either side. We can imagine what each side thought of the other, but let’s not go there.
Now that the dispute has gone to the press, attitudes would likely have hardened, and it is going to be difficult to restore trust. In the interest of science, I urge both sides to stop talking to the press, give it a month’s break to cool tempers down, and then dedicate a couple of days of face-to-face conference to walk through all the material. Perhaps a mutually respected person should be present, not to decide technical issues but to keep the proceedings parliamentary. Surely doing something like this is within the resource budgets of both sides.
I too have read the key papers Myhrvold cites, in his “A Simple Guide to NEOWISE Data Problems”.
“Main Belt Asteroids with WISE/NEOWISE. I. Preliminary Albedos and Diameters” (Masiero+ 2011; doi:10.1088/0004-637X/741/2/68) is not as clear as it could be, re the extent to which the diameters in Table 1 are derived/sourced entirely/solely from “thermal model fits”. On its face this seems to support what both Myhrvold and Astronomical Cowardice write (more later).
“NEOWISE Observations of Near Earth Objects: Preliminary Results” (Mainzer+ 2011; doi:10.1088/0004-637X/743/2/156) is quite different, as Anonyrat points out. Myhrvold writes: “There is no explanation in those papers that diameters were copied, let alone a justification for why it was done.” That may be so for doi:10.1088/0004-637X/741/2/68; it is clearly not, for doi:10.1088/0004-637X/743/2/156. Here is an extract from the caption to Table 1 of the latter: “Two calibration papers (Mainzer et al. 2011b, 2011c) discuss the absolute calibration of the WISE data for small solar system bodies and should be consulted before comparing with data derived from other sources.” Note that “Mainzer et al. 2011b” is “Thermal Model Calibration for Minor Planets Observed with Wide-Field Infrared Survey Explorer/NEOWISE”, doi:10.1088/0004-637X/736/2/100
Yes, it may be true that, as Myhrvold writes, “What Mainzer and coworkers mean by “calibration” is debatable, but they appear to mean more in the sense of validation — that when they calculate their color correction (to adjust to the properties of the WISE sensor), they get roughly the same observed IR flux from the test objects using the ROS diameters as they see from the asteroids they represent.”
However, as I have noted before, astronomers often use conventions, terms, methods, etc., which *seem* to be the same as those in other fields of science – and indeed they often are – but may not be (they are sometimes not even consistent among themselves; “flux” and “extinction” or “attenuation” are notorious examples). Naturally, this can create confusion and misunderstanding when an outsider tries to fully understand a published astronomy paper. Of the three alternatives Myhrvold gives (colossal error, fraud, something else), I think the last is by far the most likely … he simply didn’t fully understand what he read.
Returning to “Main Belt Asteroids with WISE/NEOWISE. I. Preliminary Albedos and Diameters”. Astronomical Cowardice writes “For these ROS values to be presented as results of the thermal modelling with no discussion — or even hint — of where they actually came from is either willfully misleading or incredibly sloppy.” Yes, to a complete outsider this seems a reasonable conclusion. Sadly, I think it’s neither; this sort of thing can, IMHO, be found in thousands of published astronomy papers. And many a peer reviewer would not even notice.
I’ll end with a hobby-horse: early in his “Simple Guide”, Myhrvold cites “Combining asteroid models derived by lightcurve inversion with asteroidal occultation silhouettes” (Ďurech+ 2011) and “Asteroid albedos deduced from stellar occultations” (Shevchenko&Tedesco 2006). Unless you have a cool $71.90 to spare (or a backdoor), you cannot read either. If one wants to get outraged by anything, high up on the list, surely, is why so many Icarus papers are behind paywalls? Personally, I think it’s particularly outrageous in these two papers: without the, freely given, observations of many amateur astronomers, neither paper could have been written.
Combining asteroid models derived by lightcurve inversion with asteroidal occultation silhouettes
Asteroid albedos deduced from stellar occultations
Thanks for digging up these two, Anonyrat.
The second is (almost) certainly what was actually published in Icarus; the first not quite so. In the arXiv abstract (http://arxiv.org/abs/1104.4227), the Comment is “33 pages, 45 figures, 4 tables, accepted for publication in Icarus”, which strongly implies it’s the same as what (later) was published. However, this is caveat lector; no one is charged with checking that it’s accurate, and while authors almost invariably have good intentions, they do sometimes make mistakes. Further, the PDF contains this, at the bottom of the first page: “Preprint submitted to Icarus”, making the ambiguity worse.
That’s just boilerplate text generated by the LaTeX style file, not something the authors explicitly wrote themselves.
@Peter Erwin thanks for that! Question: how could an outsider ever learn that? Presumably it’s something unique to the style files used by authors thinking of submitting to Icarus, I guess.
The divergent accounts of Dr. Myhrvold and the NASA-affiliated investigators make it extremely difficult to figure out who, if anyone, has portrayed the situation with sufficient accuracy to draw firm conclusions from an outside perspective. If there are any uncontroversial statements that can be made, it would appear that Myhrvold’s attempt to build a model from basic physics principles is still outperformed by NASA’s empirical model, at least in some key instances, though that situation may change with further refinement. Also, the NASA teams’ data reporting lacks sufficient clarity to satisfy an outsider that they’ve been anything but sloppy at best, and at worst malfeasant.
I am fully in agreement with Phil Plait’s article on at least one point: No matter which (if any) of Myhrvold’s accusations proves to be correct, they are all quite troubling (with varying degrees of severity), and some publicly-available and well-vetted account of the dispute’s resolution is very much in order. I fear that won’t happen, given the attention span of modern media, despite the great importance of this story. We have on the one hand an argument for the need for transparency and public accountability of govt.-funded agencies and experiments, and on the other the legitimacy (real or perceived) of the “outsider” scientist as a watchdog. The outcome won’t resolve that age-old tension by any means, but I think it could prove to be a critical bit of history informing the perennial clash of public-vs.-private, insider-vs.-outsider, peer/expert-vs.-well-informed-amateur.
Most unfortunate is the tone, e.g. Dr. Wright’s gratuitous insults. Even worse are the allegations by Dr. Myhrvold that he was compelled to “go public” because of concerns about his work getting fair treatment during peer review. This could legitimately be characterized as “poisoning the well”, and I think he should have waited to see if his suspicions proved justifiable.
To echo Phil Plait, I do very much hope all concerned are as willing to be as vocal and thorough as they have been leading up to peer review of the Icarus article once that process is finished.
Myhrvold published a revision of the paper to arXiv on June 1: http://arxiv.org/abs/1605.06490
He posted to Medium on May 25 a lay-language explanation of why he finds the NEOWISE group’s use of observations in place of thermal model computations for a few selected asteroids so troubling: https://medium.com/@nathanmyhrvold/a-simple-guide-to-neowise-data-problems-a93f41e3bdb4#.kwe9iujo8
He has elaborated in three posts (so far) to the Minor Planets Mailing List on Yahoo on why some of the complaints about his paper in NASA’s press release were off base and has also replied to Herald’s suggestions about using occultation observations: https://groups.yahoo.com/neo/groups/mpml/conversations/topics/32025
He recently gave a talk at the Code Conference on asteroids that briefly touched on the controversy over his paper: https://www.youtube.com/watch?v=CH4k4kNBpN8
Maybe someone has already said this, but there seem to be a number of orthogonal issues here:
1) Myhrvold’s pre-print has some errors in it. Perhaps already corrected in fourth revision.
2) Myhrvold’s bootstrap model predicts some erroneous diameters.
3) Myhrvold is alleged to have gone running to the press before peer review. Or did the press run to Myhrvold?
4) Myhrvold makes some harsh accusations about integrity and professionalism.
5) The NEOWISE team do not show what their model predicts for the asteroids for which ROS data are available. Instead they jumble the ROS numbers into the same table as the IR model predictions, so we get no indication of how well the NEOWISE IR model works.
As far as I can see, the status of 1–4 has absolutely nothing to do with whether or not #5 is correct, but NASA is using 1–4 as a smoke screen to deflect attention from #5.
I think a good lesson from this is: if you want people to pay attention to something like #5, then don’t do 1–4.
Myhrvold put a corrected revision of his article on arXiv on June 1: http://arxiv.org/abs/1605.06490
He posted a plain-English explanation of the reasons that the accuracy of the NEOWISE estimates cannot be determined from their published papers on Medium on May 25: https://medium.com/@nathanmyhrvold/a-simple-guide-to-neowise-data-problems-a93f41e3bdb4#.kb12omv42
He has responded to Dave Herald’s suggestions and some of the misleading statements in the NASA/JPL press release on the Minor Planets Mailing List on Yahoo: https://groups.yahoo.com/neo/groups/mpml/conversations/topics/32025
A story by Scientific American on the debate on May 27 provides some perspectives from astronomers not directly involved with WISE or NEOWISE: http://www.scientificamerican.com/article/for-asteroid-hunting-astronomers-nathan-myhrvold-says-the-sky-is-falling1/
Myhrvold gave a short talk at the recent Code Conference on asteroids, and briefly mentioned the debate over asteroid sizes: https://youtu.be/CH4k4kNBpN8