One thing I’ve learned in life is that human beings are creatures very much obsessed with social hierarchy, and academics are even more obsessed with this than most people. So, any public rankings that involve oneself and the institutions one is part of tend to have a lot of influence. In the US, US News and World Report each year ranks the “Best Colleges”; see the rankings for National Universities here. My university’s administration tended in the past to express skepticism about the value of this ranking, which typically put us tied for 8/9 or 8/9/10. This year, however, everyone here agrees that there has been a dramatic improvement in methodology, since we’re at number 4.
For most academics though, the real ranking that matters is not that of how good a job one’s institution does in training undergraduates, but the ranking of the quality of research in one’s academic field. Where one’s department fits in this hierarchy is crucial, affecting one’s ability to get grants, how good one’s students are and whether they can get jobs, even one’s salary. The gold standard has been the National Research Council rankings, which were supposed to be revised about every ten years. It turns out though that the task of making these rankings has somehow become far more complex and difficult, with more than fifteen years elapsing since the last rankings in 1995. Since 2005 there has been a large and well-funded project to generate new rankings, with a release date that keeps getting pushed back. Finally, last year a 200-page book was released entitled A Guide to the Methodology of the National Research Council Assessment of Doctorate Programs, but still no rankings.
Recently the announcement was made that all will be revealed tomorrow at a press conference to be held in Washington at 1pm EDT. I hear rumors that university administrations have been privately given some of the numbers in advance, to allow the preparation of appropriate press releases (see here for an example of a university web-site devoted to this issue).
The data being used was gathered back in 2005-2006, and the five intervening years of processing mean that it is rather stale, since many departments have gained or lost academic stars and changed a lot during that time. So, no matter what happens, a good excuse for ignoring the results will be at hand.
Update: University administrations have now had the data for a week or so and are providing it to people at their institutions. For an example of this, see the web-site Berkeley set up here. All you need is a login and password….
Update: (Via Dave Bacon) Based on the confidential data provided to them last week, the University of Washington Computer Science and Engineering department has released a statement characterizing this data as having “significant flaws”, and noting that:
The University of Washington reported these issues to NRC when the pre-release data was made available, and asked NRC to make corrections prior to public release. NRC declined to do so. We and others have detected and reported many other anomalies and inaccuracies in the data during the pre-release week.
The widespread availability of the badly flawed pre-release data within the academic community, and NRC’s apparent resolve to move forward with the public release of this badly flawed data, have caused us and others to urge caution – hence this statement. Garbage In, Garbage Out – this assessment is based on clearly erroneous data. For our program – and surely for many others – the results are meaningless.
The UW Dean of the College of Engineering has a statement here where he claims that, despite 5 years of massaging, the NRC data contained obvious nonsense, such as the statistic that 0% of their graduating CS Ph.D. students had plans for academic employment during 2001-5.
Update: Boston University has broken the embargo with this press release. It gives a chart showing that almost all their graduate programs have dramatically improved their ranking since the 1995 rankings, while noting that the two rankings are based on different criteria, that the NRC says you can’t compare them, and that the 2010 rankings in the chart are not NRC numbers, but are based on BU’s own massaging of the data. I suspect that the NRC data will be used to show that, like the kids in Lake Wobegon, all programs are above average.
For more on this story, see coverage by Steinn Sigurdsson at Dynamics of Cats.
Update: The NRC data is out, and available from its web-site. But no one really cares about that; all they care about are the rankings, and the NRC is not directly putting those out. Instead, they’ve subcontracted the dirty work to phds.org, where you can get rankings here, using either the “regression-based” or “survey-based” score. In mathematics and physics, the lists you get are about what one would expect, with perhaps something of a tilt towards large state schools compared to the 1995 rankings (most dramatically, Penn State was number 36 in 1995, number 8 or 9 this year).