NIH-NSF Workshop

I recently attended a joint NIH-NSF workshop on the “Science of Science and Innovation Policy.” There were lots of interesting talks (here’s the workshop agenda) and good discussion in the Q&A after each presenter, but I nominate George Santangelo for “most creative” for his discussion of a new impact metric, the Relative Citation Ratio, developed in collaboration with Bruce Ian Hutchins, Xin Yuan, and James M. Anderson (the article is available for free download at bioRxiv). Finally, a measure that tries to control for disciplinary differences in publication rates and citation patterns, and that focuses on the impact of a publication rather than a journal.
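Roughly, the RCR divides an article’s citation rate by the expected citation rate of its field, where “field” is defined by the article’s co-citation network rather than by journal. The sketch below (my simplification in Python, with made-up numbers; the paper’s benchmarking against the NIH R01 portfolio and the exact network construction are omitted) shows the basic arithmetic:

```python
# Simplified sketch of the Relative Citation Ratio (RCR) idea from
# Hutchins et al. The benchmarking step and the details of building
# the co-citation network are omitted; numbers below are hypothetical.

def article_citation_rate(citations: int, years_since_publication: float) -> float:
    """Citations per year for the article of interest."""
    return citations / max(years_since_publication, 1.0)

def field_citation_rate(cocited_journal_rates: list[float]) -> float:
    """Expected rate for the article's field, proxied here by the average
    citation rate of journals appearing in its co-citation network."""
    return sum(cocited_journal_rates) / len(cocited_journal_rates)

def relative_citation_ratio(citations, years, cocited_journal_rates):
    """RCR ~ article citation rate / field citation rate.
    Values above 1 mean the paper outperforms its field's expectation."""
    return article_citation_rate(citations, years) / field_citation_rate(cocited_journal_rates)

# Example: 60 citations over 5 years, in a field whose co-cited journals
# average 4 citations/year per article (hypothetical values).
print(relative_citation_ratio(60, 5, [3.5, 4.0, 4.5]))  # -> 3.0
```

Because the denominator is built from each article’s own co-citation neighborhood, a paper in a slow-citing field is not penalized relative to one in a fast-citing field, which is exactly the disciplinary adjustment journal-level metrics lack.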

I’ll assign “greatest potential impact” to the Weinberg presentation, which covered published and forthcoming research that uses UMETRICS data linked with Census LEHD and LBD products. It seems the IRIS initiative is off to a great start!1

I reserve “most provocative” for Jon Lorsch, who presented data showing that the marginal productivity of large science labs is substantially lower than that of relatively small labs (here’s a short video version). He uses this finding to argue for smaller labs (and hence smaller awards from NIH and NSF), and explains the result, drawing on his personal experience and his reading of behavioral economics, as a matter of limited managerial capacity: lab managers can spread their time and talent across only so many people. This limitation, he claims, implies a critical point at which productivity falls off rapidly as labs grow larger.
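Lorsch’s mechanism can be caricatured with a toy model (my illustration, not his analysis): suppose each member’s productivity scales with the share of the PI’s attention they receive, and attention dilutes once headcount exceeds some managerial capacity. Total output then peaks near that capacity and declines beyond it. All parameters below are hypothetical:

```python
# Toy model of attention-limited lab productivity (my caricature, not
# Lorsch's actual analysis). Each member's productivity is scaled by the
# PI attention they receive, which dilutes past a managerial capacity.

def lab_output(size: int, capacity: float = 8.0, penalty: float = 1.5) -> float:
    attention = min(1.0, capacity / size)  # full attention up to capacity
    return size * attention ** penalty     # dilution drags down every member

for size in (4, 8, 12, 16, 24):
    print(f"size {size:>2}: output = {lab_output(size):.1f}")
# size  4: output = 4.0
# size  8: output = 8.0
# size 12: output = 6.5
# size 16: output = 5.7
# size 24: output = 4.6
# Output rises to a peak near the capacity (8) and falls thereafter: the
# marginal product of growth turns negative past a critical point.
```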

I’m sure research is already under way to test this claim in other settings (he looks only at NIH data) and to explore alternative explanations. I am not familiar with research in this area, but at first glance there seem to be several ways to interpret the finding, each with different policy implications. Here are two:

  1. Big labs produce more valuable science. Lorsch focuses on publications, citations, and patents, which means (from a vanilla “non-behavioral economics” perspective) that we don’t have enough information to assess the relative value of marginal contributions across small and large labs. His analysis implicitly assumes that all labs do the same kind of science (or science with the same marginal value to society). Are there systematic differences in the marginal social value of a publication or citation across small and large labs? If so, then marginal productivity should be measured by its value to society, not by arbitrary metrics like publication counts.
  2. Labs get large because they receive funding from multiple sources that do not coordinate their funding programs. It is not “large” that is the problem, but “diverse”; the economics jargon is “scope diseconomies.” Some science needs large labs, but to get large, scientists have to scrape together funding from multiple sources that are not fully aligned, which limits the possibilities for intense focus on narrow but deep questions.

No doubt there are other potential explanations, or possibly these two are easy to discard. In any case, I’m looking forward to seeing how the science policy community responds to the Lorsch analysis.


  1. Disclosure: WiscRDC is part of the IMI/UMETRICS “Tester Team” that is working with Census to vet and evaluate the UMETRICS-Census linked data before its release to the FSRDC research community.