Improving the grant system ain’t so easy
Today’s NY Times has an article by Gina Kolata about how the National Cancer Institute plays it safe with grant funding. The main point of the article is that NCI funds too many “safe” studies — studies that promise a high probability of making a modest, incremental discovery. This is done at the expense of more speculative and exploratory studies that take bigger risks but could lead to greater leaps in knowledge.
The article, and by and large the commenters on it, seem to assume that things would be better if the NCI funded more high-risk research. Missing is any analysis of what might be the downsides of adopting such a strategy.
By definition, a high-risk proposal has a lower probability of producing usable results. (That’s what people mean by “risk” in this context.) So for every big breakthrough, you’d be funding a larger number of dead ends. That raises three problems: a substantive policy problem, a practical problem, and a political problem.
1. The substantive problem is in knowing what would be the net effect of changing the system. If you change the system so that you invest grant dollars in research that pays off half as often, but when it does the findings are twice as valuable, it’s a wash — you haven’t made things better or worse overall. So it’s a problem of adjusting the system to optimize the risk × reward payoffs. I’m not saying the current situation is optimal; but nobody is presenting any serious analysis of whether an alternative investment strategy would be better.
2. The practical problem is that we would have to find some way to choose among high-risk studies. The problem everybody is pointing to is that in the current system, scientists have to present preliminary studies, stick to incremental variations on well-established paradigms, reassure grant panels that their proposal is going to pay off, etc. Suppose we move away from that… how would you choose amongst all the riskier proposals?
People like to point to historical breakthroughs that never would have been funded by a play-it-safe NCI. But it may be a mistake to believe those studies would have been funded by a take-a-risk NCI, because we have the benefit of hindsight and a great deal of forgetting. Before the research was carried out — i.e., at the time it would have been a grant proposal — every one of those would-be-breakthrough proposals would have looked just as promising as a dozen of their contemporaries that turned out to be dead-ends and are now lost to history. So it’s not at all clear that all of those breakthroughs would have been funded within a system that took bigger risks, because they would have been competing against an even larger pool of equally (un)promising high-risk ideas.
3. The political problem is that even if we could solve #1 and #2, we as a society would have to have the stomach for putting up with a lot of research that produces no meaningful results. The scientific community, politicians, and the general public would have to be willing to constantly remind themselves that scientific dead ends are not a “waste” of research dollars — they are the inevitable consequence of taking risks. There would surely be resistance, especially at the political level.
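The “wash” in problem #1 is just expected-value arithmetic, and it’s easy to check. Here’s a minimal sketch; the probabilities and payoff values are the hypothetical numbers from that paragraph, not real grant statistics:

```python
# Expected scientific payoff per grant, under two funding strategies.
# All numbers are illustrative, taken from the hypothetical in the text.

def expected_payoff(p_success: float, value_if_success: float) -> float:
    """Expected payoff = probability of success times value when it succeeds."""
    return p_success * value_if_success

# "Safe" strategy: pays off often, but the findings are modest.
safe = expected_payoff(p_success=0.5, value_if_success=1.0)

# "Risky" strategy: pays off half as often, findings twice as valuable.
risky = expected_payoff(p_success=0.25, value_if_success=2.0)

print(safe, risky)  # 0.5 0.5 -- identical in expectation: a wash
```

The point of the sketch is that shifting to riskier grants only improves things if the value of the breakthroughs grows faster than the success rate shrinks, and nobody in the debate is presenting evidence either way.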
So what’s the solution? I’m sure there could be some improvements made within the current system, especially in getting review panels and program officers to reorient toward higher-risk studies. But I think the bigger issue has to do with the overall amount of money available. As the top-rated commenter on Kolata’s article points out, the FY 2010 defense appropriation is more than 6 times what we have spent at NCI since Nixon declared a “war” on cancer 38 years ago. If you make resources scarce, of course you’re going to make people cautious about how they invest those resources. There’s a reason angel investors are invariably multimillionaires. If you want to inspire the scientific equivalent of angel investing, then the people giving out the money are going to have to feel like they’ve got enough money to take risks with.