As I sloshed through the snow last week, I thought about how science “measures impact” compared to international development. The two approaches are quite different. The international development approach is exemplified by this World Bank explanation:
When a project is completed … the World Bank and the borrower government document the results achieved; the problems encountered; the lessons learned; and the knowledge gained from carrying out the project.
The International Development Association (the World Bank’s fund for the world’s poorest countries) tracks aggregated results. Its Results Measurement System (RMS) is designed to strengthen the focus of IDA’s activities on development outcomes and keep donors aware of IDA’s effectiveness. The system measures results on two levels:
1. Aggregate country outcomes, including:
   - Growth and poverty reduction,
   - Governance and investment climate,
   - Infrastructure for development;
2. IDA’s contribution to country outcomes, or “agency effectiveness.”
Wow. That’s very structured. But does it work?
In contrast, nobody asks a scientist to count the lives saved by her research. Instead, they ask, “who found this scientist’s work valuable?” and count the number of other scientists who cite her work. Influential documents are the currency of impact in science.
The influence of particular efforts can be quantified, and most of the scientific community accepts the premise behind the journal impact factor, which is calculated by a third party (Thomson Reuters) and is intended to let two journals from unrelated fields of science be compared to each other.
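The classic two-year impact factor has a simple published formula: citations received in a given year to items a journal published in the previous two years, divided by the number of citable articles it published in those two years. A minimal sketch (the function name and the example numbers are illustrative, not from any real journal):

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_articles_prev_two_years: int) -> float:
    """Two-year journal impact factor.

    citations_to_prev_two_years: citations received this year to
        items the journal published in the previous two years.
    citable_articles_prev_two_years: citable articles the journal
        published in those same two years.
    """
    return citations_to_prev_two_years / citable_articles_prev_two_years

# A hypothetical journal: 200 citable articles over two years,
# which drew 600 citations this year.
print(impact_factor(600, 200))  # → 3.0
```

So a journal with an impact factor of 3.0 averaged three citations per recent article; the averaging is what makes two journals of very different sizes comparable.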
And while I personally feel this focus on knowledge rather than results has led some scientists to avoid research that would benefit the most people (because such challenges tend to be hard, often fruitless, and well-traveled), science has done far more to advance society than international development has.
Why is that?
The grant award process is fairer and less politicized. You can assign one number to a person’s total contribution to the field. This is the h-index: the largest number h such that the scientist has h papers that have each been cited at least h times by colleagues. The h-index is detailed by J. E. Hirsch in PNAS:
With this one number, you can tell whether a scientist is in the company of Nobel laureates. Thus, top research grants go to people with an h-index of at least 20 (they authored 20 papers that have each been cited as useful in at least 20 other papers).
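The h-index as Hirsch defined it is easy to compute from a list of per-paper citation counts. A minimal sketch (the example citation counts are made up for illustration):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    # Walk papers from most-cited to least-cited; as long as the
    # i-th best paper has at least i citations, h can grow to i.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four of them have
# at least four citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note how one hugely cited paper barely moves the number: an author with citation counts [100, 2, 1] still has an h-index of only 2, which is exactly the property that makes it a measure of sustained, broad contribution.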
Nobody compared the actual types of work to each other in this case, yet everyone agrees that it is a reliable measurement for comparing that work. Amazing. The impact factor and the h-index are based only on social networks of scientists citing each other. And everyone in science is required to contribute to the reputation system. That’s also important. Those who work in isolation are largely irrelevant to the progress of science.
For decades the World Bank, the UN, and all the rest have been trying to compare the accomplishments of their projects (largely in isolation) while ignoring the social connections among their practitioners. With social media, there must be some international development equivalent of a “citation” by one’s peers. But what is it?
If we measured this, we could attribute value to the work of practitioners in the field, and thus reward the best with continued support.