Many calls for aid reform rest on the reasonable assumption that better evidence will lead to better policy.
What if this isn’t true?
What if policymakers don’t really care about whether their policies align best with the evidence? Are we screwed?
No. Good systems can still achieve progress and coordination in spite of the people within that system. The example I know best is scientific peer review.
You might think that the peer review process is a system to weed out garbage and improve publications – but that’s a pleasant side effect. The actual “system” is built around these three components:
- One “canon” – scientists within a field implicitly agree that there will be one shared body of knowledge, to which everyone will contribute. This knowledge can be spread across hundreds of journals because (a) all use the same peer review mechanism and (b) specialized search engines (PubMed, Web of Knowledge, and Scopus) index virtually everything – reducing the odds that an important paper will go unnoticed.
- Forced Confrontations – scientists must face their critics and respond to them. Most neuroscience papers are submitted at least 3 times, meaning you get to read about your professional inadequacies a hundred times over a typical career. Peer review also puts your work in front of your most successful competitors before anyone else sees it, further pressuring you to address the weaknesses in your paper and resubmit.
- A reputation system for scientists (see the H-index) – this system fairly reflects your breadth and depth, ignores non-peer-reviewed work, and requires that your work not only pass peer review but also be valuable to others (frequently cited).
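To make the reputation mechanic concrete, here is a minimal sketch of the H-index calculation: your H-index is the largest number h such that h of your papers have each been cited at least h times. The citation counts below are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break     # every later paper has fewer citations
    return h

# Hypothetical career: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers with ≥ 4 citations each
```

Note why the metric rewards both breadth and depth: one blockbuster paper with 1,000 citations still yields an H-index of 1, and fifty uncited papers yield 0.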
Let's assume for the sake of argument that scientists are only concerned with their own reputations, and see what happens in this system. Scientific facts become a means to an end: prestige.
A scientist could try to publish a bunch of “facts” to vault his career, but because of the canonical nature of publications, only peer-reviewed “facts” affect his prestige. It takes four or more quasi-publications to equal the prestige of one peer-reviewed paper, yet each takes comparable effort. Moreover, a scientist who promotes his “facts” outside of journals won’t get cited and could get “scooped” by a competitor. These non-canonical publicists win over the media and the public but lose in grant competitions. Peer review thus provides a reputation system for the “herd,” because the H-index never lies.
Gamers: You could imagine groups of scientists colluding to publish each other’s papers and move their reputations forward. This tends to happen in small groups, but only within the same journal – and over time the journal’s “impact factor” goes down, because Thomson Reuters counts only cross-journal references in its ranking system. Also, the editors and authors are known, and any competitor who knows the field can detect collusion and blow the whistle. The whole scientific community loves to ostracise cheaters and get grantmakers to blackball them. Moreover, Science, Neuron, and Nature (the top-tier journals) would never publish those authors thereafter.
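The arithmetic behind that penalty can be sketched briefly. A journal's two-year impact factor is, roughly, citations this year to its articles from the previous two years, divided by the number of citable items it published in those two years. If, as the post claims, the ranking counts only cross-journal citations, collusive self-citation adds nothing to the numerator. All the numbers below are hypothetical.

```python
def impact_factor(cross_journal_cites, citable_items):
    """Rough two-year impact factor, counting only citations
    from *other* journals (per the post's claim about the ranking)."""
    return cross_journal_cites / citable_items

# Hypothetical journal: 300 total citations to its last two years of
# articles, of which 120 came from its own pages (e.g. a citation ring).
total_cites = 300
self_cites = 120
items = 100  # citable items published in the two-year window

print(impact_factor(total_cites - self_cites, items))  # → 1.8
```

Under this accounting, a citation ring inflates total citations but leaves the ranking untouched, so the colluders' effort is wasted.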
Stubborn Scientists: Most journals are good at figuring out who a scientist’s biggest competitor is and making that person one of the referees – so criticism follows you around and gets harsher each time you ignore your nemesis. In this sense, having a paper rejected 6 times gives you ample reason to re-examine your data and interpretation. Thomas Kuhn pointed out that this can prevent a paradigm shift, but the ratio of erroneous results to transformative ones in papers is such that you’re always better off repeating that experiment before resubmitting the paper.