
Science works because peer review works: Lessons for ending poverty

You might think that the peer review process in Science exists to weed out sketchy results. That’s just a side effect. Its real purpose is to define the frontiers of each field and to force contradictory ideas into head-to-head battles from which winners and losers must emerge. If a field of inquiry allows incompatible ideas to coexist without confrontation, it becomes a pseudoscience in which anyone can cite any evidence to support any opinion. Because the scientific community agrees to work together to define the best theory — based on currently available evidence — to explain phenomena, Science succeeds as the best philosophy for describing reality.

The system builds on three things:

One “canon”

Scientists within a field implicitly agree that there will be one shared body of knowledge, to which everyone contributes. This knowledge can be spread across hundreds of journals because (a) all of them use the same peer review mechanism and (b) specialized search engines (PubMed, Web of Knowledge, and Scopus) index virtually everything — reducing the odds that an important paper will go unnoticed. (Before science indexing, the work of Gibbs and Mendel went unnoticed for half a century because it appeared in obscure journals.)

Forced confrontations

Scientists must face their critics and respond to them. Most papers get submitted at least three times and reviewed by four other scientists each round. It is humbling. Over a typical career, a scientist receives formal, written litanies on his or her inadequacies a hundred times a decade. Twelve experts have given feedback before even a mediocre idea gets into print. Peer review also alerts your most successful competitors ahead of everyone else, putting more pressure on you to address the weaknesses in your paper before others exploit its strengths. Dialogue, competition, feedback, and the Golden Rule emerge from this dynamic. (Review unto others as you would have them review your papers.)

A reputation system for scientists

The system leads to many contested ideas appearing in print. Some of these attract more readers than others. A few of these spawn subsequent experiments that move the field forward. Papers about these experiments cite the predecessor work, and this web of connections is a trail of information that forms the basic reputation system for scientists. For details on how it is calculated, see the H-index.

This system is equitable. Credit is mostly a meritocracy — except when multiple teams publish nearly identical findings concurrently; then it becomes more a matter of taste (for example, see Newton vs. Leibniz on “the calculus”). Your H-index measures your breadth and depth as a contributor to the field. It ignores non-peer-reviewed work and uncited work. To score high, your work must not only pass peer review but also be valuable to others. My personal H-index is 6. Most Nobel hopefuls have at least a 45.
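To make the metric concrete, here is a minimal sketch of how an h-index is computed, written in Python; the paper and citation counts below are invented purely for illustration and are not taken from any real record:

```python
def h_index(citations):
    """Return the largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Hypothetical citation counts for nine papers (made-up numbers):
papers = [48, 22, 10, 7, 6, 6, 3, 0, 0]
print(h_index(papers))  # 6: six papers each cited at least six times
```

By construction, uncited papers and anything outside the indexed, peer-reviewed canon add nothing to the score, which is exactly what makes the metric hard to game.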

Still, I’m pretty pleased that 6 of my 9 peer-reviewed papers are still being cited. I haven’t done neuroscience research since 2008.

This system works better than any alternative, and my 8 years working in aid have made me realize that it is not bad ideas that are perpetuating poverty for a billion people, but our refusal to embrace these three rules. Foundations, government aid agencies, and civil society organizations are rewarded for NOT having a common canon. Experts perpetuate their own reputations by developing original, unique definitions of the problem, the interventions, the measurements of success, even reality itself. Confrontations in the aid world are meaningless: different actors will fund whomever they choose with impunity from the “community” of other funders, experts, and politicians. And many reputations are based not on highly cited work but on suave branding and language.

In contrast, a scientist who promotes his “facts” outside of journals won’t get cited and could get “scooped” by a competitor if there is any value to them. These non-canonical publications win over the media and the public but carry no weight in grant competitions with funders. The “herd” protects itself from becoming a pseudoscience. The H-index doesn’t lie.

Even groups of scientists who collude to cite each other’s papers and advance their collective reputations fail, for reasons I explained previously.

Unfortunately, advancing Science is easier than ending poverty. Knowledge is the only measurable output, and the system forces quality over quantity. Funding follows quality.

If we ended poverty, there would be winners and losers in the system. Finding the solutions will require evolution, not randomized trials, so we haven’t even begun to truly search yet. When aid experts assert that better evidence will lead to better policy, they’re fooling themselves.

This was originally posted on Dennis Whittle’s blog.
