(I originally posted this on GlobalGoodness)
That is a lofty, aspirational goal. To everyone else, it might look like all we do is run a website that connects donors to organizations. But internally, I serve on a team that has met every week for the past 3 years to pore over the data, looking for an efficient way to help organizations become more effective. We call ourselves the iTeam (i for impact).
It is hard to move thousands of organizations in one shared community forward. We use gamification, incentives, and behavioral economics to encourage organizations to learn faster and listen to the people they serve, in whatever corner of the world they happen to operate.
Before 2014 we used just six criteria to define “good,” “better,” and “best.” If an organization exceeded the goals on all six, they were Superstars. If they met some goals, they were Leaders. The remaining 70% of organizations were permanent Partners – still no small feat. Leaders and Superstars were first in line for financial bonuses and appeared at the top of search results.
In 2014 we unveiled a more complete effectiveness dashboard, tracking all the ways we could measure an organization on its journey to Listen, Act, Learn, and Repeat. We believe effective organizations do this well.
But this dashboard wasn’t good enough. We kept tweaking it, looking for better ways to define learning.
What is learning, really?
How do you quantify it and reward everyone fairly?
The past is just prologue. In 2015 organizations will earn points for everything they do to listen, act, and learn.
This week I put together an interactive modeling tool to study how GlobalGiving could score organizational learning. When organizations do good stuff, they should earn points. If they earn enough points, they ought to become Leaders or Superstars. But how many points are enough to level up? That is a difficult question.
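The core of that modeling tool can be sketched in a few lines. The point cutoffs below are purely hypothetical placeholders, not GlobalGiving’s actual thresholds – the hard question in the post is precisely what those numbers should be.

```python
# A minimal sketch of points-to-level mapping. The cutoff values are
# hypothetical, chosen only to illustrate the mechanism.
LEVEL_THRESHOLDS = [
    ("Superstar", 80),  # hypothetical cutoff
    ("Leader", 40),     # hypothetical cutoff
    ("Partner", 0),
]

def level_for(points: int) -> str:
    """Map an organization's total learning points to a status level."""
    for name, cutoff in LEVEL_THRESHOLDS:
        if points >= cutoff:
            return name
    return "Partner"

print(level_for(85))  # Superstar
print(level_for(55))  # Leader
print(level_for(10))  # Partner
```

Every other question in this post – where to draw the lines, how to re-draw them each year – is about choosing those cutoffs fairly.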
Here is the evidence we used to decide. The current distribution of scores for our thousands of Partners, Leaders, and Superstars looks like this:
How to read this histogram
On the x-axis: total learning points that an organization has earned.
On the y-axis: number of organizations with that score.
There are three bell curves for the three levels of status. Significantly, these bell curves overlap: some Superstar organizations under our old definition of excellence are not so excellent under the new set of rules, while other Partner organizations are far more effective than we thought and will be promoted. Some of the last will be first, and some of the first will be last.
The histogram shown mostly reflects points earned from doing those six things we’ve always rewarded. But in the new system, organizations are also going to earn points for doing new stuff that demonstrates learning:
And that will change everything. “Learning organizations” will leapfrog over “good fundraising organizations” that haven’t demonstrated that they are learning yet.
Not only will different organizations level up to Leaders and Superstars, everyone’s scores will likely increase. We’ll need to keep “moving the goal posts.” Otherwise the definition of a Superstar organization will be meaningless.
The reason this is a modeling tool and not an analysis report is that anyone can adjust the weights and rerun the calculations instantly. Here I’ve increased the points that organizations earn for raising money over listening to community members and responding to donors:
This weighting would run contrary to our mission. So obviously, we’re not doing that. But we also don’t want to impose rules that would discount the efforts organizations have made to become Superstars under the old rules.
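The reweighting idea itself is simple: each organization’s score is a weighted sum of its activity counts, and sliding a weight up or down re-ranks everyone instantly. The activities and weight values below are illustrative, not the real model’s:

```python
# Sketch of reweighting: score = weighted sum of activity counts.
# Activity names and weights are made up for illustration.
def score(activities: dict, weights: dict) -> float:
    return sum(weights.get(a, 0) * n for a, n in activities.items())

org = {"funds_raised": 12, "listened": 5, "responded": 8}

# Mission-aligned weighting favors listening and responding...
mission_weights = {"funds_raised": 1, "listened": 3, "responded": 2}
# ...while this weighting privileges fundraising over listening.
fundraising_weights = {"funds_raised": 3, "listened": 1, "responded": 1}

print(score(org, mission_weights))      # 43
print(score(org, fundraising_weights))  # 49
```

The same organization scores differently under each weighting, which is why rerunning the whole distribution after each tweak matters.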
So I created another visualization of the model that counts up gainers and losers and puts them into a contingency table. Here, two models are shown side by side. Red boxes represent the number of organizations that are either going to move up or down a level in each model:
We’d like to minimize disruption during the transition. That means getting the number of Superstars that would drop to Partner as close to zero as possible. It also means giving everybody advance warning and clear instructions on how to demonstrate their learning quickly, so that they don’t drop status as the model predicts.
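Counting gainers and losers amounts to cross-tabulating each organization’s old level against its new one. Here is a sketch with made-up data, assuming the same three-level ladder:

```python
from collections import Counter

# Hypothetical old and new levels for six organizations.
old = ["Superstar", "Leader", "Partner", "Partner", "Superstar", "Leader"]
new = ["Superstar", "Superstar", "Leader", "Partner", "Leader", "Leader"]

# Contingency table: (old level, new level) -> count of organizations.
table = Counter(zip(old, new))

# Count organizations that would drop a level -- the number we
# want as close to zero as possible during the transition.
order = {"Partner": 0, "Leader": 1, "Superstar": 2}
drops = sum(n for (o, w), n in table.items() if order[w] < order[o])
print(drops)  # 1
```

Running this for two candidate models side by side gives exactly the red-box comparison described above.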
This is a balancing act. Our definition of a Learning Organization is evolving because our measurements are getting more refined, but we acknowledge they are a work in progress. We seek feedback at every step so that what we build together serves the community writ large, and not just what we think is best.
More instructions on what happens after launch are coming next week. This post is just the story of how we got to where we are, and a few lessons of what we’ve learned along the way.
- Fairness: It is mathematically impossible to make everybody happy when we start tracking learning behavior and rewarding it.
- Meritocracy: We will need to keep changing the definition of Superstar organizations as all organizations demonstrate their learning, or else it will be meaningless. The best organizations would be indistinguishable from average ones.
- Crowdsourcing: The only fair way to set the boundaries of Partner, Leader, and Superstar is to crowdsource the decision to our community, and repeat this every year.
- Defined impact: We can measure the influence of our system on organizational behavior by comparing what the model predicts with what actually happens. We define success as every organization earning more points each year than in the previous year, and as seeing a normal distribution (a “bell curve”) of overall scores.
- Honest measurement: I was surprised to realize that without penalties for poor performance, it is impossible to see what makes an organization great.
- Iterative benchmarking: We must reset the bar for Leader and Superstar status each year if we want it to mean anything.
- Community: We predict that by allowing everyone a say in how reward levels are defined, more people will buy into the new system.
- Information is Power: By creating an interactive model to understand what might happen and combining it with feedback from a community, we are shifting away from what could be contentious and toward what could inspire a stronger community.
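The iterative-benchmarking lesson above suggests one concrete mechanism – a sketch, assuming percentile-based cutoffs (the percentages here are my own illustration, not a stated policy): peg the Leader and Superstar bars to fixed percentiles of each year’s scores, so the labels keep their meaning as everyone’s points rise.

```python
# Sketch: reset level cutoffs each year at fixed percentiles of that
# year's score distribution. Percentile choices are illustrative.
def yearly_thresholds(scores, leader_pct=70, superstar_pct=90):
    s = sorted(scores)
    def pct(p):
        return s[min(len(s) - 1, int(len(s) * p / 100))]
    return {"Leader": pct(leader_pct), "Superstar": pct(superstar_pct)}

scores_2014 = [20, 35, 40, 42, 55, 60, 61, 70, 85, 90]
scores_2015 = [s + 30 for s in scores_2014]  # everyone improved

print(yearly_thresholds(scores_2014))  # {'Leader': 70, 'Superstar': 90}
print(yearly_thresholds(scores_2015))  # {'Leader': 100, 'Superstar': 120}
```

The bar moves with the community, which is the point: a Superstar is always one of the strongest learners that year, not just a holder of a grandfathered title.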
We were inspired by what others at the World Bank and J-PAL did to give citizens more health choices in Uganda. What the “information is power” paper finds is that giving people a chance to speak up alone doesn’t yield better programs (the participatory approach). Neither does giving them information about the program alone (the transparency approach). What improves outcomes is a combination of a specific kind of information along with true agency – the power to change the very thing about a program that they believe isn’t working through their interpretation of the data.
The model I built can help each citizen of the GlobalGiving community see how a rule affects everyone else, and hence understand the implications of their choice, as well as predict how they will fare. If we infuse this information into a conversation about what the thresholds for Partner, Leader, Superstar ought to be each year (e.g. how much learning is enough?), this will put us in the “information is power” sweet spot – a rewards paradigm that maximizes organizational learning and capacity for the greatest number of our partners.
I predict that giving others this power (to predict and to set standards) will lead to a fairer set of rules for how learning is measured and rewards doled out. It ain’t easy, but it is worthy of the effort.
Two years later, many of these concepts were formally synthesized into an excellent report by Meghan Campbell, Mari Kurashi, et al: InfoPower: Under what conditions is information power?
Here’s a brief summary:
Due to a lot of effort and resources combined with new technologies, data and information are now more transparent and accessible than anything we could have imagined a decade ago.
And while this has led to some very positive outcomes (think: information campaigns leading to decreased smoking rates, increased vaccination rates, and an upsurge in recycling), overall results have been mixed. We know that information alone does not always empower or lead to the changes we hope to see. So we need to ask: what factors make a difference?
This question was explored through a collaborative research effort supported by the Omidyar Network, which resulted in the report. Its first finding: for information to empower, it needs to be embedded in a social and emotional context that inspires people to reinterpret that information and act on it. The report identifies seven key principles that help explain when information does and doesn’t empower change.
Earlier in my career, I helped start FeedbackLabs and posted my thoughts on how to fix broken feedback loops, summarized here:
Both of these were important factors in designing GGrewards.