The essence of good evaluation is to capture the six journalistic questions (what, who, why, where, when, and how) in a brief, honest format. I believe that defining the “what” is the most important, and one of the hardest to codify. Without these questions answered, the information is incomplete and cannot be used to analyze the social impact or performance of an organization.
This month I am excited that GlobalGiving is launching an upgraded flexible storytelling format with these features:
- Universality: one survey framework that can capture the essence of all NGO work because the questions are not output- or sector-specific
- Flexibility: implementing organizations decide which questions to use from a larger pool of 20
- Benchmarking: because the pool of available questions is limited and shared, every question will be used by multiple organizations, so results can be compared worldwide across implementing organizations
- An agile, evolving survey design: Users can propose questions, and we will periodically swap out less popular questions for newer questions after testing.
This combination of features is significant because it puts community-level actors in control of the evaluation process. Organizations choose which questions to include in their story forms and then train local scribes, who go out and collect stories. Responsive organizations will involve these scribes in the survey design directly. These organizations will use our GlobalGiving story analysis tools to understand what communities are saying about the services that aim to serve them. And if they don’t like the questions, they can test and propose better ones, which may get adopted by the larger community of nonprofits interested in a simple and cost-effective way to gauge their performance and guide their strategic thinking.
I hope this will fuel a paradigm shift from Impact Post Evaluation to Baseline Pre-Exploration of ideas that could improve society.
Two Story Rule and Benchmarking
The two-story rule remains the most important part of the storytelling project’s design. Every storyteller must give two stories about two different community efforts they’ve witnessed. Only one can be about the organization that is tied to GlobalGiving. As a result, at least half of the stories we receive do not have the self-report positive bias endemic in NGO programmatic evaluation. And when they do, we can tell, because we have a huge baseline of organization-related stories to compare them to. Benchmarking is also the point of having a shared system through which evaluation data can flow. These two modes of analysis provide every organization with a “within-group” (two-story rule) and “between-group” (benchmarking) comparison at a fraction of the effort.
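To make the two modes concrete, here is a minimal sketch of what the two comparisons could look like, assuming a hypothetical dataset of story records (the field names, sample data, and numeric ratings are illustrative, not GlobalGiving’s actual schema or analysis tools):

```python
from statistics import mean

# Hypothetical story records: the organization that collected the story,
# whether the story is about that organization's own work (the two-story
# rule guarantees both kinds exist), and a numeric rating from the survey.
stories = [
    {"org": "A", "about_own_org": True,  "rating": 4.5},
    {"org": "A", "about_own_org": False, "rating": 3.0},
    {"org": "B", "about_own_org": True,  "rating": 4.0},
    {"org": "B", "about_own_org": False, "rating": 3.5},
]

def within_group(org):
    """Two-story-rule comparison: an org's self-related stories versus the
    independent stories its own storytellers contributed."""
    own = [s["rating"] for s in stories if s["org"] == org and s["about_own_org"]]
    other = [s["rating"] for s in stories if s["org"] == org and not s["about_own_org"]]
    return mean(own) - mean(other)

def between_group(org):
    """Benchmarking comparison: all of an org's stories versus the
    global baseline of everyone's stories."""
    org_ratings = [s["rating"] for s in stories if s["org"] == org]
    baseline = [s["rating"] for s in stories]
    return mean(org_ratings) - mean(baseline)

# A large positive within-group gap would suggest self-report bias;
# the between-group gap positions the org against the shared baseline.
print(within_group("A"))
print(between_group("A"))
```

The same two subtractions work for any shared question, which is why a common question pool makes both comparisons nearly free.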
Story-based feedback will never be as rigorous as a randomized controlled trial (RCT), but it is roughly ten times faster and right most of the time. As this data set grows (1,000,000 stories is possible), it will eventually dwarf RCTs in predictive power.
All that is possible because the design of the survey is finally a dynamic, fluid process controlled by the users of the data. GlobalGiving is merely a data curator, steward of stories and protector of storytellers’ privacy, and builder of analysis tools for everyone to share.
So how is an organization to know which questions work for their evaluations?
Soooo glad you asked. Today I designed a card game version of our flexible storytelling forms that allows people to try out question combinations and choose the best questions for their version of the storytelling project. All questions will work with all stories, but only some NGOs care about mapping the social conflict in a story, while others care more about crowdsourcing community solutions to social problems, and so on. This way everybody gets what they want answered in the margins, because they agree to keep the core (prompting question) the same.
This is rapid prototyping at its best. Organizations can design an evaluation within hours instead of months. I used a free tool for making Magic: The Gathering-style card games to design the question set as a game. Now others just need to print the cards and play the game to gain insight into how our storytelling project opens up many new paths to understanding the community context around the efforts organizations lead.
Storytelling Design Game
Premise: Each player will select a story from GlobalGiving’s set at random, and read it without any of the other players seeing it. Best to save the story ID# in the URL for end-game reference:
Goal: To win, be able to summarize another player’s secret story solely from the answers to the questions you have asked them, using a minimal set of cards. Higher scores result from inductively understanding the story with fewer than 10 questions. “Closeness” of your final answer to the other player’s actual story text is best determined by an outside judge, or by group consensus and ridicule (as in Apples to Apples judging).
Rules and turn-by-turn play: After each player has secretly selected and memorized a random story, they take turns playing one card and asking another player to answer that card’s question about their secret story. If there are three or more players, target the clockwise player with questions in a round-robin style of play.
Note: Some of these question cards are more relevant to other interpretations beyond the “what” in the story. I’ll come up with rules for how to test these aspects of the survey later. But in general the game rules are designed to be as closely aligned with the actual survey goals as possible, so that this is a good simulation.
So a good survey needs to answer these questions in this order of decreasing priority:
What, who, why, where, when, and how
Interpret “how” as the process by which the organization approached the problem and carried out its intervention. “Who” is both the organization and the type of person the storyteller is. “What” is essentially the whole story, or at least the 3 to 5 most important elements in it: What was the sequence of events? Who was the story about? What happened to the main character in the story? What role did the storyteller play?
Some of these questions are straightforward and map exactly to a card:
Whereas many other aspects of the story are fuzzier, and the questions are also fuzzier to match:
End Game: Players continue using cards to question each other for up to 10 rounds. After 10 questions have been asked of each player, go around and have each player summarize the other player’s story.
If a player summarizes in fewer than 10 rounds and earns the full score, award them a 5-point bonus. Typically, each player gains 10 points for summarizing the story correctly at the end of 10 rounds, though the judges can decide to award less than full points. A player who guesses early but does not earn the full 10 points gets no bonus points. A player who doesn’t earn at least 7 out of 10 points instead gets ZERO points. Keep playing for multiple games, or until one player reaches 50 points.
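The scoring rules above can be sketched as a small function (the function and parameter names are mine for illustration, not part of the game materials):

```python
def score_summary(rounds_used, points_awarded):
    """Score one player's end-of-game summary.

    rounds_used: number of question cards the player used (max 10).
    points_awarded: 0-10 points granted by the judge(s) for the summary.
    """
    # Fewer than 7 of 10 points means the summary scores nothing at all.
    if points_awarded < 7:
        return 0
    total = points_awarded
    # The 5-point early bonus only applies to a full-score summary;
    # an early guess that falls short of 10 points earns no bonus.
    if rounds_used < 10 and points_awarded == 10:
        total += 5
    return total
```

For example, a perfect summary after 8 rounds scores 15, a correct summary after all 10 rounds scores 10, a partially correct one scores whatever the judges award (7-9), and anything below 7 scores zero.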
The complete set of question cards for testing/playing
Some questions are in testing phase, such as the one where you can design your own question during the story listening process…
Here is the PDF if you want to download and print the set:
If you think this is cool and want to be part of the team that builds tools to analyze these stories, we’re accepting applications for a Big Data Scientist in Training in the month of May, 2013. You would work with GlobalGiving and FeedbackLabs.org.
This project is about putting a face on the people affected by GlobalGiving and its partners’ work. So I decided it was time to do that. Literally:
Any suggestions on what kind of faces to put on the green and yellow people? Comment below.