The 2-story rule provides instant benchmarking for any organization or social science question
The GlobalGiving Storytelling Project has collected over 44,000 stories from East Africa so far. In every case, we encourage young people to go out and interview their neighbors and peers, asking for two stories about a time when a person or organization tried to do something to help the community. We hope that the storyteller chooses both topics, but even if they are being coached to talk about a specific organization, the second story is often less biased.
We can use this internal-external reference frame to understand what matters to communities, on any subject. The output is visualized using our story search engine:
Anytime you search for something, it will show two sets of results.
So looking at these stories that mention USAID, there are clearly two clusters: positive stories and really positive stories. But as awesome as USAID must be, we can also see that everyone who talked about USAID in one story (the blue up ticks) was just as likely to tell a second, equally positive story (the orange down ticks).
Up until now, organizations have been evaluating their work without an internal reference point. The number of USAID success stories vastly outweighs the USAID failure stories. But when we use the storyteller’s 2nd story as a reference point, these USAID success stories are not more successful than the other things people wanted to talk about. I believe success and impact are easier to gauge when they are compared to something else. At the same time, being as good as something else is not a bad thing. It also means that the storytellers are not being manipulated into telling only positive stories about the organization.
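The internal reference point described above can be sketched as a simple paired comparison. This is a minimal illustration with hypothetical sentiment labels (the function name and the +1/-1 scoring are my own, not the project's actual scoring):

```python
# Sketch of the "2-story" internal benchmark with hypothetical data.
# Each storyteller contributes one story mentioning the organization and
# a second, unrelated story; both are scored +1 (success) or -1 (failure).
def two_story_benchmark(pairs):
    """pairs: list of (org_story_score, reference_story_score) tuples."""
    n = len(pairs)
    org_positive = sum(1 for org, ref in pairs if org > 0) / n
    ref_positive = sum(1 for org, ref in pairs if ref > 0) / n
    # A large gap suggests the org-related stories are unusually positive
    # relative to what the same storytellers say about everything else.
    return org_positive, ref_positive, org_positive - ref_positive

pairs = [(1, 1), (1, 1), (1, -1), (-1, 1), (1, 1)]
org, ref, gap = two_story_benchmark(pairs)
```

A gap near zero, as in the USAID example, means the organization's stories are about as positive as everything else the same people chose to talk about.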
Since it appears that the success/failure aspect of stories really isn’t that different, we could actually look at the main words used in these various groups to get a feel for what could lead to success and failure:
- Top Left: wordle of USAID success stories
- Top Right: wordle of USAID failure stories (33 of 216)
- Bottom: wordle of a reference set of stories, not mentioning USAID, from the same group of storytellers.
At this level of analysis, which is just a brief introduction to textual analysis, you can see that USAID stories seem to emphasize children, food, and organization (perhaps a local partner?). USAID failure stories emphasize HIV/AIDS, children, community, and water. The non-USAID reference stories emphasize children, people (everybody talks about people and children actually), community, school, water, and slums. I’ll be working on extracting particularly interesting phrases from each of the 4 quadrants of these stories in the next version of the story search tool.
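The word-frequency analysis behind a wordle is straightforward to sketch. This is a crude illustration with an invented stopword list and made-up example stories, not the actual pipeline used by the story search tool:

```python
from collections import Counter
import re

# A tiny, illustrative stopword list; a real one would be much longer.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "in", "is", "was", "for"}

def top_words(stories, k=5):
    """Crude wordle-style term frequency over a group of stories."""
    counts = Counter()
    for story in stories:
        for word in re.findall(r"[a-z']+", story.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(k)

# Hypothetical "success story" group:
success = ["the children received food from the organization",
           "food and water for the children"]
top = top_words(success, k=3)
```

Running the same function over the success, failure, and reference groups gives the three word sets compared above.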
These ARE the droids you were looking for…
When every search includes a set of stories that are not about what you searched for, but come from the same people, you have some context as you read the stories the search returned. Here are some examples where the reference stories have the same distribution as the search stories:
Kituvo Mobile Aids Organization
In these three examples, the plots of the search-related and reference stories mirror each other, meaning there is no meaningful difference between the groups. It also means the organization is not likely to be manipulating the stories.
Here are more interesting examples where the reference stories are either missing or less positive than the organization-related stories:
Shallow Wells Intl Movement (SWIM)
Using the “other” story to detect whether an NGO is influencing storytelling
After looking at a variety of stories, the success and failure benchmarking appears to follow a similar bell-curve shape in collections where the data can be trusted. Later, I discovered that the pronoun patterns in stories also differ between the diverse, reliable collections and the NGO-searching-for-its-own-marketing-material collections. Reference stories are important for helping us determine whether the whole collection is trustworthy.
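One way to sketch the pronoun-pattern idea is to profile how often a collection uses first-person versus third-person pronouns. The pronoun lists and the example below are my own illustration, not the project's actual detector:

```python
import re

FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}
THIRD_PERSON = {"he", "she", "they", "them", "his", "her", "their"}

def pronoun_profile(stories):
    """Fraction of pronouns that are first- vs third-person across a collection.

    A collection dominated by one voice may warrant a closer look; any
    threshold for 'suspicious' would be hypothetical and need calibration.
    """
    first = third = 0
    for story in stories:
        for word in re.findall(r"[a-z]+", story.lower()):
            if word in FIRST_PERSON:
                first += 1
            elif word in THIRD_PERSON:
                third += 1
    total = first + third
    return (first / total, third / total) if total else (0.0, 0.0)

stories = ["we built our well together", "they helped them with their school"]
first_frac, third_frac = pronoun_profile(stories)
```

Comparing this profile between the searched stories and the reference stories is one cheap signal of whether a collection was gathered organically or curated as marketing material.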
This tool is no longer available, but there is a more powerful one available that simply gives you an overall reliability score (on a 0-to-100 scale) instead of asking you to interpret the charts. I’ll try it out.