I’ve been keeping my nose to the grindstone and my shoulder to the wheel building a search engine that will give you instant perspective on the quality of the stories fetched. Here are some examples of how it defines the “quality” of a narrative, in the sense of: would these stories be useful for evaluation purposes?
(1) Outcome: Was the story about good things or bad things happening? Are all the stories your search found positive? By comparing them with the other stories each storyteller told, you can get a feel for how often storytellers share negative stories in general (not very often, as it turns out).
(2) Was the story an intimate personal narrative or an impersonal observer report (one where the organization is the main character, and the storyteller is absent)?
(3) Do the stories cover a broad or narrow timeframe?
(4) Are stories from both men and women in the set? And are people of all ages represented?
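The four checks above can be sketched as a single function. This is a minimal illustration, not the search engine’s actual code; the field names (`outcome`, `perspective`, `date`, `gender`, `age`) are hypothetical stand-ins for whatever the real story records contain:

```python
from collections import Counter
from datetime import date

# Hypothetical story records -- field names are illustrative,
# not the search engine's actual schema.
stories = [
    {"outcome": "positive", "perspective": "personal",
     "date": date(2010, 3, 1), "gender": "f", "age": 34},
    {"outcome": "negative", "perspective": "observer",
     "date": date(2012, 9, 1), "gender": "m", "age": 19},
    {"outcome": "positive", "perspective": "personal",
     "date": date(2011, 6, 15), "gender": "m", "age": 55},
]

def quality_profile(stories):
    """Summarize the four diversity checks for a story collection."""
    outcomes = Counter(s["outcome"] for s in stories)
    perspectives = Counter(s["perspective"] for s in stories)
    dates = [s["date"] for s in stories]
    ages = [s["age"] for s in stories]
    return {
        # (1) outcome balance: what share of stories are negative?
        "share_negative": outcomes["negative"] / len(stories),
        # (2) perspective: personal narrative vs. observer report
        "share_personal": perspectives["personal"] / len(stories),
        # (3) timeframe covered by the collection
        "timespan_days": (max(dates) - min(dates)).days,
        # (4) demographic mix
        "gender_mix": dict(Counter(s["gender"] for s in stories)),
        "age_range": (min(ages), max(ages)),
    }

profile = quality_profile(stories)
```

A collection where `share_negative` is near zero, or where `age_range` is narrow, flunks the checks above even before you read a single story.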
There are more sophisticated ways of measuring the quality of narratives, and I will incorporate them into this tool as I get time to write the code. But I consider this an accomplishment because the analysis is done instantly on 44,000 stories. And looking at the diversity of topics in a set of stories also reveals when an organization coached the storytellers in their community to tell the same story many times. For now, here are some examples of the beta version:
TYSA has a smattering of stories across a broad timespan (years), and nearly all of the storytellers are under 21. Most are male.
We only have 14 stories that mention Retrak, but even with this small sample, you can see that children are the storytellers and they provide both personal narratives and organization-focused stories, though all are about success.
Retrak helps street children. So you can compare Retrak stories with hundreds more about street children:
And if you look at the text/phrases analysis of these stories, you’ll find Retrak makes the top ten list:
Shallow Wells International Movement (SWIM) was an active story-collecting organization, but the quality of these stories was not very good. This tool presents a very different picture of the SWIM stories, compared to almost any other search:
When you compare the SWIM stories to 1811 more stories that are about lack of water, there is no comparison. The lack of water stories have all the features of a good narrative data set. Yes, these stories are a QUANTIFIABLY reliable picture of what Kenyans and Ugandans think about water, and the lack thereof:
- They contain the full spectrum of positive to negative outcomes.
- They have a better than average spread of perspectives, ranging from intimate personal narratives to organization-centric stories.
- The events in these stories cover a broad timeframe, from Jan 2010 to Sep 2012 (this month).
- Also important – the reference story collections (shown in orange below each plot) match up with the blue upticks in the results. This is an instant benchmark, and can be used to detect whether the stories are trustworthy. Notice how much less the blue and orange match up for the SWIM stories.
- These stories are mostly from middle-aged men, but women and people of many ages are also represented.
- They come from many places, with the top ten shown:
Also note that only two phrases trend among the other stories told by these same ‘lack of water’ storytellers: pay school fees and world food programme.
You will see that a lot in this data. No matter what people talk about, the need for money to pay children’s school fees comes through loud and clear.
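The text/phrases analysis mentioned above can be approximated with a simple bigram count. This is a toy sketch of the idea, not the tool’s actual implementation, and the story texts below are made up for illustration:

```python
from collections import Counter

# Toy story texts -- illustrative only, not real storyteller data.
texts = [
    "we need to pay school fees for the children",
    "the world food programme delivered maize",
    "parents struggle to pay school fees",
]

def top_phrases(texts, n=2, top=5):
    """Count word n-grams across story texts -- a crude stand-in
    for the tool's text/phrases analysis."""
    counts = Counter()
    for t in texts:
        words = t.lower().split()
        counts.update(" ".join(words[i:i + n])
                      for i in range(len(words) - n + 1))
    return counts.most_common(top)

result = top_phrases(texts)
```

Phrases that recur across many storytellers (like “pay school fees”) bubble to the top; phrases that appear in only one story sink to the bottom of the list.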
What does the least trustworthy story collection look like?
Turece is an organization in Kakamega that obviously tried to flood the storytelling project with self-reports. The lack of diversity, and therefore quality, is apparent:
And there are no other perspectives provided from any of these Turece storytellers:
And when combined with the story bubblizer, you can pretty much find and exclude any set of unreliable narratives from your collection:
These stories are about slum schools and pupils. Searching for these (instead of for Turece) gives you a proper perspective and context on the importance of supporting informal schools in slums:
It also tells you which organizations are making a difference on informal schools, though most of these stories are about Nairobi slums and appear to be from older youth and parents, not children:
The broader perspective:
Qualitative data gets disparaged by the monitoring and evaluation community because anecdotes are subjective, and the conclusions drawn from them are rarely representative or reproducible. Here I present the means to fundamentally flip the equation. The workflow for qualitative analysis should be:
- Search for a set of narratives (from among our 44,000)
- Filter out unreliable stories and analyze a COLLECTION of hundreds of narratives that are quantitatively diverse.
- Share your interpretation and link to the raw data, so anyone else can analyze your stories and check your work.
- Publish these story reliability measures (and eventually, reference an overall reliability score that summarizes dozens of characteristics of the collection in one number).
- Start asking other people who rely on closed, non-benchmarked internal “quantitative” indicators how they know they aren’t fooling themselves on the data robustness question. (Quantitative indicators need their own bullshit detector, but I don’t believe I can build one for numbers the same way I built one for narratives)
You see, the equation is flipped: Stories can be more reliable than numbers when you can tell who’s lying.
P.S. If you think this is cool, quit school and teach yourself Python – a simple and powerful language for rapidly building prototypes, testing ideas, and solving problems without the fuss of being a real techie.