As the storytelling project enters its sixth year, I’d like to share this brief overview of how it works and why it matters.
Imagine you asked 200 people about your programs, and this is a summary of who replied and how positive they were:
How can you be sure their stories are reliable feedback?
When can they support redesign, expansion, advocacy, and funding?
(Hint: There are alternatives to statistical random sampling.)
We chose to ask this question tens of thousands of times in East Africa:
“Talk about a time when a person or organization tried to help someone or change something in your community.”
You can choose any question, really. In 2003 I joined a simpler version of this on the streets of Pennsylvania. Our question was “What are your hopes and dreams for the future, and how has violence affected you in your life?” Stories from that listening project had many of the same qualities as the GlobalGiving version.
This is a listening process that works.
The main thing left to simplify is automatically converting audio conversations into text, so that they can be archived, analyzed, and synthesized into common themes on a large scale.
I also wish there were a simpler way for the people who share stories to instantly join a conversation about the themes in those stories. But present technology doesn’t offer one.
Why it works: Big data is about aggregation over precision.
Storytelling is about the emergence of a coherent message from many individual perspectives, a process best described by the Javanese concept of “djotjog.” In community story making, no one’s perspective is the truth. Everyone’s shared parts of the story become the truth.
And now, computers help us make sense of thousands of stories at once.
I spent a few years working on algorithms that could synthesize the main ideas in any set of stories. In the end the simplest were also the best:
- a demographic breakdown of the people who told stories and how they felt about the outcomes
- a self-organizing network map of words and phrases that trend in stories, what I call a wordtree.
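The wordtree idea can be sketched as counting which adjacent word pairs trend together across a set of stories; the most frequent pairs become the branches. This is a minimal illustration with hypothetical names, not the project’s actual code:

```python
import re
from collections import Counter

# A few common words to ignore; a real analysis would use a fuller list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was"}

def tokenize(story):
    """Lowercase a story and split it into words, dropping stopwords."""
    words = re.findall(r"[a-z']+", story.lower())
    return [w for w in words if w not in STOPWORDS]

def wordtree_edges(stories, top_n=3):
    """Count adjacent word pairs across stories; the most frequent
    pairs form the branches of a simple self-organizing word tree."""
    pairs = Counter()
    for story in stories:
        words = tokenize(story)
        pairs.update(zip(words, words[1:]))
    return pairs.most_common(top_n)

stories = [
    "The school fees were too high for the family",
    "School fees kept children out of class",
    "High school fees affected the whole community",
]
print(wordtree_edges(stories, top_n=1))
# → [(('school', 'fees'), 3)]
```

Even this toy version surfaces “school fees” as the shared theme without anyone reading all three stories.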
Comparing two similar collections of stories can quickly reveal the main differences. We enable organizations to analyze stories using a common frame of reference about the world we all share. Our reference data set had (at last count) over 65,000 stories from East Africa, Japan, US, and the UK. We try to keep the analysis simple to read, so that the stories can speak for themselves.
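One way to sketch that comparison, assuming each collection is just a list of plain-text stories (the function names and sample data here are illustrative):

```python
from collections import Counter

def word_freq(stories):
    """Relative frequency of each word across a collection of stories."""
    counts = Counter(w for s in stories for w in s.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def biggest_differences(stories_a, stories_b, top_n=5):
    """Words whose relative frequency differs most between two collections."""
    fa, fb = word_freq(stories_a), word_freq(stories_b)
    diffs = {w: fa.get(w, 0.0) - fb.get(w, 0.0) for w in set(fa) | set(fb)}
    return sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

collection_a = ["the clinic gave us clean water", "water changed the village"]
collection_b = ["the school gave children books", "books changed the village"]
print(biggest_differences(collection_a, collection_b, top_n=2))
```

Here the two most distinctive words (“water” vs. “books”) immediately hint at what each community talked about most, which is the kind of quick contrast the reference data set makes possible at scale.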
We learned that the pronouns in a story can reveal more than the facts.
One thing pronoun analysis revealed is that almost no organizations are sharing unfiltered stories from the people they serve. They are speaking for the people, rather than giving the people a voice (and dignity) by letting them speak for themselves. How ironic, when these organizations often claim to empower people!
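A rough illustration of the pronoun idea, assuming plain-text stories (the pronoun lists and function name are my own, not the project’s):

```python
import re

FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}
THIRD_PERSON = {"he", "she", "they", "them", "his", "her", "their"}

def narrative_voice(story):
    """Label a story by its dominant pronouns: first-person suggests
    someone speaking for themselves; third-person suggests an
    organization speaking about the people it serves."""
    words = re.findall(r"[a-z]+", story.lower())
    first = sum(w in FIRST_PERSON for w in words)
    third = sum(w in THIRD_PERSON for w in words)
    if first > third:
        return "first-person"
    if third > first:
        return "third-person"
    return "mixed"

print(narrative_voice("We dug our own well and it changed my life"))
# → first-person
print(narrative_voice("They received food and she thanked her helpers"))
# → third-person
```

Run over an organization’s published stories, a tally like this quickly shows whether people are speaking for themselves or being spoken for.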
Seeing two perspectives side-by-side can reveal a lot.
What’s the point of school? It depends on who you ask.
Street children: the outcome depended on what role the storytellers thought they played in the story. Those who acted saw more positive outcomes; those merely affected by events in their stories were less positive.
And our language reveals our own bias in framing problems.
Why did we develop story-centered learning?
We know that traditional evaluations don’t work.
- Time: By the time you know “for sure” it’s too late to change anything.
- Money: Most projects don’t have enough budget to succeed to begin with, and evaluations rob projects of badly needed funding.
- Not generalizable: Even well-designed randomized controlled trials (RCTs) often fail to predict what happens in a different time and place with the same intervention. And people are not lab rats to be denied help as a “control.”
- Bias: Even when everything goes perfectly, there’s still a bias in how we interpret what we evaluate. So it’s best to let the people define the problem and tell us directly whether a solution works for them.
We want to copy the success of the Framingham Heart Study in philanthropy.
In 1948 researchers began tracking health records from all participants in the town of Framingham, Massachusetts. This was an observational study. They did not formulate causal theories or test specific hypotheses.
Since then, every causal link that matters to treating heart disease has been discovered in this study first, then validated in subsequent experiments. Without the Framingham Heart Study data, we would waste far more time and money searching for solutions.
- link between smoking and heart disease
- link with cholesterol
- high blood pressure
- sleep apnea
- depression spreads through social networks
- abdominal fat
- leptin protects against Alzheimer’s, dementia
- atrial fibrillation (a-fib) is a major risk, and an a-fib gene was found
- aldosterone and hypertension
- cholesterol risk
- high blood pressure and stroke
- isolated underlying genes
Longitudinal studies work!
What the listen-act-learn cycle looks like with story data:
Why we still do it:
- A better way to gather evidence.
- Faster, cheaper, and more powerful than a “quantitative indicators” approach.
- Data is extensible and comparable across domains.
- We can detect and correct bias with narratives.
- Self-emerging view: It always has its finger on the pulse of the community, so to speak.
- Gives people dignity and voice – listening is the starting point, but the end-point is letting them define the agenda in development.