Numbers mislead us by feeling more accurate than narratives, which can encode far more of reality. This is a great addition to the RealBook for Story Methods, found here: https://chewychunks.wordpress.com/2012/05/23/about-the-realbook-manual-for-story-based-monitoring-evaluation/
First, Evans’s framing equates ‘hard data’ with ‘statistics,’ as though qualitative (text/word) data cannot be hard (or, by implication, rigorously analysed). Qualitative work – even when it produces ‘stories’ – should move beyond mere anecdote (or even journalistic inquiry).
Second, it suggests that the main role of stories (words) is to dress up and humanize statistics – or, at best, to generate hypotheses for future research. This seems both unfair and out of step with growing calls for mixed methods that take our understanding beyond ‘what works’ (average treatment effects) to ‘why’ (causal mechanisms) – and ‘why’ is probably fairly crucial to decision-making (Paluck’s piece is worth checking out in this regard).
I’ve expanded on this idea before. Both numbers and paragraphs of text fall on a continuum of data:
Knowledge is an encoding problem. The complex truth is easier to decode when there is more data, which often means working with higher-resolution, greater-bit-depth sources along this continuum:
surveys < photos < narratives < audio < video
These encoding strategies run from carrying the least information to the most, in the sense of Shannon information theory. I’ve explained this theory and coherence testing of strategies in more detail here and demonstrated it with this tool. Our aim should be to improve technology so that it captures and encodes behavior directly as a big data set – moving away from “quantitative indicator” approaches that are too slow and expensive to support the rapid iteration required for bad ideas to evolve into good ones under selection pressure.
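The Shannon intuition behind the continuum can be sketched in a few lines of Python. This is a minimal illustration, not the author’s tool: it uses character-level empirical entropy as a rough proxy for information content, and the sample strings are invented for the example.

```python
# Illustrative sketch: Shannon's H = -sum(p * log2 p) gives bits per symbol;
# multiplying by length gives a rough total-bits estimate for a record.
from collections import Counter
from math import log2

def total_bits(text: str) -> float:
    """Empirical entropy (bits/char) of `text`, times its length."""
    counts = Counter(text)
    n = len(text)
    bits_per_char = -sum((c / n) * log2(c / n) for c in counts.values())
    return bits_per_char * n

# A closed-ended survey response vs. a free-text narrative (both invented):
survey_answer = "agree"
narrative = ("The clinic reopened in March after the community raised funds "
             "to repair the roof; mothers now walk less than an hour for care.")

print(total_bits(survey_answer))  # a handful of bits
print(total_bits(narrative))      # orders of magnitude more
```

Photos, audio, and video extend the same arithmetic: each step up the continuum multiplies the raw bits available for decoding the underlying situation.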
The problem is not the data – it is the one-dimensional tools (Excel, SAS, ANOVAs) we use to understand our world “rigorously.” Stop and think for a moment: is that the data’s fault, or is it our own fault for not having built the tools yet?
The NSA shows what you can accomplish in this realm if you set your mind to it. Since 2010 I’ve helped GlobalGiving move up the data-richness ladder from surveys to narratives, and I’ve built tools that demonstrate the power of “unstructured” qualitative information when it is analyzed at “big data” scale: djotjog.com/search or djotjog.com/bubbles.
*This blog post was also cross-posted on People, Spaces, Deliberation, where it was named one of the top 10 posts of 2014.

In a recent blog post on stories, following some themes from an earlier talk by Tyler Cowen, David Evans ends by suggesting: “Vivid and touching tales move us more than statistics. So let’s listen to some stories… then let’s look at some hard data and rigorous analysis before we make any big decisions.” Stories, in this sense, are potentially idiosyncratic and over-simplified, and may therefore be misleading as well as moving. I acknowledge that this is a dangerous situation. Still, a couple of things about the above quote are frustrating, intentional or not.