Below are conclusions from five papers that may change the way you think about change. But first, the basics of evolution.
Evolution is the mechanism by which simpler designs become more complex. Small random changes are scored and retested; large leaps and “intelligent” decisions are absent. Evolution makes a population more diverse and resilient, better able to handle change. Computer programs modeled on evolution are interesting because they solve a whole domain of related problems instead of one specific problem.
These programs generate and compare a population of strategies for solving a problem. Each generation the program selects the best individuals from the population to be mated with each other, mutates a few, and retests the new population.
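That generate–select–mate–mutate loop can be sketched in a few lines. This is my own toy example (not code from any of the papers), using the classic “OneMax” fitness of counting 1-bits in a string:

```python
import random

# Minimal genetic algorithm sketch: individuals are bit strings,
# fitness is the count of 1-bits ("OneMax"). Illustrative only.
POP, GENES, GENS, MUT = 30, 20, 60, 0.05

def fitness(ind):
    return sum(ind)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    return [g ^ 1 if random.random() < MUT else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # select the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children            # retest the new population

best = max(pop, key=fitness)
print(fitness(best))
```

After 60 generations the best individual is at or near the optimum of 20 — nobody “designed” a solution; selection and mutation found it.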
One example: encoding the Mona Lisa in a tweet. Programmers were trying to capture the essence of the Mona Lisa painting in 140 characters, and a genetic algorithm produced the best results. It was a mindless trick: lay down random polygons and keep them only if the resulting image is closer to the Mona Lisa than before:
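The same “mindless trick” works on any target. Here is a hypothetical stand-in for the polygon renderer — a random string mutated one character at a time, keeping each change only if it gets no further from the target:

```python
import random

# Hill climbing toward a target string: mutate at random, keep the
# change only if it is no worse. A 1-D analogue of the polygon trick.
target = "MONA LISA"
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
guess = [random.choice(alphabet) for _ in target]

def distance(g):
    # number of positions that still differ from the target
    return sum(a != b for a, b in zip(g, target))

steps = 0
while distance(guess) > 0:
    i = random.randrange(len(guess))
    trial = guess[:]
    trial[i] = random.choice(alphabet)
    if distance(trial) <= distance(guess):   # keep only if no worse
        guess = trial
    steps += 1

print("".join(guess), steps)
```

No step in the loop “understands” the Mona Lisa (or the string); scoring plus retention is enough.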
Evolution and international development
Understand how to read a fitness landscape. You’ll be seeing more of them and fewer bar charts in the future.
The example of sexiness being a product of intelligence × (cuteness + kindness) is literally the dominant function determining which traits get passed on in human evolution. See OKCupid’s blog for details, and this post in particular.
Need more help reading fitness landscapes? Try the Beacon Center’s tutorial.
(1) In the real world, getting 40% of the way to the ideal result is the easy part; getting the final 60% covers much more rugged evolutionary “terrain.”
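Even on a perfectly smooth landscape the effect shows up; this toy simulation of mine (a simple hill climber flipping one bit at a time) counts how many mutation attempts it takes to get 40% of the way versus all the way:

```python
import random

# (1+1) hill climber on a 100-bit string: flip a random bit, keep it
# only if fitness improves. The first 40% of improvements come fast;
# the last stretch is slow, because random mutation rarely hits the
# few bits that are still wrong.
N = 100
ind = [0] * N
steps_to_40 = 0
step = 0
while sum(ind) < N:
    step += 1
    i = random.randrange(N)
    if ind[i] == 0:
        ind[i] = 1                      # keep only improving mutations
    if sum(ind) == int(0.4 * N) and steps_to_40 == 0:
        steps_to_40 = step              # record when 40% was reached

print(steps_to_40, step)
```

Typically the last 60% of the gains cost several times more attempts than the first 40% — and that is the *easy* case, with no rugged terrain at all.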
What this means for a complex task like ending poverty is that we are going to need to test and recycle our ideas faster and with greater diversity than we’ve ever done before.
(2) The NGO world learns faster and is more resilient to challenges when we don’t focus on just one goal. Specifically, switching between two related goals (student enrollment and improved test scores) amplifies the individuals in the population that are doing best on the more complex underlying goal (education).
This is the most important lesson for international development, and its ramifications are broad: sets of indicators are only useful when they measure fundamentally different things (orthogonal measurements), and funders that focus too narrowly are not helping themselves find solutions faster.
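A toy model of goal switching (my own construction, not the cited paper’s setup): each “NGO” has two skills, and quality on the composite goal is the weaker of the two. We can compare always selecting on one indicator against alternating the selection goal every few generations:

```python
import random

# Each individual is [enrollment_skill, test_score_skill] in [0, 1].
# The composite "education" goal is the weaker of the two skills.
def evolve(goal_for):
    pop = [[random.random(), random.random()] for _ in range(40)]
    for gen in range(100):
        goal = goal_for(gen)                     # which indicator to select on
        pop.sort(key=lambda x: x[goal], reverse=True)
        survivors = pop[:20]
        pop = survivors + [
            [max(0.0, min(1.0, g + random.gauss(0, 0.05)))
             for g in random.choice(survivors)]  # mutated offspring
            for _ in range(20)
        ]
    return max(min(ind) for ind in pop)          # best on the composite goal

fixed = evolve(lambda gen: 0)                    # always select on enrollment
switch = evolve(lambda gen: gen // 10 % 2)       # swap goals every 10 generations
print(fixed, switch)
```

Under the fixed goal, the unselected skill drifts at random; under switching, pressure lands on both skills, so the composite “education” score is usually higher.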
(3) A quality improvement collaborative (QIC or QuIC) outperforms randomized controlled trials, given the same time and resources. The paper is technical and difficult to decipher, but the authors model both under a variety of situations and find that the loss of rigor from doing distributed experiments is made up for by the diversity and rate of new ideas tested within the population – much like evolution itself.
This doesn’t argue for an end to RCTs, but for funding more quasi-experiments performed by a broader group of practitioners, especially where unique local contexts make RCT results irrelevant.
(4) Adopt some, but not all, of the good ideas you see others using. And don’t be so quick to abandon ideas that didn’t work; there is value in trying them again in a different place, time, and context. The flatter peak below on the right represents lower overall “fitness” of the population, because everyone is employing the same strategy under different conditions where it doesn’t work as well as it did originally. This is often observed when the lessons of an RCT are adopted in a different community, context, and time.
(5) We need a better system for international development that bakes in more of these ingredients for evolution. Without more diverse strategies, more experimentation, and faster iteration cycles, it will take thousands of years for us to “end poverty” – the most complex problem humanity has ever tried to solve. Take a look at the system today and how it needs to change in order for us to find solutions faster:
Discussion: What would “evolution” for the aid world really enable?
Ask yourself – is our global strategy of refining indicators and indices yielding results? Has the global corruption index reduced corruption? Are surveys of the problem detailed enough to feed an evolutionary fitness function? Most aren’t. Most efforts to clean up data slow down iteration, whereas other big data successes (outside international development) have come from just the opposite – performing faster iteration on stupider functions tolerant of much noisier data (ref).
We ought to spend more time experimenting, testing, adapting, and searching through what others have tried in a collaborative knowledge-sharing effort (also known as a QIC, or quality improvement collaborative). That time to search will only come when we reduce the time spent writing reports, grant proposals, and surveys that do not feed the collaborative knowledge base.
Bill Savedoff, who invited me to give a talk on this at the Center for Global Development last week, suggests we ask ourselves whether looking at “what works” in development is really going to teach us what works best everywhere. He likens it to studying a functional government in order to see what dysfunctional governments are lacking – a silly exercise, because a functional government evolves out of a dynamic, complex process. Yet it is amazing how much time and energy aid agencies put into establishing “guidelines” and identifying “best practices,” which are generally caricatures of any real program – like the “intelligent design” solutions to the Mona Lisa Twitter problem, when the one designed by evolution is so much clearer.
The Big Question
The biggest, deepest unasked question I gleaned from the discussion at CGD is this:
Whereas biological evolution is the process of random changes creating a few more “fit” individuals that propagate into the next generation in greater numbers, is it possible for the individual to “intelligently” select the traits for offspring to have, given knowledge about other successful (or failing) individuals competing in the same landscape?
Obviously, if we can be intelligent instead of random, it will take dramatically fewer iterations, analogous to drawing the Mona Lisa with polygons yourself instead of by random placement.
Intuitively, most people assume yes, but computationally the answer may be provably no. Fitness landscapes, and the complexity they encode, fall into a class of hard problems called NP-complete (formal proof here), which means no one can write an efficient deterministic algorithm that optimizes the search for a solution. That’s where computer science provides immensely useful insights for development theorists. In plain speak, when a problem is “NP-complete,” people think it is easier to solve than it really is in practice:
“Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known.”
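Subset-sum is a concrete NP-complete example of that asymmetry. In this toy sketch of mine (the numbers and target are arbitrary), verifying a proposed answer takes one line, while finding one means searching an exponential space of subsets:

```python
from itertools import combinations

# Subset-sum: does some subset of nums add up to target?
nums = [267, 961, 1153, 1000, 1922, 493, 1598, 869, 1766, 1246]
target = 4588

def verify(subset):
    # fast: checking a candidate is linear in its size
    return sum(subset) == target

def search():
    # slow: brute force over up to 2**len(nums) subsets
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if verify(combo):
                return combo
    return None

solution = search()
print(solution, verify(solution))
```

With 10 numbers the brute force is instant; with 100 it would outlast the universe, even though verifying any single answer would still be trivial. That is the verify-fast / find-slow gap the quote describes.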
My guess is that it’s easy to calculate fitness for any organization’s “poverty reduction” strategy once its characteristics have been defined and encoded, but that there exists no shortcut for predicting which strategy will perform best without testing it. The more rugged the landscape, the more complex the problem, and the more unpredictable the magic combination becomes. Curing a disease with a vaccine (solution: one dose to everybody) is simple; keeping 95% of the people on this planet earning a living wage is very hard, and depends on hundreds of interacting factors. Under these conditions, the “best practices” we should advocate are:
- Encourage massively parallel experimentation under a quality improvement collaborative (QIC or QuIC AKA Feedbacklabs.org), with a focus on encoding strategies more modularly. This means every NGO is an experimenter and an implementer.
- Limit RCTs to major questions that would shift or define the goal of the whole landscape from one aim to another.
- Reduce the required time per cycle from years to months, then to weeks. (By adopting Lean Startup principles)
- Capture and encode behavior as a big data set directly – moving away from quantitative indicator approaches that are too slow and expensive to comply with the rapid iteration requirements.
Part #4 forces us to think about knowledge as an encoding problem. The complex truth is easier to decode when there is more data, and that often means working with higher resolution / greater bit-depth sources:
surveys < photos < narratives < audio < video
These are the least to most efficient data recording strategies, based on Shannon information theory. I’ve explained this theory and coherence testing of strategies in more detail here and demonstrated it with this tool. Before you protest that audio and video are not really usable because we cannot feed the data bits into Excel or SAS and run ANOVAs – stop and think for a moment: is that the data’s fault, or is it our fault that we haven’t built the tools to make that easy yet? The NSA shows what you can accomplish in this realm if you set your mind to it. And since 2010 I’ve helped GlobalGiving move up the data-richness ladder from surveys to narratives and built tools that demonstrate the power of “unstructured” qualitative information when analyzed at a “big data” scale: djotjog.com/search or djotjog.com/bubbles.
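One rough way to see the richness gap is compression: compressed size approximates Shannon information content. This is my own illustration (the survey answer and narrative are made up) comparing a coded survey response to a free-text narrative about the same program:

```python
import zlib

# Compressed length in bytes as a crude proxy for information content.
survey_answer = "Q7=3"   # a single Likert-scale response
narrative = ("The clinic reopened in March after the community raised "
             "funds for a new roof; attendance doubled, but the nurse "
             "says the drug supply still runs out every second week.")

def info_bytes(text):
    return len(zlib.compress(text.encode("utf-8")))

print(info_bytes(survey_answer), info_bytes(narrative))
```

The narrative carries an order of magnitude more bits – detail about causes, timing, and remaining problems that the coded answer throws away before analysis even starts.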
And lastly, I suspect that the fitness landscapes for health, education, and economic problems in Africa would appear simple, bumpy, and totally mountainous, respectively. That explains why we need a QIC model for discovering what works in poverty reduction (ref): it is the closest thing to evolution we can do.