I was having tea with a professional goatherder the other day (who else would you expect to meet in Portland?) and I asked her,
“So tell me what you have learned from the goats?”
“Well,” she began, herding her thoughts together, “Whenever I want them to clear some part of a field, I can’t make them do what I want. The goats will always cluster at one edge of the field near the fence and work their way around the perimeter from left to right.”
(For context, Briana gets paid to set up a fence around some acres of land, bring in her herd of goats, and have them clear all the brush and weeds from it.)
My jaw dropped as she explained the pattern her goats naturally fall into for eating the field. The pattern turns out to be the same as what Melanie Mitchell’s genetic algorithm predicted as the best strategy for a very similar task.
This approach to writing computer algorithms, which Melanie Mitchell calls ‘genetic algorithms’, copies evolution. Instead of programming the solution itself, it takes the problem and makes a game out of it. Games have a set of possible “moves” and a way to “score” the individual players. Some combinations of moves perform better than others. Winners are mushed together (AKA python-code-sex). A few (about 1%) are mutated, and these winning programs play the game again. The scores get better over successive generations, though not in every generation; it might take 100 generations of no improvement before the code has a breakthrough. Here is an example from Complexity: A Guided Tour by Melanie Mitchell:
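The loop described above can be sketched in a few lines of Python. This is a toy version, not Mitchell’s code: it evolves a bit string toward all 1s, and the genome length, population size, parent cutoff, and mutation rate are all invented for illustration:

```python
import random

# Toy genetic algorithm: evolve a 32-bit genome toward all 1s.
# The "score" of an individual is simply how many 1s it contains.
GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01   # the "few (about 1%) are mutated" step

def score(genome):
    return sum(genome)

def crossover(mom, dad):
    # Winners get mushed together: splice the parents at a random point.
    cut = random.randrange(GENOME_LEN)
    return mom[:cut] + dad[cut:]

def mutate(genome):
    # Flip roughly 1% of the bits.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve(generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Breed the next generation from the top 20% of scorers.
        parents = sorted(population, key=score, reverse=True)[:POP_SIZE // 5]
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]
    return max(score(g) for g in population)
```

Run `evolve()` a few times and the best score climbs toward the maximum of 32 over successive generations.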
So imagine that the room full of empty beer cans that Robby the robot needs to clean looks like this:
Some sets of moves do better than others, regardless of where in the room the trash is lying. The robot has five possible moves and a very limited set of information to work with, fully summarized in this chart under “Actions”:
So the “logic” that the robot can encode as a move depends on what is in the adjacent squares and on whether there is a can in the current square. This “situation” is all the input and feedback the robot gets. Note that the robot has no memory of where it has been, only of what is right next to it in the room.
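Concretely, each of the five sensed sites (the current square plus north, south, east, and west) holds one of three values: empty, can, or wall. That gives 3^5 = 243 possible situations, and a complete strategy only has to assign one action to each. A sketch of that bookkeeping (the names and ordering here are my own, not Mitchell’s):

```python
# Each sensed site holds one of three values, so the five sites
# define 3**5 = 243 distinct situations, numbered 0..242 in base 3.
EMPTY, CAN, WALL = 0, 1, 2

def situation_index(north, south, east, west, current):
    """Treat the five sensor readings as digits of a base-3 number."""
    index = 0
    for value in (north, south, east, west, current):
        index = index * 3 + value
    return index

# A complete strategy is then just a 243-entry table of actions, and the
# robot looks up its next move with strategy[situation_index(...)].
```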
The “individual” robot that plays the game of cleaning this room is nothing more than a string of numbers encoding moves (in the book, the genome holds one entry for each possible situation, and each robot is scored over a cleaning session of 200 moves). In the first generation the string is completely random, but in later generations these strings will be combinations drawn from previous robots that performed better than the rest of the pack (say, the top 20%).
So the “346” at the beginning of Individual #200 means “move up, move left, pick up can,” and so forth for the whole string. The overall design of any genetic algorithm (what I prefer to call evolutionary programming) is summarized here:
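The reproduction step of that design, applied to Robby’s strategies, might look like the sketch below. It is my own minimal version, not Mitchell’s code; in the book each genome has one entry per possible sensed situation and there are seven actions to choose from (four directional moves, a random move, staying put, and picking up):

```python
import random

N_SITUATIONS = 243   # one genome entry per possible sensed situation
N_ACTIONS = 7        # the book's action table: 4 moves, random move, stay, pick up
MUTATION_RATE = 0.01

def random_strategy():
    # A first-generation individual: every entry is a random action.
    return [random.randrange(N_ACTIONS) for _ in range(N_SITUATIONS)]

def reproduce(mom, dad):
    # Single-point crossover: the child takes mom's head and dad's tail.
    cut = random.randrange(N_SITUATIONS)
    child = mom[:cut] + dad[cut:]
    # Mutation: about 1% of the entries become a fresh random action.
    return [random.randrange(N_ACTIONS) if random.random() < MUTATION_RATE
            else gene for gene in child]
```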
So how well did the best robot strategy do? Melanie first wrote one “smart” program to have something to compare the evolving programs against. It moves randomly (but without backtracking), picks up a can if one is present in its square, and if there is a can in an adjacent site, moves there and picks that one up. Her “smart” program got a score of 346. Anything higher means the program performed better than what a human intelligently designed.
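That benchmark can be paraphrased in code. This is my reading of her description, not her implementation, and the action names are made up; the “no backtracking” detail is omitted for brevity:

```python
import random

EMPTY, CAN, WALL = 0, 1, 2  # contents of a sensed square

def benchmark_action(north, south, east, west, current):
    """Paraphrase of the hand-designed benchmark strategy: pick up a
    can if standing on one, otherwise step toward an adjacent can,
    otherwise wander randomly without walking into walls."""
    if current == CAN:
        return "pick-up"
    neighbors = {"north": north, "south": south, "east": east, "west": west}
    cans = [d for d, v in neighbors.items() if v == CAN]
    if cans:
        return "move-" + cans[0]
    open_dirs = [d for d, v in neighbors.items() if v != WALL]
    return "move-" + random.choice(open_dirs)
```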
So what was the most evolved robot doing to clean the room? Melanie writes:
[The best] Robby moves east until he either finds a can or reaches a wall. He then moves north, and continues to circle the edge of the grid in a counterclockwise direction until a can is encountered or sensed.
Second, Robby skips over cans in the middle of the room. Later, the can he didn’t pick up on the last move acts as a marker so Robby can “remember” that there are cans on the other side of it. He goes on to pick up all the remaining cans in the cluster when the perimeter has been circled.
Left: a highly evolved solution. Right: Melanie’s intelligently (human) designed solution
(In her explanation below, GA is the robot ‘genetic algorithm’ and M is the default human-designed strategy with a score of 346):
And remember Briana’s goats? The ones she tried to direct to clear certain parts of the field before the parts near the fences she’d erected? Well, they actually do the same thing as the robot! Herd them as she might, they look for the fence and eat along the perimeter first, then the middle!
Goats and robots aren’t “smart” in the sense that the herd doesn’t consciously discuss its plan for eating, yet they still follow the optimal strategy because of evolution. What began as random movement became synchronized and directional because the goats that moved this way ate more food faster and avoided getting eaten by predators (herding behavior). All it took was a million years of goats needing to eat more efficiently in order to survive.
This is what I am here in Portland to learn this month. Melanie Mitchell teaches here at Portland State University, and I needed to isolate myself to learn something radically new, like (r)evolutionary programming. Watch it work:
You can see that this programming example is also taught at Duke University. And yet, in my 20 years of schooling, including a PhD in Neuroscience and two bachelor’s degrees in Chemistry & Biochemistry (with minors in psychology and biology), I never encountered this idea of copying evolution to solve real-world problems.
Evolving, not designing, social solutions and government policies
The beauty of evolution is that copying the EVOLUTION PROCESS frees you from the burden of believing you need to know the answers. Instead, you must become an expert at designing systems that learn. It turns out what matters is skill in making a “game” out of real-world problems, classifying the “moves”, and then figuring out what defines a “good score” or a “bad score” in that game.
Scientists call the gamification of social problems “modeling”, but I prefer to think of it as a game. Modeling is for control freaks who want to define everything and assign the importance of each variable in their model. Evolutionary programmers instead model the possible moves and guess at the best way to score results (aligning the scoring with the thing you actually want to happen). Good moves with a bad scoring scheme produce models whose outcomes are undesirable when applied to the real-world problem. Economists, for example, are excellent at defining great models with terrible “scoring”, which has helped earn the whole field the nickname “the Dismal Science”.
I’m interested in taking this idea and using it with data and language, to “evolve” more optimal ways of classifying sets of texts into a hierarchy. Stay tuned.
Read a book!
What enables individually simple insects like ants to act with such precision and purpose as a group? How do trillions of neurons produce something as extraordinarily complex as consciousness? In this remarkably clear and companionable book, leading complex systems scientist Melanie Mitchell provides an intimate tour of the sciences of complexity, a broad set of efforts that seek to explain how large-scale complex, organized, and adaptive behavior can emerge from simple interactions among myriad individuals.