A recent Huffington Post article brought an interesting tool to my colleague Nick’s attention. Collusion helps you spy on the companies that are colluding to spy on you as you surf the internet. For example, every time you check the weather all of these sites are informed about you:
A list of websites that receive information from weather.com is shown on the left. About half are red and crossed out because Collusion (the Chrome plugin) blocked their access.
As you browse, collusion creates a network map showing how the different sites you visit talk to each other. You can hover over any node in the network to see a site’s connections and automatically block the transmission of data to known tracking sites like Google ad services, Doubleclick.net, etc. As you sift through your browsing’s connections, it quickly becomes clear that not all sites are created equal when it comes to tracking your metadata.
Our insight was that this tool could serve another purpose. You see, Nick and I are responsible for building up GlobalGiving’s database on organizational behavior and curiosity, which is used to measure each organization’s performance in a real-time, comprehensive way. If we could sort all the organizations in the world into “good” and “bad” groups based on their habits, such as being responsive to the community they serve, demonstrating a tendency to learn from mistakes and remember what they’ve tried before (knowledge management), or making effective use of free performance tools in their daily work (agility), we could help more money reach better NGOs, and ultimately improve more lives with the same amount of resources.
This is the same as saying “we’re going to make the whole aid world more efficient,” but when we say it, we mean it – because we have a way to do what we say. In the “big data” era, information will be used to make thousands of little evidence-based decisions that will improve the system overall.
But on to specifics. What do organizations’ websites reveal about their agility? A lot.
Look at these organization websites:
Each of these has a hundred-million-dollar budget. So how much effort do they make to optimize learning about visitors to their homepages?
I see a correlation between how much an organization focuses on public donations (versus government or private support) and whether it uses free analysis software, such as google analytics. Of the ten organizations shown above (which are close to a top ten list of worldwide organizations by size) only Save the Children, Care, and World Vision made a serious effort to learn from their website traffic. Five out of ten at least have some kind of basic (free) analytics (google-analytics and/or google tag manager).
And for the other half that do not, the absence is telling. These organizations don’t really need public support to survive, and are also (in my opinion) less accountable to community feedback because they are “too big to fail” in the aid world:
- World Bank
- United Way
- Heifer International
Types of 3rd party data collection sites
- vmmpxl – quantcast web traffic demographics
- mxpnl – mixpanel is like google analytics, but you pay for it and it offers more features

Visualization or dissemination
- anything in red (advertising)

Faster web loading and cloud data
- rackcdn – rackspace cloud storage

Social Media Plugins
- twimg – twitter

Design iteration and testing (curiosity)
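The categories above boil down to a simple lookup from third-party domain to purpose. Here is a minimal Python sketch of that idea; the domain-to-category mapping is illustrative, pieced together from the examples in this post, not an exhaustive tracker database:

```python
# Map third-party request domains to the categories described above.
# These entries are illustrative examples, not a complete list.
TRACKER_CATEGORIES = {
    "quantserve.com": "data collection (quantcast web traffic demographics)",
    "mxpnl.com": "data collection (mixpanel analytics)",
    "doubleclick.net": "advertising",
    "rackcdn.com": "faster web loading and cloud data (rackspace CDN)",
    "twimg.com": "social media plugin (twitter)",
    "optimizely.com": "design iteration and testing (curiosity)",
}

def categorize(domain: str) -> str:
    """Return the category for a third-party domain, or 'unknown'."""
    for known, category in TRACKER_CATEGORIES.items():
        # Match the domain itself or any of its subdomains.
        if domain == known or domain.endswith("." + known):
            return category
    return "unknown"
```

With a map like this, each node Collusion draws can be labeled by what kind of learning (or tracking) it represents.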
For comparison, I took snapshots of GlobalGiving and various other online giving marketplaces or organizations we partner with:
Clearly, all of these organizations take their web traffic seriously. GlobalGiving, DonorsChoose, Kiva, BetterPlace, and Razoo each use at least one analytics tool and at least one cloud hosting tool to speed up website load times, and many also use an iterative design and testing tool like optimizely.
The surprise here is that GiveDirectly (the recent darling of the aid world and the media world) does nothing to learn about their traffic. It makes me question how much of a learning focus their organization has internally.
And that is what this is all about. I believe that organizations stamp an imprint of their internal learning on their external websites.
Curious, learning, experimenting organizations use web-based tools that help them achieve their goals (and leave a trace for us to track).
Large bureaucratic “stick-in-the-mud” organizations do not use any of these tools, leave no trace of their learning, and thus are probably not focused on learning.
Web footprints for a few randomly chosen GlobalGiving partner orgs
These organizations are much smaller than the ones listed above, but they still use more learning tools than even the World Bank or BRAC does, so in my assessment they are probably learning more with fewer resources:
Five out of seven local GlobalGiving partner organizations use google analytics.
That’s a small sample, but a larger fraction of this group uses tools to learn about web traffic than the hundred-million-dollar orgs do.
These are just screenshots to show that there is useful data out there. Once you realize that the tools exist to ask old questions in a new (and more efficient) way, you simply need to write a little code to gather all the information. This will be my take-home message at the Georgetown University master’s program class I’m teaching this week:
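That “little code” could look something like the Python sketch below: fetch a homepage and scan its HTML for tool signatures. The signature strings are my own assumptions about what each tool typically leaves in a page, not an official detection list:

```python
import urllib.request

# Hypothetical signatures: substrings whose presence in a homepage's
# HTML suggests a given tool is in use. Illustrative, not exhaustive.
SIGNATURES = {
    "Google Analytics": ["google-analytics.com"],
    "Google Tag Manager": ["googletagmanager.com"],
    "Mixpanel": ["mxpnl.com", "mixpanel"],
    "Optimizely": ["optimizely.com"],
}

def detect_tools(html: str) -> list[str]:
    """Return the names of tools whose signatures appear in the HTML."""
    html = html.lower()
    return [tool for tool, sigs in SIGNATURES.items()
            if any(s.lower() in html for s in sigs)]

def audit(url: str) -> list[str]:
    """Fetch a homepage and report which learning tools it appears to use."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return detect_tools(html)
```

Run `audit()` over a list of NGO homepages and you have a crude, automated version of the screenshot exercise above.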
Graduate School should help you learn how to ask better questions and to recognize when the status quo of information is insufficient to fix the problem.
We live in a world that clings to the “myth of evidence”: we think our leaders make decisions based on weighing evidence, but they do not. They never have. Throughout history they have instead made experience-based decisions, limited by their own wisdom and prior failures. This is about to change.
Decisions used to be made using tiny scraps of information, because that is all that was available. But this decade is the turning point when evidence becomes cheaper to aggregate and interpret than the cost of making decisions without it. Some giants will fall and others will rise to take their places, all because they understand the new calculus of “big data.”
And when the dust clears, a new kind of democracy will be possible* where in the past it was merely theoretical: policy decisions will reflect all people’s opinions where choices are a matter of preference, or will rest on sound science and observation of human behavior on a macro scale (like Isaac Asimov’s psychohistory idea) where policy depends on truth rather than preference.
*(Though this kind of democracy will be made possible, it will almost certainly be tried somewhere outside of North America or Europe first. My guess: somewhere in the Middle East, where people want real democracy.)