Our present “map” of AI is wrong
What is the map that most people have in their heads today when they think of AI? A good way to start answering a question like that is to do a Google Image search and see what the top results are. In the case of AI, the present map becomes obvious: Artificial Intelligence is a computer that is starting to think the way a human does, seeing patterns, making decisions, exercising judgment and intuition. AI is more than a tool; it is a thinking machine.
That is a powerful mental picture. It inspires hype. AI will eliminate waste. AI will double and triple efficiency. AI will match people to their ideal job, or to their true love.
It also inspires hysteria. AI will create mass unemployment. AI will replace democracy with technocracy. AI will go to war with humanity and destroy us.
So society has a powerful picture of “AI”. But not, I think, an accurate one. An accurate picture of AI today looks far more mundane than the demon-machines on Hollywood screens. An accurate picture of AI today is an equation.
All of that other stuff is a distraction. AI is math. It is complex math, to be sure. But it is still just math.
And this math may be very complicated, but in his book Rob draws a very simple picture of what it is doing:
For the most part, what these algorithms are doing is selecting: trying to get us closer, faster, to the “optimal” choice.
And the idea is that, if the choice that a human makes falls somewhere around…here, the choice that the AI makes is going to fall somewhere around…there. Somewhere closer to the optimal choice.
- In cars: The human hits the brakes here. The AI hits the brakes there.
- In finance: The human buys the stock here. The AI buys the stock there.
- In recruitment: The human hires this person. The AI hires that person.
- In our news feeds: The human editor shows you this story. The AI shows you that story.
Plus, AI choice-making—which is already ahead—is only going to get closer, and closer, and closer to the optimal choice.
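To make the picture concrete, here is a toy sketch in Python (my illustration, not code from Rob’s book; the scoring function, the candidate list, and the “human” heuristic are all invented for the example). “Selecting” just means picking the candidate that scores highest against some objective:

```python
import random

def score(choice: float) -> float:
    """A made-up objective: higher is better, peaking at choice == 7.0."""
    return -(choice - 7.0) ** 2

def human_pick(candidates: list[float]) -> float:
    """A rough heuristic: sample a handful of options, keep the best seen."""
    return max(random.sample(candidates, k=5), key=score)

def ai_pick(candidates: list[float]) -> float:
    """The algorithm: exhaustively select the highest-scoring option."""
    return max(candidates, key=score)

choices = [i / 10 for i in range(101)]      # possible choices: 0.0 to 10.0
print("human picks:", human_pick(choices))  # lands somewhere near 7.0
print("AI picks:   ", ai_pick(choices))     # lands on 7.0 exactly
```

Everything hinges on that score function: change what counts as “optimal” and the AI’s choice moves with it.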
Now, this picture is already pretty powerful. It allows people to ask some important questions. Questions like: What do we mean by optimal? What data are we using? What assumptions are we making about what is important and what is not?
But this picture also captures one of the biggest problems with AI today, in business and society: people are trying to use AI to find optimal solutions. And worse, they think they’re finding them.
Optimal can also mean “Fragile”
But the real world is complex. It is changing very fast. So the optimal solution probably doesn’t exist. If it does exist, it doesn’t stay the optimal solution for long. To the extent that AI focuses our choices and behaviors on the optimal solution, we’re in trouble.
If we put too much pressure on the algorithms to only make optimal selections, then we’re going to end up with very fragile systems that lack diversity. There are already some big examples of this. Look at the US political system. You have Republicans who are being shown the best social media messages, the optimal messages, that will make them click or tap. You have Democrats who are being shown the best social media messages, the optimal messages, that will make them click.
Result? A more fragile public sphere.
In Rob’s book, one solution is to add a second dimension to this map. We need AI to help us do TWO things. One is Selecting. But the other is Mixing.
These two forces live in tension with each other. Mixing is inefficient if you are trying to optimize the number of clicks. But push Mixing too far, and that’s not helpful either. (If every day my news feed shows me a random collection of stories, it’s very hard to learn. My world becomes chaotic.)
But there’s a zone in between these two extremes, where the power to select what fits our pattern and the power to see that pattern and break it up balance each other out. And we know that zone exists, because Mixing and Selecting are the same two forces responsible for natural evolution. They are the forces that made us, in nature.
If we can build algorithms that can help us find that zone, faster, more often, in business and society…wow. We would accelerate the evolution of human technology and culture and society. This is the zone in which evolution happens. In which learning happens. In which innovation happens. In which we develop resilience to systemic shocks, like climate change, or fake news, or a sudden swing in consumer demand for our product.
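What might such an algorithm look like? Here is a minimal evolutionary loop in Python, in the spirit of Selecting plus Mixing (again my toy illustration, not code from the book; the fitness landscape and the mid-run shock are invented). Selection keeps the fittest candidates, mutation mixes them up, and a sudden shift in the target stands in for a systemic shock:

```python
import random

TARGET = 7.0  # what currently counts as "optimal"

def fitness(x: float) -> float:
    """A made-up landscape: candidates closer to TARGET score higher."""
    return -(x - TARGET) ** 2

def evolve(population: list[float], mixing: float) -> list[float]:
    # Selecting: keep only the fittest half of the population.
    survivors = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
    # Mixing: each child is a survivor plus a random mutation.
    children = [s + random.gauss(0, mixing) for s in survivors]
    return survivors + children

population = [random.uniform(0, 10) for _ in range(40)]
for generation in range(60):
    if generation == 30:
        TARGET = 2.0  # systemic shock: yesterday's optimum disappears
    population = evolve(population, mixing=0.5)

best = max(population, key=fitness)
print(f"best candidate after the shock: {best:.2f}")  # ends up near 2.0

# With mixing=0.0 (pure Selecting), the population freezes into copies of
# its early best guess and cannot follow the target when it moves. With
# mixing very large, it never settles at all. The useful zone is in between.
```

That mixing dial is where the design questions live: it trades click-by-click efficiency today for the ability to survive the shock at generation 30.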
If you can see this picture of AI, then you start to see all the mistakes people and businesses are making with AI today. All the opportunities being missed. And you can communicate a powerful vision for the positive role that AI can play in getting us all to a different, more beautiful, more advanced world. Faster.
Questions To Lead By
- As citizens living in a world where algorithms play an increasing role in shaping our choices and behaviors, how can we demand a better balance between the incentives to optimize and the “inefficient” mixing that makes society more creative, adaptive and resilient? And from whom do we demand it? Government? Business? Data scientists? Ourselves?
- As businesses and organizations that must outperform the competition to survive, how far can we push the algorithms to drive gains before fragility becomes a concern? How do we get the organization into that adaptive zone where innovation flourishes? Is anyone in the organization even thinking about the company-wide consequences of AI adoption in this way, or are we just driving optimization programs everywhere we can?
- Why in the world can’t we sit down with these people who are driving AI adoption, and with policymakers, and experts, and influencers, and elites, and real people, all together, and make some thoughtful choices together about the new world we want to live in? (Answer: We can. And we do whenever we meet at basecamp.)