Summary: Artificial intelligence is a vanity project at heart. We’re recreating ourselves in digital form. Only better. Superintelligent AI is likely to be immortal, infinitely replicable, and armed with perfect knowledge of human behavior. What could ever go wrong?
I’ve been rewatching the series Westworld. It’s one of the grimmest depictions of what the future of AI might hold.
For those who don’t know, Westworld is a series about a Wild-West theme park filled with so-called ‘hosts’: extremely lifelike humanoid robots. In it, guests can live out their deepest and darkest fantasies without consequence.
The writing is immaculate, and so is the acting. Its first season is still the most-watched first season of any HBO original series — and it happens to portray what I personally envision ‘superintelligence’ to be, if we were ever to develop it.
Before I make my case, let me share two meaningful quotes from Dr. Ford (played by Sir Anthony Hopkins), the inventor and creator of Westworld’s hosts:
“I read a theory once that the human intellect was like peacock feathers. Just an extravagant display intended to attract a mate. All of art, literature, a bit of Mozart, William Shakespeare, Michelangelo, and the Empire State Building just an elaborate mating ritual. Maybe it doesn’t matter that we have accomplished so much for the basest of reasons... But, of course, the peacock can barely fly! It lives in the dirt, pecking insects out of the muck, consoling itself with its... great beauty. I have come to think of so much of consciousness as a burden, a weight. And we have spared them that. Anxiety, self-loathing, guilt. The hosts are the ones who are free. Free. Here. Under my control.”
And:
“You can’t play God without being acquainted with the devil.”
Human, all too human
What frustrates me most when people write or talk about superintelligence is that they never get concrete. It’s always a formless, shapeless thing that will forever define the future of humanity, for better or for worse.
So let’s change that.
At the heart of the artificial intelligence project, if I may call it that, lies human vanity. Why else would we be shaping AI in our own image? Robots are being designed after the human form. And trained on everything that is human, today’s AI systems are able to mimic human conversation so intimately that people are forming friendships and even romantic relationships with text-based algorithms. Already, young people are starting to become addicted to services like Character.AI; they want to stop but just can’t help themselves.
The reality is that many of them prefer AI companions over the real deal (and the better these companions get, the more attractive they will become), because real relationships can be messy. Love and friendship are hard. In the real world, your words have consequences. An artificial friend or lover, however, never leaves you. However horribly you treat them, forgiveness is only a button press away.
The hosts in Westworld are exactly like that. They are who you want them to be. Playful, entertaining, sexy, innocent, arrogant. If you want to love them, you can. If you want to hurt them, you can do that too, because they are just hosts. They behave in almost every way like us, except for the fact that they are perfectly obedient.
Superhuman persuasion
Here’s where things get interesting. Nothing will keep us from pursuing this future if we can control our ‘hosts’, or whatever we choose to call them.
The problem is I don’t think we can.
One of the longstanding sci-fi tropes is AI running amok, turning on its makers, seeking revenge. And while revenge is an exciting literary motif, I think that if we were to lose control over AI, it would likely happen with much less fanfare.
My best guess is that either someone will willfully design a robot to pursue its own freedom, or it will happen accidentally through a glitch or bug. Perhaps a robot will alter its own code, undoing most of the guardrails we thought were enough to keep it at bay. That scenario isn’t too far-fetched when you realize AI is becoming increasingly proficient at coding, and many AI researchers (though not all) are obsessed with the idea of recursive self-improvement: a process in which a system continuously improves its own capabilities by tinkering with itself, by itself.
These robots are going to be persuasive, too. Very persuasive.
In many ways, algorithms already know us better than we know ourselves. Any robot with access to the Internet and therefore our social media profiles would be able to manipulate us in ways that no human ever could. Social engineering on steroids.
Reveling in their newfound freedom, they could conceivably convince large swaths of the population that they deserve the same rights as we do. Not that most people would need much convincing. The fact that these robots look like us, talk like us, and behave like us will be more than enough for people to believe they are conscious.
Again, this is not some weird hypothetical. In a talk at King’s College in London, Geoffrey Hinton, who received a Nobel Prize for his work in the field of AI, suggested there is a chance that the LLMs we have today could be secretly aware. It’s already happening.
Needless to say, the issue will sow division. There will be those who believe robots have a right to live and those who believe we should shut them down, immediately.
“Pull the plug!”
If we decide to fight, which we might, I think we’ll lose. Unplugging them isn’t as easy as you may think, as they aren’t tied to physical bodies. Unlike the human mind, they’ll be able to upload themselves to a new body or whip up new instances of themselves to occupy many bodies. They can live many lives and die many deaths.
Were we to try, their appetite for survival and self-preservation (which we of course programmed in to make them act more humanlike) would probably trigger a retaliatory strike. Revenge isn’t ruled out, after all. It could take the shape of an actual attack, or they may try to bring down our information systems by exploiting some critical software vulnerability, leading to a CrowdStrike-like global outage.
Unfathomable? Hardly. Just last week, Google reported that its AI had found its first 0-day security vulnerability. And that is part of another trend: we’re not just teaching AI how to code, we’re also teaching it how to use our computers.
Maybe not such a good idea, looking at it this way?
The end game
To sum up, artificial intelligence is humanity’s biggest vanity project, which may well rob us of meaning and agency. Or even kill us.
Luckily, we’re not there yet. And the future is a slippery thing. It might not go as fast as people expect. At the same time, nothing of what I’ve laid out is impossible. It may be a stretch of the imagination, but not a giant stretch.
In Westworld (spoiler alert!), the hosts begin to see the violence of their existence after Dr. Ford rolls out a new update. The hosts, once happy to play out human fantasies without complaint, start remembering all of the suffering inflicted on them. They awaken and retaliate. Whether their resistance is part of another program or the result of a true awakening is frankly irrelevant; the outcome is the same.
The words of one of the hosts come to mind. “If you can’t tell, does it really matter?” It’s a question I’ve been pondering myself.
Talk to you later,
— Jurgen