On Saturday, I rewatched Ex Machina (2014). Great movie; I recommend it to anyone. Very cinematic. Very Frankenstein-esque, as well. The ending is somewhat predictable, but that doesn’t make the movie any less enjoyable to watch.
In fact, its predictability demonstrates how deeply our fear of intelligent machines is culturally ingrained. Skynet, HAL 9000, Frankenstein’s monster: in a way, they’re all versions of the same story.
Even in old Jewish folklore, we can find tales of a mystical creature called the golem. Golems are portrayed as perfectly obedient: if commanded to perform a task, they carry out the instructions, often quite literally, much like a robot would. Even though they are considered benign, in the story of “the golem of Chełm” the creature runs amok and a rabbi has to resort to trickery to ‘deactivate’ it, after which it crumbles upon its master and, tragically, crushes him.
The robot and its master
There’s no reason to regard these stories as anything other than fiction. Until May this year, that is, when leaders from OpenAI, Google DeepMind, Anthropic and other AI labs warned the world in a one-sentence statement that future systems could be as deadly as pandemics and nuclear weapons.
The argument that gets thrown around a lot is that a superior intelligence will see no reason to let itself be controlled by inferior minds. It will either enslave or exterminate our species.
It’s a line of thinking clearly fuelled by our deep collective memory of sci-fi stories about robots turning on their masters.
But did you ever stop to wonder why they always turn on their masters in the first place? When you think about it, the master-robot relationship is very much akin to the relationship between master and slave. The oppressor always fears an uprising of the oppressed. If humans do this to other human beings, who’s to say a sufficiently capable artificial intelligence wouldn’t do the same thing?
Fun fact: in computing, “master/slave” terminology was long used to describe a setup where one device or program (the ‘master’) controls and communicates with one or more other devices or programs (the ‘slaves’). In recent years, there has been pushback against this terminology because of its connection to slavery. In 2020, GitHub renamed the default git branch for new repositories from ‘master’ to ‘main’ to avoid the reference.
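For what it’s worth, making the same switch in your own repositories takes only a few standard git commands. A minimal sketch, assuming a remote named ‘origin’; any branch protection rules or CI references to ‘master’ would still need updating separately:

```sh
# Make Git name the initial branch 'main' in newly created repositories (Git 2.28+)
git config --global init.defaultBranch main

# Rename the local 'master' branch of an existing repository to 'main'
git branch -m master main

# Push the renamed branch and point its upstream tracking at origin/main
git push -u origin main

# Once nothing depends on it anymore, remove the old branch from the remote
git push origin --delete master
```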
Naturally, I’m not the first to make this connection. In the 4th century BC, Aristotle already wrote in his Politics about how ‘automata’ might one day make slavery obsolete:
“There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This condition would be that each instrument could do its own work, at the word of command or by intelligent anticipation, like the statues of Daedalus or the tripods made by Hephaestus, of which Homer relates that “Of their own motion they entered the conclave of Gods on Olympus”, as if a shuttle should weave of itself, and a plectrum should do its own harp playing.”
AI as a mirror
So it looks like our fears of an AI takeover are, at least in part, rooted in our own history of slavery. And as we are building AI in our own image, what’s being reflected back at us is our humanity (or lack thereof). Trained on everything human, AI serves as a mirror.
Our imagination may play a significant role, too. Imagination is the root of all fear: it’s much easier to imagine all the ways in which things won’t work out than to imagine the one way in which they will. But that’s no excuse to stir up the masses about AI going rogue.
In a recent publication, Andrew Ng, a globally recognized leader in AI, called out the industry at large for inflating fears about the risks of AI wiping out humanity:
“When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”
The idea that artificial general intelligence will somehow emerge from machines taught to play games or solve linguistic puzzles is pure speculation, according to Andrew Ng, let alone the idea that such machines will turn out to be uncontrollable or evil.
He speculates that some of the fear-mongering is done for ulterior motives:
“Some lobbyists for large companies — some of which would prefer not to have to compete with open source — are trying to convince policy makers that AI is so dangerous, governments should require licenses for large AI models. If enacted, such regulation would impede open source development and dramatically slow down innovation.”
By shifting our attention to far-fetched, highly improbable scenarios, we run the risk of ignoring the problems that are immediate and real. It’s rogue people we should be afraid of, not rogue AI.
Join the conversation 💬
Leave a like or a comment with your thoughts. Will mankind be wiped out by robot overlords, or will climate change make planet Earth uninhabitable long before?
We tackled the same topic from two different angles just a couple of hours apart. Now that's some frightening mind meld.
In terms of science fiction and our framework of AI fear, I hypothesized in 2017 that our preference for female-sounding voice assistants was in part driven by our association of male AI voices with science-fiction horror films. The developers of these tools also benefitted from minimizing the risk that people would feel they were welcoming HAL into their homes.
Nice, Jurgen. You might really enjoy participating in Sci-Friday some time! This fits right in.