Over the past few months the noise has become so unremitting that virtually nothing else pierces through. I’ve called it hype before, but the more accurate description is that generative AI has become a global obsession. One that has even the biggest technology companies in the world mesmerized.
Impossible to look away
The appeal is apparent, but not self-evident. What is it about this technology that draws us in like moths to a flame?
My hypothesis is that we are awestruck. It’s the closest technology has ever come to magic, and magic has always captivated us on a deeply human level. I’d go as far as to say this is as magical as the discovery of fire, captivating us with its shimmering flame.
Like fire, generative AI is unpredictable. It moves, morphs, dances in front of our eyes. It is alive — or so it seems. But are we being blinded by the light? Are we about to get our wings scorched?
Achieving human-level intelligence
For decades, we’ve been teaching machines how to talk, interact, and behave the way we do (hence the name of this newsletter). It’s the main driver behind our efforts: achieving or surpassing human-level intelligence.
Neural networks, the technology underlying large language models like GPT-3, are inspired by the way the human brain learns. Trained on troves of internet data, these models can now perform all kinds of creative tasks (from coding to writing to generating imagery) with a level of sophistication never seen before.
To the untrained eye, this creates the impression of genuine, human-like intelligence. These models were designed to perform tasks like writing or coding, and when they succeed, we assume some sort of creative power. And when a model answers a question correctly, we assume it knows the correct answer.
In fact, the opposite is true. Impressive as these models are, when you look under the hood, what we identify as intelligence is something else entirely.
The power of anthropomorphism
At its core, we’re talking about advanced text prediction algorithms that are being prompted — by us — to produce whatever is statistically most likely to come next.
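To make that concrete, here is a minimal sketch in Python of what “statistically most likely to come next” means. The words and numbers are made up purely for illustration; a real model derives its probabilities from billions of learned parameters rather than a lookup table, but the principle is the same: rank the possible continuations and pick one, with no notion of whether the result is true.

```python
# A toy next-word predictor. The counts are invented for illustration;
# a real language model computes probabilities with a neural network.
counts = {
    ("the", "cat"): {"sat": 12, "ran": 7, "is": 5},
}

def predict_next(context):
    """Return the statistically most likely next word, plus the full
    distribution (a stand-in for the softmax over a model's logits)."""
    options = counts[context]
    total = sum(options.values())
    probs = {word: n / total for word, n in options.items()}
    return max(probs, key=probs.get), probs

best, probs = predict_next(("the", "cat"))
print(best, probs)  # sat {'sat': 0.5, 'ran': 0.29..., 'is': 0.20...}
```

Nothing in this procedure checks facts or understands meaning; it only surfaces whichever continuation was most common in the data it has seen.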
Without guardrails, language models can be prompted to say anything you want. When they hallucinate (making up sources or inventing names, places, and dates), they don’t know they are hallucinating. Frankly, ‘they’ have no concept of truth whatsoever. But when everything seemingly falls into place, it creates a mirage of creativity and intelligence.
Ironically, the mirage is so believable because of how our brains work. We anthropomorphize. We ascribe human-like intelligence to these models, when in reality their inner workings bear more resemblance to a calculator’s. Large language models are intelligent in the same way a calculator is intelligent.
Critical voices
I feel it is of great importance to make this distinction. Not to be pedantic, but to avoid confusion. We ought to know what we’re dealing with.
Way more prominent figures than I have raised concerns. People like Gary Marcus, Luciano Floridi, and Meta’s Chief AI Scientist Yann LeCun have been openly critical, and their critique is often mistaken for dismissal. You would think one could celebrate the scientific progress being made while simultaneously stressing the risks and limitations of an emerging technology.
These critical voices not only provide a much-needed counterbalance to the hype, which ranges from people convinced the singularity is near to the get-rich-quick schemes of hustle bros jumping on the ChatGPT bandwagon. They also voice the worries of scientists and philosophers who genuinely care.
It appears that hallucinations are the least of our problems. The cost of generating misinformation dropping to zero has been described as a real and imminent threat. So has search engine poisoning: tricking the indexes that power search engines into ranking websites as more important than they really are.
With Microsoft integrating generative AI in all its core services, one has to wonder: is ‘move fast and break things’ really the way to go?
Real magic
Circling back: it’s a good thing we’re still able to distinguish the ‘fake’ from the ‘real’ thing. It helps me sleep better at night knowing that we can see through the mirages, especially as our technology grows more complex.
The most frightening thing about inching closer to a technology that is utterly convincing is that at some point we won’t be able to tell the difference anymore. And we will inch closer, for sure; we’re obsessed. We’re obsessed because, as Arthur C. Clarke observed, any sufficiently advanced technology is indistinguishable from magic, and that’s what this pursuit has been about all along.
Technology is humanity’s attempt at creating magic. And I guarantee you, when we finally do, we won’t know what to do with it.
Jurgen Gravestein is a writer, business consultant, and conversation designer. About five years ago, he stumbled into the world of chatbots and voice assistants. He was employee no. 1 at Conversation Design Institute and now works for its strategy and delivery branch, CDI Services, helping companies drive business value with conversational AI.