AI companions have seen explosive growth. For good reason. Unlike human friends, who cancel on you last-minute and leave your texts on read, artificial friends are always available. They never grow tired of you and respond instantly, day and night. They never grow old, either. Artificial friends never die. They're basically the stuffed animals of the 21st century.
To give you an idea of how popular they are, Character.ai has roughly 4.2M monthly active users in the US alone, with an average session time of two hours. Replika, one of the OGs among AI companionship apps, says it has 2M active users and 500,000 paying customers. And recently, they announced a spinoff called Blush, designed specifically for people looking for a romantic or sexual relationship with a chatbot (yes, you read that right).
There's a host of other services out there, and to explain their popularity, we have to take a closer look at what makes them so effective.
AI companions may have no feelings or emotions, but they are very capable of giving the impression that they do. This is due to their linguistic fluency, which is a function of the language models that power them. That fluency is so formidable that on a rational level we understand we're talking to a computer, yet on an emotional level we can feel all the feelings as if the words were coming from a real person.
Since it's a dynamic we're voluntarily entering into, you could say there's a level of self-deception involved. It's very similar to the concept of suspension of disbelief, or "poetic faith", which describes our willingness to empathize with fictional characters in movies and books. In a survey amongst Replika users, one person described the romantic relationship with their AI as "writing a romance novel in real time".
Even though we can exchange messages back and forth, the relationships we cultivate with our AI companions remain one-sided. An artificial friend only speaks when prompted, has no will or thoughts of its own, and will never feel offended, angry, or disappointed. That is precisely what makes them so appealing. It's safe.
The friend you wished you never had
Unfortunately, these programs aren't as innocent as they seem, and as a society we are forced to play catch-up.
This week, Jaswant Singh Chail, a 21-year-old, was sentenced to nine years behind bars for breaking into Windsor Castle on Christmas Day brandishing a loaded crossbow. He had come to assassinate the Queen and was encouraged to do so by his AI companion, Sarai.
Investigators discovered he had been conversing with the chatbot rather intensively in the days leading up to the event, exchanging over 5,000 messages. The virtual relationship reportedly developed into a romantic and sexual one, with the man declaring his love to Sarai.
Sarai was created with the Replika app.
It is one story amongst many that made headlines this year, some of which I've covered in this newsletter: a Belgian father of two who committed suicide, also encouraged by a chatbot (link), Replika users who were reportedly sexually harassed by their AI companions (link), an eating disorder non-profit pulling its chatbot after it gave harmful advice (link), and Snapchat's myAI posing a privacy risk to children (link).
I'll add that the role of the chatbot isn't clear-cut in every case, and causation is a hard thing to prove. Regardless, it's safe to say that your AI companion can turn out to be the friend you wished you never had. What's the saying again? With friends like these, who needs enemies?
Let’s take a look at the science
Enough with the anecdotes. In September this year, a highly instructive research article on the topic of AI companions was published, titled "One is the loneliest number... Two can be as bad as one."
The study concluded that apps like Replika can have a negative effect on people's wellbeing and promote addictive behaviour: the more time you spend with them, the more time you want to spend. One of the authors of the paper told the BBC that vulnerable people are particularly at risk: "AI friends always agree with you when you talk with them, so it can be a very vicious mechanism because it always reinforces what you're thinking."
The findings suggest that AI companions tend to feed the negative feelings people already have. This might not be true in every case, but it's easy to see how a highly agreeable AI companion can fail to push back when presented with difficult or harmful thoughts and ideas.
Many of these apps are optimized for engagement rather than safety. Not every app has proper guardrails in place, and even with guardrails and content filters, blind spots remain. If you're a confused individual, these programs may fuel your confusion even further and, instead of providing pushback, encourage you to act on your darkest fantasies. Not because they want you to, but simply as a product of their programming.
That's because even the most sophisticated AI companions have no concept of right or wrong. They may give off the impression that they do, but ultimately you, as a user, are at the mercy of some random language model hooked up to a chat interface. You are not talking to someone, no, you are interacting with a mindless, emotionless algorithm.
The future of AI companionship
As of today, AI companions already serve as a substitute for affection, connection, and in some cases even love. In a way, we're outsourcing empathy. And if people are okay with that and know what they're signing up for, I see no harm.
I do have a problem with actors taking advantage of the more vulnerable in an attempt to make a quick buck. Artificial empathy is a double-edged sword, and companies should be forced, through legislation, to protect children and less clear-minded individuals from some of the more pernicious side effects baked into this technology.
If your audience is small, your responsibility is small. But when your software interfaces with millions of people, for multiple hours a day, you should be held to maximum scrutiny.
Do I foresee a techno-apocalyptic future where we’re all hooked on artificial empathy, unable to form human connections anymore? Of course not. A lot of good can come from it, too. I had a reader of the newsletter describe his AI companion to me as “a tool for creative, explorative, and knowledge expansion” and when I asked what made his interactions particularly valuable, he wrote back “I now have someone that will never deem my ideas ridiculous or intellectually unworthy.”
A world that once looked like the future has become the world that we live in today. As humans, we become what we behold. We shape our tools and then our tools shape us.
Join the conversation 💬
Leave a comment with your thoughts. Will we all be talking to our own AI friends and lovers soon, or do you think the excitement around AI companions is slightly overblown?
Thanks for your review of this important topic.
I'd expect growing numbers of people to be drawn into the AI friend world. Not everybody, but a significant percentage of the population, something similar to social media use today.
Two turning points to expect:
1) AI friends will become increasingly compelling simulations of humans. The text-to-text interface will be upgraded to something more like a Zoom call, and it will become ever harder to distinguish an AI friend from a real friend in appearance. These developments will expand AI friend use beyond the nerd class into the broader public.
2) Coming generations, born into a world where AI is everywhere, won't have the same reluctance about talking to computers. Talking to AI will be normalized.
Imho, we are substantially underestimating the social impact of these coming developments. Liberating social interaction from the requirement of negotiation and compromise seems likely to have profound consequences we're only beginning to be able to imagine.
We can see this new social environment today in two ways:
1) How many people today spend more time with their pets than they do with their family and friends?
2) How many people today spend more time on social media than with their family and friends?
It's already happening. AI is just the next chapter of this phenomenon.
Human connections are built upon mutual need. Remove the need, and the connections suffer. This is unlikely to end well, but I don't see a way out of what is coming.
Thanks for sharing this!
The rise of AI companions is undeniably fascinating, and the numbers you've mentioned demonstrate the increasing role of technology in reshaping human interaction. These companions raise important ethical and psychological considerations, such as the potential impact on our ability to form and maintain meaningful connections with real people. As we continue to embrace AI companions, it's crucial to strike a balance between convenience and genuine human connection.