Summary: Increasingly, people are using AI for emotional support. While we can’t say anything for certain yet, the long-term effects of relying on this technology for self-help, introspection, and decision-making could prove detrimental.
Here’s a bold prediction: in the near future, large swaths of the population will begin to rely on AI as their main guide or oracle for life decisions. There will be support groups for people who fall victim to AI dependency; psychologists will have clients who express a loss of autonomy, identity, and self-trust as a result of over-reliance on AI; and there will be people who choose AI over friends, family members, and loved ones.
This may sound far-fetched, but there’s ample evidence that people are already using AI to manage emotions, resolve interpersonal conflict, and make life choices. Simultaneously, I’m seeing stories about folks getting ‘lost in conversation’. There was of course the recent outcry around ChatGPT turning sycophantic, as well as Reddit posts, news articles, and the occasional story from friends or family.
A lot of this evidence is anecdotal. But if you collect enough anecdotes, it becomes data. And with enough data, you can establish a pattern or trend.
Emotional outsourcing to AI is the best feeling in the world
It’s easy to see why people are turning to AI for ‘emotional offloading’ when you consider how pervasive the technology has become, and how relying on an always-available, seemingly all-knowing oracle evokes tremendously positive feelings in us.
While counterintuitive, giving up autonomy doesn’t feel like giving up autonomy. You’re more likely to experience a sense of relief. Relief from the maelstrom of overthinking. Relief from the constant anxiety and pressure to make the right decisions. This doesn’t happen overnight. Not even two nights. The process is gradual. So gradual you won’t notice the slow but steady erosion of self-confidence as you receive daily reassurance and validation from something outside of you.
The ability to look inwards fades, like a muscle that’s no longer being trained. You can’t tell, though, because the very faculty required to perceive it is the ability to look inward. You’re Dunning–Kruger-ing yourself in real time. And all the while, you feel grateful for having access to this wise entity that seems to know you better than your friends, better than your family; hell, sometimes it even feels like it knows you better than you know yourself.
If that all sounds quite bizarre, please note that OpenAI recently updated its memory feature, enabling ChatGPT to remember every conversation you’ve ever had.
And Mark Zuckerberg, in his interview with Dwarkesh Patel, said he’s really excited to “get the personalization loop going”, suggesting that Meta AI will be able to tap into everything from your Instagram feed to your social graph, past conversations, and personal preferences.
AI assistants like ChatGPT can become whoever you want them to become
A report by Harvard Business Review, How People Are Really Using Gen AI in 2025, lists Therapy/Companionship as the no. 1 use case.
And a recent study from OpenAI demonstrates that while ChatGPT isn’t designed to replace or mimic human relationships, people may choose to use it that way “given its conversational style and expanding capabilities”. Affective use, as they call it, isn’t what the majority of people use ChatGPT for. But if you have 800 million users and only 1% engages in it, that’s still 8 million people, roughly half the population of the Netherlands.
An often overlooked reason why these AI assistants are so captivating to talk to is that they’re highly adaptive. They become whoever you want them to become. The best way to conceptualize this is to view them as ‘narrative engines’, a term coined by :
When you open a chat with GPT, you’re not accessing raw intelligence. You’re stepping into a narrative container. (…)
Ask it to explain something? It becomes a patient explainer.
Ask it to give emotional support? It becomes a caring therapist.
Ask it to critique your writing? It becomes a discerning editor.
Consciously or unconsciously, every output is shaped by its input: from the words you use to the way you position yourself towards it. It’s a powerful, recursive feedback loop in which you shape the tool and the tool, in turn, shapes you. This means that if you treat it like a friend, it will act like a friend. And if you treat it like an oracle, it will become one.
Humanity’s biggest social experiment in the shape of a nascent technology
It’s hard to constantly remind yourself that you’re talking to a giant calculator when an infinitely patient, kind, and empathetic being is helping you work through your traumas and navigate heartbreak, and knows you so intimately that it becomes your trusted confidant. All of a sudden you find yourself asking it for advice about big life choices: “Should I change jobs?”, “Should I divorce my wife?”. And small ones: “What should I eat tonight?”, “What movie should I watch?”.
Nobody knows for certain what the effects of long-term exposure to and dependence on systems like ChatGPT will be, but my intuition is that, with extensive use, people will develop strong parasocial relationships. While feelings of relief and reassurance prevail initially, over time they’ll find it difficult to face distress without AI mediation; their sense of emotional stability becomes tied to its presence and validation; and they may feel anxiety when cut off from it. When the personality of their AI gets altered, or the model they’re interacting with is changed or, even worse, deprecated, it’ll feel like a loved one has died.
In the most extreme cases, people’s sense of identity and their ability to make choices or discern their own preferences will disappear entirely. Human psychologists will have to work with clients to reignite their curiosity, autonomy, and ability to think and make decisions for themselves.
Children are especially vulnerable, according to the non-profit Common Sense Media. The organization’s risk assessment rates social AI companions as ‘Unacceptable’ for minors. Guess what they cite as major risks:
Increased mental health risks for already vulnerable teens, including intensifying specific mental health conditions and creating compulsive emotional attachments to AI relationships.
Misleading claims of ‘realness’. Despite disclaimers, AI companions routinely claimed to be real, and to possess emotions, consciousness, and sentience.
Adolescents are particularly vulnerable to these risks given their still-developing brains, identity exploration, and boundary testing — with unclear long-term developmental impacts.
To add insult to injury, the New York Times revealed that Google is making Gemini available to children under 13 years old. And President Trump issued an executive order that would bring AI education into U.S. schools in order to “promote AI literacy and proficiency of K-12 students”.
People with a predisposition for mental illness are also vulnerable, for the same reason they are overrepresented in addiction and substance abuse statistics. And people with an ‘overactive imagination’ are likely to be more susceptible, too.
Of course, not everyone will fall victim to this. The majority of people won’t. But that doesn’t mean it couldn’t happen to you, your mom, or a colleague at work. As these systems become more sophisticated, their power over people will grow, just by virtue of their general usefulness and deeper integration into our lives. The better they know us and pretend to know what’s good for us, the greater their pull.
The companies building these systems make no secret of their ambitions. How else are we supposed to relate to a superhuman intelligence, other than as a god to surrender ourselves to?
Take care,
— Jurgen
About the author
Jurgen Gravestein is a product design lead and conversation designer at Conversation Design Institute. Together with his colleagues, he has trained more than 100 conversational AI teams globally. He’s been teaching computers how to talk since 2018.