I got an email from Woebot Health the other day:
Hello,
Our North Star has always been to make mental health radically accessible. That remains our mission today. But we have found that people have the best experience when Woebot is delivered within a formal healthcare setting.
Therefore, we believe we will have the most impact, and will reach more people in need, by partnering with health plans and health systems to make Woebot available to the people they serve. We’re starting in the United States with an eye to expand quickly after that.
As a result, we wanted to let you know that Woebot will no longer be available in your location after August 29th, 2023.
I reached out to their press department for more information, but my request went unanswered. It made me curious about what had driven them to make this decision, and I wondered whether their mission to radically change mental health had shifted at all.
Also, with the rise of generative AI, is there a stronger case to be made for AI in mental health, or is the risk of unintended consequences even bigger now, given the uncontrollable nature of this technology?
Meet Woebot, your mental health ally
Woebot is a household name in the land of chatbots. It’s an app inhabited by a friendly AI that helps you improve your mental health through conversation. Its foundations lie in clinically tested therapeutic approaches, such as Cognitive Behavioral Therapy (CBT), Interpersonal Psychotherapy (IPT), and Dialectical Behavioral Therapy (DBT).
As a conversation designer, I stumbled upon Woebot years ago and remember being impressed by its simplicity. You have to understand: Woebot isn’t anything like ChatGPT. There’s no way for you to ‘break’ it or make it say weird things. All conversations are pre-scripted, and you interact through buttons most of the time.
The conversations are designed as mini-stories that each incorporate a lesson or nugget of wisdom. They’re generally easy to digest and written with a lot of wit, which keeps them light without ever becoming frivolous. Woebot can help you better manage your emotions and deal with procrastination, anxiety, depression, and loneliness. It doesn’t present itself as a substitute for therapy, but as an alternative where therapy is out of reach, or as a supplement to clinical treatment.
Even though Woebot enjoys widespread recognition, it’s far from the only option on the market. According to the American Psychological Association (APA), there are now more than 10,000 mental and behavioral health apps available, ranging from diagnostic tools to self-help programs, with or without an AI therapist to guide you through them.
What not many people know is that the mental health app market is almost entirely unregulated. A psychologist has to study for years to earn a license, but any developer can hook up GPT-4 to a conversational UI and call it a ‘virtual therapist’. Unsurprisingly, the vast majority of mental health apps are supported by little or no evidence and risk doing more harm than good.
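To give you an idea of how low the bar is, here’s a minimal sketch of my own (assuming the OpenAI Python client and an API key, not any particular app’s actual code) of the kind of ‘virtual therapist’ a developer could wire up in an afternoon:

```python
# A deliberately bare-bones 'virtual therapist': one system prompt, a loop, and nothing else.
# Hypothetical sketch; assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system", "content": (
        "You are a warm, supportive therapist. Use techniques from "
        "Cognitive Behavioral Therapy to help the user reframe negative thoughts."
    )}
]

while True:
    user_input = input("You: ")
    history.append({"role": "user", "content": user_input})

    # Send the full conversation to the model and print whatever comes back, unreviewed.
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    print("Therapist:", reply)
```

No clinical review, no crisis detection, no guardrails beyond a single instruction, and yet nothing stops someone from shipping this as a ‘mental health app’.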
Not Woebot. The company has been deeply science-driven from the start and has published numerous protocol papers and peer-reviewed studies on the acceptability and effectiveness of its product — hence my surprise when they announced they were pulling the plug on the publicly available version (in both Europe and the US).
It basically forces people toward alternatives that at best have less science to back them up, and at worst none at all.
Why generative AI isn’t ready for digital therapy
Like I said, anyone can ask GPT-4 to take on the role of a therapist, but that doesn’t make it one — not a reliable one, at least. Although large language models are great at taking on a persona, they can be just as easily talked out of it, and even with proper guardrails there are no guarantees that unintended or risky generative outputs can be avoided.
On March 1st of this year, Woebot founder Alison Darcy published an article laying out in detail why this technology is far from ready for use in mental healthcare.
Currently, everything Woebot says is crafted by an internal team of writers and reviewed by clinicians. Letting users interface directly with generative AI would remove that oversight, which would be irresponsible for multiple reasons.
The tendency of LLMs to hallucinate (making up seemingly factual information while appearing confident) is an immediate and obvious risk. Not only can it harm someone’s treatment, it also undermines the public’s confidence in the potential of ethically designed, digital mental health solutions to improve mental health outcomes.
On top of that, AI assistants are known to unsettle people when their interactions too closely resemble those of a human, or when they say things they’re not supposed to say, a phenomenon commonly referred to as ‘the uncanny valley’. You might remember New York Times journalist Kevin Roose’s piece about how Bing declared its love for him and tried to convince him to leave his wife. A funny story at first glance, but extremely worrisome when you realize that for people who are vulnerable, unexpected behavior can be much more detrimental than bad information or advice, as it may play into their darkest fears or fantasies.
Carelessness has already caused real-world harm. Replika, a popular AI companion app, has had reports of users being sexually harassed by their AIs. And a Belgian man who died by suicide earlier this year is said to have been encouraged by his AI companion (not Replika’s) to go through with his plan.
The sophistication of this technology, specifically its fluency, makes it more powerful than anything we’ve seen to date. And because it is uniquely equipped to be convincing and human-like, it should be handled with extreme care, a responsibility that some are not taking seriously enough.
The future of digital mental health
It’s good to see Woebot Health taking a more calculated approach to generative AI, and they’re not shy about showing some thought leadership in the process. That doesn’t mean they won’t dip their toes in. A more recent blog post suggests they already are, but only by exploring narrow use cases embedded in their current rule-based therapeutic approach:
At Woebot Health, we are advancing our work on LLMs but in combination with experimental methods and a mechanistic framework of mental health to improve the product, experience, and outcomes. We’re calling this methodology Science in the Loop™.
Science in the Loop™ underlines Woebot Health’s commitment to science-based treatment, showing that slow, deliberate, incremental steps are the way to go. The trial they announced alongside it seeks to explore how a reduced set of LLM-augmented Woebot features stacks up against the same features in Woebot today. It’s conducted through a research application called BUILD, which I managed to find in the App Store, but access is invite-only (though it made me very curious).
Meanwhile, demand for mental health support is growing. The WHO reports that, globally, depression is one of the leading causes of disability, suicide is the fourth leading cause of death among young adults, and people with severe mental health conditions are at risk of dying prematurely.
Despite progress, the gap between people needing care and those with access to care remains substantial. There are many barriers: people with mental health conditions often experience long waiting times, can’t afford care if not insured, or live in (rural) areas with a shortage of mental health professionals. Even if they have the means to talk to a professional, some still decide not to because of stigma, prejudice and discrimination.
This is exactly why a science-based, digital mental health solution like Woebot has the potential to radically transform the future of mental health. Why they now decided to sunset public access to Woebot remains a mystery. Other than the two statements that “people have the best experience when Woebot is delivered within a formal healthcare setting” and that they “will reach more people in need, by partnering with health plans and health systems to make Woebot available to the people they serve”, there isn’t much else to go on.
If you ask me, the move will probably lead to fewer people having access to Woebot, not more. Embedded in a formal healthcare setting, Woebot has the potential to become a powerful supplement (which is great), but to those with little or no access to care in the first place, that is meaningless.
Limiting access to a publicly available tool that has been proven effective just seems counterintuitive — but maybe there’s information influencing the decision that I don’t have access to or can’t fully assess.
Personally, I hope they reverse their decision and relaunch soon. Until then, stay sane out there and take good care of your mind!
Jurgen Gravestein is a writer, consultant, and conversation designer. He was employee no. 1 at Conversation Design Institute and now works for its strategy and delivery branch, CDI Services, helping companies drive more business value with conversational AI. His newsletter, Teaching computers how to talk, is currently read across 38 US states and 82 countries.
Reach out if you’d like him as a guest on your panel or podcast.
Appreciate the content? Leave a like, comment, or share with a friend or colleague!