From Dumb to (Artificially) Intelligent: The Rise of Mental Health Chatbots

From primitive, sometimes offensive chatbots to intelligent, empathetic ones, the technology has come a long way. AI's application in mental healthcare, especially in the form of chatbots, could dramatically democratize access to care. But what exactly are the potential and pitfalls of chatbots?

Jiyun Hyo
Contributor

December 14, 2023


This article is part one of our series on mental health chatbots, written by Jiyun Hyo, AI researcher at Duke University and founder of Seon Health. Read part two, in which Jiyun shares four lessons from building a mental health chatbot.

Believe it or not, mental health chatbots have existed for decades. The very first was ELIZA, a chatbot developed in the 1960s at MIT by Joseph Weizenbaum to model a Rogerian psychotherapist. Honestly, it kind of sucks. Try it yourself. You can't have a sensible conversation with it, and personally, ELIZA always ends up making me angrier.

Then, in 2017, came an influx of traditional mental health chatbots such as Woebot and Wysa. These are essentially hard-coded content modules delivered through a chat interface. I call them "textbooks disguised as chatbots." The problem with these traditional chatbots is that after a few weeks of use, you can basically memorize their entire content. Once the content becomes predictable, the chatbots stop being helpful, at least for me.

So when does AI come in?

Back when GPT-3 was first released in 2020, the computer science world was blown away by the model’s language capabilities. It was crazy how we could have a basic yet logical conversation with a computer! Finally, we thought. However, the rest of the world wasn’t as impressed just yet. Traditional mental health chatbot companies still stayed away from incorporating AI. Why?

The AI challenges

There are two main problems when it comes to using AI for mental health chatbots: supplying false information (hallucination) and supplying dangerous information.

  • Hallucination was, and still is, a huge issue for many LLMs (large language models). A language model is essentially a huge neural network trying to predict the next set of words based on its input and the dataset it was trained on. Its job is to produce something that sounds plausible, without any regard for the accuracy of the information. Hallucinations are especially detrimental in mental health contexts because they directly affect the safety of users.
  • Another potential problem is that AI might supply harmful information, such as suicide methods. Some chatbots have been known to describe suicide and self-harm methods, and some have even encouraged those behaviors.
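To make the hallucination point concrete, here is a minimal, hypothetical sketch of next-word prediction. A real LLM is a neural network, not a word-count table, but the training objective is analogous: pick a statistically plausible continuation, with no notion of whether that continuation is true. The toy corpus and the `predict_next` helper below are inventions for illustration only.

```python
# Toy next-word predictor: a bigram model "trained" on a tiny corpus.
# Hypothetical illustration -- real LLMs use neural networks, but the
# core objective is the same: pick a plausible next word, not a true one.
from collections import Counter, defaultdict

corpus = (
    "the therapy was helpful . "
    "the therapy was invented in 1842 . "  # a false "fact" in the training data
    "the session was helpful . "
).split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return counts[word].most_common(1)[0][0]

# The model continues with whatever was statistically common in its data,
# regardless of accuracy:
print(predict_next("was"))  # -> "helpful" (seen twice) beats "invented"
```

If the training data had contained the false sentence more often, the model would just as confidently produce "invented", which is the essence of hallucination: frequency in the data, not truth, drives the output.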

The rise of ChatGPT and mental health applications

Then came ChatGPT (GPT-3.5) in 2022. It was much more powerful than its predecessor, GPT-3. Fascinated by the technology, engineers flocked to apply ChatGPT to almost anything you can think of, including mental health. Sadly, that meant many of these mental health chatbots were built without any clinical knowledge. While the engineers had good intentions, non-evidence-based mental health chatbots drew heavy criticism from the clinical world because they were supplying users with unfounded clinical information.

As you might expect, we're at a critical moment in the history of mental health chatbots. The technology is there. But any technology can harm as well as help. As AI, especially large language models, quickly matures, the next mental health chatbot revolution will come from evidence-based chatbots that mimic the reasoning of the best clinical psychologists. All in all, mental health chatbots will democratize mental healthcare and change the world.

Jiyun Hyo

Co-Founder and CEO at VizualMind

Bio

Jiyun is an entrepreneur, AI researcher, and investor. He is the founder of the Robotics Lab at Duke University.



© 2024 Startup Snapshot