Four lessons I learned building a mental health chatbot

In the ever-evolving landscape of artificial intelligence, the integration of technology into the mental health space poses unique challenges and opportunities. In this article, AI researcher Jiyun Hyo shares insights gained from building a chatbot designed to support mental health.

Jiyun Hyo
Contributor

February 22, 2024


This article is part two of our series on mental health chatbots. Read part one where Jiyun shares a brief history of the rise of chatbots, focusing on the development of new tech in the mental health space.

The release of ChatGPT (GPT-3.5) in November 2022 sent a shockwave through the world and marked a historic moment for AI. The powerful GPT-3.5 naturally sparked huge interest in chatbots, as evidenced by the exponential increase in the number of research papers on chatbots: 8 papers in 2011, 189 in 2019 (right before the release of GPT-3), and 1,636 in 2023. What specifically interests me is the application of chatbots to mental health.

As an AI/ML researcher at Duke University and founder of Seon Health, a tech company in the mental health space, I’ve used almost all mental health chatbots out there. I’ve also built multiple versions of mental health chatbots while experimenting with the extreme capabilities of AI.

Here is what I’ve learned in the past 10 months of building in this space.

1) Hallucination is a huge issue

Hallucination was, and still is, a huge issue for many LLMs (large language models). A language model is essentially a huge neural network trying to predict the next set of words based on the input and the dataset it was trained on. Its job is to produce something that sounds plausible, with no regard for the accuracy of the information. Hallucinations are especially detrimental in mental health applications because they directly affect the safety of the users.
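For a concrete sense of why this happens, here is a minimal sketch using the Hugging Face transformers library; the model and prompt are placeholders chosen purely for illustration. The model simply samples a plausible-sounding continuation, and nothing in the loop checks whether the claim it produces is true.

# Minimal sketch: a language model only samples a plausible continuation.
# Nothing here verifies accuracy, which is exactly how hallucinations arise.
# The model name and prompt are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A proven technique for managing panic attacks is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The output reads fluently and confidently, but no step in this loop
# checks the claim against clinical evidence.
print(result[0]["generated_text"])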

2) A “real person” experience is possible

Today, there are language models you can chat with. AI avatars modeled after a fictional persona, or even a specific real person, are also possible. You can now clone anyone's voice using AI and literally converse with these clones as you would with a real person. The technology keeps getting better.

A bit of technical detail on how it works: you speak to the chatbot; STT (speech-to-text) converts your voice into text; the text is sent to an LLM such as ChatGPT; the LLM sends back a text response; TTS (text-to-speech) renders that response in the cloned voice; and the avatar speaks the words back to you.
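For the curious, here is a rough sketch of one turn of that loop, assuming the OpenAI Python SDK for each stage; the model names, the voice, and the file handling are illustrative assumptions, and any STT/LLM/TTS stack could be swapped in.

# Rough sketch of the voice-chat loop described above (one turn).
# Assumes the OpenAI Python SDK; model names and the voice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def voice_chat_turn(audio_path: str) -> str:
    # 1) STT: convert the user's speech into text
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    user_text = transcript.text

    # 2) LLM: get a text response to the transcribed words
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_text}],
    )
    reply_text = chat.choices[0].message.content

    # 3) TTS: render the reply in a chosen (or cloned) voice
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
    speech.write_to_file("reply.mp3")  # the avatar would then play this file back
    return reply_text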

3) Therapists won’t be replaced in the near future

Chatbots today can’t replace therapists, as they are not yet able to “lead” a session. When you go to a therapy session, you expect the therapist to lead the session and ask you questions to get you to talk. However, with a chatbot, you have to be the one to initiate the conversation. It feels more like an input-output machine than an actual person. Moreover, it could become very tiring to explain the entire situation, especially when the situation is very complicated and you have no idea where to start explaining. In those situations, it is crucial to have a therapist to guide you and ask the right questions.

Even though chatbots today can’t replace a therapist, what they can do is provide the lowest, most accessible level of support to everyone who is in need. They can also provide people with a safe space with absolutely no judgment to talk about whatever they want. Mental health chatbots that are clinically endorsed can deliver certain relevant clinical skills (e.g. cognitive behavior therapy) for the users.

4) Clinical information has to be the central part of the solution

To build clinically endorsed chatbots, LLMs must not only be trained on top clinical knowledge but also with an appropriate training method; both conditions have to hold. Too many chatbots out there are either trained on material found on the internet with no clinical evidence or trained with a sub-optimal methodology, which results in chatbots that frequently hallucinate and provide potentially dangerous, false information.

First, top experts have to prepare the clinical information. Most LLMs, including ChatGPT, are trained on general material found on the internet with no special regard for clinically endorsed knowledge. This can result in chatbots providing potentially harmful information. For example, in 2023, the NEDA chatbot designed to help people with eating disorders went off the rails and told a user suffering from an eating disorder to eat less and lose weight. Of course, clinical experts would never have given such a suggestion.

Second, most LLMs have traditionally been trained using what's called an outcome-supervised method. Instead, LLMs that need to perform complex reasoning should be trained using a process-supervised method. Simply put, an outcome-supervised method trains an AI to talk like an expert, whereas a process-supervised method trains an AI to reason like one. The former often results in chatbots that provide false information yet talk like an expert. The latter results in chatbots that reason like the expert and are thus much less likely to hallucinate.
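To make the distinction concrete, here is a small conceptual sketch; the reasoning chain, labels, and scoring function are invented for illustration and are not a real training setup. Outcome supervision scores only the final answer, while process supervision scores every step, so a chain that reaches a harmful answer through flawed reasoning is penalized exactly where it goes wrong.

# Conceptual sketch of outcome- vs process-supervised training signals.
# The chain, labels, and scoring are made up purely for illustration.

reasoning_chain = [
    "User reports low mood and poor sleep for two weeks.",       # step 1
    "Two weeks of symptoms can indicate a depressive episode.",  # step 2
    "Therefore, recommend skipping meals to regain control.",    # step 3 (flawed)
]
final_answer = "Recommend skipping meals."

# Outcome supervision: a single label on the final answer only.
outcome_label = [{"unit": final_answer, "correct": False}]

# Process supervision: a label on every intermediate step, so the exact
# point where the reasoning breaks is part of the training signal.
process_labels = [
    {"unit": reasoning_chain[0], "correct": True},
    {"unit": reasoning_chain[1], "correct": True},
    {"unit": reasoning_chain[2], "correct": False},
]

def training_signal(labels):
    # Stand-in for a real loss: fraction of labeled units judged correct.
    return sum(l["correct"] for l in labels) / len(labels)

print(training_signal(outcome_label))   # 0.0  -> only "wrong answer", no detail
print(training_signal(process_labels))  # 0.67 -> credit for valid steps, blame for the flawed one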

In conclusion, building AI mental health chatbots is an excellent way to democratize mental health care, provided they are built on solid clinical evidence and layers of guardrails are in place to ensure no harmful or false information gets delivered.

Jiyun Hyo

Co-Founder and CEO at VizualMind

Bio

Jiyun is an entrepreneur, AI researcher and investor. He is the founder of the Robotics Lab at Duke University.


