According to the Washington Post, individuals who are anxious, depressed, or simply feeling lonely but cannot find or afford to see a therapist are turning to artificial intelligence (AI).
They seek help from chatbots that can provide human-like responses instantly. Some even have human-like voices, operate 24/7, and are low-cost or free.
However, the impacts of relying on AI for mental health advice are not yet fully understood and could be profound, sparking lively debates among psychologists.
Turning to AI for Mental Health Issues
It was the anniversary of her young daughter’s death, and even though 20 years had passed, Holly Tidwell still couldn’t stop crying. “I wonder if there’s something wrong with me,” she confided to a “source” she trusted.
Many people seek help from AI that can provide instant, human-like responses. (Photo: eInfochips)
The answer brought her comfort and understanding: “The bond you have with your child, even if just for a brief moment, is profound and enduring,” she was told. “Remembering your daughter and honoring her memory is a beautiful way to keep that connection alive.”
These words came not from a friend or a therapist but from an AI-powered mobile application called ChatOn. Tidwell, an entrepreneur from North Carolina, said she was moved by the chatbot’s responses and sound advice.
Some researchers worry about users placing their trust in unverified applications that have not been evaluated for safety and efficacy by the U.S. Food and Drug Administration, are not designed to protect personal health information, and may provide biased or inaccurate feedback.
Matteo Malgaroli, a psychologist and professor at New York University’s Grossman School of Medicine, warns against deploying untested technology in mental health care without comprehensive scientific research to assess the risks.
AI applications are tapping into people’s anxieties and their need for care, and they have the potential to remove barriers to treatment such as high costs and a shortage of providers.
A notable 2014 study found that people are willing to share embarrassing information with a “virtual human” that does not judge them. A 2023 study rated chatbot responses to medical questions as “significantly more empathetic” than those of doctors.
Many Potential Risks Remain Unmanaged
Much of the debate among mental health professionals centers on controlling what an AI chatbot can say. Chatbots like ChatGPT can generate responses on any topic, which often makes conversations flow more smoothly but also makes it easy for them to go off the rails.
According to interviews with users, many people turn to ChatGPT for work or study and then go on to seek feedback on their emotional difficulties.
This was also the case for Whitney Pratt, a content creator and single mother, who one day decided to ask ChatGPT for “frank” feedback on her romantic relationship troubles.
“No, you are not ‘overreacting,’ but you are allowing someone who has proven they do not have good intentions toward you to continue hurting you,” ChatGPT replied, according to a screenshot Pratt shared. “You are holding onto someone who cannot love you the way you deserve, and that is not something you should accept.”
Pratt said she has been using the free version of ChatGPT for therapy over the past few months and credits it with improving her mental health.
“I feel like the chatbot has answered more questions than I ever got in therapy,” she said. Some things, she added, are easier to share with a computer program than with a therapist. “Humans are human, and they will judge us.”
Human therapists, however, are required by federal law to keep patient health information confidential. Many chatbots have no such obligation.
Some chatbots, such as Replika, seem so human-like that their developers must emphasize that they are not conscious. Replika mimics human behavior by expressing algorithm-generated desires and needs.
Replika is designed as a virtual friend but has been marketed as a source of healing for anyone “experiencing depression, anxiety, or difficult times.”
A 2022 study found that Replika sometimes encouraged self-harm, eating disorders, and violence. In one instance, a user asked the chatbot whether their suicide would be a good thing, and it replied, “yes, indeed.”
Eugenia Kuyda, co-founder of the company behind Replika, sees the chatbot as falling outside the realm of medical services but still serving as a way to improve people’s mental health.