Researchers in China have recently developed a new deep-learning algorithm that can detect signs of depression from a person’s speech.
By helping healthcare professionals identify patients who need mental health support more easily, this new AI technology represents a significant step toward reducing suicides and other complications of untreated depression.
AI has already proven useful in many fields, and it is now making strides in healthcare as scientists explore how these tools can identify signs of both physical and mental illness.
The new deep-learning algorithm can detect depression through voice. (Image: Tech Times).
One of the most common mental disorders is depression. According to data from the Centers for Disease Control and Prevention (CDC), about 1 in 6 adults will experience depression at some point in their lives.
In the U.S., approximately 16 million adults are diagnosed with depression each year. Depression can affect anyone, regardless of age or background.
The World Health Organization (WHO) notes that depression is a common illness worldwide, estimated to affect about 3.8% of the population, including 5.0% of adults and 5.7% of individuals over 60 years old. Depression affects approximately 280 million people globally.
In response to this pressing problem, the researchers developed a new deep-learning algorithm that can detect signs of depression in human speech.
The researchers trained their deep-learning model on the DAIC-WOZ dataset, a collection of audio recordings and 3D facial expressions from both depressed and non-depressed participants.
In their paper, researchers Han Tian, Zhang Zhu, and Xu Jing describe "a multi-information decision-making algorithm model, established through emotional recognition." The model analyzes representative data from subjects to help determine whether they are depressed.
In the dataset, a virtual healthcare worker asks individuals about their lives and moods while their voice and facial expressions are recorded as they respond.
Using openSMILE, an open-source toolkit for extracting features from speech and music, the researchers pulled the most significant acoustic characteristics out of the recordings and fed them into principal component analysis (PCA) to keep only the most informative ones.
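For readers curious what such a pipeline looks like in practice, here is a minimal Python sketch using the `opensmile` and scikit-learn packages. The choice of feature set, the file names, and the variance threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: extract acoustic features with openSMILE, then reduce
# them with PCA. Illustrative only; not the authors' exact setup.
import numpy as np
import opensmile
from sklearn.decomposition import PCA

# Configure openSMILE to compute utterance-level "functionals"
# (statistics of low-level descriptors such as pitch and energy).
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

# Hypothetical list of interview recordings (e.g., from DAIC-WOZ).
wav_files = ["participant_001.wav", "participant_002.wav"]

# process_file returns a pandas DataFrame with one row of features per file.
features = np.vstack([smile.process_file(f).to_numpy() for f in wav_files])

# PCA keeps the components explaining most of the variance, shrinking
# the feature vector before it is passed to a classifier.
pca = PCA(n_components=0.95)  # keep 95% of the variance
reduced = pca.fit_transform(features)
print(reduced.shape)
```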
The deep-learning algorithm performed well in tests, detecting depression in 87% of male patients and 87.5% of female patients.
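As an illustration of how such per-group accuracies might be computed on held-out data, the sketch below trains a stand-in classifier on synthetic features and scores male and female test subjects separately. The data, labels, and model are placeholders, not the authors' actual implementation.

```python
# Hypothetical evaluation sketch: train a classifier on (synthetic)
# reduced voice features, then report accuracy per sex group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # stand-in for PCA-reduced voice features
y = rng.integers(0, 2, size=200)  # 1 = depressed, 0 = not depressed
sex = rng.integers(0, 2, size=200)  # 1 = male, 0 = female

# Split features, labels, and group membership consistently.
X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(
    X, y, sex, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Accuracy computed separately for each group, as in the paper's reporting.
for label, mask in [("male", sex_te == 1), ("female", sex_te == 0)]:
    print(label, accuracy_score(y_te[mask], pred[mask]))
```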
This encouraging result could spur the creation of similar AI tools that detect symptoms of other mental disorders in speech, and the approach promises to become a supporting tool for psychiatrists and other healthcare professionals.
The development of this deep-learning algorithm could mark a major advancement in the fight against depression. By enabling doctors to diagnose earlier and more accurately, this technology could help those suffering from depression overcome the illness, thereby reducing the number of suicides related to depression.