Researchers Use AI to Identify Early Alzheimer's Signs Through Patient's Voice

By employing artificial intelligence (AI) voice analysis, Boston University researchers were able to predict the onset of Alzheimer's disease within six years with 78.5% accuracy.
Researchers at Boston University have created an artificial intelligence system that can recognize early symptoms of Alzheimer’s disease in a patient’s speech.
Although there is currently no cure for the disease, early detection expands a patient's treatment options.
By combining machine learning and natural language processing, the researchers created a program that can automatically forecast how a patient’s Alzheimer’s disease will advance.
The AI is fed patient voice recordings and is able to predict whether or not the patient will develop Alzheimer’s disease within six years.
In the study, the AI system predicted which of 166 patients would develop Alzheimer's disease with an accuracy of 78.5%.
The AI “offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible, and easy-to-administer screening tool for mild cognitive impairments (MCIs) to Alzheimer’s progression prediction,” according to the researchers, who published the study’s findings in the Alzheimer’s Association Journal.
They add that the tool could also be used for remote evaluations, enabling patients to send voice recordings to physicians in order to receive a diagnosis.
By 2050, it is expected that approximately 13 million Americans will have Alzheimer’s disease, up from the current estimated 7 million.
Even though there isn’t a cure at this time, patients who identify the illness early have more options for therapy, such as enrolling in clinical trials in the hopes of discovering one.
To see if AI could be used to deliver early diagnoses, the researchers looked into voice recordings from neuropsychological exams together with basic demographic data.
Speech recognition was used to examine the recordings, converting the audio input into text that was further processed using language models.
The multimodal approach created what the researchers described as a “fully automated assessment” capable of identifying patients most at risk.
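The pipeline described above can be sketched in simplified form: a recording is transcribed, lexical features are extracted from the transcript, and a score estimates risk. Everything below is illustrative only; the function names, the feature set, and the weights are invented for this sketch, and the actual study used trained language models rather than hand-set thresholds.

```python
# Hypothetical sketch: audio -> transcript -> text features -> risk score.
# All names, features, and weights here are invented for illustration; the
# study itself used automatic speech recognition plus trained language models.

def transcribe(audio_path: str) -> str:
    """Stand-in for the automatic speech recognition step."""
    # A real system would run an ASR model on the recording; we return a
    # fixed example transcript so the sketch is self-contained.
    return "the uh the boy is um taking the the cookie"

def extract_features(transcript: str) -> dict:
    """Simple lexical features; filler words, repetition, and reduced
    lexical diversity are commonly studied markers of cognitive decline."""
    tokens = transcript.split()
    fillers = sum(t in {"uh", "um", "er"} for t in tokens)
    repeats = sum(a == b for a, b in zip(tokens, tokens[1:]))
    ttr = len(set(tokens)) / len(tokens)  # type-token ratio (diversity)
    return {
        "filler_rate": fillers / len(tokens),
        "repeat_rate": repeats / len(tokens),
        "type_token_ratio": ttr,
    }

def risk_score(features: dict) -> float:
    """Toy linear score in [0, 1]; weights are illustrative, not trained."""
    score = (0.5 * features["filler_rate"]
             + 0.5 * features["repeat_rate"]
             + 0.5 * (1.0 - features["type_token_ratio"]))
    return min(1.0, score)

feats = extract_features(transcribe("patient_001.wav"))
print(round(risk_score(feats), 3))  # prints 0.3 for the example transcript
```

In the real system, the hand-written feature extractor and linear score would be replaced by a language model trained on transcripts from neuropsychological exams, which is what makes the assessment "fully automated".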
The model also indicated that older women with lower education levels, as well as women carrying the APOE-ε4 gene variant, are more likely to develop Alzheimer's.
The AI also found that the likelihood of developing Alzheimer's increases "significantly" with age: some 19% of patients aged 75 to 84 were found likely to develop the disease, a figure that rises to 35% for those older than 85.
“Our study demonstrates the potential of using automatic speech recognition and natural language processing techniques to develop a prediction tool for identifying individuals with MCIs who are at risk of developing Alzheimer’s,” the researchers wrote.