A technology based on artificial intelligence is helping to spot biomarkers and document the progression of amyotrophic lateral sclerosis (ALS) in a large speech study being conducted by EverythingALS.
The technology, developed by Modality.ai, is a web-based computer program that uses audio (speech) and video (facial) recordings to assess neurological states automatically through AI and machine learning algorithms.
Its greatest advantage is that data can be collected remotely, at home, on any computer, with the help of a virtual assistant called “Tina.” This matters for people with ALS, whose muscle weakness often limits mobility and can make it difficult to participate in clinical studies.
“Our mission is to discover and deploy initiatives that focus on new ways to diagnose and treat neurological disorders at the intersection of computing and brain science with a focus on ALS,” Indu Navar, CEO and co-founder of EverythingALS, a U.S. nonprofit that is part of the Peter Cohen Foundation, said in a press release.
“We need to reinvent the research platform, and we are doing this now by rapid patient recruitment, partnerships with pharma and medical device companies, and collaboration with leading medical and research institutions, including MIT, Massachusetts General, and Johns Hopkins,” added Navar, who lost her husband to ALS.
Biomarkers help doctors make a diagnosis, may provide an accurate picture of a patient’s likely disease course, and aid in selecting patients for clinical studies. Validated biomarkers for ALS, however, are currently lacking.
Speech problems caused by difficulty articulating sounds are common and among the first symptoms of ALS. Biomarkers identified in EverythingALS’ speech study may enable earlier diagnosis and expedite support for clinical studies.
The nonprofit is looking for adults with diagnosed or suspected ALS, as well as healthy volunteers, to participate in the speech study, which has included more than 650 people so far. People may participate in the study whether their speech is normal or impaired by ALS.
Participants are asked to spend about 20 minutes per week on tasks such as reading sentences at a comfortable pace, which measures speaking speed, and producing a variety of sounds. All data collected are protected and stored anonymously.
The first results were shared collaboratively by members of EverythingALS and Modality at Interspeech 2021, an international conference focused on speech that was held in Brno, Czech Republic, and virtually Aug. 30–Sept. 3.
The presentation, “Investigating the utility of multimodal conversational technology and audiovisual analytic measures for the assessment and monitoring of amyotrophic lateral sclerosis,” (abstract Fri-A-SS-2-2) was delivered on the last day of the conference.
Results demonstrated that a number of speech and visual measures — such as speech rate, duration, and voicing, as well as lip and jaw movements — differed significantly among healthy controls, pre-symptomatic patients, and symptomatic patients.
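To give a concrete sense of what a measure like “speech rate” looks like in practice, here is a minimal sketch. It is a hypothetical illustration, not Modality.ai’s actual pipeline: it assumes word-level timestamps have already been extracted from a recording, and derives a speaking rate and a mean pause duration from them.

```python
def speech_measures(word_times):
    """Compute simple speech-timing measures from word timestamps.

    word_times: list of (start_s, end_s) tuples, one per spoken word,
    in chronological order. Returns speaking rate in words per minute
    and the mean silent pause between consecutive words, in seconds.
    """
    if not word_times:
        return {"rate_wpm": 0.0, "mean_pause_s": 0.0}

    # Elapsed time from the first word's onset to the last word's offset.
    total_s = word_times[-1][1] - word_times[0][0]
    rate_wpm = len(word_times) / total_s * 60.0

    # Gaps between the end of one word and the start of the next.
    pauses = [nxt[0] - cur[1] for cur, nxt in zip(word_times, word_times[1:])]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0

    return {"rate_wpm": round(rate_wpm, 1), "mean_pause_s": round(mean_pause, 2)}


# Toy example: five words spoken over three seconds.
words = [(0.0, 0.4), (0.6, 1.0), (1.2, 1.7), (2.0, 2.4), (2.6, 3.0)]
print(speech_measures(words))
```

In a real study, the timestamps would come from automatic speech recognition, and slowed speaking rate or lengthened pauses are the kinds of signals that could distinguish the groups described above.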
In addition to its own research, EverythingALS hosts webinars twice a month with researchers and clinicians, with the goal of keeping the ALS community informed and involved. It also plans to launch a weekly podcast, “Stories and innovations in ALS,” featuring interviews with researchers and people living with the disease.
“As a patient-oriented organization, we have experienced caring for our loved ones with ALS firsthand and know why it’s critical we end the disease,” said Navar.
The speech study is sponsored by the Peter Cohen Foundation.