Researchers Turn to AI to Help Diagnose Children’s Speech Disorders

Artificial Intelligence

By Lauren Coffey     May 28, 2025

When Marisha Speights first started as a speech-language pathologist in preschools serving affluent families in Nashville, Tennessee, she used the typical screening and assessment measures that she believed — and was taught — worked well.

But when she was placed in Jackson, Mississippi, at preschools that served poorer families, she found the tests were no longer working.

“It was, ‘I don’t think this child has a speech or language issue, but the test says they’re at risk.’ And also the other way, of it not identifying children I thought were at risk,” Speights says. “I had this question of when you utilize these measures in groups with different characteristics.”

She eventually took that question to Northwestern University, where she is now building her own artificial intelligence system that could potentially address the issue.

The Pediatric Speech Technologies and Acoustics Research Lab, known as the PedzSTAR Lab, builds toolboxes of acoustic biomarkers to track children’s speech patterns, using samples from children both with and without speech disorders. Once researchers verified differences between the two groups, the team moved on to building artificial intelligence and machine learning applications that they hope will eventually predict speech disorders.
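The article does not detail the lab’s modeling pipeline, but as a rough illustration of the general approach it describes (acoustic features extracted from recordings of both groups, then a machine learning model trained to separate them), a minimal sketch could look like the following. The file names, labels, feature set and classifier here are hypothetical assumptions for illustration, not the PedzSTAR Lab’s actual toolbox.

# Illustrative sketch only, not the PedzSTAR Lab's actual pipeline:
# summarize labeled child-speech recordings with simple acoustic features
# and train a baseline classifier to separate the two groups.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_features(wav_path, sr=16000):
    """Represent a recording as the mean and standard deviation of its MFCCs,
    a commonly used acoustic feature set (an assumption, not the lab's choice)."""
    audio, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical dataset: (path, label) pairs, where label 1 marks a child
# flagged with a speech disorder. In practice this would be hundreds of samples.
samples = [("child_001.wav", 0), ("child_002.wav", 1)]

X = np.array([extract_features(path) for path, _ in samples])
y = np.array([label for _, label in samples])

# Hold out part of the data, fit the classifier, and report accuracy by group.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))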

Speights has compiled samples from 400 children so far, ranging across geographic location, cultural background and socioeconomic status. She eventually hopes to collect more than 2,000 speech samples from just as many children.

“In the current dataset, we have many children that are not represented, and that was one of our big goals: to have more representation of different kinds of children speakers,” she says.

PedzSTAR joins a growing number of attempts to use AI in the speech pathology world.

Jordan Green, a professor of communication sciences and disorders at the Massachusetts General Hospital Institute of Health Professions, says in a recent research paper that the excitement around AI in health care is “palpable.” Its use in the speech world extends across virtual therapists, interactive games, chatbot conversational partners and AI-driven diagnostics.

Nina Benway, a postdoctoral fellow at the University of Maryland, College Park, says the uptick in AI usage can be attributed to three things: more data to train AI systems, more accessible computing power and more mainstream large language models, such as ChatGPT.

“It’s been used in the field broadly most by clinicians to help with lesson planning, material generation, things like that, but the idea of using AI to assist with treatment has been relatively new,” Benway says.

Improving Student Outcomes

When it comes to speech-language pathology, Speights says children at the pre-K level are largely overlooked in comparison to older children and adults.

“Collecting speech data with children is hard; you can’t just give them something to read,” she says. “You have to create engaging activities, have to control the environment to get quality recordings and have to get people who are skilled with young children.”

In her work, Speights has children play with toy farm animals, since many of the words use early-developing speech sounds — the “kuh” sound in cow, for example. She and her team capture the sounds that children make during play time, then move the children on to structured tasks like looking at pictures and describing what they see, as well as a formal assessment.

Speights says she hopes the work of the lab will eventually yield software to help better diagnose children’s speech disorders.

The University at Buffalo is similarly pushing for AI to help with speech diagnosis. In the fall of 2022, the university, as part of the State University of New York (SUNY) system, received a five-year, $20 million grant from the National Science Foundation to study how the technology can be used to diagnose and treat speech problems in children.

“Everyone knows someone who has children who are struggling or have struggled with some of their language,” says Venu Govindaraju, director of the NSF National AI Institute for Exceptional Education. “Because of the potential of AI, people are catching on to ‘If AI can do this, maybe it can do this” — meaning speech development support — “as well.’”

The project is currently collecting and validating data, with the ultimate goal of creating universal screening tools for teachers to use in schools. Researchers also hope to help with intervention, focusing on personalized attention for each student.

“This struck a chord with a lot of people; they can see AI and its potential not just in this field but others, so they’re open to the potential and I think the two [tools] resonate,” Govindaraju says. “We want to make sure we catch it early on; like anything else, the sooner you detect the easier [treatment] it will be.”

Alleviating Heavy Workloads

Both Govindaraju and Speights were quick to say that AI would not replace speech pathologists and that the technology would never make its own diagnoses. It would be overseen by a licensed care provider, who would make the final call.

But in some parts of the country, speech professionals are scarce, and the field needs solutions to help.

Lauren Arner, associate director of school services in speech-language pathology at the American Speech-Language-Hearing Association, says the organization believes that, with the right guardrails in place, AI can help right-size the increasing workload many speech-language pathologists are seeing.

“A lot of the workload surrounds completing the assessments and associated documentation, so any of these technologies we can use to alleviate some of the workload [helps],” she says. “It allows more access for SLPs to see the students and provide interventions because they are less bogged down with paperwork.”

According to ASHA’s 2024 annual school survey, the number of children being diagnosed with speech disorders is growing, outpacing the number of speech-language pathologists available to serve them. Roughly 27 percent of pathologists said they were considering leaving the profession because of burnout, as many teachers have done. As with teachers, some experts attribute the widening gap to a lack of pay and organizational funding, but many believe the chasm will never fully be closed.

“There are always going to be more kids than speech language pathologists,” Speights says, adding that she thinks automation can decrease workload in some areas, allowing care providers to “focus more on precision care, like helping children that really need that support to get individual personalized care.”

Speights adds that the tools can help pathologists follow children’s language progression over time, while Arner says they can be particularly helpful for families in rural areas, who see speech pathologists less often than those with more access to support in cities.

But with the use of AI come important safety considerations, namely keeping any identifiable information about the children out of the AI systems and ensuring that the data collected is properly secured. With PedzSTAR, Speights says no personal information is collected during the speech sampling process, and what is gathered is housed on internal servers rather than in the broader cloud, where it could be more easily accessed.

“Because of pediatric vulnerability, we do want to make sure the children are protected,” she says.

Arner says ASHA is planning to release AI guidance this summer and advises speech pathologists to check their school’s or organization’s policy on AI before using the tools. University of Maryland’s Benway recently released an article outlining considerations for implementing AI in the speech pathology field, boiling them down to three things: validity, reliability and representation.

“When a clinician makes an assessment, the AI might help gather those measures, but the clinician makes a plan with treatment, diagnosis, etc.,” Benway says. “It’s likely AI will be most useful in the short term when it’s automating things clinicians already do, versus trying to be a clinician itself.”

