AI detects autistic speech patterns across different languages

Summary: Machine learning algorithms help researchers identify speech patterns in children on the autism spectrum that are consistent between different languages.

Source: Northwestern University

A new study led by Northwestern University researchers used machine learning – a branch of artificial intelligence – to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that speech features may be a useful tool for diagnosing the condition.

Conducted with collaborators in Hong Kong, the study yielded insights that could help scientists distinguish between genetic and environmental factors that shape the communication abilities of people with autism, which could help them learn more about the condition’s origins and develop new treatments.

Children with autism often speak more slowly than typically developing children, and they show other differences in pitch, intonation, and rhythm. But these differences (which researchers call “prosodic differences”) have been surprisingly difficult to characterize in a consistent and objective way, and their origins have remained unclear for decades.

However, a team of researchers led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, has successfully used supervised machine learning to identify speech differences associated with autism.

The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in a wordless children’s picture book called “Frog, Where Are You?”
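
To make that pipeline concrete, here is a minimal sketch of how narrative recordings like these might be turned into rhythm- and intonation-related features and fed to a supervised classifier. It assumes the librosa and scikit-learn libraries; the feature definitions, file names, and classifier choice are illustrative and are not the study’s actual method.

```python
# Minimal sketch (not the study's pipeline): turn a narrative recording into
# a handful of rhythm- and intonation-related features and fit a supervised
# classifier. librosa/scikit-learn, the feature set, and the file paths are
# illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def prosody_features(wav_path):
    """Crude rhythm- and intonation-related summary features for one sample."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Intonation proxy: statistics of the fundamental-frequency (F0) contour.
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Rhythm proxy: timing statistics of energy onsets (rough syllable rate).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    gaps = np.diff(onsets) if len(onsets) > 1 else np.array([0.0])
    return np.array([
        f0.mean(), f0.std(),          # pitch level and variability
        gaps.mean(), gaps.std(),      # inter-onset timing (rhythm)
        len(onsets) / (len(y) / sr),  # onsets per second
    ])

# Placeholder file list and labels (1 = autism, 0 = non-autistic comparison).
paths = ["narrative_001.wav", "narrative_002.wav"]
labels = [1, 0]

X = np.vstack([prosody_features(p) for p in paths])
y = np.array(labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X))  # with real data you would cross-validate instead
```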

The results were published in the journal PLOS ONE on June 8, 2022.

“When you have languages that are so structurally different, any similarities in speech patterns seen in autism across the two languages are likely to be traits that are strongly influenced by the genetic liability to autism,” said Losh, the Jo Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern.

“But just as interesting is the variability we observed, which may highlight speech features that are more malleable and potentially good targets for intervention.”

Lau added that using machine learning to identify the key elements of speech that were predictive of autism marked an important step forward for the researchers, who have been constrained by the predominance of English in autism research and by human subjectivity when it comes to classifying speech differences between autistic and non-autistic people.

“Using this method, we were able to identify speech features that can predict a diagnosis of autism,” said Lau, a postdoctoral researcher working with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.

“The most notable of these features is rhythm. We hope this study can be the foundation for future work on autism that leverages machine learning.”

The researchers believe their work has the potential to contribute to an improved understanding of autism. Lau said AI could make autism diagnosis easier by reducing the burden on health care professionals, making diagnosis accessible to more people. It could also provide a tool that might one day transcend cultures, because a computer can analyze words and sounds in a quantitative way regardless of language.


Since the speech features identified via machine learning include some that are common to English and Cantonese and others that are specific to one language, Losh said, machine learning could be useful for developing tools that not only identify aspects of speech suitable for therapeutic intervention but also measure the effect of those interventions by assessing a speaker’s progress over time.

Finally, the study’s findings could inform efforts to identify and understand the role of specific genes and brain-processing mechanisms involved in genetic susceptibility to autism, the authors said. Ultimately, their goal is to form a more comprehensive picture of the factors that shape the speech differences of people with autism.

“One brain network involved is the auditory pathway at the subcortical level, which is closely tied to differences in how speech sounds are processed in the brain by individuals with autism relative to those who are typically developing, across cultures,” Lau said.

“The next step will be to determine whether these processing differences in the brain lead to the behavioral speech patterns we observe here, and to identify their underlying neural genetics. We are excited about what’s to come.”


About this AI and ASD research news

Author: Max Witynski
Source: Northwestern University
Contact: Max Witynski – Northwestern University
Image: The image is in the public domain

Original Research: Open access.
“Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study” by Joseph C. Y. Lau et al. PLOS ONE


Abstract

Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study

Differences in speech prosody are a widely observed feature of autism spectrum disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across different languages that show cross-linguistic variability in prosody.

Using a supervised machine-learning approach, we examined acoustic features relevant to the rhythmic and intonational aspects of prosody derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages.

Our models revealed successful classification of ASD diagnosis using rhythm-relevant features within and across both languages. Classification with intonation-relevant features was significant for English but not Cantonese.

The findings highlight differences in rhythm as a key prosodic feature affected in autism, and also illustrate important variability in other prosodic properties that appear to be modulated by language-specific differences, such as intonation.
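
As an illustration of the within- versus across-language comparison described in the abstract, the sketch below trains a classifier on one language’s prosodic feature vectors and tests it on the other’s. The random arrays and model choice are placeholder assumptions, not the published analysis.

```python
# Minimal sketch (not the published analysis) of the within- vs. cross-language
# comparison: train a classifier on one language's prosodic feature vectors and
# test it on the other's. The random arrays below merely stand in for real
# English and Cantonese narrative features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def make_model():
    return make_pipeline(StandardScaler(), SVC())

rng = np.random.default_rng(0)
X_en, y_en = rng.normal(size=(40, 5)), rng.integers(0, 2, 40)    # English samples
X_yue, y_yue = rng.normal(size=(40, 5)), rng.integers(0, 2, 40)  # Cantonese samples

# Within-language: cross-validated classification inside English alone.
within_en = cross_val_score(make_model(), X_en, y_en, cv=5).mean()

# Cross-language: train on English, test on Cantonese.
cross_acc = make_model().fit(X_en, y_en).score(X_yue, y_yue)

print(f"within English: {within_en:.2f}  English->Cantonese: {cross_acc:.2f}")
```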
