
Difficulty Integrating Sight and Sound in Autism

🕑 2 minutes read
Posted November 19, 2014

For people with autism spectrum disorder (ASD), interpreting what other people say can be hard work. A new study from the University of Toronto investigated the amount of mental effort involved in processing speech. The researchers found that individuals with ASD are less efficient than neurotypical individuals at integrating the visual and auditory components of speech. The findings could help clinicians develop more effective sensory processing interventions for people on the autism spectrum.

The researchers used a technique based on BOLD (blood-oxygen-level-dependent) imaging to measure brain energy use in 34 participants: 16 adults with ASD and 18 neurotypical adults who served as the control group. The participants watched videos while lying in an MRI scanner. Two of the videos showed a woman telling a story; in one, her lip movements matched the audio, while in the other, the audio lagged half a second behind her lip movements. The participants also watched a pair of videos of a woman making non-speech sounds: she shouted "Wooo," smacked her lips, and made other noises. As with the storytelling videos, one of these videos had synchronized visual and auditory components and the other had a half-second audio delay.

The control group's performance corroborated previous research indicating that neurotypical adults process speech more efficiently, meaning their brains use less energy, when sight and sound are synchronized. For the control group, efficiency increased by 18 percent when sight and sound were synchronized. The ASD group showed a much smaller benefit: their efficiency increased by only seven percent for the synchronized video compared to the unsynchronized one. This suggests that for people with ASD, normal speech is almost as hard to parse as speech that is slightly out of sync.

For both groups, the efficiency gain from synchronization was small for the non-speech videos, at around five percent. This indicates that the gap in speech processing efficiency between ASD and neurotypical individuals applies to language specifically, not to sound in general.

The findings may provide additional evidence for the “intense world theory” of autism, which posits that everyday life is filled with overwhelming stimuli for people with ASD.

This research was presented at the 2014 Society for Neuroscience annual meeting in Washington, D.C.
