IQ and the Ability to Hear in Noisy Environments: What a New Study Reveals

A new scientific study has highlighted an important connection between intellectual ability and the capacity to understand speech in noisy environments. This research shows that even people with perfectly normal hearing can struggle to follow conversations in places like busy restaurants, classrooms, or social gatherings if their cognitive ability is lower. In other words, the challenge is not just about the ears—it’s also about the brain.
The work was carried out by a team of researchers led by Bonnie K. Lau, a research assistant professor in the Department of Otolaryngology–Head and Neck Surgery at the University of Washington School of Medicine. The findings were published on September 24, 2025 in the journal PLOS One.
Why This Research Matters
When people complain about being unable to hear well in noisy environments, the common assumption is that they might have hearing loss. However, this study challenges that idea directly. It suggests that difficulty understanding speech in a crowd may be more closely tied to cognitive processing power than to the health of the ears themselves.
This connection was found not just in neurotypical individuals, but also in people with conditions like autism spectrum disorder (ASD) and fetal alcohol spectrum disorder (FASD). Both groups often report trouble following conversations in noisy places despite having no measurable hearing loss.
Who Participated in the Study
The study involved 49 participants in total, split into three groups:
- 12 individuals with autism spectrum disorder
- 10 individuals with fetal alcohol spectrum disorder
- 27 neurotypical individuals (comparison group), matched for age and sex
The participants ranged in age from 13 to 47 years. Before the main experiment, everyone underwent an audiological screening to confirm that their hearing was clinically normal. This included both a standard hearing threshold test and an otoacoustic emissions test.
The Listening Task
The main experiment was designed to mimic the classic “cocktail party problem”—the challenge of following one voice among many.
- Each participant was given headphones and introduced to a target voice (always male).
- While listening, they also heard two competing voices (maskers), which could be one male and one female voice, or two male voices.
- Each voice spoke sentences structured in the same way: a call sign, followed by a color and a number. For example: “Ready, Eagle, go to green five now.”
- The participant’s task was to identify the target sentence and select the correct colored number box on a computer screen.
To make the task increasingly difficult, the researchers gradually raised the volume of the competing voices. This allowed them to calculate each participant’s speech perception threshold—the point at which they could correctly identify the target about half of the time.
This threshold is measured as the target-to-masker ratio (TMR). A lower or negative TMR means better performance, because it shows the person can still follow the target voice even when the background voices are louder.
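The paper does not spell out the exact adaptive rule, but thresholds at the 50%-correct point are commonly estimated with a 1-up/1-down staircase: the TMR is lowered after each correct answer (making the task harder) and raised after each error, and the threshold is taken as the average TMR at the reversal points. The sketch below simulates that idea, assuming a hypothetical logistic listener model; the function names, step size, and listener parameters are illustrative, not from the study.

```python
import math
import random

def simulate_listener(tmr_db, true_threshold_db=-2.0, slope=0.5):
    """Hypothetical listener: probability of a correct response rises
    with TMR following a logistic psychometric function."""
    p = 1.0 / (1.0 + math.exp(-slope * (tmr_db - true_threshold_db)))
    return random.random() < p

def staircase_threshold(start_tmr_db=10.0, step_db=2.0, n_reversals=8):
    """1-up/1-down staircase: decrease TMR after a correct answer,
    increase it after an error. The track converges near the TMR at
    which the listener is correct about half the time."""
    tmr = start_tmr_db
    reversals = []
    last_direction = None
    while len(reversals) < n_reversals:
        correct = simulate_listener(tmr)
        direction = -1 if correct else +1  # harder after correct, easier after error
        if last_direction is not None and direction != last_direction:
            reversals.append(tmr)  # the track changed direction: record a reversal
        last_direction = direction
        tmr += direction * step_db
    # Threshold estimate = mean TMR across reversal points
    return sum(reversals) / len(reversals)
```

A negative result from such a procedure corresponds to the situation described above: the listener can still pick out the target voice even when it is quieter than the maskers.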
Measuring Intelligence
After the listening test, participants completed a set of standardized intelligence tests using the WASI-II (Wechsler Abbreviated Scale of Intelligence, Second Edition).
- The researchers calculated each participant’s Full-Scale IQ (FSIQ-4).
- They also looked at two sub-scores:
  - Verbal Comprehension Index (VCI)
  - Perceptual Reasoning Index (PRI)
- Performance on individual subtests—Vocabulary, Similarities, Block Design, and Matrix Reasoning—was also recorded.
This gave the researchers a complete picture of each participant’s intellectual profile.
The Results
The findings were clear and consistent:
- Higher IQ scores were strongly correlated with better speech perception in noisy environments.
- This relationship held true across all three groups—autism, FASD, and neurotypical individuals.
- Both verbal ability and nonverbal reasoning ability contributed to better performance. The results were not driven by verbal IQ alone.
- Even when controlling for age, sex, and diagnostic group, IQ remained the strongest predictor of how well someone performed on the multitalker task.
- Each of the four subtests (Vocabulary, Similarities, Block Design, Matrix Reasoning) was significantly correlated with performance.
One notable detail: on average, the neurotypical group could achieve negative TMRs, meaning they could still follow the target voice when it was quieter than the background voices. In contrast, many participants in the autism and FASD groups required the target voice to be slightly louder than the background to perform successfully.
What This Means
This research demonstrates that understanding speech in a noisy setting requires more than just healthy ears. It involves a combination of:
- Attention control (focusing on the speaker of interest)
- Working memory (holding bits of speech in mind as they are processed)
- Inhibitory control (suppressing irrelevant sounds and voices)
- Language processing (decoding syllables, words, and meaning)
- Social-cognitive skills (using context, body language, or facial cues in real conversations)
All of these cognitive processes place a high demand on the brain. This explains why individuals with lower cognitive ability—or those with certain neurodevelopmental conditions—may find noisy environments especially challenging.
Practical Applications
The implications of this study are important in several areas:
- Classroom environments: Children who struggle in noisy classrooms may benefit from simple adjustments like sitting closer to the teacher or using assistive listening devices.
- Audiological assessments: Traditional hearing tests may miss real-world difficulties. Including multitalker listening tasks could provide a more accurate picture.
- Clinical interventions: Instead of assuming hearing loss, professionals might consider whether cognitive support strategies could improve listening outcomes.
- Public awareness: The findings counter a common misconception—that struggling to hear in a noisy environment always means you have hearing damage.
Limitations of the Study
The authors were careful to note several limitations:
- The sample size was relatively small (fewer than 50 participants).
- Both autism and FASD are highly variable conditions, so results may not apply to every individual.
- The possibility of ADHD influencing results could not be fully ruled out, since ADHD often co-occurs with autism and FASD.
- The listening task, while well-designed, was still simpler than real-world environments, which involve more unpredictable sounds, reverberation, and visual distractions.
- The study could not pinpoint which exact cognitive mechanisms (e.g., working memory vs. attention) were driving the relationship between IQ and listening performance.
The Broader Picture: The Cocktail Party Effect
This study ties into a classic phenomenon known as the cocktail party effect. This effect describes the brain’s remarkable ability to tune into a single conversation while ignoring many others in the background.
Researchers have studied this effect for decades, and it is now widely recognized that it requires both auditory and cognitive processing. For example:
- The auditory system separates streams of sound based on frequency, pitch, and spatial location.
- The cognitive system then selects the relevant stream and suppresses the rest.
This new research strengthens the view that intellectual ability significantly enhances this process, explaining why some people can handle noisy environments much better than others.
Takeaway
The University of Washington study underscores a simple but powerful idea: being able to follow conversations in noisy places is not just about how well your ears work, but also about how efficiently your brain processes information.
By highlighting the role of intelligence and cognition, the research encourages educators, clinicians, and the public to rethink assumptions about listening difficulties. It also opens doors for more tailored interventions that consider both hearing and cognition together.