Campus News

Exploring how the cochlea responds to speech sounds: John Oghalai and Alberto Recio receive an American Hearing Research Foundation (AHRF) Discovery Grant

The one-year grant will support research on how the cochlea responds to fricative consonants like “S” and “SH.”

Michelle Meyers March 03, 2026
From left, USC researchers John Oghalai, MD, and Alberto Recio, PhD (Image courtesy of the USC Caruso Department of Otolaryngology–Head and Neck Surgery)

USC researchers John Oghalai, MD, and Alberto Recio, PhD, were inspired by a simple question with no easy answer: Why do patients have more trouble understanding speech in the presence of background noise as they get older?

To address this question, the team has been awarded an American Hearing Research Foundation (AHRF) Discovery Grant, which provides up to $50,000 in seed funding for studies that investigate hearing and balance disorders related to the inner ear. Founded in 1956, AHRF funds research to understand and find solutions for hearing and balance disorders, and to educate the public about these conditions.

The AHRF Discovery Grant will give Oghalai and Recio the opportunity to study how the cochlea responds to speech sounds, particularly fricative consonants, such as “S” or “SH.”

Traditionally, researchers have studied the cochlea’s response to tones and clicks played at specific frequencies and volumes. However, this approach doesn’t capture the full complexity of what it’s like for the cochlea to process and respond to actual human speech, which may contain both high-frequency consonant sounds and low-frequency vowel sounds.

“We want to better understand what the vibrations of the cochlea look like in response to speech rather than tones or clicks, starting with consonants and then perhaps someday moving on to vowels,” said Recio, a research scientist in the USC Caruso Department of Otolaryngology–Head and Neck Surgery.

The way the ear works is, in Oghalai’s words, “really confusing,” though the official word scientists would use to describe it is “non-linear.”

“Let’s say you take two sounds and play one and then the other, or both simultaneously. The responses together are not just going to be the sum of the first and the second,” said Oghalai, chair of the USC Caruso Department of Otolaryngology–Head and Neck Surgery, professor of Otolaryngology–Head and Neck Surgery, Neurological Surgery, and Biomedical Engineering, and Leon J. Tiber and David S. Alpert Chair in Medicine. “We have no way to predict exactly how the ear is going to process speech because speech has a lot of different frequency information within it.”

Hearing aids aren’t yet designed to mimic the non-linear way in which the cochlea perceives and differentiates sounds in situations with background noise. This is why these devices tend to be far less effective in noisy environments such as restaurants.

While the experiment supported by the AHRF Discovery Grant will take place over the next year, Oghalai and Recio anticipate that technological advances over the next five to 10 years will help clinicians better care for patients with age-related hearing loss, and will help researchers conduct less invasive imaging studies of the inner ear in humans. Recio hopes to eventually work with patients to more directly measure how the human cochlea functions.

Oghalai and Recio have known each other for more than 30 years, having both studied under the same mentor, neurophysiologist William S. Rhode at the University of Wisconsin-Madison. They began collaborating about 15 years ago to develop optical coherence tomography (OCT) technology to study the inner ear, and they look forward to many more years of working together to help patients with hearing and balance disorders.

“Hearing loss is important,” said Oghalai, “because everybody gets it as they get older.”