In the SoundBrain Lab, we study the sensory and cognitive processes that underlie speech and music perception. Using functional magnetic resonance imaging (fMRI), event-related potentials (ERPs), brainstem electrophysiology, and behavioral methods, we examine how speech and music are represented in the human brain and how these representations are modified by listening experience.
We are running several studies investigating Contributing Factors for Speech Perception In Noise (SPIN). We are exploring how speech perception is affected by various speech-enhancing cues in different noise conditions, ranging from multiple background talkers to noise spectrally matched to speech. We will interpret our results with reference to current clinical speech-in-noise testing practices.
- Cue-Enhancing Effects on SPIN: This study considers how speech-enhancing cues such as contextual information, speaking style, and visual cues affect speech intelligibility in background noise types ranging from multi-talker babble (informational masking) to speech-shaped noise (energetic masking).
- Nativeness Effect on SPIN: This study explores the perception of native versus non-native speech in the presence of multiple background talkers.
- Noise Level Effects on SPIN: Two separate investigations of speech intelligibility at various signal-to-noise ratios under purely energetic masking (pink noise):
  - Effect of Context
  - Effect of Nativeness
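Signal-to-noise ratio (SNR) in these studies refers to the level of the speech relative to the masker, expressed in decibels. As a rough illustration only (not the lab's actual stimulus pipeline; the function name and scaling choices here are assumptions), mixing a speech waveform with noise at a target SNR can be sketched as:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-plus-noise mixture has the target SNR in dB.

    Hypothetical helper for illustration; real stimulus preparation may
    use windowed RMS measures, level calibration, and ramping.
    """
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(speech)]
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    # SNR_dB = 20 * log10(rms_speech / rms_noise), so solve for the
    # noise RMS that yields the requested SNR and rescale the noise.
    target_rms_noise = rms_speech / (10 ** (snr_db / 20))
    return speech + noise * (target_rms_noise / rms_noise)
```

Lower (more negative) SNR values correspond to harder listening conditions; a 0 dB mixture has speech and noise at equal RMS levels.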
In our Musician Study, we are attempting to understand the cognitive benefits and limitations associated with a wide range of musical training. Specifically, we are examining learning differences among classically trained, jazz-trained, and informally trained musicians in the Austin area. We will also investigate how music learning may bolster successful aging in the older adult population.
In our Elderly Study, we are investigating how visual and auditory information is used to process speech across the lifespan.
We are also currently investigating the Neural Correlates of Speech Perception and Learning. Using fMRI facilities at the Imaging Research Center at the University of Texas at Austin, we monitor neural activation and connectivity patterns while participants perform tasks involving: (1) native versus non-native accented speech perception; (2) integration of visual cues in speech recognition; and (3) foreign speech category learning. The data are analyzed with the aim of revealing the sources of the immense variability across individuals in both behavioral performance and neural efficiency.