Functional asymmetries in the representation of noise-vocoded speech.

Rebecca E Millman, Will P Woods, Philip T Quinlan

Research output: Contribution to journal › Article › peer-review

Abstract

It is generally accepted that, while speech is processed bilaterally in auditory cortical areas, complementary analyses of the speech signal are carried out across the hemispheres. However, the Asymmetric Sampling in Time (AST) model (Poeppel, 2003) suggests that there is functional asymmetry due to different time scales of temporal integration in each hemisphere. The right hemisphere preferentially processes slow modulations commensurate with the theta frequency band (~4-8 Hz), whereas the left hemisphere is more sensitive to fast temporal modulations in the gamma frequency range (~25-50 Hz). Here we examined the perception of noise-vocoded, i.e. spectrally-degraded, words. Magnetoencephalography (MEG) beamformer analyses were used to determine where and how noise-vocoded speech is represented in terms of changes in power resulting from neuronal activity. The outputs of beamformer spatial filters were used to delineate the temporal dynamics of these changes in power. Beamformer analyses localised low-frequency "delta" (1-4 Hz) and "theta" (3-6 Hz) changes in total power to the left hemisphere and high-frequency "gamma" (60-80 Hz, 80-100 Hz) changes in total power to the right hemisphere. Time-frequency analyses confirmed the frequency content and timing of changes in power in the left and right hemispheres. Together the beamformer and time-frequency analyses demonstrate a functional asymmetry in the representation of noise-vocoded words that is inconsistent with the AST model, at least in brain areas outside of primary auditory cortex.
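Noise-vocoding, as used in the abstract above, degrades speech spectrally while preserving the slow amplitude envelope in each frequency band. A minimal sketch of the general technique (not the authors' stimulus-generation code; band count, band edges, and the 30 Hz envelope cutoff are illustrative assumptions) divides the signal into bands, extracts each band's envelope, and uses it to modulate band-limited noise:

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=5000.0, env_cutoff=30.0):
    """Simplified noise vocoder (illustrative sketch, not the authors' code):
    split the signal into log-spaced bands, extract each band's amplitude
    envelope, and use it to modulate band-matched noise."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = rng.standard_normal(len(signal))

    def bandpass(x, lo, hi):
        # Crude FFT-mask bandpass: zero all bins outside [lo, hi]
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        spectrum[(freqs < lo) | (freqs > hi)] = 0
        return np.fft.irfft(spectrum, n=len(x))

    def envelope(x, cutoff):
        # Rectify, then low-pass to keep only slow amplitude modulations
        return bandpass(np.abs(x), 0.0, cutoff)

    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(signal, lo, hi), cutoff=env_cutoff)
        carrier = bandpass(noise, lo, hi)  # noise restricted to the same band
        out += np.clip(env, 0.0, None) * carrier
    return out
```

With few bands the vocoded output retains intelligible temporal-envelope cues while fine spectral detail is replaced by noise, which is what makes such stimuli useful for probing hemispheric sensitivity to slow versus fast modulations.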
Original language: English
Journal: NeuroImage
Volume: 54
Issue number: 3
DOIs
Publication status: Published - 1 Feb 2011
