Binaural summation of amplitude modulation involves weak interaural suppression

D. H. Baker, Greta Vilidaite, E. McClarnon, E. Valkova, A. Bruno, Rebecca Millman

Research output: Contribution to journal › Article › peer-review


The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’ shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
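The two-channel gain-control architecture described above can be sketched in code. This is a minimal, hypothetical illustration of the general model class (each ear's modulation response is divisively normalised by its own input plus weighted suppression from the other ear, then the channels are summed); the parameter names and values are assumptions for demonstration, not the fitted values reported in the paper.

```python
def binaural_response(mod_left, mod_right, w=0.1, z=0.2, p=2.0, q=1.6):
    """Sketch of a two-channel gain-control combination model.

    Each ear's modulation depth (0-1) is raised to an excitatory
    exponent p and divided by a saturation constant z, its own input,
    and the other ear's input weighted by the interaural suppression
    weight w. Channel responses are summed, with a final output
    nonlinearity. A small w corresponds to the weak interaural
    suppression suggested by the data. All values are illustrative.
    """
    left = mod_left ** p / (z + mod_left + w * mod_right)
    right = mod_right ** p / (z + mod_right + w * mod_left)
    return (left + right) ** (q / p)
```

With a weak suppression weight (small `w`), the model predicts a larger response to binaural than monaural presentation, and only modest attenuation of one ear's response by a contralateral masker; increasing `w` toward the values typical of binocular-vision models shrinks the binaural advantage.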

Original language: English
Article number: 3560
Pages (from-to): 1-14
Number of pages: 14
Journal: Scientific Reports
Issue number: 1
Early online date: 26 Feb 2020
Publication status: Published - 1 Dec 2020


Keywords:
  • binaural summation
  • EEG
  • amplitude modulation


