ABSTRACT: Early intervention for infants with impaired hearing is critical for improved outcomes in speech, language, and cognitive development. Currently, clinical decisions are often delayed until late in an infant’s first year, when behavioural assessments become feasible. These delays mean the benefits of early screening may not always be fully realised. However, reliable behavioural assessment of hearing before a developmental age of around 9 months remains a significant challenge.
Our ongoing work explores a novel approach to measuring behaviour: automated classification of sound-evoked responses in the head and face. This approach aims to overcome some of the traditional limitations of behavioural assessment of infants and to lower the age at which reliable behavioural measures can be obtained. We will discuss how facial recognition software can be used to detect and track changes in facial features and head movements in response to the detection of supra-threshold sounds. The ability to reliably classify sound-evoked facial behaviours would provide a highly sensitive, automated, and affordable tool to complement common behavioural tests. Such a tool would be potentially valuable for a number of clinical populations. For infants, automated classification could be applied to existing test procedures such as visual reinforcement audiometry, and might also allow for assessment at a younger age than is currently possible.
|Period||14 Nov 2019|
|Event title||16th Annual Conference of the British Academy of Audiology|
|Location||Liverpool, United Kingdom|
|Degree of Recognition||National|