Extraction of visual features for lipreading

Iain Matthews, Timothy F. Cootes, J. Andrew Bangham, Stephen Cox, Richard Harvey

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as that of the head, convey additional information. We integrate speech cues from many sources, and this improves intelligibility, especially when the acoustic signal is degraded. This paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape, or of shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters.
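    The abstract's top-down methods derive lipreading features by applying principal component analysis to tracked lip-contour shapes. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes lip landmarks have already been tracked per frame, and the function name, landmark count, and component count are hypothetical choices for the example.

    ```python
    # Minimal sketch (not the paper's code): PCA over tracked inner/outer lip-contour
    # landmarks to produce per-frame "shape" feature vectors for an HMM recognizer.
    import numpy as np

    def pca_shape_features(landmark_sequences, n_components=10):
        """Project per-frame lip-contour landmarks onto their principal shape modes.

        landmark_sequences : list of arrays, each of shape (n_frames, n_points, 2)
                             holding x/y coordinates of the lip contours.
        Returns a list of arrays of shape (n_frames, n_components), one per sequence.
        """
        # Stack all frames from all sequences to learn shape modes over the training set.
        all_frames = np.concatenate([s.reshape(len(s), -1) for s in landmark_sequences])
        mean_shape = all_frames.mean(axis=0)

        # Principal modes of shape variation (a classic point-distribution model).
        centred = all_frames - mean_shape
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        modes = vt[:n_components]

        # Project each frame of each utterance onto the retained modes.
        features = []
        for seq in landmark_sequences:
            flat = seq.reshape(len(seq), -1) - mean_shape
            features.append(flat @ modes.T)
        return features

    if __name__ == "__main__":
        # Hypothetical example: two utterances with 40 lip landmarks tracked over time.
        rng = np.random.default_rng(0)
        seqs = [rng.normal(size=(30, 40, 2)), rng.normal(size=(25, 40, 2))]
        feats = pca_shape_features(seqs, n_components=10)
        print([f.shape for f in feats])  # [(30, 10), (25, 10)]
    ```

    The resulting low-dimensional feature vectors would then be modeled per frame by hidden Markov models, as in the recognition task described above; the paper's shape-and-appearance and scale-space (sieve) features are produced differently and are not shown here.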
    Original language: English
    Pages (from-to): 198-213
    Number of pages: 15
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Volume: 24
    Issue number: 2
    DOIs
    Publication status: Published - Feb 2002

    Keywords

    • Active appearance model
    • Audio-visual speech recognition
    • Connected-set morphology
    • Sieve
    • Statistical methods
