Visual Speech Synthesis Using a Variable-Order Switching Shared Gaussian Process Dynamical Model

Salil Deena, Shaobo Hou, Aphrodite Galata

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we present a novel approach to speech-driven facial animation using a non-parametric switching state-space model based on Gaussian processes. The model is an extension of the shared Gaussian process dynamical model, augmented with switching states. Two talking-head corpora are processed by extracting visual and audio data from the sequences, followed by a parameterization of both data streams. Phonetic labels are obtained by performing forced phonetic alignment on the audio. The switching states are found using a variable-length Markov model trained on the labelled phonetic data. The audio and visual data corresponding to the phonemes matching each switching state are extracted and modelled together using a shared Gaussian process dynamical model. We propose a synthesis method that takes into account both previous and future phonetic context, thus accounting for forward and backward coarticulation in speech. Both objective and subjective evaluation results are presented. The quantitative results demonstrate that the proposed method outperforms other state-of-the-art methods in visual speech synthesis, and the qualitative results reveal that the synthetic videos are comparable to ground truth in terms of visual perception and intelligibility.
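
To make the structure of the described pipeline concrete, the following is a minimal, hypothetical Python sketch rather than the authors' implementation: a toy variable-length Markov model over phone labels selects a switching state, and a per-state model maps audio features to visual features. All names (VariableLengthMarkovModel, train_per_state_models, synthesise) are illustrative assumptions; a plain Gaussian process regressor from scikit-learn stands in for the shared Gaussian process dynamical model, the switching-state labels are assumed to be given rather than derived from the VLMM contexts as in the paper, and the use of future phonetic context (backward coarticulation) is omitted.

```python
# Hypothetical sketch of the pipeline in the abstract (not the paper's code).
# A plain GP regressor per switching state approximates the shared GPDM.
import numpy as np
from collections import defaultdict
from sklearn.gaussian_process import GaussianProcessRegressor


class VariableLengthMarkovModel:
    """Toy variable-length Markov model over phone labels.

    Stores suffix contexts up to `max_order` and maps each observed context
    to the switching state most frequently seen with that context.
    """

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, phone_sequences, state_sequences):
        for phones, states in zip(phone_sequences, state_sequences):
            for t in range(len(phones)):
                for k in range(1, self.max_order + 1):
                    if t - k + 1 < 0:
                        break
                    ctx = tuple(phones[t - k + 1:t + 1])
                    self.counts[ctx][states[t]] += 1
        return self

    def predict_state(self, context):
        # Back off from the longest matching suffix to shorter ones.
        for k in range(min(self.max_order, len(context)), 0, -1):
            ctx = tuple(context[-k:])
            if ctx in self.counts:
                return max(self.counts[ctx], key=self.counts[ctx].get)
        return None


def train_per_state_models(audio_feats, visual_feats, states):
    """Fit one GP regressor (audio -> visual) per switching state."""
    models = {}
    for s in set(states):
        idx = [i for i, si in enumerate(states) if si == s]
        gp = GaussianProcessRegressor().fit(audio_feats[idx], visual_feats[idx])
        models[s] = gp
    return models


def synthesise(models, vlmm, audio_feats, phone_labels):
    """Predict a visual trajectory frame by frame from audio and phone context."""
    frames = []
    for t, a in enumerate(audio_feats):
        state = vlmm.predict_state(phone_labels[:t + 1])
        gp = models.get(state, next(iter(models.values())))
        frames.append(gp.predict(a.reshape(1, -1))[0])
    return np.vstack(frames)
```

In a usage run under these assumptions, the VLMM and the per-state regressors would be fitted on the parameterized training streams, and synthesise would then be called on held-out audio features together with their forced-aligned phone labels to produce a visual feature trajectory.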
Original language: English
Pages (from-to): 1755-1768
Journal: IEEE Transactions on Multimedia
Volume: 15
Issue number: 8
Early online date: 26 Aug 2013
DOIs
Publication status: Published - 1 Dec 2013
