Facial Expression Synthesis and Feature Learning for Convolutional Neural Networks

  • Yao Peng

Student thesis: PhD

Abstract

One of the most challenging aspects of face recognition lies in dealing with variations in unconstrained environments, among which expression variations often dramatically change human appearance and affect recognition performance. Recently, deep learning has provided a plausible way of deriving robust image representations and learning to approximate data distributions. This thesis investigates the integration of image analysis and facial expression synthesis, and their applications to image classification and invariant face recognition. The work first focuses on modelling expression variations and recognising individuals across different expressions. An eigentransformation-based algorithm is developed to generate natural facial expressions from neutral faces, and then extended to synthesise expression manifolds with varying intensities. A simple yet effective expression transfer scheme is also presented to further extend the manifold-based synthesis with a limited number of training subjects. Extensive experiments are conducted and results are presented to validate the efficacy of the proposed eigentransformation algorithm in expression verification, classification and invariant face recognition, as well as its tolerance to landmark misalignment and its generalisation ability across various databases. Inspired by recent advances in deep generative models, especially generative adversarial nets, an appearance-based generative adversarial network, ApprGAN, is developed for facial expression synthesis. The proposed ApprGAN synthesises natural-looking, identity-preserving expressions and generalises well across databases. Comprehensive experiments and comparisons are conducted; results show the effectiveness of ApprGAN and its marked improvements over existing methods, both visually and quantitatively. Finally, an alternative, data-independent feature learning mechanism based on Markov random fields (MRFs) and self-organising maps (SOMs), termed MRF$_\text{Rot5}$-SOM$_\text{TI}$, is developed. The proposed MRF$_\text{Rot5}$-SOM$_\text{TI}$ generates data-independent, generic and transferable low-level features, models both intra- and inter-image dependencies, and helps in understanding image representations. Further combined with convolutional neural networks (CNNs), an image object classification framework, MRF$_\text{Rot5}$-SOM$_\text{TI}$-CNN, is presented. Theoretical analysis and experiments show the power of the proposed approach, as well as its advantages over other unsupervised feature learning mechanisms.
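As an illustration of the eigentransformation idea described above (not the thesis's exact algorithm), the NumPy sketch below synthesises an expression for a neutral input face by computing its PCA reconstruction weights over a set of neutral training faces and applying the same weights to the paired expression faces. The function name, the Gram-matrix trick and the eigenvalue cut-off are illustrative assumptions.

```python
import numpy as np

def eigentransform(x, neutral, expr):
    """Synthesise an expression for neutral face x (flattened image vector).

    neutral, expr: (d, m) matrices whose columns are paired training images
    (neutral[:, i] and expr[:, i] depict the same subject).
    """
    mu_n = neutral.mean(axis=1, keepdims=True)
    mu_e = expr.mean(axis=1, keepdims=True)
    A = neutral - mu_n                      # centred neutral training set
    B = expr - mu_e                         # centred expression training set

    # Eigen-decompose the small m x m Gram matrix instead of the d x d
    # covariance (the standard eigenface trick).
    lam, V = np.linalg.eigh(A.T @ A)
    keep = lam > 1e-8                       # drop near-zero modes
    lam, V = lam[keep], V[:, keep]

    # Reconstruction weights of x over the neutral training images:
    # x - mu_n ~ A w, with w = V diag(1/lam) V^T A^T (x - mu_n).
    w = V @ ((V.T @ (A.T @ (x - mu_n.ravel()))) / lam)

    # Apply the same combination weights to the paired expression images.
    return B @ w + mu_e.ravel()
```

Scaling the weights `w` (or interpolating between the neutral input and the synthesised output) is one simple way to obtain the varying-intensity expression manifolds the abstract mentions.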
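The abstract does not detail ApprGAN's architecture or losses, so the PyTorch skeleton below is only a generic stand-in showing the usual shape of an image-to-image GAN for expression synthesis: a generator mapping neutral faces to expressions, a discriminator, and an L1 appearance term that encourages identity preservation. Every layer choice and the weight `lam` are assumptions, not ApprGAN itself.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal encoder-decoder mapping a neutral face to an expression."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Minimal PatchGAN-style critic producing a score map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(neutral, target_expr, lam=10.0):
    # Discriminator: real expression images vs. generated ones.
    fake = G(neutral)
    d_real, d_fake = D(target_expr), D(fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D, plus an L1 term to keep appearance/identity.
    d_fake = D(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             lam * nn.functional.l1_loss(fake, target_expr)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```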
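Similarly, the MRF$_\text{Rot5}$-SOM$_\text{TI}$ construction is specific to the thesis; the sketch below only illustrates the underlying idea of SOM-based, data-independent feature learning: train a small self-organising map on image patches and reuse its prototypes as a fixed convolutional filter bank. The grid size, decay schedules and seed are illustrative assumptions.

```python
import numpy as np

def train_som_filters(patches, grid=(4, 4), epochs=10, lr0=0.5, sigma0=1.5):
    """Train a small SOM on image patches.

    patches: (n, k*k) array of flattened k x k patches.
    Returns a (grid[0]*grid[1], k*k) prototype matrix; each row, reshaped
    to k x k, can serve as a generic first-layer convolution filter.
    """
    rng = np.random.default_rng(0)
    n_units = grid[0] * grid[1]
    W = rng.standard_normal((n_units, patches.shape[1])) * 0.1
    # Grid coordinates for the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    n_steps, t = epochs * len(patches), 0
    for _ in range(epochs):
        for p in rng.permutation(len(patches)):
            x = patches[p]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            frac = t / n_steps
            lr = lr0 * (1 - frac)                         # linear decay
            sigma = sigma0 * (1 - frac) + 1e-3
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood
            W += lr * h[:, None] * (x - W)
            t += 1
    return W
```

Because such prototypes are learned from generic patch statistics rather than a labelled target dataset, they are transferable across tasks, which is the sense in which the abstract calls the features data-independent.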
Date of Award: 31 Dec 2019
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Hujun Yin & Wuqiang Yang
