Component-based discriminative classification for hidden Markov models

Manuele Bicego, Elzbieta Pȩkalska, David M J Tax, Robert P W Duin

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Hidden Markov models (HMMs) have been successfully applied to a wide range of sequence modeling problems. In the classification context, one of the simplest approaches is to train a single HMM per class. A test sequence is then assigned to the class whose HMM yields the maximum a posteriori (MAP) probability. This generative scenario works well when the models are correctly estimated. However, the results can become poor when improper models are employed, due to the lack of prior knowledge, poor estimates, violated assumptions, or insufficient training data. To improve the results in these cases, we propose to combine the descriptive strengths of HMMs with discriminative classifiers. This is achieved by training feature-based classifiers in an HMM-induced vector space defined by specific components of individual hidden Markov models. We introduce four major ways of building such vector spaces and study which trained combiners are useful in which context. Moreover, we motivate and discuss the merit of our method in comparison to dynamic kernels, in particular, to the Fisher kernel approach. © 2009 Elsevier Ltd. All rights reserved.
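
    To make the generative baseline and the general idea of a discriminative classifier trained in an HMM-induced space concrete, here is a minimal Python sketch. It assumes the hmmlearn and scikit-learn libraries and toy Gaussian-emission data, and it uses a deliberately simple per-class log-likelihood embedding rather than the component-based vector spaces introduced in the paper; it is an illustration, not the authors' implementation.

    ```python
    import numpy as np
    from hmmlearn import hmm                      # assumed available; any HMM library would do
    from sklearn.linear_model import LogisticRegression

    def fit_class_hmms(sequences_by_class, n_states=3, seed=0):
        """Fit one Gaussian-emission HMM per class (the generative baseline)."""
        models = {}
        for label, seqs in sequences_by_class.items():
            X = np.vstack(seqs)                   # hmmlearn expects concatenated frames
            lengths = [len(s) for s in seqs]      # plus the length of each sequence
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50, random_state=seed)
            m.fit(X, lengths)
            models[label] = m
        return models

    def map_classify(models, seq, log_priors=None):
        """Assign seq to the class whose HMM gives the highest (log) posterior."""
        labels = sorted(models)
        ll = np.array([models[c].score(seq) for c in labels])    # per-class log-likelihoods
        if log_priors is not None:                               # equal priors if omitted
            ll = ll + np.array([log_priors[c] for c in labels])
        return labels[int(np.argmax(ll))]

    def loglik_embedding(models, seqs):
        """Embed each sequence as its vector of per-class log-likelihoods.
        This is a simplistic HMM-induced space; the paper builds richer spaces
        from components of the individual models."""
        labels = sorted(models)
        return np.array([[models[c].score(s) for c in labels] for s in seqs])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy 1-D sequences: class 0 fluctuates around 0, class 1 around 3.
        make = lambda mu, n: [rng.normal(mu, 1.0, size=(rng.integers(20, 40), 1))
                              for _ in range(n)]
        train = {0: make(0.0, 20), 1: make(3.0, 20)}
        models = fit_class_hmms(train)

        test_seqs = make(0.0, 5) + make(3.0, 5)
        test_y = [0] * 5 + [1] * 5
        print("MAP predictions:", [map_classify(models, s) for s in test_seqs])

        # Discriminative classifier trained in the HMM-induced vector space.
        Z = loglik_embedding(models, train[0] + train[1])
        y = [0] * len(train[0]) + [1] * len(train[1])
        clf = LogisticRegression().fit(Z, y)
        print("Discriminative accuracy:",
              clf.score(loglik_embedding(models, test_seqs), test_y))
    ```

    In this sketch the embedding has one dimension per class HMM; the discriminative classifier can then compensate for imperfectly estimated models, which is the situation the paper targets.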
    Original language: English
    Pages (from-to): 2637-2648
    Number of pages: 11
    Journal: Pattern Recognition
    Volume: 42
    Issue number: 11
    DOIs
    Publication status: Published - Nov 2009

    Keywords

    • Dimensionality reduction
    • Discriminative classification
    • Generative embeddings
    • Hidden Markov models
    • Hybrid models

