Dense message passing for sparse principal component analysis

Kevin Sharp, Magnus Rattray

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We describe a novel inference algorithm for sparse Bayesian PCA with a zero-norm prior on the model parameters. Bayesian inference is very challenging in probabilistic models of this type. MCMC procedures are too slow to be practical in a very high-dimensional setting, and standard mean-field variational Bayes algorithms are ineffective. We adopt a dense message passing algorithm similar to algorithms developed in the statistical physics community and previously applied to inference problems in coding and sparse classification. The algorithm achieves near-optimal performance on synthetic data for which a statistical mechanics theory of optimal learning can be derived. We also study two gene expression datasets used in previous studies of sparse PCA. We find that our method performs better than one published algorithm and comparably to a second. Copyright 2010 by the authors.
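
    The abstract does not spell out the model itself, so the sketch below is only an illustration of the kind of generative model it refers to: a single-component PCA whose loading vector is made sparse by a spike-and-slab ("zero-norm"-style) prior. The dimensions N and d, the sparsity level rho, and the noise variance sigma2 are assumptions chosen for illustration, not the paper's notation or settings, and the snippet samples from the model rather than implementing the authors' dense message passing inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumed, not taken from the paper).
N, d = 200, 1000      # number of samples, ambient dimension
rho = 0.05            # expected fraction of non-zero loadings (sparsity level)
sigma2 = 0.1          # observation noise variance

# Spike-and-slab ("zero-norm"-style) prior on the loading vector w:
# each component is exactly zero with probability 1 - rho,
# otherwise drawn from a unit-variance Gaussian slab.
spikes = rng.random(d) < rho
w = np.where(spikes, rng.normal(0.0, 1.0, size=d), 0.0)

# Latent scores and observations for a single-component PCA model:
# x_n = z_n * w + Gaussian noise.
z = rng.normal(0.0, 1.0, size=N)
X = np.outer(z, w) + rng.normal(0.0, np.sqrt(sigma2), size=(N, d))

print(f"non-zero loadings: {spikes.sum()} / {d}")
print("data matrix shape:", X.shape)
```

    With d in the thousands and only a small fraction of loadings non-zero, this setup reflects the very high-dimensional regime the abstract describes, in which MCMC is too slow and mean-field variational Bayes is reported to be ineffective.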
    Original language: English
    Pages (from-to): 725-732
    Number of pages: 7
    Journal: Journal of Machine Learning Research
    Volume: 9
    Publication status: Published - 2010
