Towards Further Understanding of Sparse Filtering via Information Bottleneck

Fabio Massimo Zennaro, Ke Chen

Research output: Contribution to journal › Article



In this paper we examine a formalization of feature distribution learning (FDL) in information-theoretic terms, relying on the analytical approach and the tools already used in the study of the information bottleneck (IB). It has been conjectured that the behavior of FDL algorithms can be expressed as an optimization problem over two information-theoretic quantities: the mutual information between the data and the learned representations, and the entropy of the learned distribution. In particular, this formulation was offered to explain the success of the most prominent FDL algorithm, sparse filtering (SF). The conjecture was, however, left unproven. In this work, we aim to provide preliminary empirical support for the conjecture by performing experiments reminiscent of the work done on deep neural networks in the context of IB research. Specifically, we borrow the idea of using information planes to analyze the behavior of the SF algorithm and gain insights into its dynamics. Confirming the conjecture about the dynamics of FDL would provide solid ground for developing information-theoretic tools to assess the quality of the learning process in FDL, and these tools may extend to other unsupervised learning algorithms.
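For readers unfamiliar with sparse filtering, the objective the abstract refers to can be sketched in a few lines of NumPy. This is a minimal illustration of the standard SF objective from Ngiam et al. (2011) — a soft absolute-value activation, L2 normalization across examples and then across features, followed by an L1 penalty — not the authors' own implementation:

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective (Ngiam et al., 2011) — minimal sketch.

    W : (n_features, n_inputs) weight matrix to be learned
    X : (n_inputs, n_examples) data matrix
    """
    # Soft absolute value of the linear features
    F = np.sqrt((W @ X) ** 2 + eps)
    # Normalize each feature (row) to unit L2 norm across examples
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    # Normalize each example (column) to unit L2 norm across features
    F = F / np.linalg.norm(F, axis=0, keepdims=True)
    # L1 penalty on the normalized feature matrix
    return F.sum()
```

In practice `W` is optimized to minimize this value (e.g. with L-BFGS); the double normalization is what induces the sparse, high-dispersal representations whose dynamics the paper studies on the information plane.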
Original language: English
Publication status: Published - 20 Oct 2019


  • cs.LG
  • eess.SP
  • stat.ML


