Core Concepts
A broad class of exponential family latent variable models can be formulated as conjugated harmoniums, which admit exact inference and learning algorithms.
Abstract
The paper presents a unified theory of exact inference and learning in exponential family latent variable models (LVMs). The key insights are:
Under mild assumptions, the authors derive necessary and sufficient conditions for the prior and posterior of an LVM to lie in the same exponential family, so that the prior is conjugate to the posterior. LVMs satisfying these conditions are called conjugated harmoniums.
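As a hedged sketch of this condition (the notation here is generic exponential-family convention, not necessarily the paper's exact symbols): a harmonium couples observable sufficient statistics s_X and latent sufficient statistics s_Z through a bilinear term, and conjugation amounts to an affinity condition on the observable log-partition function:

```latex
% Joint density of a harmonium over observable x and latent z,
% with natural parameters \theta_X, \theta_Z and interaction matrix \Theta_{XZ}
p(x, z) \propto \exp\!\big( \theta_X \cdot s_X(x)
  + s_X(x)^{\top} \Theta_{XZ}\, s_Z(z)
  + \theta_Z \cdot s_Z(z) \big)

% Conjugation condition: the observable log-partition \psi_X,
% evaluated at the z-dependent natural parameters, is affine in s_Z(z)
\psi_X\big( \theta_X + \Theta_{XZ}\, s_Z(z) \big)
  = \rho \cdot s_Z(z) + \rho_0
```

Under this condition the marginal (prior) over z is itself exponential-family with natural parameters \theta_Z + \rho, matching the posterior family, which is what makes exact inference possible.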
For conjugated harmoniums, the authors derive general inference and learning algorithms, and demonstrate them on various example models, including mixture models, linear Gaussian models, and Gaussian-Boltzmann machines.
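As an illustration of why exact inference matters here, consider a mixture model, the simplest of the example classes listed above. The sketch below is my own minimal NumPy implementation of one exact expectation-maximization step for a 1-D Gaussian mixture, not code from the paper's libraries; the categorical latent has a closed-form posterior, so the E-step is exact rather than approximate.

```python
import numpy as np

def em_step(x, pi, mu, sigma):
    """One exact EM step for a 1-D Gaussian mixture with K components."""
    # E-step: exact posterior responsibilities p(z = k | x_i).
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: closed-form maximization of the expected log-likelihood.
    nk = resp.sum(axis=0)
    pi_new = nk / len(x)
    mu_new = (resp * x[:, None]).sum(axis=0) / nk
    var_new = (resp * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk
    return pi_new, mu_new, np.sqrt(var_new)

# Toy data: two well-separated clusters at -3 and +3.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    pi, mu, sigma = em_step(x, pi, mu, sigma)
```

Because both steps are closed-form, the iteration converges without any sampling or variational approximation, which is the practical payoff of conjugation.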
The authors show how to compose conjugated harmoniums into graphical models that retain tractable inference and learning.
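A familiar instance of such a composition is a hidden Markov model, where chaining a latent variable across time steps still leaves filtering exact. The sketch below is a generic forward recursion for a discrete HMM, written as an illustration of tractable composed inference rather than as the paper's algorithm.

```python
import numpy as np

def forward(pi0, A, B, obs):
    """Exact forward filtering for a discrete HMM.

    pi0: initial state distribution, A: transition matrix,
    B: emission probabilities (states x symbols), obs: observed symbols.
    Returns the filtered posterior over the final hidden state.
    """
    alpha = pi0 * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        # Propagate through the transition, then condition on the emission.
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

pi0 = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities
posterior = forward(pi0, A, B, [0, 0, 1, 0])
```

Each step of the recursion is a small exact inference problem, and composing them preserves tractability, which is the general pattern the paper develops for composed conjugated harmoniums.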
The authors have implemented their algorithms in a collection of libraries, which they use to provide numerous demonstrations of the theory and enable researchers to apply the theory in novel statistical settings.
The paper unifies the theory of exact inference and learning for a broad class of exponential family LVMs, aiding both theoretical understanding and practical implementation.
Stats
The content does not contain any key metrics or important figures to extract.
Quotes
The content does not contain any striking quotes to capture.