Variational Inference via χ Upper Bound Minimization

Author(s): Dieng, Adji B; Tran, Dustin; Ranganath, Rajesh; Paisley, John; Blei, David M

Abstract: Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q||p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p||q), the χ-divergence from p to q. CHIVI minimizes an upper bound of the model evidence, which we term the χ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.
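To make the sandwich idea concrete, here is a minimal sketch (not code from the paper) that Monte Carlo estimates both bounds for a toy conjugate Gaussian model where the exact evidence is known: the ELBO lower-bounds log p(x), while the order-2 CUBO, (1/2) log E_q[(p(x,z)/q(z))^2], upper-bounds it. The model, the variational parameters (mu = 0.5, sigma = 0.6), and the sample size are illustrative assumptions.

```python
import numpy as np

def log_normal(z, mean, std):
    """Log density of N(mean, std^2) evaluated at z."""
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((z - mean) / std) ** 2

rng = np.random.default_rng(0)

# Toy model (illustrative): z ~ N(0, 1), x | z ~ N(z, 1); observe x = 1.
# Then the evidence is exact: x ~ N(0, 2), and z | x ~ N(x/2, 1/2).
x = 1.0
log_evidence = log_normal(x, 0.0, np.sqrt(2.0))

# Variational approximation q(z) = N(mu, sigma^2); parameters are assumptions.
mu, sigma = 0.5, 0.6
n_samples = 200_000
z = rng.normal(mu, sigma, size=n_samples)

# Per-sample log importance ratio: log p(x, z) - log q(z).
log_w = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0) - log_normal(z, mu, sigma)

# ELBO = E_q[log p(x, z) - log q(z)]  -- a lower bound on log p(x).
elbo = log_w.mean()

# CUBO_2 = (1/2) log E_q[(p(x, z) / q(z))^2]  -- an upper bound on log p(x),
# computed with a log-sum-exp shift for numerical stability.
m = (2 * log_w).max()
cubo = 0.5 * (m + np.log(np.mean(np.exp(2 * log_w - m))))

print(f"ELBO  = {elbo:.4f}")
print(f"log Z = {log_evidence:.4f}")
print(f"CUBO  = {cubo:.4f}")
```

With enough samples the estimates bracket the true log evidence (ELBO ≤ log Z ≤ CUBO), which is exactly the sandwich estimate the abstract describes; the gap between the two bounds also gives a rough diagnostic of how well q matches the posterior.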
Publication Date: 2017
Citation: Dieng, Adji Bousso, Dustin Tran, Rajesh Ranganath, John Paisley, and David Blei. "Variational Inference via χ Upper Bound Minimization." Advances in Neural Information Processing Systems 30 (2017), pp. 2732-2741.
ISSN: 1049-5258
Pages: 2732-2741
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.