Automatic Variational Inference in Stan

Author(s): Kucukelbir, Alp; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1wg00
Abstract: Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations, which makes variational inference difficult for non-experts to use. We propose an automatic variational inference algorithm, automatic differentiation variational inference (ADVI); we implement it in Stan (code available), a probabilistic programming system. In ADVI the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare ADVI to MCMC sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With ADVI we can use variational inference on any model we write in Stan.
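The core idea the abstract describes (a Gaussian variational family fit by stochastic gradient ascent on the ELBO, with gradients obtained via the reparameterization trick) can be sketched in a few lines for a toy model. This is a minimal illustration only, not the Stan implementation: the model (a Normal mean with a Normal prior), the variational parameterization, and all settings below are assumptions chosen so the result can be checked against the exact conjugate posterior.

```python
# Minimal mean-field ADVI-style sketch (illustration; NOT the actual Stan code).
# Model: y_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10).
# Variational family: q(mu) = Normal(m, exp(s)^2). Here mu is already
# unconstrained, so no change-of-variables transform is needed.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)   # synthetic data (assumption)
n = y.size

def grad_log_joint(mu):
    # gradient of log p(y, mu) = log N(mu | 0, 10^2) + sum_i log N(y_i | mu, 1)
    return -mu / 100.0 + np.sum(y - mu)

m, s = 0.0, 0.0                      # variational parameters (s = log std)
lr = 1e-3                            # fixed step size (assumption)
for t in range(2000):
    eps = rng.standard_normal()
    mu = m + np.exp(s) * eps         # reparameterization trick
    g = grad_log_joint(mu)
    grad_m = g                       # d mu / d m = 1
    grad_s = g * eps * np.exp(s) + 1.0  # +1 from the Gaussian entropy term
    m += lr * grad_m
    s += lr * grad_s

# Exact conjugate posterior, for comparison
post_var = 1.0 / (n + 1.0 / 100.0)
post_mean = post_var * np.sum(y)
print(m, post_mean)                  # the two should be close
```

The variational mean converges near the exact posterior mean, and exp(s) shrinks toward the exact posterior standard deviation. ADVI as described in the paper generalizes this recipe: Stan's automatic differentiation supplies the log-joint gradient for any model, and a transform to unconstrained space handles constrained parameters.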
Publication Date: 2015
Citation: Kucukelbir, Alp, Rajesh Ranganath, Andrew Gelman, and David M. Blei. "Automatic Variational Inference in Stan." Advances in Neural Information Processing Systems 28 (2015), pp. 568-576.
ISSN: 1049-5258
Pages: 568 - 576
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Author's manuscript
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.