An adaptive learning rate for stochastic variational inference
Author(s): Ranganath, R; Wang, C; Blei, DM; Xing, EP
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1tv29
Abstract: | Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic variational inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates. |
Publication Date: | 2013 |
Citation: | Ranganath, R., Wang, C., Blei, D. M., & Xing, E. P. (2013, February). An adaptive learning rate for stochastic variational inference. In Proceedings of the 30th International Conference on Machine Learning (pp. 298-306). |
Pages: | 298 - 306 |
Type of Material: | Conference Article |
Journal/Proceeding Title: | 30th International Conference on Machine Learning, ICML 2013 |
Version: | Final published version. This is an open access article. |
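Since the record itself does not reproduce the paper's update equations, the following is a minimal, hypothetical sketch of the kind of procedure the abstract describes: a stochastic-gradient loop whose step size is set adaptively from quantities already computed during optimization, instead of from a hand-tuned decreasing schedule. The toy objective, the moving-average rule, and the window variable `tau` are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch only (not the authors' exact method): a stochastic
# gradient loop where the step size is chosen adaptively from running
# averages of the noisy gradient, so no hand-tuned schedule is needed.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate the mean of a large "data set" from noisy subsamples.
data = rng.normal(loc=3.0, scale=2.0, size=100_000)
lam = 0.0                              # parameter being optimized
g_bar, h_bar, tau = 0.0, 1.0, 100.0    # running averages and effective window

for t in range(1, 2001):
    # Subsample the data and form a noisy gradient of the quadratic objective.
    batch = rng.choice(data, size=100, replace=False)
    g = batch.mean() - lam             # noisy gradient estimate

    # Update running averages of the gradient and of its squared magnitude.
    g_bar = (1.0 - 1.0 / tau) * g_bar + (1.0 / tau) * g
    h_bar = (1.0 - 1.0 / tau) * h_bar + (1.0 / tau) * g * g

    # Adaptive step size: large when the averaged gradient dominates the
    # noise, small when the noisy gradients mostly cancel each other out.
    rho = (g_bar * g_bar) / h_bar
    lam += rho * g

    # Grow the averaging window after informative steps, shrink it after
    # large moves, so the averages track the current regime of the problem.
    tau = tau * (1.0 - rho) + 1.0

print(f"estimated mean: {lam:.3f} (true mean 3.0)")
```

In this sketch the step size falls automatically as the parameter approaches its optimum, because the averaged gradient shrinks relative to the gradient noise; this mirrors, at a high level, the abstract's claim that the adaptive rate can be computed from quantities the algorithm already produces.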