Optimal learning for stochastic optimization with nonlinear parametric belief models

Author(s): He, X; Powell, William B

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1456g
Abstract: © 2018 Society for Industrial and Applied Mathematics. We consider the problem of estimating the expected value of information (the knowledge gradient) for Bayesian learning problems where the belief model is nonlinear in the parameters. Our goal is to maximize an objective function represented by a nonlinear parametric belief model, while simultaneously learning the unknown parameters, by guiding a sequential experimentation process which is expensive. We overcome the problem of computing the expected value of an experiment, which is computationally intractable, by using a sampled approximation, which helps to guide experiments but does not provide an accurate estimate of the unknown parameters. We then introduce a resampling process which allows the sampled model to adapt to new information, exploiting past experiments. We show theoretically that the method generates sequences that converge asymptotically to the true parameters, while simultaneously maximizing the objective function. We show empirically that the process exhibits rapid convergence, yielding good results with a very small number of experiments.
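The sampled-belief knowledge gradient described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the nonlinear model `f`, the candidate parameter vectors, the candidate experiments `xs`, and the noise level are all hypothetical placeholders, and the expectation over outcomes is approximated by plain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear parametric model f(x, theta); illustrative only.
def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)

# Sampled belief: a small set of candidate parameter vectors with
# posterior probabilities, standing in for the full (intractable) posterior.
thetas = [np.array([1.0, 0.5]), np.array([2.0, 1.0]), np.array([1.5, 0.2])]
probs = np.ones(len(thetas)) / len(thetas)
xs = np.linspace(0.1, 3.0, 8)   # candidate experiments
noise_sd = 0.3                  # assumed known observation noise

def bayes_update(probs, x, y):
    """Reweight the sampled parameters after observing y = f(x, theta) + noise."""
    lik = np.array([np.exp(-0.5 * ((y - f(x, th)) / noise_sd) ** 2)
                    for th in thetas])
    post = probs * lik
    return post / post.sum()

def posterior_means(probs):
    """Estimated objective value at each candidate x under the sampled belief."""
    return np.array([sum(p * f(xp, th) for p, th in zip(probs, thetas))
                     for xp in xs])

def kg(probs, x, n_mc=200):
    """Monte Carlo estimate of the knowledge gradient of experiment x:
    expected improvement in the best estimated value after observing x."""
    best_now = posterior_means(probs).max()
    gain = 0.0
    for _ in range(n_mc):
        k = rng.choice(len(thetas), p=probs)            # sample a "truth"
        y = f(x, thetas[k]) + rng.normal(0.0, noise_sd)  # simulated outcome
        post = bayes_update(probs, x, y)
        gain += posterior_means(post).max() - best_now
    return gain / n_mc

# Run the experiment whose expected value of information is largest.
best_x = max(xs, key=lambda x: kg(probs, x))
```

The resampling step of the paper, which refreshes the candidate parameter set as observations accumulate, is omitted here; in this sketch the candidate set stays fixed and only its weights are updated.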
Publication Date: 1-Jan-2018
Citation: He, X, Powell, WB. (2018). Optimal learning for stochastic optimization with nonlinear parametric belief models. SIAM Journal on Optimization, 28 (3), 2327 - 2359. doi:10.1137/16M1073042
DOI: 10.1137/16M1073042
ISSN: 1052-6234
Pages: 2327 - 2359
Type of Material: Journal Article
Journal/Proceeding Title: SIAM Journal on Optimization
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.