|Abstract:||© 2018 Society for Industrial and Applied Mathematics. We consider the problem of estimating the expected value of information (the knowledge gradient) for Bayesian learning problems where the belief model is nonlinear in the parameters. Our goal is to maximize an objective function represented by a nonlinear parametric belief model, while simultaneously learning the unknown parameters, by guiding a sequential experimentation process which is expensive. We overcome the problem of computing the expected value of an experiment, which is computationally intractable, by using a sampled approximation, which helps to guide experiments but does not provide an accurate estimate of the unknown parameters. We then introduce a resampling process which allows the sampled model to adapt to new information, exploiting past experiments. We show theoretically that the method generates sequences that converge asymptotically to the true parameters, while simultaneously maximizing the objective function. We show empirically that the process exhibits rapid convergence, yielding good results with a very small number of experiments.|
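The sampled-approximation idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the parametric model `predict`, the candidate parameter vectors `thetas`, and all numeric settings below are hypothetical stand-ins. The sketch maintains a discrete belief (probabilities over sampled parameter vectors), updates it by Bayes' rule after each simulated observation, and estimates the knowledge gradient of a candidate experiment by Monte Carlo.

```python
import numpy as np

def predict(x, thetas):
    # Hypothetical nonlinear parametric model: f(x, theta) = theta0 * (1 - exp(-theta1 * x)).
    # Each row of thetas is one sampled candidate parameter vector.
    return thetas[:, 0] * (1.0 - np.exp(-thetas[:, 1] * x))

def bayes_update(p, x, y, thetas, noise_sd):
    # Posterior over the sampled parameter vectors after observing y at design x,
    # assuming Gaussian measurement noise with standard deviation noise_sd.
    mu = predict(x, thetas)
    lik = np.exp(-0.5 * ((y - mu) / noise_sd) ** 2)
    post = p * lik
    return post / post.sum()

def sampled_kg(x, p, thetas, xs, noise_sd, n_mc=200, seed=0):
    # Monte Carlo estimate of the knowledge gradient at design x: the expected
    # increase in max_{x'} E[f(x')] after running one (simulated) experiment at x.
    rng = np.random.default_rng(seed)
    mu_x = predict(x, thetas)
    cur_best = max(p @ predict(xp, thetas) for xp in xs)
    gain = 0.0
    for _ in range(n_mc):
        k = rng.choice(len(p), p=p)             # draw a "truth" from the current belief
        y = mu_x[k] + noise_sd * rng.normal()   # simulate the experiment outcome
        q = bayes_update(p, x, y, thetas, noise_sd)
        gain += max(q @ predict(xp, thetas) for xp in xs)
    return gain / n_mc - cur_best
```

In use, one would evaluate `sampled_kg` over a set of candidate designs and run the experiment with the largest value, then update the belief with the real observation. The resampling step described in the abstract, which refreshes the candidate set `thetas` as information accumulates, is omitted here.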
|Citation:||He, X., Powell, W. B. (2018). Optimal learning for stochastic optimization with nonlinear parametric belief models. SIAM Journal on Optimization, 28(3), 2327-2359. doi:10.1137/16M1073042|
|Pages:||2327-2359|
|Type of Material:||Journal Article|
|Journal/Proceeding Title:||SIAM Journal on Optimization|
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.