Abstract: We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the experimental control variables that optimize the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem. We introduce an optimization algorithm called the knowledge gradient with discrete priors (KGDP) that sequentially and adaptively selects experiments to maximize the rate of learning the optimal control variables. This is done through a combination of a physical, kinetic model of nanoemulsion stability, Bayesian inference, and a decision policy. Prior knowledge from domain experts is incorporated into the algorithm as well. Through numerical experiments, we show that the KGDP algorithm outperforms both random exploration (in which an experiment is selected uniformly at random among all potential experiments) and exploitation (which selects the experiment that appears best given the current state of Bayesian knowledge).

Citation: Chen, Si, Kristofer-Roy G. Reyes, Maneesh K. Gupta, Michael C. McAlpine, and Warren B. Powell. "Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability." SIAM/ASA Journal on Uncertainty Quantification 3, no. 1 (2015): 320-345. doi:10.1137/140971129

Pages: 320-345

Type of Material: Journal Article

Journal/Proceeding Title: SIAM/ASA Journal on Uncertainty Quantification
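The knowledge-gradient-with-discrete-priors idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the toy response surface `f`, the experiment grid `X`, the discrete prior support `thetas`, and the noise level `sigma` are all hypothetical stand-ins for the paper's physical nanoemulsion model. The sketch keeps a discrete set of candidate models with prior probabilities, updates them by Bayes' rule after each (simulated) measurement, and scores each candidate experiment by the expected improvement in the best posterior-mean value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: stability as a function of control x, peaked at
# an unknown parameter theta. Stands in for the paper's kinetic model.
def f(x, theta):
    return -(x - theta) ** 2

X = np.linspace(0.0, 1.0, 11)            # candidate experiments (controls)
thetas = np.array([0.2, 0.5, 0.8])       # discrete prior support (assumed)
p = np.ones_like(thetas) / len(thetas)   # prior probabilities over models
sigma = 0.05                             # assumed measurement noise std

def posterior(p, x, y):
    """Bayes update of the discrete model probabilities after observing y at x."""
    lik = np.exp(-0.5 * ((y - f(x, thetas)) / sigma) ** 2)
    q = p * lik
    return q / q.sum()

def kg_factor(p, x, n_samples=200):
    """Monte Carlo estimate of the knowledge gradient for experiment x:
    expected post-observation best value minus the current best value."""
    best_now = max(np.dot(p, f(xp, thetas)) for xp in X)
    total = 0.0
    for _ in range(n_samples):
        k = rng.choice(len(thetas), p=p)             # sample a model...
        y = f(x, thetas[k]) + sigma * rng.normal()   # ...and a noisy outcome
        q = posterior(p, x, y)                       # hypothetical update
        total += max(np.dot(q, f(xp, thetas)) for xp in X)
    return total / n_samples - best_now

# The KG policy chooses the next experiment by maximizing the KG factor.
scores = [kg_factor(p, x) for x in X]
x_next = X[int(np.argmax(scores))]
```

In contrast, the exploitation policy mentioned in the abstract would pick `argmax_x np.dot(p, f(x, thetas))`, and random exploration would draw `x` uniformly from `X`; the KG score instead values an experiment by how much it is expected to improve the final decision.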
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.