
Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability

Author(s): Chen, Si; Reyes, Kristofer-Roy G.; Gupta, Maneesh K.; McAlpine, Michael C.; Powell, Warren B.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr12p37
Full metadata record
DC Field: Value [Language]

dc.contributor.author: Chen, Si
dc.contributor.author: Reyes, Kristofer-Roy G
dc.contributor.author: Gupta, Maneesh K
dc.contributor.author: McAlpine, Michael C
dc.contributor.author: Powell, Warren B
dc.date.accessioned: 2021-10-08T20:20:11Z
dc.date.available: 2021-10-08T20:20:11Z
dc.date.issued: 2015 [en_US]
dc.identifier.citation: Chen, Si, Kristofer-Roy G. Reyes, Maneesh K. Gupta, Michael C. McAlpine, and Warren B. Powell. "Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability." SIAM/ASA Journal on Uncertainty Quantification 3, no. 1 (2015): pp. 320-345. doi:10.1137/140971129 [en_US]
dc.identifier.uri: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.696.483&rep=rep1&type=pdf
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr12p37
dc.description.abstract: We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the choice of experimental control variables that optimize the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem. We introduce an optimization algorithm called the knowledge gradient with discrete priors (KGDP) that sequentially and adaptively selects experiments and that maximizes the rate of learning the optimal control variables. This is done through a combination of a physical, kinetic model of nanoemulsion stability, Bayesian inference, and a decision policy. Prior knowledge from domain experts is incorporated into the algorithm as well. Through numerical experiments, we show that the KGDP algorithm outperforms the policies of both random exploration (in which an experiment is selected uniformly at random among all potential experiments) and exploitation (which selects the experiment that appears to be the best, given the current state of Bayesian knowledge). [en_US] (An illustrative sketch of the KGDP selection loop appears after this record.)
dc.format.extent: 320-345 [en_US]
dc.language.iso: en_US [en_US]
dc.relation.ispartof: SIAM/ASA Journal on Uncertainty Quantification [en_US]
dc.rights: Author's manuscript [en_US]
dc.title: Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability [en_US]
dc.type: Journal Article [en_US]
dc.identifier.doi: 10.1137/140971129
dc.identifier.eissn: 2166-2525
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article [en_US]
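
The abstract above describes the knowledge gradient with discrete priors (KGDP): maintain a discrete prior over a finite set of candidate physical models, update it by Bayes' rule after each measurement, and run the experiment that maximizes the expected one-step gain in the best estimated value. The sketch below is a minimal Python illustration of that idea under stated assumptions, not the authors' implementation: the toy quadratic response family f, the experiment grid X, the noise level sigma, and all parameter values are hypothetical stand-ins for the paper's kinetic nanoemulsion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's kinetic model: each candidate
# parameter vector theta = (a, b) defines one possible response surface.
def f(x, theta):
    a, b = theta
    return -a * (x - b) ** 2

X = np.linspace(0.0, 1.0, 21)                # finite set of candidate experiments
thetas = [(a, b) for a in (1.0, 2.0, 4.0) for b in (0.2, 0.5, 0.8)]
p = np.full(len(thetas), 1.0 / len(thetas))  # discrete prior over candidate models
sigma = 0.1                                  # assumed known measurement noise (std dev)

# Precompute each candidate model's prediction at every alternative: L x |X|.
F = np.array([[f(x, th) for x in X] for th in thetas])

def posterior(p, j, y):
    """Bayes update of the discrete prior after observing y at alternative X[j]."""
    loglik = -0.5 * ((y - F[:, j]) / sigma) ** 2
    w = p * np.exp(loglik - loglik.max())    # subtract max for numerical stability
    return w / w.sum()

def kg_value(p, j, n_mc=300):
    """Monte Carlo estimate of the knowledge gradient at alternative j:
    the expected one-step gain in max_x E[f(x)] from measuring there."""
    mu_best = (p @ F).max()
    gain = 0.0
    for _ in range(n_mc):
        i = rng.choice(len(thetas), p=p)             # draw a model from current belief
        y = F[i, j] + sigma * rng.standard_normal()  # simulate the measurement outcome
        gain += (posterior(p, j, y) @ F).max() - mu_best
    return gain / n_mc

# KGDP policy: always run the experiment with the largest knowledge gradient.
truth = thetas[4]                            # pretend truth; unknown to the policy
for n in range(10):
    j = int(np.argmax([kg_value(p, jj) for jj in range(len(X))]))
    y = f(X[j], truth) + sigma * rng.standard_normal()
    p = posterior(p, j, y)
    best = X[int(np.argmax(p @ F))]
    print(f"n={n}: measured x={X[j]:.2f}, current best x*={best:.2f}")
```

Because the prior is discrete, the predictive distribution of a new measurement is a finite Gaussian mixture; the sketch estimates the knowledge-gradient expectation by simple Monte Carlo, whereas the paper develops the computation for this discrete-prior setting and benchmarks the resulting policy against pure exploration and pure exploitation.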

Files in This Item:
OptimalLearningGradient.pdf (8.39 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.