Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability
Author(s): Chen, Si; Reyes, Kristofer-Roy G; Gupta, Maneesh K; McAlpine, Michael C; Powell, Warren B
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr12p37
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Si | - |
dc.contributor.author | Reyes, Kristofer-Roy G | - |
dc.contributor.author | Gupta, Maneesh K | - |
dc.contributor.author | McAlpine, Michael C | - |
dc.contributor.author | Powell, Warren B | - |
dc.date.accessioned | 2021-10-08T20:20:11Z | - |
dc.date.available | 2021-10-08T20:20:11Z | - |
dc.date.issued | 2015 | en_US |
dc.identifier.citation | Chen, Si, Kristofer-Roy G. Reyes, Maneesh K. Gupta, Michael C. McAlpine, and Warren B. Powell. "Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability." SIAM/ASA Journal on Uncertainty Quantification 3, no. 1 (2015): pp. 320-345. doi:10.1137/140971129 | en_US |
dc.identifier.uri | http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.696.483&rep=rep1&type=pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr12p37 | - |
dc.description.abstract | We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the choice of experimental control variables that optimize the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem. We introduce an optimization algorithm called the knowledge gradient with discrete priors (KGDP) that sequentially and adaptively selects experiments and that maximizes the rate of learning the optimal control variables. This is done through a combination of a physical, kinetic model of nanoemulsion stability, Bayesian inference, and a decision policy. Prior knowledge from domain experts is incorporated into the algorithm as well. Through numerical experiments, we show that the KGDP algorithm outperforms the policies of both random exploration (in which an experiment is selected uniformly at random among all potential experiments) and exploitation (which selects the experiment that appears to be the best, given the current state of Bayesian knowledge). | en_US |
dc.format.extent | 320 - 345 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | SIAM/ASA Journal on Uncertainty Quantification | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Optimal Learning in Experimental Design Using the Knowledge Gradient Policy with Application to Characterizing Nanoemulsion Stability | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | 10.1137/140971129 | - |
dc.identifier.eissn | 2166-2525 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
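
The abstract above describes the knowledge gradient with discrete priors (KGDP) policy at a high level: maintain a discrete prior over a finite set of candidate models, and at each step run the experiment with the largest expected one-step gain in the value of the best design under the posterior. The following is a minimal, self-contained Python sketch of that idea, not the paper's implementation. The names (`F`, `posterior_update`, `kg_factor`), the synthetic random response curves standing in for the kinetic nanoemulsion model, and the Monte Carlo approximation of the expectation over outcomes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (not from the paper) ---
# Each candidate parameter vector theta_k induces a mean response
# f(x; theta_k) over a finite grid of experimental control settings x.
# The paper uses a physical kinetic model; random curves stand in here.
K, M = 5, 20                  # number of candidate models, number of designs
F = rng.normal(size=(K, M))   # F[k, x] = f(x; theta_k), model predictions
sigma = 0.5                   # assumed Gaussian measurement noise std

def posterior_update(p, x, y):
    """Bayes update of the discrete prior p after observing y at design x."""
    like = np.exp(-0.5 * ((y - F[:, x]) / sigma) ** 2)
    post = p * like
    return post / post.sum()

def value(p):
    """Value of a knowledge state: best posterior-mean response over designs."""
    return (p @ F).max()

def kg_factor(p, x, n_samples=200):
    """Monte Carlo estimate of the knowledge gradient of measuring design x."""
    v_now = value(p)
    gain = 0.0
    for _ in range(n_samples):
        k = rng.choice(K, p=p)             # sample a candidate 'truth'
        y = rng.normal(F[k, x], sigma)     # simulate its noisy outcome
        gain += value(posterior_update(p, x, y))
    return gain / n_samples - v_now

# --- Sequential experiment loop under the KGDP policy ---
p = np.full(K, 1.0 / K)       # uniform discrete prior over candidate models
truth = rng.integers(K)       # the (unknown) true model, for simulation only
for n in range(10):
    x = int(np.argmax([kg_factor(p, x) for x in range(M)]))
    y = rng.normal(F[truth, x], sigma)     # run the simulated 'experiment'
    p = posterior_update(p, x, y)

print("posterior over models:", np.round(p, 3))
print("estimated best design:", int(np.argmax(p @ F)))
```

Measuring the design with the largest `kg_factor` is what distinguishes the policy from the two baselines in the abstract: pure exploration would draw `x` uniformly at random, and pure exploitation would take `np.argmax(p @ F)` at every step.
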
Files in This Item:
File | Description | Size | Format
---|---|---|---
OptimalLearningGradient.pdf | | 8.39 MB | Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.