
A lasso-based sparse knowledge gradient policy for sequential optimal learning

Author(s): Li, Yan; Liu, Han; Powell, Warren B.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1wv3d
Abstract: We propose a sequential learning policy for noisy discrete global optimization and ranking and selection (R&S) problems with high-dimensional sparse belief functions, where there are hundreds or even thousands of features but only a small portion of them carry explanatory power. Our problem setting, motivated by the experimental sciences, arises when we must choose which time-consuming and expensive experiment to run next. We derive a sparse knowledge gradient (SpKG) decision policy based on ℓ1-penalized regression (Lasso) to identify the sparsity pattern before the measurement budget is exhausted. This policy is a novel hybrid of Bayesian R&S and a frequentist learning approach. Theoretically, we provide an error bound for the posterior mean estimate, which is shown to achieve the minimax-optimal √(s log p / n) rate. Controlled experiments on both synthetic data and a real application, automatically designing experiments to identify the structure of an RNA molecule, show that the algorithm efficiently learns the correct set of nonzero parameters and outperforms several other learning policies.
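
The abstract outlines the general recipe: maintain a Bayesian linear belief over feature weights, pick the next experiment with a knowledge-gradient criterion, and fit a Lasso on the observations collected so far to guess which features matter. Below is a minimal, self-contained Python sketch of that loop. It is only an illustration of the pattern, not the authors' SpKG algorithm: the prior, the variance-damping heuristic, and the Monte Carlo approximation of the knowledge-gradient factor are all assumptions made for brevity.

    # Illustrative sketch only -- NOT the authors' exact SpKG policy.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    p, n_alts, noise_sd, budget = 50, 200, 0.5, 40

    theta_true = np.zeros(p)                  # sparse truth: s = 3 active features
    theta_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
    X = rng.normal(size=(n_alts, p))          # each row = one candidate experiment

    mu = np.zeros(p)                          # Gaussian prior on the coefficients
    Sigma = np.eye(p)
    obs_x, obs_y = [], []

    def kg_factor(x, mu, Sigma, n_z=200):
        """Monte Carlo knowledge-gradient factor for measuring alternative x."""
        a = X @ mu                            # current posterior means of all alternatives
        s2 = noise_sd**2 + x @ Sigma @ x
        b = X @ (Sigma @ x) / np.sqrt(s2)     # conditional change in those means
        z = rng.normal(size=n_z)
        return np.mean(np.max(a[None, :] + z[:, None] * b[None, :], axis=1)) - a.max()

    for t in range(budget):
        if len(obs_y) >= 5:                   # re-estimate the support via Lasso
            lasso = Lasso(alpha=0.1, max_iter=10000).fit(np.array(obs_x), np.array(obs_y))
            support = np.flatnonzero(lasso.coef_)
            if support.size:
                damp = np.full(p, np.sqrt(0.5))       # damp features outside the support
                damp[support] = 1.0
                Sigma = Sigma * np.outer(damp, damp)  # congruence transform keeps Sigma PSD
        i = max(range(n_alts), key=lambda j: kg_factor(X[j], mu, Sigma))
        y = X[i] @ theta_true + rng.normal(scale=noise_sd)
        # Conjugate Bayesian update for the linear-Gaussian belief
        x = X[i]
        Sx = Sigma @ x
        denom = noise_sd**2 + x @ Sx
        mu = mu + Sx * (y - x @ mu) / denom
        Sigma = Sigma - np.outer(Sx, Sx) / denom
        obs_x.append(x); obs_y.append(y)

    print("largest |posterior mean| indices:", np.argsort(-np.abs(mu))[:5])

In this toy run the posterior mean tends to concentrate on the three truly nonzero coefficients well before the budget is spent, which is the qualitative behavior the paper's experiments demonstrate for SpKG.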
Publication Date: 1-Jan-2016
Citation: Li, Y., Liu, H., & Powell, W. B. (2016). A lasso-based sparse knowledge gradient policy for sequential optimal learning. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), 417-425. Retrieved from http://proceedings.mlr.press/v51/li16a.html
Pages: 417-425
Type of Material: Conference Paper
Journal/Proceeding Title: Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016
Version: Final published version. This is an open access article.


