Risk-averse approximate dynamic programming with quantile-based risk measures

Author(s): Jiang, Daniel R.; Powell, Warren B.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1cs22
Abstract: In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.
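As a point of reference for the two quantile-based risk measures named in the abstract, the following is a minimal sketch, not the paper's algorithm, of how VaR and CVaR can be estimated empirically from a sample of simulated costs. The function names, the level alpha = 0.95, and the lognormal test distribution are assumptions introduced purely for illustration; CVaR is computed via the standard Rockafellar-Uryasev representation.

import numpy as np

def empirical_var(costs: np.ndarray, alpha: float = 0.95) -> float:
    """VaR_alpha: the alpha-quantile of the sampled cost distribution."""
    return float(np.quantile(costs, alpha))

def empirical_cvar(costs: np.ndarray, alpha: float = 0.95) -> float:
    """CVaR_alpha via VaR_alpha + E[(cost - VaR_alpha)^+] / (1 - alpha)."""
    var = empirical_var(costs, alpha)
    excess = np.maximum(costs - var, 0.0)  # losses beyond the alpha-quantile
    return float(var + excess.mean() / (1.0 - alpha))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical heavy-tailed cost sample standing in for simulated future costs.
    costs = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
    print("VaR_0.95 :", empirical_var(costs))
    print("CVaR_0.95:", empirical_cvar(costs))

Since relatively few samples land beyond the alpha-quantile, plain Monte Carlo estimates of such tail quantities can be noisy, which is the sampling inefficiency the abstract's importance-sampling procedure is designed to address.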
Publication Date: 1-May-2018
Citation: Jiang, D. R., & Powell, W. B. (2018). Risk-averse approximate dynamic programming with quantile-based risk measures. Mathematics of Operations Research, 43(2), 554-579. doi:10.1287/moor.2017.0872
DOI: 10.1287/moor.2017.0872
ISSN: 0364-765X
EISSN: 1526-5471
Pages: 554 - 579
Type of Material: Journal Article
Journal/Proceeding Title: Mathematics of Operations Research
Version: Author's manuscript


