Abstract: © 2017 INFORMS. In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the "risky region" as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.

Citation: Jiang, D. R., & Powell, W. B. (2018). Risk-averse approximate dynamic programming with quantile-based risk measures. Mathematics of Operations Research, 43(2), 554–579. doi:10.1287/moor.2017.0872

Pages: 554–579

Type of Material: Journal Article

Journal/Proceeding Title: Mathematics of Operations Research
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.