Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning

Author(s): Prasad, Niranjani; Engelhardt, Barbara; Doshi-Velez, Finale

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1zg13
Abstract: A key impediment to reinforcement learning (RL) in real applications with limited, batch data is defining a reward function that reflects what we implicitly know about reasonable behaviour for a task and allows for robust off-policy evaluation. In this work, we develop a method to identify an admissible set of reward functions for policies that (a) do not deviate too far in performance from prior behaviour, and (b) can be evaluated with high confidence, given only a collection of past trajectories. Together, these ensure that we avoid proposing unreasonable policies in high-risk settings. We demonstrate our approach to reward design on synthetic domains as well as in a critical care context, to guide the design of a reward function that consolidates clinical objectives to learn a policy for weaning patients from mechanical ventilation.
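The high-confidence evaluation idea in the abstract — scoring a candidate policy from logged trajectories only, with a guarantee that holds with high probability — is commonly built from per-trajectory importance sampling plus a concentration bound. The sketch below is illustrative only, not the authors' estimator: the function name `is_lower_bound`, the Hoeffding-style bound, and the weight-clipping threshold are all assumptions made for the example.

```python
import math

def is_lower_bound(trajectories, pi_e, pi_b, reward_fn,
                   delta=0.05, r_max=1.0, w_max=10.0):
    """Importance-sampling estimate of an evaluation policy's value from
    batch data, with a Hoeffding-style lower confidence bound.

    trajectories: list of episodes, each a list of (state, action) pairs
                  logged under the behaviour policy pi_b.
    pi_e, pi_b:   callables giving pi(action | state) for the evaluation
                  and behaviour policies.
    reward_fn:    maps a trajectory to a scalar return in [0, r_max];
                  in reward-design settings this is the candidate reward.
    """
    estimates = []
    for traj in trajectories:
        # Cumulative likelihood ratio between evaluation and behaviour policy.
        weight = 1.0
        for state, action in traj:
            weight *= pi_e(action, state) / pi_b(action, state)
        # Clip the weight so each estimate is bounded (a common variance fix).
        weight = min(weight, w_max)
        estimates.append(weight * reward_fn(traj))

    n = len(estimates)
    mean = sum(estimates) / n
    # Hoeffding bound for i.i.d. estimates in [0, w_max * r_max]:
    # holds with probability at least 1 - delta.
    bound = w_max * r_max * math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return mean - bound
```

Under this kind of construction, a reward function is "admissible" only if such a lower bound on the resulting policy's value can be made tight enough from the available batch — with few trajectories or large importance weights, the bound is loose and the evaluation is not high-confidence.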
Publication Date: Apr-2020
Citation: Prasad, Niranjani, Barbara Engelhardt, and Finale Doshi-Velez. "Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning." In Proceedings of the ACM Conference on Health, Inference, and Learning (2020): pp. 1-9. doi:10.1145/3368555.3384450
DOI: 10.1145/3368555.3384450
Pages: 1 - 9
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the ACM Conference on Health, Inference, and Learning
Version: Final published version. This is an open access article.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.