Corruption Robust Exploration in Episodic Reinforcement Learning

APA

Slivkins, A. (2020, October 30). Corruption robust exploration in episodic reinforcement learning [Talk]. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/corruption-robust-exploration-episodic-reinforcement-learning

MLA

Slivkins, Aleksandrs. "Corruption Robust Exploration in Episodic Reinforcement Learning." The Simons Institute for the Theory of Computing, 30 Oct. 2020, https://simons.berkeley.edu/talks/corruption-robust-exploration-episodic-reinforcement-learning.

BibTeX

          @misc{scivideos_16706,
            author    = {Slivkins, Aleksandrs},
            title     = {Corruption Robust Exploration in Episodic Reinforcement Learning},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2020},
            month     = {oct},
            language  = {en},
            url       = {https://simons.berkeley.edu/talks/corruption-robust-exploration-episodic-reinforcement-learning},
            note      = {Talk 16706, see \url{https://scivideos.org/Simons-Institute/16706}}
          }

Aleksandrs Slivkins (Microsoft Research NYC)
Source repository: Simons Institute

Abstract

We initiate the study of episodic RL under adversarial corruptions in both the rewards and the transition probabilities of the underlying system. Our solution adapts to the unknown level of corruption, degrading gracefully with the total corruption encountered. In particular, we attain near-optimal regret for a constant level of corruption. We derive results for "tabular" MDPs as well as MDPs that admit a linear representation. Notably, we provide the first sublinear regret guarantee that goes beyond i.i.d. transitions in the bandit-feedback model for episodic RL. We build on a new framework that combines the paradigms of "optimism under uncertainty" and "successive elimination". Neither paradigm alone suffices: "optimism under uncertainty", common in the current work on stochastic RL, cannot handle corruptions, while "successive elimination" works for bandits with corruptions but is provably inefficient even for stochastic RL.

Joint work with Thodoris Lykouris, Max Simchowitz, and Wen Sun. Paper: https://arxiv.org/abs/1911.08689
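The algorithmic details are in the linked paper; purely as context for the "successive elimination" paradigm that the abstract contrasts with optimism, the sketch below shows a minimal corruption-aware successive elimination loop for a stochastic multi-armed bandit. The confidence radii are widened by an assumed corruption budget so that a bounded adversary cannot eliminate the best arm. The function name, the widening rule, and all parameters here are illustrative assumptions, not the paper's method (which handles full episodic MDPs).

import numpy as np

def successive_elimination(true_means, horizon, corruption_budget, seed=0):
    # Toy corruption-aware successive elimination for a K-armed bandit.
    # NOTE: illustrative sketch only, not the algorithm from the talk/paper.
    rng = np.random.default_rng(seed)
    k = len(true_means)
    active = list(range(k))        # arms that have not been eliminated
    pulls = np.zeros(k)
    sums = np.zeros(k)
    t = 0
    while t < horizon and len(active) > 1:
        for arm in active:         # round-robin over surviving arms
            reward = rng.binomial(1, true_means[arm])  # stochastic reward
            pulls[arm] += 1
            sums[arm] += reward
            t += 1
        means = sums / np.maximum(pulls, 1)
        # Hoeffding radius plus an extra corruption_budget / pulls term;
        # the widening keeps the best arm's interval valid under bounded
        # total corruption (assumed budget, illustrative rule).
        radius = np.sqrt(2 * np.log(horizon) / np.maximum(pulls, 1)) \
                 + corruption_budget / np.maximum(pulls, 1)
        best_lcb = max(means[a] - radius[a] for a in active)
        # Drop arms whose optimistic estimate falls below the best
        # pessimistic estimate among surviving arms.
        active = [a for a in active if means[a] + radius[a] >= best_lcb]
    return active

print(successive_elimination([0.9, 0.5, 0.3], horizon=20000, corruption_budget=10))

With enough pulls the Hoeffding term shrinks like 1/sqrt(n) and the corruption term like 1/n, so clearly suboptimal arms are eventually eliminated while the widened intervals guard against a bounded adversary; this is the bandit-level intuition the abstract refers to when it says successive elimination handles corruptions but, on its own, is provably inefficient for stochastic RL.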