TY - JOUR
TI - On asymptotically optimal reinforcement learning
DO - https://doi.org/doi:10.7282/t3-awrf-bw96
PY - 2020
AB - We consider the problem of minimizing the long-term average expected regret of an agent in an online reinforcement learning environment. In particular, we model this as a Markov Decision Process (MDP) where the underlying transition laws are unknown. There have been many recent successful applications in this area as well as many recent advances in theoretical techniques. However, there is still a significant gap between rigorous theoretical techniques and those in actual use. This work represents a step toward shrinking that gap. In the first part, we develop a set of properties sufficient to guarantee that any policy satisfying them will achieve asymptotically minimal regret (up to a constant factor of the logarithmic term). The goal is, rather than simply adding one more learning policy to the mix, to build a flexible framework that may be adapted to a variety of estimative and adaptive policies already in use and to grant confidence in their performance. To that aim, this work lays the groundwork for what we believe is a useful technique for proving an asymptotically minimal rate of regret growth. The conditions are presented here along with hints for how a verifier may prove that a particular algorithm satisfies them. The ideas in this work build strongly on those of [1]. In the second part of this work, we derive an efficient method for computing the indices associated with an asymptotically optimal upper confidence bound algorithm (MDP-UCB) of [1] that only requires solving a system of two non-linear equations in two unknowns, irrespective of the cardinality of the state space of the MDP. In addition, we develop the MDP-Deterministic Minimum Empirical Divergence (MDP-DMED) algorithm, extending the ideas of [2] for the Multi-Armed Bandit (MAB-DMED), and we derive a similar acceleration for computing these indices that involves solving a single equation in one variable. We provide experimental results demonstrating the computational time savings and regret performance of these algorithms. In these comparisons we also consider the Optimistic Linear Programming (OLP) algorithm [3] and a method based on Posterior (Thompson) sampling (MDP-PS).
KW - Reinforcement learning
KW - Management
LA - English
ER -