LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
We consider the problem of minimizing the long-term average expected regret of an agent in an online reinforcement learning environment. In particular, we model this as a Markov Decision Process (MDP) whose underlying transition laws are unknown. There have been many recent successful applications in this area, as well as many recent advances in theoretical techniques. However, there is still a significant gap between rigorous theoretical techniques and those in actual use. This work represents a step towards shrinking that gap.
In the first part we develop a set of properties sufficient to guarantee that any policy satisfying them will achieve asymptotically minimal regret (up to a constant factor of the logarithmic term). The goal, rather than simply adding one more learning policy to the mix, is to build a flexible framework that may be adapted to a variety of estimative and adaptive policies already in use and grant confidence in their performance. To that aim, this work lays the groundwork for what we believe is a useful technique for proving an asymptotically minimal rate of regret growth. The conditions are presented here along with hints for how a verifier may prove that their particular algorithm satisfies these conditions. The ideas in this work build strongly on those of [1].
In the second part of this work, we derive an efficient method for computing the indices associated with the asymptotically optimal upper confidence bound algorithm (MDP-UCB) of [1] that requires solving only a system of two non-linear equations in two unknowns, irrespective of the cardinality of the state space of the MDP. In addition, we develop the MDP-Deterministic Minimum Empirical Divergence (MDP-DMED) algorithm, extending the ideas of [2] for the Multi-Armed Bandit (MAB-DMED), and we derive a similar acceleration for computing these indices that involves solving a single equation in one variable. We provide experimental results demonstrating the computational time savings and regret performance of these algorithms. In these comparisons we also consider the Optimistic Linear Programming (OLP) algorithm [3] and a method based on Posterior (Thompson) sampling (MDP-PS).
Subject (authority = local)
Topic
Reinforcement learning
Subject (authority = RUETD)
Topic
Management
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.