Staff View
On asymptotically optimal reinforcement learning

Descriptive

TitleInfo
Title
On asymptotically optimal reinforcement learning
Name (type = personal)
NamePart (type = family)
Pirutinsky
NamePart (type = given)
Daniel
DisplayForm
Daniel Pirutinsky
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Katehakis
NamePart (type = given)
Michael
DisplayForm
Michael Katehakis
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Cowan
NamePart (type = given)
Wesley
DisplayForm
Wesley Cowan
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
co-chair
Name (type = personal)
NamePart (type = family)
Ben-Israel
NamePart (type = given)
Adi
DisplayForm
Adi Ben-Israel
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Govindaraj
NamePart (type = given)
Suresh
DisplayForm
Suresh Govindaraj
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Ross
NamePart (type = given)
Sheldon
DisplayForm
Sheldon Ross
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - Newark
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
Genre (authority = ExL-Esploro)
ETD doctoral
OriginInfo
DateCreated (qualifier = exact); (encoding = w3cdtf); (keyDate = yes)
2020
DateOther (type = degree); (qualifier = exact); (encoding = w3cdtf)
2020-10
Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
We consider the problem of minimizing the long-term average expected regret of an agent in an online reinforcement learning environment. In particular, we model this as a Markov Decision Process (MDP) in which the underlying transition laws are unknown. There have been many recent successful applications in this area, as well as many recent advances in theoretical techniques. However, a significant gap remains between rigorous theoretical techniques and those in actual use. This work represents a step toward shrinking that gap.
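As a point of reference, one standard formalization of this objective (the thesis's own notation is not reproduced in this record): writing \rho^{*} for the optimal long-run average reward of the MDP and r(s_t, a_t) for the reward collected at step t, the regret of a policy \pi after T steps is

\[ R_T(\pi) \;=\; T\,\rho^{*} \;-\; \mathbb{E}^{\pi}\!\Big[ \sum_{t=1}^{T} r(s_t, a_t) \Big], \]

and the learning problem is to keep R_T(\pi) small despite the unknown transition laws.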

In the first part, we develop a set of properties sufficient to guarantee that any policy satisfying them will achieve asymptotically minimal regret (up to a constant factor of the logarithmic term). The goal is not simply to add one more learning policy to the mix, but rather to build a flexible framework that can be adapted to a variety of estimative and adaptive policies already in use, granting confidence in their performance. To that end, this work lays the groundwork for what we believe is a useful technique for proving an asymptotically minimal rate of regret growth. The conditions are presented here along with hints for how a verifier may prove that their particular algorithm satisfies them. The ideas in this work build strongly on those of [1].
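To make the optimality criterion concrete (a hedged reading; the precise constants are in the thesis itself): for MDPs with unknown transition laws, achievable regret grows logarithmically in the horizon T, with an instance-dependent constant c^{*} set by the lower bound of [1]. "Asymptotically minimal up to a constant factor of the logarithmic term" can then be read as the policy satisfying

\[ \limsup_{T \to \infty} \frac{R_T}{\log T} \;\le\; C\, c^{*} \quad \text{for some fixed } C \ge 1, \]

i.e., matching the \Omega(\log T) lower-bound rate up to the factor C.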

In the second part of this work, we derive an efficient method for computing the indices associated with an asymptotically optimal upper confidence bound algorithm (MDP-UCB) of [1] that requires solving only a system of two non-linear equations with two unknowns, irrespective of the cardinality of the state space of the MDP. In addition, we develop the MDP-Deterministic Minimum Empirical Divergence (MDP-DMED) algorithm, extending the ideas of [2] for the Multi-Armed Bandit (MAB-DMED), and we derive a similar acceleration for computing these indices that involves solving a single equation of one variable. We provide experimental results demonstrating the computational time savings and regret performance of these algorithms. In these comparisons we also consider the Optimistic Linear Programming (OLP) algorithm [3] and a method based on Posterior (Thompson) sampling (MDP-PS).
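The computational claim above has a simple shape that a short sketch can illustrate. The Python fragment below is purely illustrative: the stand-in equations are hypothetical, not the MDP-UCB or MDP-DMED index equations from the thesis. It only shows why a fixed-size root-finding problem (two equations in two unknowns for MDP-UCB, one equation in one unknown for MDP-DMED) is cheap to solve regardless of the size of the state space.

import numpy as np
from scipy.optimize import fsolve, brentq

# MDP-UCB shape: two non-linear equations in two unknowns.
# These equations are hypothetical stand-ins, NOT the thesis's.
def residuals(z):
    x, y = z
    return [np.exp(x) + y - 2.0,   # f1(x, y) = 0
            x + y**2 - 1.0]        # f2(x, y) = 0

index_uv = fsolve(residuals, x0=np.array([0.5, 0.5]))

# MDP-DMED shape: a single equation in one variable, solvable by
# bracketed root finding once a sign-changing interval is known.
g = lambda t: t**3 - 2.0           # hypothetical g(t) = 0
index_t = brentq(g, 0.0, 2.0)

print("two-unknown solution:", index_uv)
print("one-variable root:", index_t)

Either way, the cost of each index evaluation is constant in the MDP's state-space cardinality, which is the point of the reported acceleration.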
Subject (authority = local)
Topic
Reinforcement learning
Subject (authority = RUETD)
Topic
Management
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_11190
PhysicalDescription
Form (authority = gmd)
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (viii, 78 pages) : illustrations
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
Graduate School - Newark Electronic Theses and Dissertations
Identifier (type = local)
rucore10002600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-awrf-bw96

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Pirutinsky
GivenName
Daniel
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-09-23 11:11:21
AssociatedEntity
Name
Daniel Pirutinsky
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - Newark
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
RightsEvent
Type
Embargo
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-10-31
DateTime (encoding = w3cdtf); (qualifier = exact); (point = end)
2022-10-31
Detail
Access to this PDF has been restricted at the author's request. It will be publicly available after October 31st, 2022.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.20
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2020-10-16T15:28:28