Staff View
Model-based Bayesian reinforcement learning with generalized priors

Descriptive

TitleInfo
Title
Model-based Bayesian reinforcement learning with generalized priors
Name (type = personal)
NamePart (type = family)
Asmuth
NamePart (type = given)
John Thomas
DisplayForm
John Asmuth
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Littman
NamePart (type = given)
Michael L
DisplayForm
Michael L Littman
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Pavlovic
NamePart (type = given)
Vladimir
DisplayForm
Vladimir Pavlovic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Hirsh
NamePart (type = given)
Haym
DisplayForm
Haym Hirsh
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Poupart
NamePart (type = given)
Pascal
DisplayForm
Pascal Poupart
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2013
DateOther (qualifier = exact); (type = degree)
2013-05
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
Effectively leveraging model structure in reinforcement learning is a difficult task, and failure to do so can leave an agent repeatedly taking sub-optimal actions despite having enough information to perform better. The Bayesian approach is a principled, well-studied method for leveraging model structure, and it transfers naturally to the reinforcement-learning setting. This dissertation studies methods for bringing the Bayesian approach to bear on model-based reinforcement-learning agents, as well as the models those agents can use. The contributions include several example models for learning Markov decision processes (MDPs) and two novel algorithms, BOSS and BFS3, together with their analyses, that use those models for efficient exploration. The Bayesian approach to model-based reinforcement learning provides a principled method for incorporating prior knowledge into the design of an agent and allows the designer to separate the problems of planning, learning, and exploration. BOSS and BFS3 are efficient (polynomial-time) decision-making mechanisms within this framework with provable bounds on their accuracy.
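The BOSS ("Best Of Sampled Set") idea summarized in the abstract can be illustrated with a minimal sketch: maintain a Dirichlet posterior over transition dynamics, draw several models from it, and plan in a merged MDP that may choose the most optimistic sampled model per state-action pair, which drives exploration. This is an illustrative sketch under simplifying assumptions (tabular MDP, known rewards), not the dissertation's implementation; the names `boss_merged_q` and `sample_models` are hypothetical.

```python
import numpy as np

def boss_merged_q(samples, rewards, gamma=0.95, iters=200):
    """BOSS-style planning sketch. `samples` is a list of K transition
    tensors T[s, a, s'] drawn from the posterior; the merged MDP backs
    up each (state, action) under every sampled model and keeps the
    best (most optimistic) value."""
    q = np.zeros_like(rewards)
    for _ in range(iters):
        v = q.max(axis=1)  # greedy state values under current Q
        # Bellman backup under each of the K sampled models
        backups = np.stack([rewards + gamma * (T @ v) for T in samples])
        q = backups.max(axis=0)  # optimistic merge across sampled models
    return q

def sample_models(counts, k, rng):
    """Draw k transition models from independent Dirichlet posteriors.
    counts[s, a, s'] = observed transition counts plus prior pseudo-counts."""
    n_states, n_actions, _ = counts.shape
    return [np.array([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                      for s in range(n_states)]) for _ in range(k)]
```

As sketched here, larger posterior uncertainty (small counts) spreads the sampled models apart, so the optimistic merge inflates the value of poorly-explored state-action pairs until the agent visits them and the posterior concentrates.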
Subject (authority = RUETD)
Topic
Computer Science
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_4536
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
xx, 161 p. : ill.
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Note (type = vita)
Includes vita
Note (type = statement of responsibility)
by John Thomas Asmuth
Subject (authority = ETD-LCSH)
Topic
Reinforcement learning
Subject (authority = ETD-LCSH)
Topic
Bayesian statistical decision theory
Identifier (type = hdl)
http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000068810
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3HX1B9M
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Asmuth
GivenName
John
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2013-03-08T12:36:00
AssociatedEntity
Name
John Asmuth
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP