Staff View
Local planning for continuous Markov decision processes

Descriptive

TitleInfo
Title
Local planning for continuous Markov decision processes
Name (type = personal)
NamePart (type = family)
Weinstein
NamePart (type = given)
Ariel
DisplayForm
Ariel Weinstein
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Littman
NamePart (type = given)
Michael L.
DisplayForm
Michael L. Littman
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Bekris
NamePart (type = given)
Kostas E.
DisplayForm
Kostas E. Bekris
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Borgida
NamePart (type = given)
Alexander
DisplayForm
Alexander Borgida
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Feldman
NamePart (type = given)
Jacob
DisplayForm
Jacob Feldman
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = personal)
NamePart (type = family)
Smart
NamePart (type = given)
William D.
DisplayForm
William D. Smart
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2014
DateOther (qualifier = exact); (type = degree)
2014-01
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
In this dissertation, algorithms that create plans to maximize a numeric reward over time are discussed. A general formulation of this problem is reinforcement learning (RL), which has traditionally been restricted to small discrete domains. Here, we are concerned instead with domains that violate this assumption: domains that are both continuous and high dimensional. Swimming, riding a bicycle, and walking are concrete examples of such domains, and simulations of these problems are tackled here. To perform planning in continuous domains, it has become common practice to uniformly discretize each dimension of the problem and then apply a discrete planner, leading to exponential growth in problem size as dimension increases. Furthermore, traditional methods develop a policy for the entire domain simultaneously, with planning costs that are at best polynomial in the size of the problem, which (as mentioned) grows exponentially with dimension under uniform discretization. To sidestep this problem, I propose a twofold approach: use algorithms designed to function natively in continuous domains, and perform planning locally. Planners that function natively in continuous domains avoid difficult decisions about how coarsely to discretize the problem, allowing more flexible algorithms that allocate and use samples of transitions and rewards more efficiently. Local planning algorithms partially sidestep the curse of dimensionality, as their planning costs depend on the planning horizon rather than on the size of the domain. The properties of some local continuous planners are discussed from a theoretical perspective, and the superiority of continuous planners over their discrete counterparts is demonstrated empirically. Both theoretically and empirically, algorithms designed to operate natively in continuous domains are shown to be simpler to use while providing higher quality results more efficiently.
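To make the abstract's central point concrete, the sketch below shows one simple instance of local, open-loop planning in a continuous MDP: random shooting against a generative model (simulator). This is a minimal illustrative sketch, not the dissertation's specific algorithm; the step interface, the toy domain, and all parameter values are assumptions made for this example. Its cost is proportional to the number of rollouts times the planning horizon, independent of the size of any discretized state space, which is the property the abstract highlights.

import numpy as np

def random_shooting_plan(step, state, action_dim, action_low, action_high,
                         horizon=20, num_rollouts=200, gamma=0.99, rng=None):
    """Return the first action of the best sampled open-loop action sequence.

    step(state, action) -> (next_state, reward) is a generative model of
    the domain; sample-based local planners need no other access to it.
    Cost is O(num_rollouts * horizon) simulator calls, independent of the
    size of any discretization of the state space.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_return, best_first_action = -np.inf, None
    for _ in range(num_rollouts):
        # Sample a whole action sequence uniformly from the continuous
        # action space; neither states nor actions are discretized.
        actions = rng.uniform(action_low, action_high,
                              size=(horizon, action_dim))
        s, ret, discount = state, 0.0, 1.0
        for a in actions:
            s, r = step(s, a)
            ret += discount * r
            discount *= gamma
        if ret > best_return:
            best_return, best_first_action = ret, actions[0]
    return best_first_action

# Hypothetical example domain: a 1-D double integrator whose reward
# penalizes distance from the origin.
def step(s, a):
    pos, vel = s
    vel = vel + 0.05 * float(a[0])   # acceleration control
    pos = pos + 0.05 * vel
    return (pos, vel), -abs(pos)

action = random_shooting_plan(step, state=(1.0, 0.0), action_dim=1,
                              action_low=-1.0, action_high=1.0)

Executing only the first action and then replanning from the next observed state is what makes such a method local: planning effort is spent only on states actually encountered, rather than on a policy for the entire domain.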
Subject (authority = RUETD)
Topic
Computer Science
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_5159
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
xviii, 181 p. : ill.
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Note (type = statement of responsibility)
by Ari Weinstein
Subject (authority = ETD-LCSH)
Topic
Markov processes
Subject (authority = ETD-LCSH)
Topic
Markov processes--Numerical solutions
Subject (authority = ETD-LCSH)
Topic
Reinforcement learning
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3BR8Q83
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Weinstein
GivenName
Ariel
Role
Copyright holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2013-10-13 11:31:04
AssociatedEntity
Name
Ariel Weinstein
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
RightsEvent
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2014-01-31
DateTime (encoding = w3cdtf); (qualifier = exact); (point = end)
2016-01-31
Type
Embargo
Detail
Access to this PDF has been restricted at the author's request. It will be publicly available after January 31st, 2016.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp