Staff View
Evaluating model free policy optimization strategies for non linear systems

Descriptive

TitleInfo
Title
Evaluating model free policy optimization strategies for non linear systems
Name (type = personal)
NamePart (type = family)
Chukka
NamePart (type = given)
Aditya H.
NamePart (type = date)
1992-
DisplayForm
Aditya H. Chukka
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Bekris
NamePart (type = given)
Kostas E.
DisplayForm
Kostas E. Bekris
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2017
DateOther (qualifier = exact); (type = degree)
2017-05
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2017
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
The Iterative Linear Quadratic Regulator (ILQR), a variant of Differential Dynamic Programming (DDP), is a tool for optimizing both open-loop trajectories and guiding feedback controllers using dynamics information that can be inferred from data. The technique assumes linear dynamics and quadratic cost functions and improves the control policy iteratively until convergence. We demonstrate the capabilities of this framework by designing controllers that regulate both natural and custom behaviors on a simple pendulum, a canonical non-linear system. The method's assumptions limit its validity to small regions of the state space. Direct Policy Search methods use Reinforcement Learning to develop controllers for such scenarios; however, these methods require numerous samples to generate an optimal policy and often converge to poor local optima. Guided Policy Search (GPS) is a newer technique that optimizes complex non-linear policies, such as those represented by deep neural networks, without computing policy gradients in a high-dimensional parameter space. It trains the policy in a "supervised" fashion using numerous locally valid controllers produced by ILQR. GPS provides appealing improvement and convergence guarantees in simple convex and linear settings, and bounds the error in a non-linear setting. We apply Guided Policy Search to generate control policies for the locomotion of a tensegrity robot, producing closed-loop motion that could not be achieved with previous methods.
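The abstract's core assumption — linear dynamics with quadratic costs, refined iteratively — reduces each ILQR iteration to a Riccati backward pass that produces time-varying feedback gains. The sketch below illustrates only that backward pass on a pendulum linearized about its upright equilibrium; all constants (time step, gravity, cost weights) are hypothetical illustrations, not taken from the thesis, and a full ILQR would re-linearize the non-linear dynamics around the current trajectory on every iteration.

```python
import numpy as np

# Hypothetical linearization of an inverted pendulum about the upright
# equilibrium: state x = [angle, angular velocity], control u = torque.
dt = 0.05
g, l = 9.81, 1.0
A = np.array([[1.0, dt],
              [g / l * dt, 1.0]])   # unstable open-loop dynamics
B = np.array([[0.0], [dt]])
Q = np.eye(2)                       # quadratic state cost (assumed weights)
R = np.array([[0.1]])               # quadratic control cost (assumed weight)

def backward_pass(A, B, Q, R, horizon):
    """Riccati recursion yielding feedback gains K_t for u_t = -K_t x_t.
    This is the linear-quadratic core of a single iLQR iteration."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[t] applies at time step t

gains = backward_pass(A, B, Q, R, horizon=200)

# Roll out the closed loop from a small perturbation of the upright state.
x = np.array([[0.2], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print(float(np.linalg.norm(x)))  # perturbation is driven toward zero
```

In the full non-linear setting described above, the gains are only locally valid — exactly the property GPS exploits by distilling many such local controllers into one neural-network policy.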
Subject (authority = RUETD)
Topic
Computer Science
Subject (authority = ETD-LCSH)
Topic
Mathematical optimization
Subject (authority = ETD-LCSH)
Topic
Reinforcement learning--Mathematical models
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_8093
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Note
Supplementary File: Tensegrity SUPERball robot motion planning
Extent
1 online resource (ix, 59 p. : ill.)
Note (type = degree)
M.S.
Note (type = bibliography)
Includes bibliographical references
Note (type = statement of responsibility)
by Aditya H. Chukka
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3FB55SV
Genre (authority = ExL-Esploro)
ETD graduate

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Chukka
GivenName
Aditya
MiddleName
H.
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2017-04-17 16:33:43
AssociatedEntity
Name
Aditya Chukka
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.17
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2017-04-17T20:32:03
RULTechMD (ID = TECHNICAL2)
ContentModel
ETD
CreatingApplication
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2017-04-17T04:40:06
ApplicationName
Microsoft Office PowerPoint
Version
16.0000