Staff View
Maximum likelihood inverse reinforcement learning

Descriptive

TitleInfo
Title
Maximum likelihood inverse reinforcement learning
Name (type = personal)
NamePart (type = family)
Vroman
NamePart (type = given)
Monica C.
NamePart (type = date)
1980-
DisplayForm
Monica C. Vroman
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Littman
NamePart (type = given)
Michael L.
DisplayForm
Michael L. Littman
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Eliassi-Rad
NamePart (type = given)
Tina
DisplayForm
Tina Eliassi-Rad
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Borgida
NamePart (type = given)
Alex
DisplayForm
Alex Borgida
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Ziebart
NamePart (type = given)
Brian
DisplayForm
Brian Ziebart
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2014
DateOther (qualifier = exact); (type = degree)
2014-10
CopyrightDate (encoding = w3cdtf)
2014
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
Learning desirable behavior from a limited number of demonstrations, also known as inverse reinforcement learning, is a challenging task in machine learning. I apply maximum likelihood estimation to the problem of inverse reinforcement learning, and show that it quickly and successfully identifies the unknown reward function from traces of optimal or near-optimal behavior, under the assumption that the reward function is a linear function of a known set of features. I extend this approach to cover reward functions that are a generalized function of the features, and show that the generalized inverse reinforcement learning approach is a competitive alternative to existing approaches covering the same class of functions, while at the same time being able to learn the right rewards in cases that have not been covered before. I then apply these tools to the problem of learning from (unlabeled) demonstration trajectories of behavior generated by varying "intentions" or objectives. I derive an EM approach that clusters observed trajectories by inferring the objectives for each cluster using any of several possible IRL methods, and then uses the constructed clusters to quickly identify the intent of a trajectory. I present an application of maximum likelihood inverse reinforcement learning to the problem of training an artificial agent to follow verbal instructions representing high-level tasks, using a set of instructions paired with demonstration traces of appropriate behavior.
Subject (authority = RUETD)
Topic
Computer Science
Subject (authority = ETD-LCSH)
Topic
Reinforcement learning
Subject (authority = ETD-LCSH)
Topic
Reward (Psychology)
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_5974
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (xiv, 109 p. : ill.)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Note (type = statement of responsibility)
by Monica C. Vroman
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3GQ70C8
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Vroman
GivenName
Monica
MiddleName
C.
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2014-09-30 15:37:43
AssociatedEntity
Name
Monica Vroman
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP