Staff View
Methods of temporal differences for risk-averse dynamic programming and learning

Descriptive

TitleInfo
Title
Methods of temporal differences for risk-averse dynamic programming and learning
Name (type = personal)
NamePart (type = family)
Kose
NamePart (type = given)
Umit
NamePart (type = date)
1991
DisplayForm
Kose, Umit, 1991-
Role
RoleTerm (authority = RULIB); (type = text)
author
Name (type = personal)
NamePart (type = family)
Ruszczynski
NamePart (type = given)
Andrzej
DisplayForm
Andrzej Ruszczynski
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Eckstein
NamePart (type = given)
Jonathan
DisplayForm
Jonathan Eckstein
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Gurbuzbalaban
NamePart (type = given)
Mert
DisplayForm
Mert Gurbuzbalaban
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Dentcheva
NamePart (type = given)
Darinka
DisplayForm
Darinka Dentcheva
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - Newark
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (encoding = w3cdtf); (keyDate = yes); (qualifier = exact)
2020
DateOther (qualifier = exact); (type = degree)
2020-05
Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
Stochastic sequential decision-making problems are generally modeled and solved as Markov decision processes. When the decision-makers are risk-averse, their risk aversion can be incorporated into the model using dynamic risk measures. Such risk-averse Markov decision processes can, in principle, be solved by specialized dynamic programming methods; however, when the state space of the system becomes very large, such methods become impractical.

We consider reinforcement learning for Markov decision processes with performance evaluated by a dynamic risk measure. We use a linear value function approximation scheme and construct a projected risk-averse dynamic programming equation that involves this scheme, and we study the properties of this equation. To solve it, we propose risk-averse counterparts of the methods of temporal differences and prove their convergence with probability one. We also perform an empirical study on a complex transportation problem, where we demonstrate that the risk-averse methods of temporal differences outperform the well-known risk-neutral methods in terms of average profit over time.
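For readers unfamiliar with the setting, the following is a minimal sketch of a risk-averse TD(0)-style update with linear value function approximation. The one-step mean-semideviation risk mapping, the semi-gradient update rule, and the toy two-state chain are illustrative assumptions for exposition only, not the dissertation's exact construction or convergence conditions.

```python
import numpy as np

def mean_semideviation(samples, kappa=0.5):
    """One-step risk mapping rho(Z) = E[Z] + kappa * E[(Z - E[Z])_+] for cost samples Z."""
    m = samples.mean()
    return m + kappa * np.maximum(samples - m, 0.0).mean()

def risk_averse_td(features, transitions, costs, gamma=0.95,
                   alpha=0.05, n_iters=5000, seed=0):
    """Illustrative risk-averse TD(0) with linear approximation V(s) ~ phi(s)' theta."""
    rng = np.random.default_rng(seed)
    n_states, dim = features.shape
    theta = np.zeros(dim)
    s = 0
    for _ in range(n_iters):
        # Sample successor states to estimate the risk-adjusted one-step cost-to-go.
        succ = rng.choice(n_states, size=32, p=transitions[s])
        z = costs[s, succ] + gamma * features[succ] @ theta
        target = mean_semideviation(z)            # risk-adjusted Bellman target
        delta = target - features[s] @ theta      # risk-averse temporal difference
        theta += alpha * delta * features[s]      # semi-gradient update of the weights
        s = rng.choice(n_states, p=transitions[s])
    return theta

# Toy two-state Markov chain with tabular (one-hot) features, for illustration.
phi = np.eye(2)
P = np.array([[0.9, 0.1], [0.5, 0.5]])
c = np.array([[1.0, 10.0], [0.0, 2.0]])
print(risk_averse_td(phi, P, c))
```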
Subject (authority = RUETD)
Topic
Management
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_10730
PhysicalDescription
Form (authority = gmd)
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (vi, 44 pages)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
Graduate School - Newark Electronic Theses and Dissertations
Identifier (type = local)
rucore10002600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-5b3y-5c51
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Kose
GivenName
Umit
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-04-13 10:48:11
AssociatedEntity
Name
Umit Kose
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - Newark
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.5
ApplicationName
MiKTeX pdfTeX-1.40.21
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2020-04-21T18:44:47