Staff View
Optimal learning via dynamic risk

Descriptive

TitleInfo
Title
Optimal learning via dynamic risk
Name (type = personal)
NamePart (type = family)
McGinity
NamePart (type = given)
Curtis
NamePart (type = date)
1987-
DisplayForm
Curtis McGinity
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Ruszczynski
NamePart (type = given)
Andrzej
DisplayForm
Andrzej Ruszczynski
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Boros
NamePart (type = given)
Endre
DisplayForm
Endre Boros
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Ben-Israel
NamePart (type = given)
Adi
DisplayForm
Adi Ben-Israel
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Dentcheva
NamePart (type = given)
Darinka
DisplayForm
Darinka Dentcheva
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2017
DateOther (qualifier = exact); (type = degree)
2017-05
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2017
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
We consider the dilemma of taking sequential action within a nebulous and costly stochastic system. In such problems, the decision-maker sequentially takes an action from a given set, then incurs a cost and observes a response that depends stochastically on the action. Confronted with an unknown system, the decision-maker must learn about it by experimenting with risky actions, thus enabling better decisions over time. We therefore consider the risk-averse optimal learning problem of dynamically choosing actions to minimize the risk of the cumulative costs of learning. Motivated by problems in clinical trial design for novel pharmaceutical agents, we formulate the problem of Bayesian statistical inference under binary response as a Markov decision process with belief states. We introduce a class of standardized logistic models with quantile parameterizations and offer general conditions under which belief states satisfy stochastic order and log-concavity under Bayesian dynamics; we also establish stronger results under assumptions on the policy class. We then introduce dynamic Markov risk measures, formulate dynamic programming equations, and discuss the challenges of their solution. We offer an approximate dynamic programming (ADP) scheme based on a coarse grid approximation within a parameterized distribution family exploiting log-concavity constraints. We also study risk-averse lookahead policies, introducing a robust-response policy and a heuristic policy. We compare the performance of these policy classes to the state of the art in computational experiments, including the design of dose-escalation policies for three chemotherapeutic agents (bleomycin, etoposide, 5-fluorouracil). The robust-response policy exhibits strong performance in this problem class, clarifying the role of risk measures under Bayesian belief dynamics and suggesting avenues for future research.
Subject (authority = RUETD)
Topic
Operations Research
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_8096
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (ix, 147 p. : ill.)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Subject (authority = ETD-LCSH)
Topic
Bayesian statistical decision theory
Subject (authority = ETD-LCSH)
Topic
Machine learning
Note (type = statement of responsibility)
by Curtis McGinity
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3474DQN
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
McGinity
GivenName
Curtis
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2017-04-17 16:58:56
AssociatedEntity
Name
Curtis McGinity
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
RightsEvent
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2017-05-31
DateTime (encoding = w3cdtf); (qualifier = exact); (point = end)
2017-11-30
Type
Embargo
Detail
Access to this PDF has been restricted at the author's request. It will be publicly available after November 30th, 2017.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.15
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2017-04-21T07:29:23