Staff View
Object category recognition through visual-semantic context networks

Descriptive

TitleInfo
Title
Object category recognition through visual-semantic context networks
Name (type = personal)
NamePart (type = family)
Chakraborty
NamePart (type = given)
Ishani
NamePart (type = date)
1982-
DisplayForm
Ishani Chakraborty
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Elgammal
NamePart (type = given)
Ahmed
DisplayForm
Ahmed Elgammal
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Kulikowski
NamePart (type = given)
Casimir
DisplayForm
Casimir Kulikowski
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Eliassi-Rad
NamePart (type = given)
Tina
DisplayForm
Tina Eliassi-Rad
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Hadsell
NamePart (type = given)
Raia
DisplayForm
Raia Hadsell
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2014
DateOther (qualifier = exact); (type = degree)
2014-01
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
Understanding and interacting with one’s environment requires parsing the image of the environment and recognizing a wide range of objects within it. Despite wide variations in viewpoint, occlusion, and background clutter, humans achieve this task effortlessly and almost instantaneously. In this thesis, we explore computational algorithms that teach computers to recognize objects in natural scenes. Inspired by findings in human cognition, our algorithms are based on the notion that visual inference involves not only recognizing individual objects in isolation but also exploiting rich visual and semantic associations between the object categories that form complex scenes. We view artificial object recognition as a fusion of information from two interconnected representations. The first is the inter-image representation, in which an image location is visually associated with previously learned object categories based on appearance models to find the most likely interpretations. The second is the intra-image representation, in which the objects in an image are semantically associated with each other to find the most meaningful spatial and structural arrangements. The two representations are interconnected in that the visual process proposes object candidates to the semantic process, while the semantic process verifies and corrects the visual process’s hypotheses. The primary goal of this thesis is to develop computational models for visual recognition that characterize the visual and semantic associations, and their inter-dependencies, to resolve object identities. To do so, we model object associations in contextual spaces. Unlike traditional approaches to object recognition that use context as a post-processing filter to discard inconsistent object labels, we stratify scene generation into a Bayesian hierarchy and simultaneously learn semantic and visual context models for objects in scenes.
The semantic-visual contexts among objects are represented through latent variables in this hierarchy. The intra-image associations within a scene are modeled as semantic context, while the inter-image relations due to appearance similarities between object categories are modeled as visual context. To combine the complementary information derived from the two spaces, object labels are inferred by context switching: labels activated by appearance matches constrain the semantic search, while semantic coherence, in turn, constrains object identities. We demonstrate how this novel context network for modeling associations between objects leads to highly accurate object detection and scene understanding in natural images, especially when training data is impoverished and negative exemplars are not easily available.
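The "context switching" described in the abstract can be illustrated with a toy sketch. This is not the author's implementation: the categories, appearance scores, and co-occurrence affinities below are all hypothetical, and the iterative re-weighting stands in for the Bayesian inference the thesis actually develops. The sketch only shows the general idea that appearance proposes labels while semantic co-occurrence re-weights them.

```python
# Hypothetical illustration of appearance/semantic context switching.
# All numbers and category names are invented for the example.

CATEGORIES = ["car", "road", "boat", "water"]

# Toy appearance likelihoods for two detections in one image:
# detection 0 looks like a car or a boat; detection 1 like road or water.
appearance = [
    {"car": 0.45, "boat": 0.40, "road": 0.10, "water": 0.05},
    {"car": 0.05, "boat": 0.05, "road": 0.30, "water": 0.60},
]

# Toy semantic context: co-occurrence affinity between category pairs.
cooccur = {
    ("car", "road"): 0.9, ("boat", "water"): 0.9,
    ("car", "water"): 0.1, ("boat", "road"): 0.1,
}

def affinity(a, b):
    """Symmetric pairwise affinity, with a neutral default of 0.3."""
    if a == b:
        return 0.5
    return cooccur.get((a, b), cooccur.get((b, a), 0.3))

def context_switch(appearance, n_iters=5):
    """Alternate between appearance-driven label beliefs and
    semantic re-weighting by the other detections' beliefs,
    renormalizing after each pass."""
    beliefs = [dict(d) for d in appearance]
    for _ in range(n_iters):
        new_beliefs = []
        for i, b in enumerate(beliefs):
            updated = {}
            for label, p in b.items():
                # Semantic support from every other detection's beliefs.
                ctx = 1.0
                for j, other in enumerate(beliefs):
                    if j == i:
                        continue
                    ctx *= sum(q * affinity(label, m)
                               for m, q in other.items())
                updated[label] = p * ctx
            z = sum(updated.values())
            new_beliefs.append({k: v / z for k, v in updated.items()})
        beliefs = new_beliefs
    return beliefs

final = context_switch(appearance)
# Detection 1's strong "water" evidence pulls detection 0 from the
# marginally preferred "car" toward the semantically coherent "boat".
top0 = max(final[0], key=final[0].get)
```

In this toy setup, appearance alone slightly prefers "car" for the first detection, but the co-occurrence context supplied by the "water" detection flips it to "boat" — the same kind of correction of visual hypotheses by semantic coherence that the abstract attributes to the context network.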
Subject (authority = RUETD)
Topic
Computer Science
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_5190
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
xi, 103 p. : ill.
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Note (type = statement of responsibility)
by Ishani Chakraborty
Subject (authority = ETD-LCSH)
Topic
Computer vision
Subject (authority = ETD-LCSH)
Topic
Pattern recognition systems
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3MS3QT6
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Chakraborty
GivenName
Ishani
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2013-12-09 00:12:44
AssociatedEntity
Name
Ishani Chakraborty
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp