Staff View
Language guided visual perception

Descriptive

TitleInfo
Title
Language guided visual perception
Name (type = personal)
NamePart (type = family)
Elhoseiny
NamePart (type = given)
Mohamed
DisplayForm
Mohamed Elhoseiny
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Elgammal
NamePart (type = given)
Ahmed
DisplayForm
Ahmed Elgammal
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Boularias
NamePart (type = given)
Abdeslam
DisplayForm
Abdeslam Boularias
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Kulikowski
NamePart (type = given)
Casimir
DisplayForm
Casimir Kulikowski
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Gupta
NamePart (type = given)
Abhinav
DisplayForm
Abhinav Gupta
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2016
DateOther (qualifier = exact); (type = degree)
2016-10
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2016
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
People typically learn through exposure to visual facts associated with linguistic descriptions. For instance, teaching visual concepts to children is often accompanied by descriptions in text or speech. In a machine learning context, these observations motivate the question of how this learning process could be computationally modeled to learn visual facts. We explore three settings in which we show that combining language and vision is useful for visual perception in both images and videos. First, we address the question of how to use a purely textual description of visual classes, with no training images, to learn explicit visual classifiers for them. We propose and investigate two baseline formulations, based on regression and on domain transfer, that predict a classifier. We then propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function, with additional constraints, to predict the classifier parameters for new classes. We also propose kernelized models, which allow any two kernel functions to be defined in the visual space and the text space. We apply the studied models to predict visual classifiers for two fine-grained categorization datasets, and the results show that our final model makes successful predictions compared with several baselines that we designed. Second, we model video event search as a language-and-vision problem and propose a zero-shot event detection method based on multimodal distributional semantic embedding of videos. Our zero-shot event detection model is built on top of distributional semantics and extends it in the following directions: (a) semantic embedding of multimodal information in videos (with a focus on the visual modalities), (b) automatically determining the relevance of concepts/attributes to a free-text query, which could be useful for other applications, and (c) retrieving videos by a free-text event query (e.g., "changing a vehicle tire") based on their content. We validate our method on the large TRECVID MED (Multimedia Event Detection) challenge. Using only the event title as a query, our method outperforms the state of the art, which relies on much longer textual descriptions. Third, motivated by these results, we propose a uniform and scalable setting for learning an unbounded number of visual facts. We propose models that can learn not only objects but also their actions, attributes, and interactions with other objects, in one unified learning framework and in a never-ending way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse>). We work at the scale of 814,000 images and 202,000 unique visual facts. Our experiments show the advantage of relating facts by their structure in the proposed models, compared with four designed baselines, on bidirectional fact retrieval.
Subject (authority = RUETD)
Topic
Computer Science
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_7689
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (xiv, 112 p. : ill.)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Subject (authority = ETD-LCSH)
Topic
Computer vision
Subject (authority = ETD-LCSH)
Topic
Visual perception
Note (type = statement of responsibility)
by Mohamed Elhoseiny
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3QC05T7
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Elhoseiny
GivenName
Mohamed
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2016-09-29 22:22:11
AssociatedEntity
Name
Mohamed Elhoseiny
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.15
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2016-10-03T00:07:43