Staff View
Discovering visual saliency for image analysis

Descriptive

TitleInfo
Title
Discovering visual saliency for image analysis
Name (type = personal)
NamePart (type = family)
Kim
NamePart (type = given)
Jongpil
NamePart (type = date)
1977-
DisplayForm
Jongpil Kim
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Pavlovic
NamePart (type = given)
Vladimir
DisplayForm
Vladimir Pavlovic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Elgammal
NamePart (type = given)
Ahmed
DisplayForm
Ahmed Elgammal
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Michmizos
NamePart (type = given)
Konstantinos
DisplayForm
Konstantinos Michmizos
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Nguyen
NamePart (type = given)
Minh Hoai
DisplayForm
Minh Hoai Nguyen
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
Graduate School - New Brunswick
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2017
DateOther (qualifier = exact); (type = degree)
2017-01
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2017
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
Salient object detection is a key step in many image analysis tasks such as object detection and image segmentation, as it not only identifies relevant parts of a visual scene but may also reduce computational complexity by filtering out irrelevant segments of the scene. Traditional methods of salient object detection are based on binary classification to determine whether a given pixel or region belongs to a salient object. However, binary classification-based approaches are limited because they ignore the shape of the salient object by assigning a single output value to an input (pixel, patch, or superpixel). In this work, we introduce novel salient object detection methods that consider the shape of the object. We claim that encoding spatial image content so as to capture information about the object's shape can yield more accurate prediction of the salient object than the traditional binary classification-based approaches. We propose two deep learning-based salient object detection methods. The first combines a shape-preserving saliency prediction driven by a convolutional neural network (CNN) with pre-defined saliency shapes. Our model learns a saliency shape dictionary, which is subsequently used to train a CNN to predict the salient class of a target region and estimate the full, but coarse, saliency map of the target image. The map is then refined using image-specific, low- to mid-level information. In the second method, we explicitly predict the shape of the salient object using a specially designed CNN model. The proposed CNN model exploits both the global and local context of the image to produce better predictions than those obtained by considering only the local information. We train our models with pixel-wise annotated training data. Experimental results show that the proposed methods outperform previous state-of-the-art methods in salient object detection.
Next, we propose novel methods to find characteristic landmarks and recognize ancient Roman imperial coins. Roman coins play an important role in understanding the Roman Empire because they convey rich information about key historical events of the time. Moreover, as large numbers of coins are traded daily over the Internet, it becomes necessary to develop automatic coin recognition systems to prevent illegal trades. Because the coin images lack pixel-wise annotations, we use a weakly-supervised approach to discover the characteristic landmarks on the coin images instead of using the previously mentioned models. For this purpose, we first propose a spatial-appearance coin recognition system that visualizes the contribution of image regions on the Roman coins using a Fisher vector representation. Next, we formulate an optimization task to discover class-specific salient coin regions using CNNs. Analysis of the discovered salient regions confirms that they are largely consistent with human expert annotations. Experimental results show that the proposed methods can effectively recognize ancient Roman coins as well as successfully identify landmarks both in the coin images and in a general fine-grained classification problem. For this research, we have collected new Roman coin datasets in which all coin images are annotated.
Subject (authority = RUETD)
Topic
Computer Science
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_7844
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (xii, 93 p. : ill.)
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Subject (authority = ETD-LCSH)
Topic
Computer vision
Subject (authority = ETD-LCSH)
Topic
Image analysis
Note (type = statement of responsibility)
by Jongpil Kim
RelatedItem (type = host)
TitleInfo
Title
Graduate School - New Brunswick Electronic Theses and Dissertations
Identifier (type = local)
rucore19991600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3T15625
Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Kim
GivenName
Jongpil
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2017-01-10 16:18:17
AssociatedEntity
Name
Jongpil Kim
Role
Copyright holder
Affiliation
Rutgers University. Graduate School - New Brunswick
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license
Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.14
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2017-01-13T12:16:25