Staff View
Learning-based methods for single image restoration and translation

Descriptive

TitleInfo
Title
Learning-based methods for single image restoration and translation
Name (type = personal)
NamePart (type = family)
Zhang
NamePart (type = given)
He
NamePart (type = date)
1992-
DisplayForm
He Zhang
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Patel
NamePart (type = given)
Vishal M
DisplayForm
Vishal M Patel
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Meer
NamePart (type = given)
Peter
DisplayForm
Peter Meer
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Dana
NamePart (type = given)
Kristin
DisplayForm
Kristin Dana
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Najafizadeh
NamePart (type = given)
Laleh
DisplayForm
Laleh Najafizadeh
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Zhou
NamePart (type = given)
Shaohua (Kevin)
DisplayForm
Shaohua (Kevin) Zhou
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
outside member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
School of Graduate Studies
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (qualifier = exact)
2019
DateOther (qualifier = exact); (type = degree)
2019-01
CopyrightDate (encoding = w3cdtf)
2019
Place
PlaceTerm (type = code)
xx
Language
LanguageTerm (authority = ISO639-2b); (type = code)
eng
Abstract (type = abstract)
In many applications such as drone-based video surveillance, self-driving cars, and recognition under night-time and low-light conditions, the captured images and videos contain undesirable degradations such as haze, rain, snow, and noise. Furthermore, the performance of many computer vision algorithms often degrades when they are presented with images containing such artifacts. Hence, it is important to develop methods that can automatically remove these artifacts. However, these are difficult problems to solve due to their inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert them into well-posed problems. In this thesis, rather than purely relying on prior-based models, we propose to combine them with data-driven models for image restoration and translation. In particular, we develop new data-driven approaches for 1) single image de-raining, 2) single image dehazing, and 3) thermal-to-visible face synthesis.

In the first part of the thesis, we develop three different methods for single image de-raining. In the first approach, we develop novel convolutional coding-based methods for single image de-raining, where two different types of filters are learned via convolutional sparse and low-rank coding to characterize the background component and the rain-streak component separately. These pre-trained filters are then used to separate the rain component from the image. In the second approach, to ensure that the restored de-rained results are indistinguishable from their corresponding clear images, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which consists of a new refined perceptual loss function and a novel multi-scale discriminator. Finally, to deal with nonuniform rain densities, we present a novel density-aware multi-stream densely connected convolutional neural network-based algorithm that enables the network itself to automatically determine the rain-density information and then efficiently remove the corresponding rain streaks guided by the estimated rain-density label.
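The de-raining methods above all build on the standard additive composition model, in which the observed rainy image is a clean background plus a rain-streak layer. A minimal numpy sketch (all array names here are illustrative, not the thesis's notation):

```python
import numpy as np

# Additive rain model: observed image O = background B + rain layer R.
rng = np.random.default_rng(0)
B = rng.uniform(0.2, 0.8, size=(8, 8))   # clean background
R = np.zeros((8, 8))
R[:, 2] = 0.15                           # one vertical "rain streak"
O = np.clip(B + R, 0.0, 1.0)             # observed rainy image

# Given an estimate of the rain layer (here assumed perfect), the
# restored background is simply the residual of the observation.
R_hat = R
B_hat = O - R_hat
print(np.allclose(B_hat, B))             # prints True
```

The learning problem in each method is estimating the rain layer (or the clean background directly); the decomposition itself is this simple subtraction.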

In the second part of the thesis, we develop an end-to-end deep learning-based method to address the single image dehazing problem. We propose to combine the physics-based image formation model with a data-driven approach for single image dehazing. In particular, a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), is proposed which can jointly estimate the transmission map, atmospheric light, and dehazed image all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing.
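The atmospheric scattering model referenced here is the standard formulation I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy image, J the scene radiance, t the transmission map, and A the global atmospheric light. A small numpy sketch of the forward model and its inversion (synthetic values, not the network's estimates):

```python
import numpy as np

# Atmospheric scattering model: I = J * t + A * (1 - t)
J = np.array([[0.2, 0.6],
              [0.9, 0.4]])          # clean scene radiance
t = np.array([[0.8, 0.5],
              [0.3, 0.9]])          # transmission map
A = 0.95                            # global atmospheric light

I = J * t + A * (1.0 - t)           # forward model: synthesize haze

# Dehazing inverts the model once t and A are known (or estimated by
# the network); a small floor on t avoids division blow-up.
J_hat = (I - A * (1.0 - t)) / np.maximum(t, 1e-3)
print(np.allclose(J_hat, J))        # prints True
```

Embedding this model in the network means the dehazed output is produced by exactly this inversion, with t and A supplied by learned sub-networks.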

In the final part of the thesis, we develop an image-to-image translation method for generating high-quality visible images from polarimetric thermal faces. Since polarimetric imagery comprises different Stokes images capturing various polarization-state information, we propose a Generative Adversarial Network-based multi-stream feature-level fusion technique to synthesize high-quality visible images from polarimetric thermal images. An application of this approach is presented in polarimetric thermal-to-visible cross-modal face recognition.
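The feature-level fusion idea can be sketched as follows: each Stokes image is passed through its own stream, and the per-stream feature maps are concatenated along the channel axis before synthesis. This is only a rough numpy illustration; the stand-in `extract` function replaces the per-stream CNNs used in the actual network:

```python
import numpy as np

def extract(img, n_channels=4):
    # Stand-in for a per-stream CNN feature extractor: produces
    # n_channels feature maps (shifted copies of the input).
    return np.stack([np.roll(img, k, axis=0) for k in range(n_channels)])

# Synthetic Stokes images (S0, S1, S2) of a polarimetric thermal face.
s0 = np.random.default_rng(1).uniform(size=(16, 16))
s1 = np.random.default_rng(2).uniform(size=(16, 16))
s2 = np.random.default_rng(3).uniform(size=(16, 16))

# Multi-stream feature-level fusion: concatenate per-stream features
# along the channel axis; a generator would then decode `fused`.
fused = np.concatenate([extract(s) for s in (s0, s1, s2)], axis=0)
print(fused.shape)   # (12, 16, 16): 3 streams x 4 channels each
```

Fusing at the feature level, rather than stacking the raw Stokes images at the input, lets each stream learn representations suited to its own polarization state before they are combined.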
Subject (authority = RUETD)
Topic
Electrical and Computer Engineering
Subject (authority = ETD-LCSH)
Topic
Computer vision
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_9367
PhysicalDescription
Form (authority = gmd)
electronic resource
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (131 pages) : illustrations
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
Note (type = statement of responsibility)
by He Zhang
RelatedItem (type = host)
TitleInfo
Title
School of Graduate Studies Electronic Theses and Dissertations
Identifier (type = local)
rucore10001600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-20x2-3c58
Genre (authority = ExL-Esploro)
ETD doctoral

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Zhang
GivenName
He
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2018-11-21 19:05:00
AssociatedEntity
Name
He Zhang
Role
Copyright holder
Affiliation
Rutgers University. School of Graduate Studies
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
Windows XP
CreatingApplication
Version
1.5
ApplicationName
pdfTeX-1.40.19
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2019-01-09T22:06:14