Description
Title: Computational appearance models for quantitative dermatology
Date Created: 2017
Other Date: 2017-10 (degree)
Extent: 1 online resource (xii, 105 p. : ill.)
Description: Skin appearance modeling using high-resolution imaging has led to advances in recognition, rendering, and analysis. In dermatology, workforce shortages and long patient wait times have motivated the need for computational methods to assist dermatologists. In recent automated image recognition tasks, deep learning with convolutional neural networks (CNNs) has achieved remarkable results. However, in many clinical settings, training data is limited and insufficient for CNN training. Furthermore, skin images show only subtle differences and differ markedly from the typical images used in computer vision tasks. This motivates the need to develop methods that work with limited and unique datasets. In this research, we propose computational models using deep learning approaches for novel problems in quantitative dermatology. First, we develop a photo-realistic facial style transfer method (FaceTex), which transfers facial texture from a new style image while preserving most of the original facial structure and identity. FaceTex has implications in commercial applications and dermatology, such as visualizing the effects of age, sun exposure, or skin treatments (e.g., anti-aging, acne). We suppress changes around the meso-structures (eyes, eyebrows, nose, lips, and lower facial contour) by introducing a Facial Prior Regularization that smoothly slows down the updates. Additionally, we tackle the challenge of preserving facial shape by minimizing a Facial Structure Loss, which we define as an identity loss from a pre-trained face recognition network that implicitly preserves the facial structure. Our results demonstrate superior texture transfer compared with state-of-the-art methods, owing to the ability to maintain the identity of the original face image.
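The combination of losses and the masked update described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names, the prior-mask mechanism, and the loss weights are assumptions for exposition, and in practice the style, content, and identity losses would come from deep network activations rather than scalars.

```python
import numpy as np

def facial_prior_regularized_step(grad, prior_mask, lr=0.01):
    """Scale the image-update gradient by a facial-prior mask so that
    updates near meso-structures (eyes, eyebrows, nose, lips) are
    smoothly slowed; mask values near 0 freeze a region, near 1 allow
    full updates. The mask shape is an illustrative assumption."""
    return lr * grad * prior_mask

def facetex_total_loss(style_loss, content_loss, identity_loss,
                       w_style=1.0, w_content=1.0, w_identity=0.5):
    """Weighted sum of the three objectives: the identity loss (from a
    pre-trained face recognition embedding) plays the role of the
    Facial Structure Loss. Weights here are illustrative, not tuned."""
    return (w_style * style_loss
            + w_content * content_loss
            + w_identity * identity_loss)
```

In a full optimization loop, `facetex_total_loss` would be differentiated with respect to the output image and the resulting gradient passed through `facial_prior_regularized_step` at each iteration.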
Second, we develop a computational skin texture model to characterize image-based patterns from ultraviolet and blue fluorescence multimodal images and link them to the distribution of microbes on the skin surface, i.e., the skin microbiome. The intersection of appearance and microbiome clusters reveals a microbiome pattern that is predictable with high accuracy from skin appearance. We present a new approach, appearance-driven multiview co-clustering (AMCO), which incorporates both multiview and co-clustering in order to discover which microbiome parameters are linked to appearance. Finally, to measure the thickness of skin layers, we develop a hybrid deep learning method to classify reflectance confocal microscopy (RCM) images. We also use CNNs to classify the images and demonstrate that smaller training datasets are insufficient for CNN training, making feature extraction essential in such cases. We compare our method with a suite of texture recognition methods for RCM images and show that hybrid deep learning outperforms the state of the art with a test accuracy of 81.73%. Using a patch-based approach and pre-trained CNNs for feature extraction, we achieve a peak classification accuracy of 89.87%.
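The patch-based classification step mentioned in the last paragraph can be sketched roughly as below. The patch size, stride, and aggregation-by-averaging are illustrative assumptions, not the thesis's actual configuration; a real pipeline would feed each patch through a pre-trained CNN to obtain per-patch class probabilities rather than supplying them directly.

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a square window over a 2-D grayscale image and return
    the list of patches; patch and stride values are illustrative."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(image[i:i + patch, j:j + patch])
    return patches

def classify_image(patch_probs):
    """Aggregate per-patch class probabilities (one row per patch)
    by averaging, then return the index of the winning class. In the
    assumed pipeline, each row would come from a classifier run on
    CNN features of one patch."""
    return int(np.argmax(np.mean(patch_probs, axis=0)))
```

Averaging per-patch probabilities before the argmax is one common aggregation choice; majority voting over per-patch argmax labels is another, and the two can disagree when patch predictions are uncertain.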
Note: Ph.D.
Note: Includes bibliographical references
Note: by Parneet Kaur
Genre: theses, ETD doctoral
Language: eng
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.