LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
Traumatic brain injury (TBI) is a growing public health concern that can lead to long-lasting physical disabilities and cognitive disorders. Depending on the severity of the injury, TBI is categorized as mild, moderate, or severe. Patients suffering from moderate and severe TBI often have limited motor or communication capabilities. On the other hand, early diagnosis of mild TBI (mTBI) has been a challenging problem, due to the rapid recovery of acute symptoms, the lack of a universally agreed definition of mTBI, and the absence of evidence of injury in typical static neuroimaging scans. The work in this dissertation aims to address two main challenges for TBI patients. The first part of the work focuses on predicting intention from eye movement for the purpose of developing assistive devices for patients with moderate and severe TBI. We present new, data-driven frameworks that receive eye gaze as input and predict the user's intention to perform daily activities. By employing the hidden Markov model, convolutional neural network (CNN), and long short-term memory (LSTM), the proposed frameworks use both spatial and temporal information from eye gaze to provide early prediction of users' intent. Results show that the proposed frameworks perform considerably better than existing gaze-based intent prediction techniques. The second part of the work targets patients with mTBI and focuses on the problem of early diagnosis of mTBI. We propose a series of data-driven frameworks to address various aspects of this problem. First, we use a customized CNN model that automatically extracts the spatial features of cortical images to differentiate between healthy and injured brains. To further improve the performance of the model, a convolutional autoencoder (CAE) is used to extract the most informative spatial features of cortical images.
Furthermore, the bag of visual words (BoVW) technique is employed to represent images as histograms of local features derived from training data patches, and a pre-trained vision transformer (ViT) model is utilized to distinguish injured from healthy brains. Given the time-varying nature of brain function, we then propose to use both temporal and spatial features of cortical images via two deep-learning architectures: a CNN-LSTM and a 3D-CNN. Results demonstrate the importance of utilizing spatial and temporal information for early detection of mTBI. Finally, we investigate the possibility of tracking the functionality of an injured brain over time and finding markers that are representative of disease progression. To achieve this goal, we propose a Siamese CNN (SCNN) approach that computes the distances of healthy, sham, and injured brain images from anchor images, and uses these distances to track injury-related changes in mTBI over time.
Subject (authority = RUETD)
Topic
Electrical engineering
Subject (authority = RUETD)
Topic
Computer science
Subject (authority = RUETD)
Topic
Computer engineering
Subject (authority = local)
Topic
Assistive devices
Subject (authority = local)
Topic
Contrastive learning
Subject (authority = local)
Topic
Convolutional auto-encoder
Subject (authority = local)
Topic
Intention prediction by eye-gaze
Subject (authority = local)
Topic
Mild traumatic brain injury
Subject (authority = local)
Topic
Visual behavior
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
http://dissertations.umi.com/gsnb.rutgers:12351
PhysicalDescription
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
108 pages : illustrations
Note (type = degree)
Ph.D.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
School of Graduate Studies Electronic Theses and Dissertations
Identifier (type = local)
rucore10001600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.