Koochaki, Fatemeh. Machine learning-based approaches for traumatic brain injury: from early diagnosis to implementation of smart assistive systems. Retrieved from https://doi.org/doi:10.7282/t3-jrf3-sx36
Traumatic brain injury (TBI) is a growing public health concern that can lead to long-lasting physical disabilities and cognitive disorders. Depending on the severity of the injury, TBI is categorized as mild, moderate, or severe. Patients with moderate or severe TBI often have limited motor or communication capabilities. Early diagnosis of mild TBI (mTBI), on the other hand, remains challenging due to the rapid recovery of acute symptoms, the lack of a universally agreed definition of mTBI, and the absence of evidence of injury in typical static neuroimaging scans. The work in this dissertation addresses two main challenges for TBI patients.

The first part of the work focuses on predicting intention from eye movement, with the goal of developing assistive devices for patients with moderate or severe TBI. We present new, data-driven frameworks that take eye gaze as input and predict the user's intention to perform daily activities. By employing the hidden Markov model, the convolutional neural network (CNN), and long short-term memory (LSTM), the proposed frameworks use both the spatial and temporal information of eye gaze to provide early prediction of users' intent. Results show that the proposed frameworks perform considerably better than existing gaze-based intent prediction techniques.

The second part of the work targets patients with mTBI and focuses on the problem of early diagnosis of mTBI. We propose a series of data-driven frameworks to address various aspects of this problem. First, we use a customized CNN model that automatically extracts spatial features of cortical images to differentiate healthy and injured brains. To further improve the performance of the model, a convolutional autoencoder (CAE) is used to extract the most informative spatial features of cortical images.
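To illustrate the gaze-based intent prediction described above, the following is a minimal sketch of how a hidden Markov model can turn a stream of discretized gaze observations into an evolving belief over intents. This is not the dissertation's implementation: the two intent labels, the three gaze regions, and all probabilities below are invented for illustration.

```python
import numpy as np

# Hypothetical model: two hidden intents and three discretized gaze regions.
states = ["reach-cup", "reach-phone"]          # hidden intents (illustrative)
pi = np.array([0.5, 0.5])                      # prior over intents
A = np.array([[0.9, 0.1],                      # intents persist over time
              [0.1, 0.9]])
# P(gaze region | intent): region 0 = cup area, 1 = phone area, 2 = elsewhere
B = np.array([[0.7, 0.1, 0.2],
              [0.1, 0.7, 0.2]])

def intent_posterior(observations):
    """Forward algorithm: P(intent | gaze so far) after each observation."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]          # predict, then update
        alpha /= alpha.sum()
        posteriors.append(alpha)
    return np.array(posteriors)

# Gaze mostly lands in the cup region, so belief in "reach-cup" grows
# after only a few samples -- the basis for *early* intent prediction.
post = intent_posterior([0, 0, 2, 0])
print(states[int(np.argmax(post[-1]))])        # prints "reach-cup"
```

Because the posterior is updated after every gaze sample, a decision threshold on it can trigger an assistive action before the activity is completed, which is the sense in which such models enable early prediction.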
Furthermore, the bag of visual words (BoVW) technique is employed to represent images as histograms of local features derived from training-data patches, and a pre-trained vision transformer (ViT) model is utilized to distinguish injured from healthy brains. Given the time-varying nature of brain function, we then propose to use both temporal and spatial features of cortical images via two deep-learning architectures: a CNN-LSTM and a 3D-CNN. Results demonstrate the importance of utilizing spatial and temporal information for early detection of mTBI. Finally, we investigate the possibility of tracking the functionality of an injured brain over time and finding markers that are representative of disease progression. To achieve this goal, we propose a Siamese CNN (SCNN) approach to compute the distances between healthy, sham, and injured brain images and anchor images, and use these distances to track injury-related changes in mTBI over time.
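The distance-to-anchor idea behind the SCNN can be sketched as follows: one shared embedding is applied to both images, and the distance between the two embeddings measures how far an image has drifted from a reference anchor. This is an illustrative stand-in, not the dissertation's SCNN; the random linear "branch", the toy 8x8 images, and the perturbation scales are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16)) * 0.1        # shared branch weights (random stand-in)

def embed(image):
    """Shared branch: flatten, linear map, ReLU (a stand-in for a CNN)."""
    h = image.reshape(-1) @ W
    return np.maximum(h, 0.0)

def siamese_distance(img_a, img_b):
    """Euclidean distance between the two shared embeddings."""
    return float(np.linalg.norm(embed(img_a) - embed(img_b)))

anchor = rng.standard_normal((8, 8))                     # reference anchor image
healthy = anchor + 0.05 * rng.standard_normal((8, 8))    # small deviation
injured = anchor + 2.0 * rng.standard_normal((8, 8))     # large deviation

d_healthy = siamese_distance(anchor, healthy)
d_injured = siamese_distance(anchor, injured)
print(d_healthy < d_injured)                   # injured image lies farther out
```

Because both inputs pass through the same weights, the distance is a consistent yardstick: recomputing it for the same subject at successive time points yields a scalar trajectory, which is what makes this construction suitable for tracking injury-related changes over time.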