Description
Title: Computational methods for predicting behavior from neuroimaging data
Date Created: 2018
Other Date: 2018-10 (degree)
Extent: 1 online resource (108 pages : illustrations)
Description: One of the major goals in neuroscience is to understand the relationship between brain function and behavior. Inferring behavior, intent, or the engagement of a particular cognitive process from neuroimaging data finds applications in several domains, including brain-machine interfaces. To date, although a variety of imaging techniques have been developed and various computational techniques have been suggested, estimation power has been limited to distinguishing very distinct classes of motor activities or cognitive processes. Improving estimation power requires addressing technical challenges at three stages: data acquisition (recording brain activities), data processing (processing brain recordings), and data analytics (inferring behavior from brain recordings).
The objective of this dissertation is to address technical challenges at the data processing and data analytics stages by leveraging tools from network science, machine learning, and signal processing.
The first part of the dissertation focuses on data processing. In brain imaging experiments, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are typically averaged point by point to reduce noise and strengthen the signal associated with task-induced activities. However, because recorded activities exhibit variable latencies, conventional point-by-point averaging can introduce inaccuracies and loss of information in the averaged signal, which may lead to inaccurate conclusions about brain function. To improve averaging accuracy in the presence of variable latencies, we present a new averaging framework that employs the dynamic time warping (DTW) algorithm to account for temporal variation in the alignment of functional near-infrared spectroscopy (fNIRS) signals. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The proposed framework is extensively tested on experimental data (obtained from both block-design and event-related-design experiments) as well as on simulated data. It is shown to improve averaging accuracy compared to conventional techniques and is expected to have significant impact on various fNIRS-based neuroscience and clinical research studies.
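The dissertation's own framework is not reproduced here, but the core idea can be illustrated with a minimal sketch: align each trial to a reference with classic DTW before averaging, so that latency-shifted peaks reinforce rather than smear each other. All function names below are illustrative, not the author's implementation.

```python
import numpy as np

def dtw_path(x, y):
    """Classic O(n*m) dynamic-programming DTW between 1-D signals x and y;
    returns the optimal warping path as (i, j) index pairs."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack from (n, m) to recover the path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dtw_average(signals, reference):
    """Warp each trial onto the reference's time axis via DTW, then average.
    Reference-sample values are the mean of all trial samples matched to them."""
    aligned = []
    for s in signals:
        path = dtw_path(reference, s)
        warped = np.zeros(len(reference))
        counts = np.zeros(len(reference))
        for i, j in path:
            warped[i] += s[j]
            counts[i] += 1
        aligned.append(warped / np.maximum(counts, 1))
    return np.mean(aligned, axis=0)
```

On two Gaussian pulses shifted by ±5 samples, conventional averaging roughly halves the peak amplitude, whereas the DTW-aligned average preserves it.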
The second part of the dissertation focuses on data analytics. We first address the problem of inferring behavior from neuroimaging data by extracting new features based on the temporal characteristics of brain recordings. We hypothesize that the time course of cortical activities contains characteristics specific to the corresponding behavior. We introduce a method based on the visibility graph (VG) to reliably identify such discriminatory characteristics in cortical recordings. An extensive study considering different choices of features and machine learning algorithms is conducted on recordings obtained via widefield transcranial calcium imaging under spontaneous whisking conditions, and recordings obtained via fNIRS under resting-state and task-execution conditions. It is shown, for the first time, that the characteristics of calcium recordings and fNIRS signals identified by the proposed method carry discriminatory information powerful enough to decode behavior. The proposed method has applications in advancing the accuracy of brain-machine interfaces, and can open up new opportunities to study various aspects of brain function and its relationship to behavior.
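For readers unfamiliar with visibility graphs: the standard natural-visibility criterion maps a time series to a graph in which two samples are connected whenever every intermediate sample lies strictly below the straight line joining them. A minimal sketch of this standard construction (not the dissertation's code) follows; graph measures such as the degree sequence can then serve as features.

```python
import numpy as np

def natural_visibility_graph(ts):
    """Edges of the natural visibility graph of a time series: samples a < b
    are connected iff every sample between them lies strictly below the
    line joining (a, ts[a]) and (b, ts[b])."""
    n = len(ts)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                ts[c] < ts[a] + (ts[b] - ts[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

def degree_sequence(edges, n):
    """Node degrees of the VG -- one simple graph measure usable as a feature."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg
```

A monotone ramp yields only consecutive-sample edges, while a valley shape adds a "line of sight" over the dip, illustrating how signal shape is encoded in graph topology.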
Next, we propose to use a multilayer perceptron (MLP) to perform classification based on graph measures of the VGs. We also build a predictive framework using convolutional neural networks (CNNs) to perform classification directly from the constructed multi-channel VGs. Multi-channel VGs allow the CNN to learn discriminatory features using the full temporal information encoded in the VGs, and can hence strengthen the inference power. We evaluate the performance of both approaches on the widefield transcranial calcium imaging data and demonstrate improvement over classical machine learning methods.
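One natural representation of a multi-channel VG as CNN input is a stack of per-channel adjacency matrices, giving an image-like (channels, n, n) tensor. The sketch below assumes this representation (the dissertation may encode the VGs differently); the CNN itself, which any standard deep learning framework provides, is not shown.

```python
import numpy as np

def vg_adjacency(ts):
    """Adjacency matrix of the natural visibility graph of one channel."""
    n = len(ts)
    A = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            # Intermediate samples must lie strictly below the connecting line.
            line = ts[a] + (ts[b] - ts[a]) * (np.arange(a + 1, b) - a) / (b - a)
            if np.all(ts[a + 1:b] < line):
                A[a, b] = A[b, a] = 1.0
    return A

def multichannel_vg(recording):
    """Stack per-channel VG adjacency matrices of a (channels, n) recording
    into a (channels, n, n) tensor -- an image-like input a CNN can consume."""
    return np.stack([vg_adjacency(ch) for ch in recording])
```

Each resulting "image" is symmetric with a zero diagonal, and consecutive samples are always mutually visible, so the first off-diagonal is all ones.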
Note: Ph.D.
Note: Includes bibliographical references
Note: by Li Zhu
Genre: theses, ETD doctoral
Language: eng
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.