Compressed sensing: weighted approach to compressed sensing with applications to EEG source localization and MIMO radar DOA estimation
Citation & Export
Simple citation
Al Hilli, Ahmed.
Compressed sensing: weighted approach to compressed sensing with applications to EEG source localization and MIMO radar DOA estimation. Retrieved from
https://doi.org/doi:10.7282/t3-1yp6-v583
Description
Title: Compressed sensing: weighted approach to compressed sensing with applications to EEG source localization and MIMO radar DOA estimation
Date Created: 2019
Other Date: 2019-05 (degree)
Extent: 1 online resource (x, 94 pages) : illustrations
Description: Sparse signals can be recovered from fewer samples than the Nyquist Theorem suggests. Those samples are obtained during a process referred to as sparse sampling, which amounts to collecting random projections of the signal onto some basis functions. Using the collected projections, and under certain conditions, the sparse signal can be estimated via a non-linear estimation process. The idea is to estimate a vector with the smallest number of non-zero entries, or equivalently, to find the least l0-norm solution. In sparse signal recovery problems, l1-norm minimization is typically used as a relaxation of the more complex l0-norm minimization problem. Conditions for strong equivalence between the l0-norm and l1-norm problems include Mutual Coherence, the Restricted Isometry Property (RIP), and the Null Space Property (NSP). The Range Space Property (RSP) provides the conditions under which the least l1-norm solution is equal to at most one of the least l0-norm solutions. These conditions depend on the sensing matrix and the support of the underlying sparse solution.
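As a concrete illustration of the l1 relaxation described above, the least l1-norm solution of Ax = b can be computed as a linear program by splitting x into its positive and negative parts. The sketch below (all dimensions and data are hypothetical) uses SciPy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Least l1-norm solution of A x = b, via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                         # objective: sum(u + v) = ||x||_1
    A_eq = np.hstack([A, -A])                  # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Hypothetical example: a 2-sparse vector from 10 random projections in R^30.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = basis_pursuit(A, b)   # with high probability this recovers x_true exactly
```

For a Gaussian sensing matrix of this size, the l1 solution coincides with the sparsest solution with high probability, which is exactly the equivalence the RIP/NSP/RSP conditions formalize.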
The l1-norm minimization method has been applied successfully in many applications. However, the l1-norm minimization problem may not satisfy the RSP conditions. In this thesis, we first address the problem of recovering sparse signals in scenarios that do not satisfy the RSP conditions. For such cases we propose to formulate and solve a weighted l1-norm minimization problem, in which the sensing matrix is post-multiplied by a diagonal weight matrix. We show that by appropriately choosing the weights, we can formulate an l1-norm minimization problem that satisfies the RSP, even if the original problem does not. By solving the weighted problem we can obtain the support of the original problem. We provide the conditions that the weights must satisfy, for both the noise-free and noisy cases. Although those conditions involve information about the support of the sparse vector, the class of good weights is very wide, and in most cases encompasses a low-resolution estimate of the underlying vector, for example, an estimate obtained via a simple method that does not encourage sparsity.
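A minimal sketch of this weighted scheme: post-multiply the sensing matrix by a diagonal weight matrix, solve the resulting l1 problem, and map the solution back. Here the weights come from a pseudoinverse (minimum-norm) estimate, as one example of a low-resolution estimate that does not encourage sparsity; the matrix sizes and weight choice are illustrative, not the thesis's exact conditions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    # Least l1-norm solution of A x = b, via the LP split x = u - v, u, v >= 0.
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def weighted_l1_min(A, b, w):
    """Post-multiply A by W = diag(w), solve min ||z||_1 s.t. (A W) z = b,
    and map the solution back as x = W z."""
    W = np.diag(w)
    return W @ l1_min(A @ W, b)

# Hypothetical example: weights from a minimum-norm (pseudoinverse) estimate.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 40))
x_true = np.zeros(40)
x_true[[5, 22, 31]] = [1.0, -1.2, 0.8]
b = A @ x_true
w = np.abs(np.linalg.pinv(A) @ b)     # low-resolution, non-sparse estimate
x_hat = weighted_l1_min(A, b, w)
```

Large weights leave the corresponding entries cheap to activate, while small weights penalize entries believed to be outside the support, which is the intuition behind the RSP-restoring conditions on the weights.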
The proposed weighted approach is applied to the problem of Electroencephalography (EEG) source localization, in which measurements obtained from sensors distributed around the head are used to localize sources inside the brain. Assuming sparse brain activity in response to simple tasks, one can formulate the source localization problem as a sparse signal recovery problem, in which the support of the sparse vector is directly related to the coordinates of the sources inside the brain. However, the corresponding basis matrix, referred to as the lead field matrix, has high mutual coherence, and there is no guarantee that the corresponding least l1-norm solution will recover the actual locations. Developing reliable EEG source localization techniques has potential applications in Brain Computer Interfaces (BCIs). Most existing EEG-based BCIs rely on scalp-recorded signals, but the poor spatial resolution of EEG limits the number of actions that can be discriminated. Source-domain information can improve the discrimination of actions, which motivates the application of source localization in EEG-based BCIs. The proposed method, with weights equal to the Multiple Signal Classification (MUSIC) estimate of the brain activity, is used in an experiment eliciting auditory evoked potentials, and is shown to correctly localize brain activations.
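A MUSIC estimate of the kind used for the weights can be sketched as follows: project each column of the forward matrix onto the estimated noise subspace and take the reciprocal. Here a random matrix stands in for the lead field matrix, and the snapshot count, noise level, and number of sources are all hypothetical.

```python
import numpy as np

def music_weights(A, Y, k):
    """MUSIC pseudospectrum over the columns of A, usable as weights.
    Y holds the snapshots as columns; k is the assumed number of sources."""
    R = (Y @ Y.conj().T) / Y.shape[1]          # sample covariance of the data
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, :-k]                       # noise subspace (smallest eigenvalues)
    num = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return np.sum(np.abs(A) ** 2, axis=0) / num  # large when a_i is near the signal subspace

# Synthetic stand-in for a lead field: 2 active columns out of 60, 200 snapshots.
rng = np.random.default_rng(2)
A = rng.standard_normal((16, 60))
active = [7, 41]
X = rng.standard_normal((2, 200))              # source time courses
Y = A[:, active] @ X + 0.1 * rng.standard_normal((16, 200))
w = music_weights(A, Y, k=2)                   # peaks at the active columns
```

The resulting pseudospectrum is a low-resolution activity estimate, which is precisely the kind of weight the weighted l1 formulation admits.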
The main issue with l1-norm minimization approaches is that the global minimum of the l1-norm cost function may not coincide with the sparsest solution. The Sparse Bayesian Learning (SBL) method has been shown to provide a tighter approximation to the l0-norm function, and in the noise-free case its global minimum coincides with the sparsest solution. In the second part of this thesis, we propose Weighted Sparse Bayesian Learning (WSBL). Unlike SBL, where all hyperparameter priors follow Gamma distributions with identical parameters, in WSBL the hyperparameters are Gamma distributed with distinct parameters. These parameters, guided by some known weights, give more importance to some hyperparameters over others, thus introducing more degrees of freedom into the problem and leading to better recovery performance. The weights can be determined based on a low-resolution estimate of the sparse vector, for example an estimate obtained via a method that does not encourage sparsity. The choice of the MUSIC estimate as the weight is analyzed. Unlike in SBL, where the hyperparameters are not bounded, in WSBL there is an upper bound; this makes it easy to select a threshold that distinguishes between zero and non-zero elements in the recovered sparse vector, which helps the iterative recovery process converge faster. Theoretical analysis based on variational approximation theory, as well as simulation results, demonstrates that WSBL yields substantial improvement in terms of probability of detection and probability of false alarm, as compared to SBL and support knowledge-aided sparse Bayesian learning (BSN), especially in the low signal-to-noise-ratio regime. The performance of WSBL is evaluated for Direction of Arrival (DOA) estimation in colocated Multiple Input Multiple Output (MIMO) radar.
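A rough sketch of SBL with a distinct Gamma(a_i, b_i) hyperprior on each precision, using a common EM-style update; the exact WSBL update rules and parameterization in the thesis may differ. The rate parameters b_i below are set from a pseudoinverse estimate purely for illustration.

```python
import numpy as np

def weighted_sbl(A, y, a, b, sigma2=1e-2, n_iter=50):
    """EM-style sparse Bayesian learning with per-coefficient Gamma(a_i, b_i)
    hyperpriors on the precisions (a sketch in the spirit of WSBL)."""
    n = A.shape[1]
    alpha = np.ones(n)                               # per-coefficient precisions
    for _ in range(n_iter):
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(alpha))
        mu = Sigma @ A.T @ y / sigma2                # posterior mean of x
        # MAP-EM update; the Gamma prior contributes the 2a and 2b terms,
        # and b_i > 0 keeps every alpha_i bounded above by (1 + 2a_i)/(2b_i).
        alpha = (1 + 2 * a) / (mu ** 2 + np.diag(Sigma) + 2 * b)
    return mu, alpha

# Hypothetical example: larger rate b_i (favoring activity) where a
# low-resolution pseudoinverse estimate is large.
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[4, 11, 25]] = [1.0, -1.5, 0.7]
y = A @ x_true + 0.05 * rng.standard_normal(15)
a = 1e-6 * np.ones(30)
b = 1e-6 + 1e-2 * np.abs(np.linalg.pinv(A) @ y)
mu, alpha = weighted_sbl(A, y, a, b)
```

Note that the strictly positive rates bound the precisions from above, mirroring the boundedness property described in the abstract, so a simple threshold on alpha separates active from inactive entries.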
While WSBL exhibits substantial improvement over SBL, its performance depends heavily on the MUSIC estimates, which may suffer when an adequate number of snapshots is not available, for example, in cases in which the structure of the sparse vector changes with time. In the last part of this thesis, we propose Bernoulli Sparse Bayesian Learning (BSBL), in which a machine learning approach is used to estimate the probability that each entry of the sparse vector is non-zero. Unlike MUSIC, these probabilities are estimated based on a single snapshot. In BSBL, each rate parameter, b_i, is modeled as a Bernoulli random variable, taking a high or a low value with probability p_i and 1 - p_i, respectively. The probability p_i is estimated based on the observation and a statistical model that describes how different rate parameters give rise to different outputs; given the sensing matrix, a specific signal-to-noise-ratio level, and the degree of sparsity, the latter model can be obtained during a training phase. In particular, a Gaussian Naive Bayesian Classifier (NBC) is used to assign each b_i to the high- or low-value class, corresponding to active or non-active elements of the sparse vector, based on the computed probability. Based on the estimated rate parameters, BSBL estimates the hyperparameters along the lines of SBL. The proposed approach shows significant improvement in probability of detection and probability of false alarm, as compared to SBL-type methods, at low Signal to Noise Ratio (SNR) and various sparsity levels.
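The classification step can be sketched with a minimal Gaussian naive Bayes classifier; the training features below are synthetic and hypothetical, standing in for the statistics that would be gathered in BSBL's training phase for a given sensing matrix, SNR, and sparsity level.

```python
import numpy as np

class GaussianNBC:
    """Minimal Gaussian naive Bayes, sketching the classifier that labels each
    entry's rate parameter as 'high' (active) or 'low' (inactive)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self
    def predict_proba(self, X):
        # Per-class log-likelihood: sum of log N(x_j | mu_cj, var_cj) + log prior.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        ll += np.log(self.prior)
        p = np.exp(ll - ll.max(axis=1, keepdims=True))
        return p / p.sum(axis=1, keepdims=True)
    def predict(self, X):
        return self.classes[self.predict_proba(X).argmax(axis=1)]

# Hypothetical training set: class 0 = 'low' rates, class 1 = 'high' rates.
rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((50, 3)),
               5.0 + rng.standard_normal((50, 3))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianNBC().fit(X, y)
pred = clf.predict(np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]]))   # -> [0, 1]
```

The posterior probabilities from `predict_proba` play the role of the p_i above, and the hard labels select the high or low rate value for each entry before the SBL-style hyperparameter update.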
Note: Ph.D.
Note: Includes bibliographical references
Genre: theses
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.