Abstract
The last few years have witnessed the rise of the big data era, which features the prevalence of data sets that are high-dimensional, noisy, and dynamically generated. As a consequence, the gap between the limited availability of computational resources and the rapid pace of data generation has become ubiquitous in real-world applications, and has in turn made it indispensable to develop provable learning algorithms with efficient computation, economical memory usage, and noise-tolerant mechanisms.
Our work is inherently driven by practical large-scale problems, and the goal is to understand the fundamental limits imposed by the characteristics of these problems (e.g., high dimensionality, noise, sequential arrival), to explore the benefits of geometric structures (e.g., sparsity, low rank), and to offer scalable optimization tools that balance the trade-offs among model accuracy, computational efficiency, and sample complexity.
In this dissertation, we mainly investigate three important problems, stated below.
High-dimensional statistics. The great demand for learning from high-dimensional data, where the number of attributes (i.e., the data dimension) is of the same order as the number of samples or even larger, has stimulated a large body of novel statistical paradigms, in which a low-dimensional structure is typically presumed to make estimation possible. As an example, for rare diseases there are few patient data available for research, yet such diseases are usually caused by a small number of factors, such as the physical environment. It is therefore crucial to distinguish the true determinants from a large pool of candidate attributes, namely the problem of variable selection (also known as support recovery). While it has been established that many convex programs consistently select the desired variables under certain conditions, their computational bottleneck has arguably hindered the application of these estimators to modern data analytics. In this regard, we study a family of non-convex approaches and show that it admits a superior time complexity, a near-optimal sample size, and a broader range of applications.
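As a hedged illustration of the setting (the notation below is ours, not taken from the dissertation), variable selection is commonly formalized as sparsity-constrained least squares: given a design matrix $X \in \mathbb{R}^{n \times d}$ with $d \gtrsim n$ and responses $y$,

```latex
\[
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^{d}}
\; \frac{1}{2n} \lVert y - X\beta \rVert_2^2
\quad \text{subject to} \quad \lVert \beta \rVert_0 \le k ,
\]
```

and support recovery asks that $\operatorname{supp}(\hat{\beta}) = \operatorname{supp}(\beta^{*})$ for the true sparse parameter $\beta^{*}$. Convex programs such as the Lasso relax the $\ell_0$ constraint to an $\ell_1$ penalty, whereas non-convex approaches of the kind studied here work with the sparsity constraint more directly (e.g., via iterative hard thresholding).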
Online and stochastic optimization. It has been recognized that the challenges of big data are rooted not only in the high dimensionality, but also in the rapid pace of data generation. For example, there are millions of tweets per day, so that even storing the data becomes expensive. To process such a huge volume of data, any practical algorithm has to be scalable, in the sense that model updating and evaluation must be performed online and efficiently. This motivates us to examine problems from the perspective of optimization theory. In particular, we focus on problems involving a low-rank or sparse structure, which have a variety of applications such as recommender systems and image de-noising. We develop novel algorithms whose time complexity is linear in the sample size, allowing a real-time response. Another salient feature is that the memory cost is constant, i.e., independent of the sample size, making them appealing for large-scale machine learning systems. Theoretically, we prove that the solutions produced by our algorithms are accurate and robust to various types of noise.
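To make the constant-memory, online regime concrete, here is a minimal sketch (our own illustrative code, not the dissertation's algorithms): proximal online gradient descent for sparse least squares. Each sample is processed once and then discarded, so the memory cost is O(d) regardless of how long the data stream runs.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm; shrinks entries toward zero,
    # which promotes a sparse iterate.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_sparse_regression(stream, dim, step=0.01, reg=0.01):
    """Proximal online gradient descent for sparse least squares.

    Only the current iterate (dim floats) is kept in memory,
    independent of the number of samples in the stream.
    """
    w = np.zeros(dim)
    for x, y in stream:
        grad = (x @ w - y) * x                    # gradient of 0.5*(x.w - y)^2
        w = soft_threshold(w - step * grad, step * reg)
    return w

# Usage: recover a sparse vector from a synthetic sample stream.
rng = np.random.default_rng(0)
d = 20
w_true = np.zeros(d)
w_true[:3] = [1.0, -2.0, 0.5]
stream = ((x, x @ w_true)
          for x in (rng.standard_normal(d) for _ in range(5000)))
w_hat = online_sparse_regression(stream, d)
```

The stream is consumed lazily by the generator, so no sample is ever stored; the l1 proximal step keeps the irrelevant coordinates near zero, mirroring the sparse structure exploited in this part of the dissertation.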
Estimation from quantized data. While a large body of early work emphasizes observation models with real values, in practical applications the observations are often heavily quantized. Such low-bit observations not only save storage, but also ease the process of data acquisition. For instance, ratings at Netflix were changed to either ``thumbs up'' or ``thumbs down'', since it is sometimes hard for a user to decide between one star and two stars when disliking a movie. We study the problem of binary matrix completion, where many entries of the matrix are missing and the goal is to predict them. We present efficient and robust algorithms, together with a rigorous analysis and a comprehensive empirical illustration.
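A hedged sketch of a standard formulation of this setting (notation ours): let $M^{*}$ be a low-rank matrix and $f$ a link function, e.g., the logistic $f(x) = 1/(1+e^{-x})$. On an observed index set $\Omega$ one sees only binary entries,

```latex
\[
Y_{ij} =
\begin{cases}
+1, & \text{with probability } f(M^{*}_{ij}), \\
-1, & \text{with probability } 1 - f(M^{*}_{ij}),
\end{cases}
\qquad (i,j) \in \Omega ,
\]
```

and a typical estimator maximizes the log-likelihood of $Y$ over low-rank (or nuclear-norm-bounded) matrices; the missing entries are then predicted from the resulting estimate $\widehat{M}$.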