Description

This thesis focuses on some fundamental problems in machine learning that are posed as nonconvex matrix factorizations. More specifically, we investigate theoretical and algorithmic aspects of the following problems: (i) inductive matrix completion (IMC), (ii) structured dictionary learning (DL) from tensor data, (iii) tensor linear regression, and (iv) principal component analysis (PCA). The theoretical contributions of this thesis include providing recovery guarantees for IMC and structured DL by characterizing the local minima and other geometric properties of these problems. The recovery results are stated in terms of upper bounds on the number of observations required to recover the true matrix (dictionary, in the case of DL) underlying the data. Another major theoretical contribution of this work is establishing fundamental limits on the performance of tensor linear regression solvers by deriving a lower bound on the worst-case mean squared error of any estimator. On the algorithmic side, this thesis proposes novel online and batch algorithms for solving the structured dictionary learning problem, as well as a novel multi-stage accelerated stochastic PCA algorithm that achieves near-optimal results.
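As a concrete illustration of the stochastic PCA setting mentioned above, the sketch below implements the classical Oja update, which estimates the top principal component from one sample at a time. This is a minimal baseline for context only, not the multi-stage accelerated algorithm proposed in the thesis; the function name `oja_pca` and its parameters are illustrative assumptions.

```python
import numpy as np

def oja_pca(X, step=0.01, epochs=20, seed=0):
    """Estimate the top principal component of the rows of X
    using Oja's stochastic update (one sample per step)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)            # start on the unit sphere
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            w += step * x * (x @ w)   # stochastic ascent on the Rayleigh quotient
            w /= np.linalg.norm(w)    # project back to the unit sphere
    return w

# Synthetic data with one dominant direction v plus small isotropic noise
rng = np.random.default_rng(1)
v = np.array([3.0, 4.0]) / 5.0
X = rng.standard_normal((500, 1)) * v * 3 + rng.standard_normal((500, 2)) * 0.1
w = oja_pca(X)
print(abs(w @ v))  # alignment with the planted direction, close to 1
```

The per-sample update is what makes the method stochastic: it touches one data point at a time rather than forming the full covariance matrix, which is the regime the thesis's accelerated algorithm targets.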