TY - THES
TI - Stochastic alternating optimization methods for solving large-scale machine learning problems
DO - https://doi.org/10.7282/T3VM4GPB
PY - 2018
AB - In this dissertation, we propose two stochastic alternating optimization methods for solving structured regularization problems, which have been widely used in machine learning and data mining. The first algorithm is called Stochastic Alternating Linearization (SALIN), a stochastic extension of the Alternating Linearization method (ALIN) for solving convex optimization problems with a complex non-smooth regularization term. SALIN alternately linearizes the loss function and the penalty function at each iteration, based on stochastic approximations of subgradients. By applying a special update test at each iteration, together with a carefully designed subgradient update scheme, the algorithm achieves fast and stable convergence. The update test relies only on a fixed, pre-defined set, and we show that the choice of the test set has little influence on the overall performance of the algorithm; therefore SALIN is a robust method. The other algorithm is a preconditioned stochastic Alternating Direction Method of Multipliers, specially designed to handle structured regularized regression problems, such as Fused LASSO, in which the design matrix is ill-conditioned. We prove its O(1/sqrt(t)) convergence rate for general convex functions and O(log t / t) rate for strongly convex functions, and show that the constant depends only on the smaller dimension of the data matrix. We present results of extensive numerical experiments for structured regularization problems such as Fused LASSO and graph-guided SVM, with both synthetic and real-world datasets. The numerical results demonstrate the efficacy and accuracy of our methods.
KW - Management
KW - Machine learning
KW - Mathematical optimization
LA - eng
ER - 