TY - JOUR
TI - Codes for private distributed computation with applications to machine learning
DO - https://doi.org/10.7282/t3-rgka-m350
PY - 2020
AB - We consider the problem of private distributed computation. Our main interest in this problem stems from machine learning applications. A master node (referred to as master) possesses a tremendous amount of confidential data (e.g., personal, genomic, or medical data) and wants to perform intensive computations on it. The master divides these computations into smaller computational tasks and distributes them to untrusted workers that perform these tasks in parallel. The workers return their results to the master, who processes them to recover the result of its original task. In large-scale systems, the appearance of slow and unresponsive workers, called stragglers, is inevitable. Stragglers incur large delays on the computation if they are not accounted for. Our goal is to construct codes that maintain the privacy of the data and allow flexible straggler mitigation, i.e., tolerate the presence of a varying number of stragglers.
We propose the use of communication-efficient secret sharing (CE-SS) in private linear coded computation with straggler mitigation. A CE-SS scheme satisfies the properties of threshold secret sharing. Moreover, it allows the master to reconstruct the secret by reading and communicating the minimum amount of information from a variable number of workers, up to a given threshold. We introduce three explicit constructions of CE-SS codes called Staircase codes. The first construction achieves optimal communication and read costs for a given number of workers $d$. The second construction achieves optimal costs universally for all possible values of $d$ between $k$ and $n$. The third construction, which is the most general, achieves optimal costs universally for all values of $d$ in any given set $\Delta \subseteq \{k,\dots,n\}$.
We analyze the performance of Staircase codes in distributed computation systems where the workers have fixed resources. We model the workers' service times by i.i.d. shifted exponential random variables. We derive upper and lower bounds on the master's mean waiting time. We derive the distribution of the master's waiting time, and its mean, for systems with up to two stragglers. We show that Staircase codes always outperform existing solutions based on classical secret sharing codes. We validate our results with extensive implementations on Amazon EC2.
We consider the case where the workers have time-varying resources. We develop a private and rateless adaptive coded computation (PRAC) algorithm for distributed matrix-vector multiplication. PRAC is based on the use of Fountain codes coupled with MDS codes to maintain privacy and to adaptively assign tasks to the workers. The assigned tasks are proportional to the estimated resources at the workers. We provide theoretical guarantees on the performance of PRAC and compare it to baselines. Moreover, we validate our theoretical results through simulations and implementations on Android devices.
We go beyond linear coded computation and tackle the problem of distributed gradient descent for general convex loss functions in the presence of stragglers. Here, however, we drop the privacy constraints on the master's data. We propose an approximate gradient coding scheme called Stochastic Gradient Coding (SGC), which works when the stragglers are random. SGC distributes data points redundantly to workers according to a pair-wise balanced design and then simply ignores the stragglers. We prove that the convergence rate of SGC mirrors that of batched Stochastic Gradient Descent (SGD) for the $\ell_2$ loss function, and show how the convergence rate can improve with the redundancy. We also provide bounds for more general convex loss functions. We show empirically that SGC requires a small amount of redundancy to handle a large number of stragglers and that it can outperform existing approximate gradient codes when the number of stragglers is large.
We study private information retrieval (PIR), which is an extension of private distributed computation. The data is public and stored (replicated) at the workers. The master wants to retrieve a file without revealing its identity to the workers. The robust PIR capacity characterizes the minimum amount of information the master must download to retrieve the file in the presence of a fixed number of stragglers. We show that communication-efficient secret sharing achieves the robust PIR capacity simultaneously for all numbers of stragglers, up to a given threshold. As a result, we construct a family of universally robust PIR schemes called Staircase-PIR.
KW - Electrical and Computer Engineering
LA - English
ER -