When datasets are distributed over a network and a central server is infeasible, machine learning must be performed in a decentralized fashion. This dissertation introduces new methods that solve decentralized machine learning problems in the presence of Byzantine failures. Classic decentralized learning methods require nodes to communicate with one another over the network. A node that engages in arbitrary or malicious behavior is said to exhibit Byzantine failure. Without Byzantine-resilient modifications, classic learning methods cannot complete machine learning tasks as intended when such failures occur. This dissertation presents Byzantine-resilient decentralized learning methods, supported by both theoretical guarantees and experiments that demonstrate their effectiveness under Byzantine settings.
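The abstract does not detail the dissertation's specific methods. As a general illustration of why Byzantine resilience matters, the sketch below contrasts plain averaging of neighbor updates with a coordinate-wise trimmed mean, one standard robust aggregation rule from the Byzantine-resilient learning literature; the function names and numeric values here are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def trimmed_mean(updates, num_byzantine):
    """Coordinate-wise trimmed mean: at each coordinate, drop the
    num_byzantine largest and num_byzantine smallest values received
    from neighbors, then average the remaining values."""
    arr = np.sort(np.stack(updates), axis=0)  # sort per coordinate
    kept = arr[num_byzantine:len(updates) - num_byzantine]
    return kept.mean(axis=0)

# Honest neighbors report model updates near [1.0, 1.0]; one Byzantine
# neighbor reports extreme values to derail plain averaging.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
byzantine = [np.array([100.0, -100.0])]
updates = honest + byzantine

plain = np.mean(np.stack(updates), axis=0)       # badly skewed by the attacker
robust = trimmed_mean(updates, num_byzantine=1)  # extreme values trimmed away
```

With one Byzantine neighbor out of four, the plain average is pulled far from the honest consensus, while the trimmed mean stays close to it. This is the intuition behind the resilience guarantees such methods aim to prove: the aggregate remains meaningful as long as the number of Byzantine neighbors is bounded.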