Description

Big, distributed data create a bottleneck for storage and computation in machine learning. Principal Component Analysis (PCA) is a dimensionality reduction tool that addresses this issue. This thesis considers how to estimate the principal subspace of data distributed across a loosely connected network. The goal of PCA is to extract the essential structure of a dataset. Traditional PCA requires a data center to aggregate all data samples before computation. In real-world settings, however, where memory, storage, and communication constraints matter, it is sometimes impossible to gather all the data in one place. The natural alternative is to compute the PCA in a decentralized manner. The focus of this thesis is to find a lower-dimensional representation of the distributed data using the well-known orthogonal iteration algorithm. The proposed distributed PCA algorithm estimates the subspace representation from local sample covariance matrices in a decentralized network while preserving the privacy of the local data.
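To make the centralized building block concrete, the following is a minimal sketch of orthogonal iteration on a sample covariance matrix, assuming NumPy; it is an illustration of the classical algorithm, not the thesis's decentralized protocol, and the function name and parameters are hypothetical.

```python
import numpy as np


def orthogonal_iteration(C, k, iters=200, seed=0):
    """Estimate the top-k principal subspace of a symmetric PSD matrix C.

    Repeatedly multiplies an orthonormal basis by C and re-orthonormalizes
    via QR; the column span converges to the dominant k-dimensional
    eigenspace when there is a gap between the k-th and (k+1)-th eigenvalues.
    """
    rng = np.random.default_rng(seed)
    # Random orthonormal starting basis of the candidate subspace.
    Q, _ = np.linalg.qr(rng.standard_normal((C.shape[0], k)))
    for _ in range(iters):
        # Power step followed by re-orthonormalization.
        Q, _ = np.linalg.qr(C @ Q)
    return Q


# Illustrative usage on a synthetic covariance with a known eigenbasis.
rng = np.random.default_rng(1)
d, k = 6, 2
V, _ = np.linalg.qr(rng.standard_normal((d, d)))          # true eigenvectors
eigvals = np.array([5.0, 4.0, 1.0, 0.5, 0.3, 0.1])        # spectral gap at k=2
C = V @ np.diag(eigvals) @ V.T                            # synthetic covariance
Q = orthogonal_iteration(C, k)
# Compare subspaces through their orthogonal projectors.
subspace_error = np.linalg.norm(Q @ Q.T - V[:, :k] @ V[:, :k].T)
```

In the distributed setting considered by the thesis, the full covariance `C` is never formed at one node; each node holds a local sample covariance, and the multiplication step is carried out through communication with neighbors so that raw local data are never shared.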