Abstract
Today and for the foreseeable future, demand for datacenter capacity is constantly increasing worldwide, primarily driven by cloud computing. Large cloud providers such as Amazon, Microsoft, and Google are currently deploying multi-gigawatt facilities annually. This staggering growth is costly. For datacenter infrastructure alone, excluding IT equipment, energy, and maintenance costs, these companies spend approximately ten dollars to house each useful watt of IT equipment. Given these huge monetary incentives, cloud providers have made significant strides in reducing both the capital (provisioning) and operational costs of their datacenters, primarily through the adoption of cost-efficient cooling technologies. However, significant opportunities still exist. Thus, this dissertation is dedicated to cost-efficient methods for provisioning the datacenter cooling infrastructure and their implications for server reliability. In the first part of this dissertation, we propose a method to reduce cooling costs by under-provisioning the cooling infrastructure of datacenters. Cooling costs still represent a significant capital and operational expense, mainly because cloud providers typically provision their cooling infrastructure for the worst-case scenario (i.e., very high load and outside temperature at the same time). Since extreme conditions occur very rarely, it is cost-efficient to provision for less capacity (under-provision) and manage the rare instances with workload management policies. To determine the ideal type and amount of cooling, we introduce CoolProvision, an optimization and simulation framework for selecting the cheapest provisioning within performance constraints defined by the datacenter operator. CoolProvision leverages an abstract trace of the expected workload, as well as cooling, performance, power, reliability, and cost models, to explore the space of potential provisionings.
Using data from a real small free-cooled datacenter, our results suggest that CoolProvision can reduce capital cooling costs by up to 55%. We extrapolate our experience and results to larger cloud datacenters as well. Using cheap (and/or under-provisioned) cooling techniques (e.g., free cooling) lowers datacenter costs significantly, but may also expose servers to higher and more variable temperatures and relative humidities. The question naturally arises whether these environmental conditions have a significant impact on hardware component reliability. To answer this question, we use data from nine hyperscale datacenters to study the impact of environmental conditions on the reliability of server hardware, with a particular focus on disk drives and free cooling. Based on this study, we derive and validate a new model of disk lifetime as a function of environmental conditions in modern datacenters. Furthermore, we quantify the tradeoffs between energy consumption, environmental conditions, component reliability, and datacenter costs. Finally, based on our analyses and model, we derive server and datacenter design lessons. Our main observations are: (1) relative humidity seems to have a dominant impact on component failures; (2) disk failures increase significantly when operating at high relative humidity, due to controller/adaptor malfunction; and (3) though higher relative humidity increases component failures, software availability techniques can mask these failures and enable free-cooled operation, resulting in significantly lower infrastructure and energy costs that far outweigh the cost of the extra component failures. In summary, the methods proposed in this dissertation allow datacenter operators to reduce their capital costs by provisioning reduced cooling capacity with minimal reliability and performance implications.
The same methods can further be used to reduce their operational costs and environmental footprint by operating the energy-demanding cooling equipment at lower levels while maintaining similar reliability and performance. Finally, the reliability models we develop can be used by the research community and industry to improve server reliability, understand the implications of various datacenter environmental conditions, and improve application robustness.