Abstract
Computational sciences such as biomolecular sciences[1,2], ecological sciences[3,4] and particle physics[5] require executing multiple workflows to achieve scientific insight. The aggregated set of workflows that must be executed to achieve a computational objective is defined as a computational campaign.
This dissertation addresses the problem of effectively and efficiently executing a computational campaign on High Performance Computing (HPC) resources. Specifically, the dissertation focuses on computational campaigns with data- and compute-intensive workflows that utilize HPC resources at scale. Data-intensive workflows process large amounts of data and their execution time heavily depends on I/O and data management. Compute-intensive workflows perform large amounts of computation with minimal I/O and therefore require little data management.
Currently, the execution of data-intensive workflows is not well supported on HPC resources. MapReduce is one of the most successful abstractions used to execute data-intensive workflows at scale on cloud resources. However, implementing MapReduce on HPC resources is non-trivial and requires the use of additional abstractions. We utilize the Pilot abstraction as an integrating concept between HPC and MapReduce. We extend RADICAL-Pilot, a framework implementing the Pilot abstraction, to support Hadoop, a framework implementing the MapReduce abstraction, on HPC resources. We experimentally characterize the execution time of data-intensive workflows and the extension's overheads, and show that Hadoop indeed reduces the execution time of such workflows on HPC resources.
MapReduce rests on task-parallelism to effectively and efficiently execute data-intensive workflows, and several frameworks have emerged to support general-purpose task-parallelism. We experimentally investigate the suitability of three task-parallel frameworks for the execution of data-intensive workflows on HPC resources. Based on our experimental analysis, we provide a conceptual model to determine which framework is most suitable, based on the characteristics of the selected data-intensive workflow. That conceptual model and the Pilot abstraction provide a methodology that application developers can use to maximize resource utilization while reducing the engineering effort needed to develop and execute data-intensive workflows on HPC resources.
In addition to being data-intensive, workflows can also be compute-intensive, requiring different capabilities to effectively and efficiently utilize heterogeneous resources. Such capabilities can be implemented by multiple designs with different architectural and performance characteristics. Selecting a suitable design can significantly increase concurrency and resource utilization, and reduce the overhead imposed by each design implementation. Nonetheless, choosing the right design can be challenging, especially in the absence of an established methodology for characterizing and comparing alternative but functionally equivalent designs. We implement and experimentally characterize three functionally equivalent designs, showing which design approach is best suited for data- and compute-intensive workflows when executed on HPC resources at scale.
After establishing the methodology to effectively and efficiently execute data- and compute-intensive workflows on HPC, we investigate the support of computational campaigns. We elicit the requirements for a campaign manager from three scientific computational campaigns. Based on those requirements, we design and prototype a campaign manager. Our prototype is domain-agnostic and adheres to the building blocks design approach. The campaign manager prototype creates an execution plan and simulates the execution of a campaign on HPC resources.
As computational campaigns can utilize multiple HPC resources, they require an execution plan that efficiently and effectively maps workflows onto resources. In addition, computational campaigns may utilize HPC resources that offer different computational capabilities, e.g., number of operations per second, and are thus heterogeneous. Further, HPC resources are dynamic, as their performance changes over time for several reasons, and users provide workflow runtime estimates that are uncertain at best.
Selecting a planning algorithm to derive an effective execution plan for a campaign on dynamic and heterogeneous HPC resources, while workflow runtimes are uncertain, can be challenging. We investigate three algorithms to derive an execution plan for campaigns, characterizing their performance in terms of campaign makespan, and the plan's sensitivity to resource dynamism and workflow runtime estimation uncertainty. Based on our analysis, we provide a conceptual model for selecting a suitable planning algorithm based on the characteristics of a computational campaign and HPC resources.
Our dissertation makes the following contributions: 1) Integrating MapReduce frameworks and HPC platforms to offer a unified environment for the effective and efficient execution of data-intensive workflows at scale. 2) Providing a conceptual model for selecting a task-parallel framework based on workflow requirements, HPC resource capabilities, and framework abstractions and performance. 3) Providing design guidelines to support the execution of data- and compute-intensive workflows on HPC resources at scale, alongside an experimental methodology to compare functionally equivalent designs. 4) Designing and implementing a campaign manager prototype to plan and simulate the execution of computational campaigns on multiple, heterogeneous and dynamic HPC resources. 5) Providing a conceptual model for selecting a planning algorithm to derive an execution plan based on campaign characteristics, resource heterogeneity and dynamism, workflow runtime uncertainty, and algorithm characteristics.