Advanced coupled scientific simulation workflows running at extreme scales provide new capabilities and opportunities for high-fidelity modeling and insight in a wide range of application areas. These workflows compose multiple physical models with visualization and analysis services that share and exchange large amounts of data at runtime. Because large I/O overheads make traditional file-based coupling approaches infeasible, recent simulation-time data management approaches using in-memory data staging have been explored to address this challenge. However, due to the complexity of emerging coupled applications and the architecture of current and future systems, these data-staging-based solutions present several new challenges of their own. First, many of these scientific workflows contain dynamically adaptive formulations, such as Adaptive Mesh Refinement (AMR), which exhibit dynamic runtime behaviors and result in dynamically changing data volumes and imbalanced data distributions. Such dynamic runtime behaviors increase the complexity of managing and processing simulation data. In addition, they introduce new challenges in managing staging resources and scheduling in-memory data processing while satisfying constraints on (1) the amount of data movement, (2) the overhead on the simulation, and/or (3) the quality of the simulations/analysis. Second, architectural trends indicate that emerging systems will have increasing numbers of cores per node, correspondingly decreasing amounts of DRAM per core, and decreasing memory bandwidth. These trends can significantly impact the effectiveness of online data management approaches for runtime data processing pipelines, and especially their ability to support data-intensive simulation workflows.
To address these dynamic data management challenges, this thesis explores an autonomic approach to efficient runtime data management that can dynamically respond to varying data management requirements. Specifically, it first formulates an abstraction that can be used to realize autonomic data management runtimes for coupled simulation workflows. To address the dynamic data management challenges of tightly coupled simulation workflows containing dynamically adaptive formulations, the thesis then presents a realization of this autonomic approach that uses runtime cross-layer adaptations. This realization explores autonomic runtime adaptations at the application, middleware, and resource layers, and exploits a coordinated approach that dynamically combines these adaptations in a cross-layer manner. The thesis also presents an autonomic multi-tiered data management runtime that leverages both DRAM and SSD to support autonomic data management for loosely coupled scientific workflows, and demonstrates how an autonomic data placement mechanism can dynamically manage and optimize data placement across the DRAM and SSD storage levels in this multi-tiered runtime. The research concepts and approaches have been prototyped and experimentally evaluated using real application workflows on current high-end computing systems, including the Intrepid IBM Blue Gene/P system at Argonne National Laboratory and the Titan Cray XK7 system at Oak Ridge National Laboratory.
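The multi-tiered data placement idea can be illustrated with a minimal sketch of a two-tier staging store: a capacity-limited DRAM tier backed by a larger SSD tier, with cold objects demoted to SSD and promoted back on access. The `TieredStore` class, its LRU demotion policy, and promote-on-read behavior below are illustrative assumptions for exposition only, not the thesis's actual runtime or its APIs.

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative two-tier staging store (hypothetical, not the thesis's
    runtime): a capacity-limited DRAM tier backed by a larger SSD tier.
    Least-recently-used objects are demoted from DRAM; objects read from
    the SSD tier are promoted back into DRAM."""

    def __init__(self, dram_capacity_bytes):
        self.dram_capacity = dram_capacity_bytes
        self.dram_used = 0
        self.dram = OrderedDict()  # key -> bytes, maintained in LRU order
        self.ssd = {}              # key -> bytes; stands in for SSD-backed storage

    def put(self, key, data):
        # Drop any stale copy in either tier before placing the new version.
        if key in self.dram:
            self.dram_used -= len(self.dram.pop(key))
        self.ssd.pop(key, None)
        self._place(key, data)

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)  # refresh LRU position
            return self.dram[key]
        data = self.ssd.pop(key)        # promote on access
        self._place(key, data)
        return data

    def _place(self, key, data):
        # Prefer DRAM; objects larger than the whole tier go straight to SSD.
        if len(data) > self.dram_capacity:
            self.ssd[key] = data
            return
        # Demote least-recently-used objects until the new object fits.
        while self.dram_used + len(data) > self.dram_capacity:
            victim, victim_data = self.dram.popitem(last=False)
            self.dram_used -= len(victim_data)
            self.ssd[victim] = victim_data
        self.dram[key] = data
        self.dram_used += len(data)
```

A real autonomic placement mechanism would, in addition, adapt the policy online (e.g., to access patterns and staging-resource pressure) rather than applying a fixed LRU rule, but the tier-spanning put/get interface stays the same.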