Description
Title: Computing with big spatial disaster data for coastal resilience decision support
Date Created: 2018
Other Date: 2018-05 (degree)
Extent: 1 online resource (xv, 168 p. : ill.)
Description: Severe weather events such as hurricanes, ice storms, storm surge, and flooding have been occurring across the U.S. and around the world, threatening places where economic and industrial activities are heavily concentrated. These extreme events are now increasingly observed and monitored with a loosely coupled network of geospatial sensors. Analysis of these data sets offers tremendous opportunities for improving the resilience and adaptability of coastal communities in the face of future natural disasters. Despite the high value of these data sets, their vast size and complex processing requirements make it challenging to use them effectively in coastal community management applications, particularly during emergencies. Yet unprocessed data are intangible and non-consumable, often resulting in a "data-rich-but-information-poor" situation. The overarching goal of this research is to develop and evaluate a data processing framework capable of efficiently processing these emerging large geospatial data sets and extracting crucial information to enhance disaster management during large-scale extreme events. This research systematically studied the fundamental aspects of big spatial disaster data, including the anatomy of big spatial disaster data, data processing patterns, data quality issues, uncertainty propagation along the analytics pipeline, and adaptive processing in time-sensitive environments. More specifically, this dissertation addresses the following research questions. 1. What is the basic anatomy of big spatial disaster data? 2. What are the core operation categories and processing patterns associated with big spatial disaster data? 3. How does the uncertainty associated with spatial disaster data sets propagate through a given processing pipeline? 4. How can users' dynamic and complex information needs and processing requirements during coastal resilience investigations be adequately represented in a unified framework? 5. How can 3D disaster data analytics be dynamically adapted given user information needs, processing requirements, and algorithm and data set descriptions?
In Chapter 2, I characterized the basic anatomy of big spatial disaster data to highlight the challenges and opportunities in using these emerging data sets in coastal community management applications during extreme events. I also characterized the data processing patterns associated with these emerging data sets and abstracted them into core operation categories. This work laid the foundation for realizing cloud-based computing over these data sets in disaster response applications. In Chapter 3, I used a case-study-based approach to demonstrate methods for quantifying uncertainty propagation in processing geospatial data sets. More specifically, I proposed a method to identify the optimal strategy for approximation parameter selection when interpolating Light Detection and Ranging (LiDAR) data into Digital Elevation Models (DEMs). The method addresses the need to model accuracy loss in the rapid generation of DEMs, which are essential pieces of information used in disaster response and flooding simulation. In Chapter 3, I also proposed a DEA-based information salience model to prioritize the sequence of information processing tasks. The model provides a unified way of representing user information needs and balancing these needs to realize optimized data processing sequences. More specifically, this model integrates the DEA efficiency score with a linguistic group decision process. The proposed model is tested against a Hurricane Sandy case study in the Barnegat Peninsula, New Jersey. The results indicate that the proposed model prototypes a framework for information articulation between decision-makers and the data processing team.
The proposed model will help accelerate the data-to-information translation and reduce the likelihood of a "data-rich-but-information-poor" situation. Building on Chapter 3, in Chapter 4 I proposed a stream data processing approach that realizes accelerated information extraction from large quantities of geospatial data given varying user information needs. The approach is capable of representing complex spatial data analytics as a workflow-centric data analysis representation and of leveraging flexible computing resources in the cloud and at the edge to improve information extraction from these large data sets. Throughout this dissertation research, I made extensive use of Hurricane Sandy-related data sets as use cases to evaluate the proposed approaches. The results demonstrated that the proposed approaches provide scalable information extraction from spatial disaster data within a realistic time bound. It is important to recognize that this research does not focus on developing algorithms for data processing tasks such as segmentation and object recognition. Instead, it focuses on formulating mechanisms to integrate existing spatial data analytics into the emerging big data processing frameworks and to address the particular challenges in using big spatial disaster data for coastal resilience decision support. In terms of future research, it would be beneficial to develop dedicated disaster data processing algorithms and integrate them into the framework developed in this research.
Note: Ph.D.
Note: Includes bibliographical references
Note: by Xuan Hu
Genre: theses, ETD doctoral
Language: eng
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.