Real-time autonomic decision making under uncertain environments for UAV-based search-and-rescue missions
Citation
Sadhu, Vidyasagar. (2020). Real-time autonomic decision making under uncertain environments for UAV-based search-and-rescue missions (Doctoral dissertation, Rutgers, The State University of New Jersey). Retrieved from
https://doi.org/doi:10.7282/t3-ms6y-wr38
Description
Title: Real-time autonomic decision making under uncertain environments for UAV-based search-and-rescue missions
Date Created: 2020
Other Date: 2020-10 (degree)
Extent: 1 online resource (xiii, 144 pages) : illustrations
Description: Real-time smart and autonomous decision making in an Intelligent Physical System (IPS), e.g., an autonomous car or Unmanned Aerial Vehicle (UAV), involves two main stages: sensing (collecting sensor data and transforming it into actionable knowledge by giving semantic meaning to the raw data) and planning (making real-time decisions using this knowledge). The challenges an IPS faces during these two stages include coping with the various forms of uncertainty caused by the changing, non-stationary environment in which the IPS acts. These sources of uncertainty can be broadly classified into the following categories: data-quantity and data-quality uncertainty, model and parameter uncertainty, environmental uncertainty, multi-agent non-stationarity, communication uncertainty, partial observability, and computing-resource uncertainty. Further, the degree of uncertainty from each of these sources changes over time, introducing yet another challenge.
In this dissertation, I propose novel solutions to environmental uncertainty, multi-agent non-stationarity, partial observability, and communication uncertainty, and present advanced techniques for the real-time, autonomous operation of an IPS, or a group of IPSs, acting in a dynamic and unknown environment, primarily targeting UAV-based real-time autonomic search for objects/victims in a post-disaster scenario. To this end, I propose the following techniques to address the challenges mentioned above.
1. Multi-IPS coordination: making high-level decisions in coordination with the other UAVs acting in the environment so as to maximize the mission objective. This is challenging because the environment appears non-stationary from the perspective of any single UAV, since the other UAVs take their actions independently. For this, I propose an actor-critic Multi-Agent Deep Reinforcement Learning (MADRL) framework in which the critic is trained in a centralized manner while the actor is decentralized and is used during deployment (testing).
2. Imperfect communication and partial observability: communication among the UAVs may be imperfect, resulting in packet drops and delays; moreover, a UAV may be unable to fully observe the underlying state because of its camera's restricted field of view. For this, I propose to enhance the MADRL framework with Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) networks, which maintain state over time and thereby address the partial-observability and limited-communication problem.
3. Context awareness and validation: context is a generic term for the operating conditions, e.g., environmental light/weather conditions, a UAV's remaining operational time, time of day, location, etc. Context-aware decision making is well known to yield better results than context-agnostic decision making, so it is important to validate the context obtained from sensors, especially in security-sensitive missions. For this, I propose a Hidden Markov Model (HMM) based context modeling and prediction engine that leverages the history of the UAV's individual and group behavior; the model predicts the UAV's current (and future) context, which can then be used to validate the sensor-reported context.
4. On-board proactive real-time monitoring and anomaly detection: because the UAVs operate in a dynamic, uncertain environment, it is paramount to proactively monitor each UAV's operation. For this, I propose a Convolutional Neural Network (CNN) and bidirectional LSTM (bi-LSTM) based multi-task learning framework for real-time anomaly detection from a UAV's scalar (non-image) sensor data, e.g., accelerometer and gyroscope readings from the on-board Inertial Measurement Unit (IMU).
5. On-board diagnosis and anomaly identification: once an anomaly has been detected, it is important to classify/diagnose its type in order to determine which (sequence of) corrective actions to perform. For this, I propose a CNN-biLSTM deep network classifier for classifying known anomaly types from IMU data; I also profile the performance of these techniques on drone-mountable or comparable hardware such as the NVIDIA Jetson TX2 Graphics Processing Unit (GPU), Raspberry Pi B, etc.
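The centralized-critic, decentralized-actor split in item 1 can be illustrated with a minimal tabular sketch (pure Python; the agents, actions, observations, and reward below are hypothetical toys, and the dissertation's actual framework uses deep networks):

```python
from itertools import product

class Actor:
    """Decentralized policy: at deployment it acts on its OWN observation only."""
    def __init__(self, n_actions):
        self.prefs = [0.0] * n_actions  # stand-in for a policy network

    def act(self, local_obs):
        return max(range(len(self.prefs)), key=lambda a: self.prefs[a])

class CentralCritic:
    """Centralized critic: trained on the JOINT observations and actions of all
    agents, so the other agents' behavior does not look non-stationary to it."""
    def __init__(self):
        self.q = {}

    def update(self, joint_obs, joint_actions, reward, lr=0.5):
        key = (tuple(joint_obs), tuple(joint_actions))
        self.q[key] = self.q.get(key, 0.0) + lr * (reward - self.q.get(key, 0.0))

# Toy cooperative task: the team is rewarded only when BOTH agents pick action 1.
actors, critic = [Actor(2), Actor(2)], CentralCritic()
for joint in product(range(2), repeat=2):           # centralized training sweep
    for _ in range(3):
        critic.update([0, 0], joint, 1.0 if joint == (1, 1) else 0.0)

# Each actor distills its own preferences from the centralized critic's values.
for i, actor in enumerate(actors):
    for a in range(2):
        vals = [critic.q[((0, 0), j)] for j in product(range(2), repeat=2) if j[i] == a]
        actor.prefs[a] = sum(vals) / len(vals)

# Deployment is fully decentralized: each actor sees only its local observation.
print([actor.act(0) for actor in actors])  # -> [1, 1]: both pick the cooperative action
```

The point of the split is visible in the signatures: `CentralCritic.update` consumes joint information that exists only during training, while `Actor.act` needs nothing beyond the local observation available on board at test time.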
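The HMM-based context engine of item 3 can be sketched with the standard forward algorithm (pure Python; the contexts, observations, and probabilities below are illustrative assumptions, not values from the dissertation):

```python
# Hidden states: hypothetical UAV contexts. Observations: coarse sensor readings.
states = ["search", "return_to_base"]
start = {"search": 0.8, "return_to_base": 0.2}
trans = {"search": {"search": 0.9, "return_to_base": 0.1},
         "return_to_base": {"search": 0.2, "return_to_base": 0.8}}
emit = {"search": {"low_battery": 0.1, "ok_battery": 0.9},
        "return_to_base": {"low_battery": 0.7, "ok_battery": 0.3}}

def forward(observations):
    """Posterior P(context | observation history) via the forward algorithm."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    z = sum(alpha.values())  # normalize to a proper posterior over contexts
    return {s: a / z for s, a in alpha.items()}

posterior = forward(["ok_battery", "low_battery", "low_battery"])
most_likely = max(posterior, key=posterior.get)  # -> "return_to_base"
```

In the validation role described above, a predicted context that disagrees with the context reported by the sensors would flag the sensor reading as suspect.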
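Items 4 and 5 rely on trained CNN-biLSTM models, which are too heavy to reproduce in a sketch; as a minimal stand-in for flagging anomalies in scalar IMU-style streams, here is a rolling z-score detector (my own illustrative substitute, not the dissertation's method):

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(stream, window=10, threshold=3.0):
    """Flag indices whose value deviates strongly from a rolling window of
    recent samples -- a crude substitute for a learned anomaly detector."""
    recent, flags = deque(maxlen=window), []
    for i, x in enumerate(stream):
        if len(recent) >= 3:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flags.append(i)
        recent.append(x)
    return flags

# A spike at index 6 in an otherwise quiet accelerometer-like trace is flagged.
print(zscore_anomalies([0.0, 0.1, -0.1, 0.05, 0.0, 0.1, 5.0, 0.0]))  # -> [6]
```

Unlike the learned models, this detector captures only point outliers; the CNN-biLSTM approach is motivated precisely by anomalies whose signature is a temporal pattern rather than a single extreme sample.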
Finally, I thoroughly validate and assess the performance of all the techniques proposed in this dissertation, toward enabling real-time autonomous search missions by a group of UAVs operating in uncertain environments, via computer simulations, hardware-in-the-loop emulations, and real-world experiments.
Note: Ph.D.
Note: Includes bibliographical references
Genre: theses, ETD doctoral
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.