An autonomous system for crop inspections: 3D hyperspectral reconstructions, multi-robot planning and control
Citation
Edmonds, Merrill. An autonomous system for crop inspections: 3D hyperspectral reconstructions, multi-robot planning and control. Retrieved from https://doi.org/doi:10.7282/t3-cjbb-3q47
Description
Title: An autonomous system for crop inspections: 3D hyperspectral reconstructions, multi-robot planning and control
Date Created: 2022
Other Date: 2022-01 (degree)
Extent: 179 pages : illustrations
Description: Crop inspections are a critical part of modern agriculture. On a fundamental level, timely inspections allow farmers to identify problems with the crop and take preventative measures where necessary. More generally, inspections provide a wealth of information for downstream precision agriculture applications, which typically optimize yields using large-scale crop monitoring data captured via satellites or drones. However, the top-down images generated by satellites and drones are ill-suited for leaf-level analyses, so farmers still perform manual inspections at certain points in the growth cycle to ensure yield targets are met. Agricultural robots bridge this gap by allowing farmers to automate close-up inspections, thus capturing the benefits of both manual inspections and automated toolchains. On the other hand, aggregating the multi-modal data generated by these robots requires expert knowledge and several proprietary software packages, and the inspection routes for large robot groups are still planned by human operators. Many similar technical challenges prevent robotic inspections from being included in the agricultural value chain, and new systems and algorithms must be developed to overcome them.
The goal of this dissertation is to develop a new autonomous system and algorithms for crop inspections. In addition to developing the hardware and software architecture needed for high-throughput inspections, the dissertation addresses the following technological gaps: (i) three-dimensional (3D) hyperspectral reconstruction methods necessary for efficient, low-cost inspections using robot-mounted sensors and cameras; (ii) site-specific multi-robot allocation and planning algorithms to enable high-throughput inspections; (iii) control algorithms that allow unmanned aerial vehicles (UAVs) to land on unmanned ground vehicles (UGVs) for continuous inspection missions; and (iv) human-robot interaction (HRI) methods that provide farmers with greater control over the inspection process. When combined, the proposed methods allow the robots to autonomously select which plants to inspect as distributed UGV-UAV teams, while also giving farmers intuitive tools to preempt the inspection process. The contributions of the thesis are therefore all aimed at creating a single, cohesive autonomous crop inspection framework.
One of the central contributions of this thesis is a novel 3D hyperspectral reconstruction method that captures the spatial and spectral properties of the plant. Unlike similar methods that use bulky, expensive, and slow hyperspectral cameras and laser scanners to capture data, the proposed method leverages the spectral and intrinsic properties of a heterogeneous set of commodity cameras to convert standard RGB images into full-spectrum data embedded onto a 3D point cloud. Since practical considerations require the robot to collect a limited number of source images per plant, a spectrally-optimal next-best-view (SONBV) problem is formulated and a method is proposed to produce the most complete 3D hyperspectral reconstruction from the source images. Novel algorithms to determine these viewing angles from incomplete scene knowledge are demonstrated, and the method's applicability to crop inspections is further investigated by fabricating and testing a tabletop 3D hyperspectral scanner.
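As a rough illustration of the spectral side of this reconstruction (a sketch, not the dissertation's implementation), the Python snippet below recovers a per-point reflectance spectrum from the RGB responses of several calibrated cameras via regularized least squares. The band count, function names, and smoothness prior are all illustrative assumptions.

import numpy as np

N_BANDS = 31  # e.g., 400-700 nm in 10 nm steps (an assumed discretization)

def recover_spectrum(rgb_responses, sensitivities, lam=1e-3):
    """Estimate one reflectance spectrum from stacked camera responses.

    rgb_responses : list of (3,) arrays, one RGB triple per camera
    sensitivities : list of (3, N_BANDS) arrays, each camera's calibrated
                    spectral sensitivity (illuminant folded in)
    lam           : smoothness regularization weight
    """
    A = np.vstack(sensitivities)          # (3 * n_cameras, N_BANDS)
    y = np.concatenate(rgb_responses)     # (3 * n_cameras,)

    # Second-difference operator penalizing non-smooth spectra, a standard
    # prior when recovering a spectrum from a few broadband measurements.
    D = np.diff(np.eye(N_BANDS), n=2, axis=0)

    # Closed-form solution of  min_s ||A s - y||^2 + lam ||D s||^2.
    s = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
    return np.clip(s, 0.0, None)          # reflectance is non-negative

The key design point this sketch mirrors is that several cameras with different sensitivity curves jointly over-determine the spectrum far better than any single RGB sensor could.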
The proposed reconstruction method naturally leads to the multi-robot task allocation problem of selecting a representative set of scan targets and distributing the associated scanning tasks among the team of UGVs. An equivalent optimization problem is proposed to select sampling points across a continuous metric field mapped onto the crop field such that the cumulative posterior covariance of a Gaussian process representation of the metric field is maximized. Augmented robot-centered geodesic Voronoi graphs of the plot are then constructed to find minimum-cost hyperedges on a multi-partite graph induced by these Voronoi regions. The planning problem is proven to be NP-hard, and an approximate set of assignments is determined greedily as the minimum-cost finite-horizon combinations of scan targets. The proposed algorithm is also extended to continuous data collection by UGV/UAV groups. Simulation studies show that the proposed methods outperform state-of-the-art methods in prediction quality and average resolution, while also observing energy constraints.
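A minimal sketch of the uncertainty-driven target selection described above, assuming an RBF-kernel Gaussian process and a plain greedy loop in place of the Voronoi-based planner; the kernel, names, and parameters are illustrative assumptions.

import numpy as np

def rbf(X, Y, ell=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def select_scan_targets(candidates, k, ell=1.0, noise=1e-6):
    """Greedily pick k candidate locations with maximal GP posterior variance."""
    chosen = []
    var = np.ones(len(candidates))  # prior variance of an RBF GP is 1
    for _ in range(k):
        i = int(np.argmax(var))     # most uncertain location so far
        chosen.append(i)
        S = candidates[chosen]
        K = rbf(S, S, ell) + noise * np.eye(len(chosen))
        k_star = rbf(candidates, S, ell)
        # Posterior variance after conditioning on the chosen points.
        var = 1.0 - np.einsum('ij,ij->i', k_star @ np.linalg.inv(K), k_star)
        var[chosen] = -np.inf       # never re-pick a chosen target
    return chosen

Each greedy pick conditions the GP on the new point, so subsequent picks spread out toward the regions the model still knows least about, which is the intuition behind covariance-driven sampling.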
Another particularly challenging control problem for the proposed UGV/UAV inspection system is the execution of coordinated UGV/UAV landing maneuvers (e.g., when the UAV needs to land on the UGV to recharge). The UGV/UAV pair is considered in isolation, and the interactions between the UGV and the UAV are modeled as a paired model-identification and trajectory-optimization problem. The near-surface aerodynamic effects caused by the interaction of the multi-rotor downwash with the platform's geometry are modeled using a novel neural network architecture. Linearized disturbance models are then used by the on-board model predictive controller to iteratively plan landing trajectories. Cooperative landing trajectories are computed to satisfy the multi-robot coverage problem, the UAV and UGV dynamics, and any local constraints. The proposed methods are validated both in simulation and experimentally.
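To make the model-plus-planner pairing concrete, here is a hypothetical sketch of how a learned disturbance model could enter the prediction step of a receding-horizon landing planner. The tiny MLP, the 1D vertical dynamics, and all names and shapes are assumptions, not the dissertation's architecture.

import numpy as np

def disturbance(rel_pos, W1, b1, W2, b2):
    """Tiny MLP standing in for the learned downwash/ground-effect model.

    rel_pos : (3,) UAV position relative to the landing pad
    returns : (3,) predicted force disturbance [fx, fy, fz]
    """
    h = np.tanh(W1 @ rel_pos + b1)
    return W2 @ h + b2

def rollout(x0, controls, rel_traj, params, dt=0.05, m=1.2, g=9.81):
    """Predict UAV vertical motion under thrust controls plus the learned
    disturbance; this is the forward model an MPC cost would be evaluated on."""
    x = x0.copy()                   # x = [altitude above pad, vertical velocity]
    traj = [x.copy()]
    for u, rel in zip(controls, rel_traj):
        f_dist = disturbance(rel, *params)[2]   # vertical (z) component
        acc = (u + f_dist) / m - g              # thrust + disturbance - gravity
        x = x + dt * np.array([x[1], acc])      # Euler integration step
        traj.append(x.copy())
    return np.array(traj)

A planner would then optimize the control sequence against a landing cost (touchdown speed, lateral alignment, effort) over repeated rollouts, replanning each control cycle as the relative pose updates.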
Finally, a human-robot interaction framework ensures that the farmer can take control of the inspection process. This gives the human agency in an otherwise autonomous process and allows them to preempt the selection of inspection targets via pointing gestures (e.g., to tell the robot to scan a particular plant). To achieve this, a human pose and gesture estimation module is proposed. Previous results from human-following robots are used to provide the multi-robot system with safe and robust human-aware path planning and obstacle avoidance behaviors. Pointing gestures are then interpreted as inspection suggestions. These methods constitute one of the first human-robot interaction models for autonomous crop inspections. Experimental results demonstrate the effectiveness and intuitiveness of the approach.
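As an illustrative sketch of how a pointing gesture might be resolved to a scan target (the dissertation's method may differ), one can cast a ray through the elbow and wrist keypoints of an estimated 3D pose and select the plant whose centroid lies nearest that ray; the function name, inputs, and threshold are assumptions.

import numpy as np

def resolve_pointing(elbow, wrist, plant_centroids, max_dist=0.5):
    """Map an elbow-to-wrist pointing ray to the nearest plant centroid.

    elbow, wrist    : (3,) keypoints from any 3D pose estimator
    plant_centroids : (n, 3) known plant positions in the same frame
    max_dist        : reject gestures farther than this from every plant (m)
    """
    d = wrist - elbow
    d = d / np.linalg.norm(d)               # pointing direction
    rel = plant_centroids - wrist           # wrist-to-plant vectors
    t = np.clip(rel @ d, 0.0, None)         # only plants in front of the hand
    offset = rel - t[:, None] * d[None, :]  # perpendicular offset from the ray
    dist = np.linalg.norm(offset, axis=1)
    i = int(np.argmin(dist))
    return i if dist[i] <= max_dist else None  # None: no plausible target

Returning None when no plant is close to the ray is one simple way to keep ambiguous gestures from triggering unintended scans.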
Note: Ph.D.
Note: Includes bibliographical references
Genre: theses
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.