TY - THES
TI - Performance comparison of stereo and RGB sensors for UAV collision avoidance
DO - 10.7282/t3-azq3-s478
PY - 2020
AB - Over the past years, many different approaches have shown significant progress toward solving the challenging problem of collision avoidance for UAVs. These approaches range from SLAM to machine learning. Machine learning approaches are promising because the model learns to perform a complex task from training data, rather than requiring someone to develop a complex, task-specific controller. In machine learning approaches, we can choose whether to train in simulation or on real data. Collecting real-world UAV collision data is very time-consuming and can result in a damaged UAV. On the other hand, synthetic data and real-world data come from different distributions, so training on synthetic data introduces a gap between the learned distribution and the actual one, which can result in poor performance. Even though this distribution gap exists, training in simulation saves time and cost, making these approaches the focus of this study. Usually, due to UAV size and weight constraints, we can choose only one sensor for performing obstacle avoidance. Therefore, we need to select the sensor that gives the best performance when a model is trained in simulation. Many different sensors can be used for UAV collision avoidance, including RGB, stereo, and LIDAR. Even if a sensor can be accurately simulated, the data it produces might not contain sufficient information for performing collision avoidance well. For instance, a sonar can be simulated very accurately, but its output does not carry enough information about the state of the environment to avoid complex shapes. The hypothesis is that a model trained entirely in simulation will perform differently in the real world depending on which simulated sensor was used for training. In this thesis, we train with different simulated sensors to demonstrate that the real-world performance of a model trained entirely in simulation improves when an appropriate sensor is chosen for training. Even though we cannot confirm that one sensor outperforms the others for every machine learning approach, we obtain experimental data for a few methods to support our claim. RGB cameras are among the simplest and most widely used sensors for drone sense-and-avoid. Stereo sensors, on the other hand, have traditionally been bulky and required high computing power to produce real-time results useful for collision avoidance on drones. This has changed with recent advances in stereo sensors and computing, which have made it possible to use them on micro-aerial vehicles for real-time operation [1]. Therefore, these two sensors are the most suitable for our study. In this thesis, we compare how much performance we gain, if any, by training on a simulated stereo system instead of a simulated RGB camera for obstacle avoidance using machine learning approaches.
KW - Drone aircraft -- Collision avoidance systems
KW - Electrical and Computer Engineering
LA - English
ER -