Staff View
Performance comparison of stereo and RGB sensors for UAV collision avoidance

Descriptive

TitleInfo
Title
Performance comparison of stereo and RGB sensors for UAV collision avoidance
Name (type = personal)
NamePart (type = family)
Hlyvko
NamePart (type = given)
Andrii
NamePart (type = date)
1993-
DisplayForm
Andrii Hlyvko
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Diez
NamePart (type = given)
F. Javier
DisplayForm
F. Javier Diez
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Bajwa
NamePart (type = given)
Waheed
DisplayForm
Waheed Bajwa
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Spasojevic
NamePart (type = given)
Predrag
DisplayForm
Predrag Spasojevic
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
School of Graduate Studies
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (encoding = w3cdtf); (keyDate = yes); (qualifier = exact)
2020
DateOther (encoding = w3cdtf); (qualifier = exact); (type = degree)
2020-05
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2020
Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract (type = abstract)
Over the past several years, many different approaches have shown significant progress toward solving the challenging problem of collision avoidance for UAVs. These approaches range from SLAM to machine learning. Machine learning approaches are promising because the model learns to perform a complex task from training data instead of requiring a complex, task-specific controller to be developed by hand. In machine learning approaches we can choose whether to train in simulation or on real data. Collecting real-world UAV collision data is very time consuming and can result in a damaged UAV. On the other hand, synthetic data and real-world data come from different distributions, so training on synthetic data introduces a gap between the learned distribution and the actual one, which can result in poor performance. Even though this distribution gap exists, training in simulation saves time and cost, making these approaches the focus of this study. Usually, due to UAV size and weight constraints, we can choose only one sensor for performing obstacle avoidance. Therefore, we need to select the sensor that gives the best performance when a model is trained in simulation. Many different sensors can be chosen for performing UAV collision avoidance, such as RGB, stereo, and LIDAR, among others. Even if a sensor can be accurately simulated, the data it produces might not contain sufficient information for performing collision avoidance well. For instance, a sonar can be simulated very accurately, but its readings do not contain enough information about the state of the environment to avoid complex shapes. The hypothesis is that a model trained entirely in simulation will perform differently in the real world depending on which simulated sensor was used for training.
In this thesis we train using different simulated sensors to demonstrate the hypothesis that the real-world performance of a model trained entirely in simulation improves when an appropriate sensor is chosen for training. Even though we cannot confirm that one sensor outperforms the others for every machine learning approach, we obtain experimental data for a few methods to support our claim. RGB cameras are among the simplest and most widely used sensors for drone sense and avoid. Stereo sensors, on the other hand, have historically been bulky and required high computing power to produce real-time results useful for collision avoidance on drones. This has changed with recent advances in stereo sensors and computing, which have made it possible to use them on micro-aerial vehicles in real time [1]. Therefore, these two sensors are the most suitable for our study. In this thesis we compare how much performance we gain, if any, by training on a simulated stereo system instead of a simulated RGB camera for obstacle avoidance using machine learning approaches.
Subject (authority = LCSH)
Topic
Drone aircraft -- Collision avoidance systems
Subject (authority = RUETD)
Topic
Electrical and Computer Engineering
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_10646
PhysicalDescription
Form (authority = gmd)
InternetMediaType
application/pdf
InternetMediaType
text/xml
Note
Supplementary File: UAV collision avoidance using reinforcement learning in simulation
Extent
1 online resource (viii, 42 pages) : illustrations
Note (type = degree)
M.S.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
School of Graduate Studies Electronic Theses and Dissertations
Identifier (type = local)
rucore10001600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-azq3-s478
Genre (authority = ExL-Esploro)
ETD graduate
Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Hlyvko
GivenName
Andrii
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2020-03-25 18:10:24
AssociatedEntity
Name
Andrii Hlyvko
Role
Copyright holder
Affiliation
Rutgers University. School of Graduate Studies
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license
Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.5
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2020-04-09T21:05:27
ApplicationName
pdfTeX-1.40.20
RULTechMD (ID = TECHNICAL2)
ContentModel
ETD
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2020-06-12T13:10:55