Toward Real-World Implementation of Deep Reinforcement Learning for Vision-Based Autonomous Drone Navigation with Mission
Author/Creator
Navardi, Mozhgan
Dixit, Prakhar
Manjunath, Tejaswini
Waytowich, Nicholas R.
Mohsenin, Tinoosh
Oates, Tim
Rights
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain Mark 1.0
Abstract
Though high-fidelity simulators for drones and other aerial vehicles look exceptionally realistic, training control policies in simulation and then transferring them to the real world does not work well. One reason is that cameras on real platforms, especially low-power drones, produce images that look different from simulated ones, setting aside the fact that simulated worlds themselves differ from real ones at the level that matters for machine learning. To overcome this limitation, we focus on object detectors, which tend to transfer well from simulation to the real world, and extract features of detected objects to serve as input to reinforcement learning algorithms. Empirical results on a low-power drone are promising.
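To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of the idea: detections from an object detector are flattened into a fixed-size feature vector, which a small policy network maps to a navigation action. The feature layout, network sizes, helper names, and discrete action set are illustrative assumptions, not the authors' implementation or the paper's architecture.

```python
# Sketch only: turns detector output into RL policy input (assumed design).
import numpy as np
import torch
import torch.nn as nn

MAX_OBJECTS = 5          # assumed cap on detections kept per frame
FEATS_PER_OBJECT = 6     # class id, confidence, cx, cy, width, height (normalized)
ACTIONS = ["forward", "yaw_left", "yaw_right", "stop"]  # assumed discrete action set


def detections_to_features(detections, image_w, image_h):
    """Flatten a list of detections into a fixed-size, normalized feature vector.

    Each detection is assumed to be a dict with keys
    'class_id', 'confidence', and 'bbox' = (x, y, w, h) in pixels.
    """
    feats = np.zeros(MAX_OBJECTS * FEATS_PER_OBJECT, dtype=np.float32)
    for i, det in enumerate(detections[:MAX_OBJECTS]):
        x, y, w, h = det["bbox"]
        feats[i * FEATS_PER_OBJECT:(i + 1) * FEATS_PER_OBJECT] = [
            det["class_id"],
            det["confidence"],
            (x + w / 2) / image_w,   # bbox center x, normalized
            (y + h / 2) / image_h,   # bbox center y, normalized
            w / image_w,             # bbox width, normalized
            h / image_h,             # bbox height, normalized
        ]
    return feats


class DetectionPolicy(nn.Module):
    """Small MLP policy over detection features (illustrative architecture)."""

    def __init__(self, n_inputs=MAX_OBJECTS * FEATS_PER_OBJECT, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)  # action logits


if __name__ == "__main__":
    # Fake detections standing in for real detector output.
    detections = [
        {"class_id": 1, "confidence": 0.92, "bbox": (100, 80, 60, 120)},
        {"class_id": 3, "confidence": 0.71, "bbox": (300, 150, 40, 40)},
    ]
    feats = detections_to_features(detections, image_w=640, image_h=480)
    policy = DetectionPolicy()
    with torch.no_grad():
        logits = policy(torch.from_numpy(feats).unsqueeze(0))
    print("chosen action:", ACTIONS[int(logits.argmax(dim=1))])
```

Because the policy input is a low-dimensional vector rather than raw pixels, the same detector-plus-policy pipeline can in principle be run in simulation for training and on the physical drone for deployment, which is the sim-to-real motivation stated in the abstract.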