Towards Deployment of Computer Vision Neural Networks for Scene Understanding
Author/Creator
Author/Creator ORCID
Date
Type of Work
Department
Computer Science and Electrical Engineering
Program
Computer Science
Citation of Original Publication
Rights
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Abstract
Scene understanding is a cornerstone of autonomous operation for robotics and edge computing platforms. However, deploying advanced computer vision neural networks on these platforms presents two central challenges: the need for vast amounts of meticulously labeled training data, and the stringent energy and compute constraints imposed by embedded hardware. Meeting these requirements demands models that are both accurate and efficient, balancing performance against tight latency, memory, and power budgets. This thesis addresses both barriers to real-world deployment. First, we propose a novel synthetic-to-real domain adaptation framework that substantially reduces the need for large volumes of labeled real-world data, enabling effective image segmentation and robust scene understanding with minimal annotation effort. Second, we introduce Squeezed Edge YOLO, a lightweight object detection architecture specifically designed to operate within the tight latency and energy budgets of edge computing platforms. Both the domain adaptation framework and the object detector demonstrate strong empirical performance. Our domain adaptation approach is validated on the challenging synthetic-to-real "SYNTHIA → Cityscapes" and "GTAV → Cityscapes" benchmarks, where we outperform the previous state of the art, HALO. To evaluate Squeezed Edge YOLO, we deploy it on a nano-UAV and collect real-world measurements, achieving real-time object detection at approximately 8 inferences per second with low power consumption. Together, these contributions advance the deployment of deep neural networks for scene understanding on resource-constrained robotic and edge platforms.
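The abstract does not describe the adaptation mechanism itself, so as a point of reference only, the sketch below shows pseudo-label self-training, one common realization of synthetic-to-real adaptation for segmentation. It is a minimal illustration, not the thesis's actual framework or the HALO baseline; the `model` interface, the confidence threshold, and the batch shapes are all assumptions.

```python
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, source_batch, target_images,
                       conf_threshold=0.9):
    """One pseudo-label self-training step (illustrative, not the
    thesis's method): supervised loss on labeled synthetic data plus
    a pseudo-label loss on unlabeled real images."""
    src_images, src_labels = source_batch  # labels: (B, H, W) class ids

    # Supervised loss on synthetic (source) data.
    src_logits = model(src_images)                      # (B, C, H, W)
    loss_src = F.cross_entropy(src_logits, src_labels)

    # Pseudo-labels on real (target) images: keep only confident pixels.
    with torch.no_grad():
        tgt_probs = torch.softmax(model(target_images), dim=1)
        conf, pseudo = tgt_probs.max(dim=1)             # (B, H, W)
        pseudo[conf < conf_threshold] = 255             # mark as ignored

    tgt_logits = model(target_images)
    loss_tgt = F.cross_entropy(tgt_logits, pseudo, ignore_index=255)

    loss = loss_src + loss_tgt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this family of methods, the confidence threshold trades pseudo-label coverage against label noise; the actual adaptation framework evaluated on SYNTHIA → Cityscapes and GTAV → Cityscapes is described in the thesis itself.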

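The roughly 8 inferences per second reported for Squeezed Edge YOLO comes from on-device measurements on a nano-UAV. A generic way to estimate such throughput for any PyTorch detector is sketched below; the input shape, warm-up count, and device are placeholder assumptions, and a real edge deployment would measure on the target hardware and instrument power draw separately.

```python
import time
import torch

@torch.no_grad()
def measure_throughput(model, input_shape=(1, 3, 256, 256),
                       warmup=10, iters=100, device="cpu"):
    """Estimate inferences per second for a model on a given device."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    for _ in range(warmup):            # warm-up runs excluded from timing
        model(x)

    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    elapsed = time.perf_counter() - start
    return iters / elapsed             # inferences per second
```

For example, `measure_throughput(detector)` returns an estimated rate in inferences per second; numbers measured on a workstation CPU or GPU will differ from those obtained on embedded hardware.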