I Can See the Light: Attacks on Autonomous Vehicles Using Invisible Lights

Date

2021-11-15

Citation of Original Publication

Wang, Wei, et al.; I Can See the Light: Attacks on Autonomous Vehicles Using Invisible Lights; CCS '21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 1930–1944, 15 November 2021; https://doi.org/10.1145/3460120.3484766

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

The camera is one of the most important sensors for an autonomous vehicle (AV) to perform environment perception and Simultaneous Localization and Mapping (SLAM). To secure the camera, current autonomous vehicles not only fuse data from multiple sensors (e.g., camera, ultrasonic sensor, radar, or LiDAR) for environment perception and SLAM but also require the human driver to remain aware of the driving situation at all times, which can effectively defend against previous attack approaches (i.e., creating visible fake objects or introducing perturbations to the camera using advanced deep learning techniques). In contrast to this prior work, in this paper we investigate the characteristics of infrared (IR) light in depth and introduce a new security challenge, the I-Can-See-the-Light Attack (ICSL Attack), which can alter environment perception results and introduce SLAM errors into the AV. Specifically, we find that invisible IR light can successfully trigger the image sensor even though human eyes cannot perceive it. Moreover, IR light appears magenta in the camera, so it activates different pixels than the ambient visible light and can be selected as key points during the AV's SLAM process. By leveraging these features, we explore how to i) generate invisible traffic lights, ii) create fake invisible objects, iii) ruin the in-car user experience, and iv) introduce SLAM errors into the AV. We implement the ICSL Attack using off-the-shelf IR light sources and conduct an extensive evaluation on a Tesla Model 3 and an enterprise-level autonomous driving platform under various environments and settings. We demonstrate the effectiveness of the ICSL Attack and show that current autonomous vehicle companies have not yet accounted for it, which introduces severe security issues. To secure the AV, we propose a software-based detection module that exploits unique features of IR light to defend against the ICSL Attack.
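
The abstract does not spell out how the proposed software-based detection module works, but the observation that IR light renders as magenta in an RGB camera suggests one simple heuristic: flag frames that contain an unusually large fraction of bright, magenta-dominant pixels. The sketch below is a minimal illustration of that idea in Python/NumPy; the function name ir_artifact_ratio, the channel-difference margins, and the decision threshold are illustrative assumptions, not the authors' actual defense.

```python
import numpy as np


def ir_artifact_ratio(frame_bgr: np.ndarray,
                      magenta_margin: int = 60,
                      brightness_floor: int = 120) -> float:
    """Return the fraction of pixels whose color is consistent with
    near-IR leakage (bright, with red and blue far above green)."""
    # Split channels; the frame is assumed to be 8-bit BGR (OpenCV order).
    b = frame_bgr[:, :, 0].astype(np.int16)
    g = frame_bgr[:, :, 1].astype(np.int16)
    r = frame_bgr[:, :, 2].astype(np.int16)

    # Near-IR light passing the Bayer filter tends to excite the red and
    # blue photosites more strongly than green, which renders as magenta.
    magenta_like = (r - g > magenta_margin) & (b - g > magenta_margin)
    bright = (r > brightness_floor) & (b > brightness_floor)

    suspicious = magenta_like & bright
    return float(suspicious.mean())


if __name__ == "__main__":
    # Synthetic 100x100 frame: dark background with a magenta-looking patch
    # standing in for an IR source hitting the sensor.
    frame = np.full((100, 100, 3), 30, dtype=np.uint8)
    frame[40:60, 40:60] = (220, 40, 230)  # B, G, R of a hypothetical "IR blob"

    ratio = ir_artifact_ratio(frame)
    print(f"suspicious-pixel ratio: {ratio:.3f}")
    if ratio > 0.01:  # illustrative threshold, not from the paper
        print("possible ICSL-style IR injection detected")
```

A color heuristic alone can also fire on legitimate magenta objects, so a practical detector would likely combine it with temporal smoothing and cross-sensor consistency checks before raising an alert.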