Research

Secure and Trustworthy AI for Autonomous Systems

[CCS'25] Towards Real-Time Defense against Object-Based LiDAR Attacks in Autonomous Driving

Yan Zhang, Zihao Liu, Yi Zhu, and Chenglin Miao. 2025 ACM SIGSAC Conference on Computer and Communications Security.

PDF

[CCS'25] Asymmetry Vulnerability and Physical Attacks on Online Map Construction for Autonomous Driving

Yang Lou, Haibo Hu, Qun Song, Qian Xu, Yi Zhu, Rui Tan, Wei-Bin Lee, and Jianping Wang. 2025 ACM SIGSAC Conference on Computer and Communications Security.

PDF

[MobiSys'25] Dynamic Defense against Adversarial Attacks on Car-Borne LiDAR-Based Object Detection

Yihan Xu, Dongfang Guo, Qun Song, Yang Lou, Yi Zhu, Jianping Wang, Chunming Qiao, and Rui Tan. 23rd ACM International Conference on Mobile Systems, Applications, and Services.

PDF

[SenSys'24] An Online Defense against Object-Based LiDAR Attacks in Autonomous Driving

Yan Zhang*, Zihao Liu* (equal contribution), Chongliu Jia, Yi Zhu, and Chenglin Miao. The 22nd ACM Conference on Embedded Networked Sensor Systems.

PDF
LiDAR (Light Detection and Ranging) has been widely used in autonomous driving to perceive the surrounding environment of self-driving cars. Advanced LiDAR perception systems typically leverage deep neural networks (DNNs) to achieve high performance. However, the vulnerability of DNNs to malicious attacks provides attackers with the means to compromise the LiDAR perception system, potentially causing traffic accidents. Recently, object-based attacks against LiDAR perception systems have drawn significant attention. In such attacks, the attacker can easily fool the LiDAR perception system by placing physical objects within the driving environment. Despite the practicality of these attacks and their potentially catastrophic consequences in autonomous driving, there is currently no effective and practical defense against them. To address this issue, we propose a novel online defense mechanism against object-based LiDAR attacks. Operating online, it identifies and removes the adversarial LiDAR points generated by the attacker's objects before the data is fed into the perception module of the autonomous driving system. The defense is not only effective and efficient enough for real-world autonomous driving but also attack-agnostic and capable of identifying the adversarial objects used by attackers. Extensive experiments in both simulated environments and real-world scenarios on a LiDAR perception testbed demonstrate the effectiveness and practicability of the proposed defense.
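To make the deployment point concrete, below is a minimal sketch of where such a sanitization step could sit in a LiDAR pipeline: each incoming sweep is scored per point and suspicious points are dropped before the perception module sees the data. The anomaly scorer, threshold, and array layout are illustrative assumptions, not the detection logic proposed in the paper.

```python
import numpy as np

def sanitize_sweep(points, anomaly_score_fn, threshold=0.5):
    """Drop suspicious points from one LiDAR sweep before perception.

    points: (N, 4) array of x, y, z, intensity (illustrative layout).
    anomaly_score_fn: callable returning a per-point score in [0, 1];
        a stand-in for whatever logic flags adversarial points.
    """
    scores = anomaly_score_fn(points)
    return points[scores < threshold]

# Toy usage: a random sweep and a placeholder scorer that flags points
# with unusually high intensity.
sweep = np.random.rand(1000, 4)
suspicious_intensity = lambda pts: pts[:, 3]
clean_sweep = sanitize_sweep(sweep, suspicious_intensity)
# clean_sweep is what would be handed on to the perception module.
```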

[USENIX Security'24] A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving

Yang Lou*, Yi Zhu* (equal contribution), Qun Song*, Rui Tan, Chunming Qiao, Wei-Bin Lee, and Jianping Wang. The 33rd USENIX Security Symposium.

PDF Video
Trajectory prediction forecasts nearby agents' future movements based on their historical trajectories. Accurate trajectory prediction is crucial for autonomous vehicles (AVs). Existing attacks compromise the prediction model of a victim AV by directly manipulating the historical trajectory of an attacker AV, which has limited real-world applicability. This paper, for the first time, explores an indirect attack approach that induces prediction errors via attacks against the perception module of a victim AV. Although physically realizable attacks against LiDAR-based perception have been shown possible by placing a few objects at strategic locations, it remains an open challenge to find, within the vast search space, an object location that launches effective attacks against prediction under varying victim AV velocities. Through analysis, we observe that a prediction model is prone to an attack focusing on a single point in the scene. Consequently, we propose a novel two-stage attack framework to realize this single-point attack. Guided by the distribution of detection results under object-based attacks against perception, the first stage efficiently identifies state perturbations for the prediction model that are both effective and velocity-insensitive. The second stage, location matching, then matches feasible object locations to the identified state perturbations. Our evaluation using a public autonomous driving dataset shows that the attack causes a collision rate of up to 63% and various hazardous responses of the victim AV.
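As a rough illustration of the two-stage structure described above, and only the structure, the sketch below first picks the state perturbation whose average effect over several victim velocities is largest, then matches it to the closest feasible physical object location. The candidate sets, error measures, and placeholder callables are assumptions for illustration, not the paper's actual search or models.

```python
import numpy as np

def stage1_select_perturbation(candidates, velocities, pred_error):
    """Stage 1: pick the state perturbation whose average prediction error,
    taken over several victim velocities, is largest (velocity-insensitive)."""
    return max(candidates,
               key=lambda d: np.mean([pred_error(d, v) for v in velocities]))

def stage2_match_location(target_delta, locations, induced_delta):
    """Stage 2: pick the feasible physical object location whose induced
    perception perturbation is closest to the target from stage 1."""
    return min(locations,
               key=lambda loc: np.linalg.norm(induced_delta(loc) - target_delta))

# Toy usage with placeholder callables standing in for the prediction model
# and the perception-under-attack measurements.
candidates = [np.array([dx, dy]) for dx in (-1.0, 0.0, 1.0) for dy in (-1.0, 0.0, 1.0)]
velocities = [5.0, 10.0, 15.0]
pred_error = lambda delta, v: float(np.linalg.norm(delta)) * v        # placeholder
induced_delta = lambda loc: np.array([loc[0] * 0.1, loc[1] * 0.1])    # placeholder
locations = [(3.0, 0.0), (0.0, 3.0), (10.0, 10.0)]

best_delta = stage1_select_perturbation(candidates, velocities, pred_error)
best_loc = stage2_match_location(best_delta, locations, induced_delta)
```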

[MobiCom'24] Malicious Attacks against Multi-Sensor Fusion in Autonomous Driving

Yi Zhu, Chenglin Miao, Hongfei Xue, Yunnan Yu, Lu Su, and Chunming Qiao. The 30th Annual International Conference on Mobile Computing and Networking.

PDF
Multi-sensor fusion has been widely used by autonomous vehicles (AVs) to integrate the perception results from different sensing modalities, including LiDAR, camera, and radar. Despite the rapid development of multi-sensor fusion systems in autonomous driving, their vulnerability to malicious attacks has not been well studied. Although some prior works have studied attacks against the perception systems of AVs, they only consider a single sensing modality or a camera-LiDAR fusion system and thus cannot attack a sensor fusion system based on LiDAR, camera, and radar. To fill this research gap, in this paper we present the first study on the vulnerability of multi-sensor fusion systems that employ LiDAR, camera, and radar. Specifically, we propose a novel attack method that can simultaneously attack all three sensing modalities using a single type of adversarial object. The adversarial object can be easily fabricated at low cost, and the proposed attack can be performed with high stealthiness and flexibility in practice. Extensive experiments based on a real-world AV testbed show that the proposed attack can continuously hide a target vehicle from the perception system of a victim AV using only two small adversarial objects.

[CCS'23] TileMask: A Passive-Reflection-based Attack against mmWave Radar Object Detection in Autonomous Driving

Yi Zhu, Chenglin Miao, Hongfei Xue, Zhengxiong Li, Yunnan Yu, Wenyao Xu, Lu Su, and Chunming Qiao. 2023 ACM SIGSAC Conference on Computer and Communications Security. (Accept Rate: 19.87%)

PDF Video
In autonomous driving, millimeter wave (mmWave) radar has been widely adopted for object detection because of its robustness and reliability under various weather and lighting conditions. For radar object detection, deep neural networks (DNNs) are becoming increasingly important because they are more robust and accurate and can provide rich semantic information about the detected objects. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Although some spoofing attack methods have been proposed to attack the radar sensor by actively transmitting specific signals using special devices, these attacks require sub-nanosecond-level synchronization and are very costly, which limits their practicality. In this paper, we investigate the possibility of using a few adversarial objects to attack DNN-based radar object detection models through passive reflection. These objects can be easily fabricated using 3D printing and metal foils at low cost. By placing them at specific locations on a target vehicle, an attacker can easily fool the victim AV's radar object detection model. The experimental results demonstrate that the attacker can achieve the attack goal using only two adversarial objects and can conceal them as car signs.

[NDSS'23] MetaWave: Attacking mmWave Sensing with Meta-material-enhanced Tags

Xingyu Chen, Zhengxiong Li, Baicheng Chen, Yi Zhu, Chris Xiaoxuan Lu, Zhengyu Peng, Feng Lin, Wenyao Xu, Kui Ren, and Chunming Qiao. 2023 Network and Distributed System Security (NDSS) Symposium.

PDF
Millimeter-wave (mmWave) sensing has been applied in many critical applications, serving millions of people around the world. However, it is vulnerable to real-world attacks, which to date rely on expensive, professional radio frequency (RF) modulator-based instruments. In this paper, we propose and design a novel passive mmWave attack, called MetaWave, that uses low-cost and easily obtainable meta-material tags for both vanish and ghost attack types. These meta-material tags are made of commercial off-the-shelf (COTS) materials with customized tag designs for various attack goals, which considerably lowers the bar for attacking mmWave sensing. Specifically, we demonstrate that tags made of ordinary materials can be leveraged to precisely tamper with the mmWave echo signal and spoof range, angle, and speed measurements. We evaluate MetaWave in both simulation and real-world experiments (20 different environments) with various attack settings. Experimental results demonstrate that MetaWave can achieve up to 97% Top-1 attack accuracy on range estimation, 96% on angle estimation, and 91% on speed estimation in practice, while being 10–100× cheaper than existing mmWave attack methods.

[SenSys'22] Towards Backdoor Attacks against LiDAR Object Detection in Autonomous Driving

Yan Zhang*, Yi Zhu* (equal contribution), Zihao Liu, Chenglin Miao, Foad Hajiaghajani, Lu Su, and Chunming Qiao. 2022 ACM Conference on Embedded Networked Sensor Systems.

PDF
Due to the great advantage of LiDAR sensors in perceiving complex driving environments, LiDAR-based 3D object detection has recently drawn significant attention in autonomous driving. Although many advanced LiDAR object detection models have been developed, their designs are mainly based on deep learning approaches, which are usually data-hungry and expensive to train. Thus, it is common to collect training data from different sources or outsource the training work to a third party. However, these practices provide opportunities for backdoor attacks, where the attacker aims to inject a hidden trigger pattern into the victim detection model by poisoning its training set. Although backdoor attacks have posed serious security concerns, the vulnerability of LiDAR object detection to such attacks has not yet been studied. In this paper, we present the first study on backdoor attacks against LiDAR object detection in autonomous driving. We propose a novel backdoor attack strategy based on which the attacker can achieve the attack goal by poisoning a small number of point cloud samples. The proposed attack strategy is physically realizable and allows the attacker to easily perform the attack using some common objects as the triggers.
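For context on what "poisoning a small number of point cloud samples" means mechanically, here is a generic data-poisoning sketch: a fixed trigger point pattern is appended to a small fraction of training clouds, which are then relabeled to the attacker's target. The trigger pattern, poison rate, and labeling scheme are illustrative assumptions and do not capture the paper's physically realizable trigger design.

```python
import numpy as np

def poison_dataset(samples, labels, trigger_points, target_label,
                   poison_rate=0.02, seed=0):
    """Append a fixed trigger point pattern to a small fraction of the
    training point clouds and switch their labels to the attacker's target."""
    rng = np.random.default_rng(seed)
    poisoned, new_labels = list(samples), list(labels)
    n_poison = max(1, int(poison_rate * len(samples)))
    for i in rng.choice(len(samples), size=n_poison, replace=False):
        poisoned[i] = np.vstack([samples[i], trigger_points])
        new_labels[i] = target_label
    return poisoned, new_labels

# Toy usage: 100 random point clouds, a small cluster of points as the trigger.
clouds = [np.random.rand(500, 3) for _ in range(100)]
labels = [0] * 100
trigger = np.random.rand(20, 3) * 0.2
poisoned_clouds, poisoned_labels = poison_dataset(clouds, labels, trigger, target_label=1)
```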

[SenSys'21] Adversarial Attacks against LiDAR Semantic Segmentation in Autonomous Driving

Yi Zhu, Chenglin Miao, Foad Hajiaghajani, Mengdi Huai, Lu Su, and Chunming Qiao. 2021 ACM Conference on Embedded Networked Sensor Systems. (Accept Rate: 17.9%)

PDF Video
Today, most autonomous vehicles (AVs) rely on LiDAR perception to acquire accurate information about their immediate surroundings. In LiDAR-based perception systems, semantic segmentation plays a critical role: it divides LiDAR point clouds into meaningful regions aligned with human perception and provides AVs with a semantic understanding of the driving environment. However, an implicit assumption behind existing semantic segmentation models is that they operate in a reliable and secure environment, which may not be true in practice. In this paper, we investigate adversarial attacks against LiDAR semantic segmentation in autonomous driving. Specifically, we propose a novel adversarial attack framework with which the attacker can easily fool LiDAR semantic segmentation by placing simple objects (e.g., cardboard and road signs) at certain locations in the physical space. We conduct extensive real-world experiments to evaluate the performance of the proposed attack framework. The experimental results show that our attack can achieve more than a 90% success rate in real-world driving environments.

[CCS'21] Can We Use Arbitrary Objects to Attack LiDAR Perception in Autonomous Driving?

Yi Zhu, Chenglin Miao, Tianhang Zheng, Foad Hajiaghajani, Lu Su, and Chunming Qiao. 2021 ACM SIGSAC Conference on Computer and Communications Security. (Accept Rate: 22.3%)

PDF Video
As an effective way to acquire accurate information about the driving environment, LiDAR perception has been widely adopted in autonomous driving. State-of-the-art LiDAR perception systems mainly rely on deep neural networks (DNNs) to achieve good performance. However, DNNs have been demonstrated to be vulnerable to adversarial attacks. In this paper, we investigate an easier way to perform effective adversarial attacks with high flexibility and good stealthiness against LiDAR perception in autonomous driving. Specifically, we propose a novel attack framework with which the attacker can identify a few adversarial locations in the physical space. By placing arbitrary objects with reflective surfaces near these locations, the attacker can easily fool LiDAR perception systems. Extensive experiments show that the proposed attack can achieve a success rate of more than 90%. In addition, our real-world study demonstrates that the attack can be easily carried out using only two commercial drones.
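To illustrate the kind of search that "identifying a few adversarial locations" implies, one can rank candidate placements by how much inserting an object there lowers the detector's score on the clean scene. The renderer, detector, and scoring below are placeholder callables for illustration only, not the framework from the paper.

```python
import numpy as np

def rank_adversarial_locations(candidates, attack_scene, detect_score, baseline):
    """Rank candidate object locations by how much placing an object there
    lowers the detector's score relative to the clean scene."""
    drops = [(baseline - detect_score(attack_scene(loc)), loc) for loc in candidates]
    return [loc for drop, loc in sorted(drops, key=lambda t: t[0], reverse=True)]

# Toy usage with placeholder callables standing in for the point-cloud
# renderer and the LiDAR detector.
candidates = [(2.0, 1.0), (5.0, 0.0), (8.0, -1.0)]
attack_scene = lambda loc: loc                                 # placeholder scene
detect_score = lambda scene: 1.0 / (1.0 + np.hypot(*scene))    # placeholder detector
ranked = rank_adversarial_locations(candidates, attack_scene, detect_score, baseline=1.0)
```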

Generative AI for Smart Manufacturing

IntelliMake Autonomous Factory, in collaboration with Dr. Ratna Babu Chinnam

Building a generative AI-driven manufacturing system where planning agents coordinate robotics, edge controls, live telemetry, scheduling, and inline quality checks to autonomously move production from raw materials to packed goods.
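As a sketch of the intended control flow only, the toy loop below has a planning agent choose the next production step from live telemetry, dispatch it to the matching controller, and record the inline quality result. All component names and interfaces here are hypothetical, not the actual IntelliMake design.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WorkOrder:
    order_id: str
    steps: List[str]          # e.g. ["cut", "assemble", "pack"]

def run_order(order: WorkOrder,
              plan_next: Callable[[List[str], Dict], str],
              controllers: Dict[str, Callable[[], Dict]],
              telemetry: Dict) -> Dict:
    """Planning-agent loop: choose the next step from live telemetry,
    dispatch it to the matching robot/edge controller, and record the
    inline quality result before moving on."""
    remaining = list(order.steps)
    while remaining:
        step = plan_next(remaining, telemetry)   # agent decision (e.g., LLM planner)
        result = controllers[step]()             # robot / edge controller call
        telemetry[step] = result                 # live telemetry update
        remaining.remove(step)                   # a real system would branch on
                                                 # result["quality_ok"] and retry
    return telemetry

# Toy usage with placeholder controllers and a trivial planner.
order = WorkOrder("A-001", ["cut", "assemble", "pack"])
controllers = {s: (lambda s=s: {"step": s, "quality_ok": True}) for s in order.steps}
plan_next = lambda remaining, telemetry: remaining[0]
print(run_order(order, plan_next, controllers, telemetry={}))
```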

Multi-Modal and Collaborative Sensing

[BigData'24] Towards Robust mmWave-based Human Activity Recognition using Large Simulated Dataset for Model Pretraining

Vinay Joshi (Undergraduate Student), Shengkai Xu, Qiming Cao, Yi Zhu, Pu Wang, and Hongfei Xue. 2024 IEEE International Conference on Big Data.

PDF

[TOSN] Driver Behavior-aware Parking Availability Crowdsensing System Using Truth Discovery

Yi Zhu, Abhishek Gupta, Shaohan Hu, Weida Zhong, Lu Su, and Chunming Qiao. ACM Transactions on Sensor Networks.

PDF
Spot-level parking availability information (the availability of each spot in a parking lot) is in great demand, as it can help reduce the time and energy wasted searching for a parking spot. In this article, we propose a crowdsensing system called SpotE that can provide spot-level availability in a parking lot using drivers' smartphone sensors. SpotE only requires the sensor data from drivers' smartphones, which avoids the high cost of installing additional sensors and enables large-scale outdoor deployment. We propose a new model that uses the parking search trajectory and final destination of a single driver in a parking lot to generate a probability profile containing the probability of each spot being occupied. A novel aggregation approach, SpotE-TD, is then proposed based on truth discovery techniques to handle the varying quality of information across vehicles. Results show that SpotE-TD can efficiently provide spot-level parking availability information with 20% higher accuracy than the state of the art.
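Since the aggregation builds on truth discovery, a generic CRH-style iteration conveys the underlying idea, though it is a textbook formulation rather than necessarily SpotE-TD's exact update: drivers whose probability profiles stay closer to the current aggregate receive higher weights, and the aggregate is recomputed as a weighted average.

```python
import numpy as np

def truth_discovery(reports, n_iters=10, eps=1e-6):
    """Generic CRH-style truth discovery over crowdsensed reports.

    reports: (n_drivers, n_spots) array; each row is one driver's estimated
             probability that each spot is occupied.
    Returns the aggregated occupancy estimates and per-driver weights.
    """
    truths = reports.mean(axis=0)
    for _ in range(n_iters):
        # Drivers whose reports are closer to the current estimate get higher weight.
        errors = ((reports - truths) ** 2).sum(axis=1) + eps
        weights = np.log(errors.sum() / errors)
        truths = (weights[:, None] * reports).sum(axis=0) / weights.sum()
    return truths, weights

# Toy usage: three drivers reporting on four parking spots.
reports = np.array([
    [0.9, 0.1, 0.8, 0.2],
    [0.8, 0.2, 0.9, 0.1],
    [0.5, 0.5, 0.5, 0.5],   # low-quality report gets down-weighted
])
truths, weights = truth_discovery(reports)
```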