Physical Interference Attacks on Autonomous Driving


Chuanxiang Bi
Jian Qu

Abstract

Recent studies have revealed serious security risks in autonomous driving, despite the notable advances deep neural networks have made in this field. Although recent work has proposed physical attacks that successfully jam and mislead autonomous driving recognition in the real world, simple sticker-based jamming has received little experimental validation. This study examines the practicality of several sticker-based physical jammers: background noise, colorful stickers, smiley-face stickers, and QR code stickers. To improve the objectivity of the study, we substitute a smart car that performs similar tasks for a genuine self-driving car. We then train three models on our dataset and carry out five sets of tests. The results show that the QR code sticker has the greatest potential to interfere with the smart car, reducing its road-sign recognition accuracy to between 30% and 40%, whereas accuracy under the other interferences remains above 50%. Furthermore, among the three models, ResNet18 showed the best anti-interference capability.

Article Details

How to Cite
Bi, C., & Qu, J. (2025). Physical Interference Attacks on Autonomous Driving. INTERNATIONAL SCIENTIFIC JOURNAL OF ENGINEERING AND TECHNOLOGY (ISJET), 9(2), 64–72. Retrieved from https://ph02.tci-thaijo.org/index.php/isjet/article/view/252635
Section
Research Article
Author Biographies

Chuanxiang Bi, Faculty of Engineering and Technology, Panyapiwat Institute of Management, Nonthaburi, Thailand

Chuanxiang Bi is currently pursuing a Master of Engineering Technology at the Faculty of Engineering and Technology, Panyapiwat Institute of Management, Thailand. He received a B.B.A. from Nanjing Tech University Pujiang Institute, China, in 2022. His research interests are artificial intelligence, image processing, and autonomous driving.

Jian Qu, Faculty of Engineering and Technology, Panyapiwat Institute of Management, Nonthaburi, Thailand

Jian Qu is an Assistant Professor at the Faculty of Engineering and Technology, Panyapiwat Institute of Management. He received a Ph.D. with an Outstanding Performance award from the Japan Advanced Institute of Science and Technology in 2013. He received a B.B.A. with Summa Cum Laude honors from the Institute of International Studies of Ramkhamhaeng University, Thailand, in 2006, and an M.S.I.T. from the Sirindhorn International Institute of Technology, Thammasat University, Thailand, in 2010. He has been serving on a house committee for the Thai SuperAI project since 2020. His research interests are natural language processing, intelligent algorithms, machine learning, machine translation, information retrieval, image processing, and autonomous driving.
