Physical Interference Attacks on Autonomous Driving
Abstract
Despite the notable advances that deep neural networks have brought to autonomous driving, recent studies have revealed serious security risks in this field. Although recently proposed physical attacks have successfully implemented real-world jamming that misleads autonomous driving recognition, simple sticker-based jamming has received little experimental validation. This study examines the practicality of several sticker-based physical jammers: background noise, colorful stickers, smiley-face stickers, and QR code stickers. To make the study more objective in practice, we replace a genuine self-driving car with a smart car that performs similar tasks. We then train three models on our dataset and carry out five sets of experiments. The results show that the QR code sticker interferes with the smart car most strongly, reducing its road-sign recognition accuracy to between 30% and 40%, whereas accuracy under the other interferences remains above 50%. Furthermore, of the three models, ResNet18 shows the best anti-interference capability.
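
As a rough illustration of the evaluation described above, the following is a minimal sketch (not the authors' code) of measuring road-sign recognition accuracy with and without a sticker overlaid on each image, using a PyTorch ResNet18. The dataset folder ("signs/"), sticker file ("qr_sticker.png"), number of classes, and fine-tuned weights path are all illustrative assumptions.

```python
# Minimal sketch: road-sign recognition accuracy with and without a QR-code
# sticker pasted onto each test image. All file names, the class count, and
# the fine-tuned checkpoint are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from PIL import Image

NUM_CLASSES = 4  # assumed number of road-sign classes
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def paste_sticker(img: Image.Image, sticker: Image.Image, frac: float = 0.3) -> Image.Image:
    """Paste a square sticker covering `frac` of the image width at the centre."""
    img = img.copy()
    side = int(img.width * frac)
    patch = sticker.resize((side, side))
    img.paste(patch, ((img.width - side) // 2, (img.height - side) // 2))
    return img

def accuracy(model: nn.Module, dataset: datasets.ImageFolder, sticker=None) -> float:
    """Top-1 accuracy over the dataset, optionally with a sticker overlay."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for path, label in dataset.samples:
            img = Image.open(path).convert("RGB")
            if sticker is not None:
                img = paste_sticker(img, sticker)
            x = preprocess(img).unsqueeze(0).to(DEVICE)
            pred = model(x).argmax(dim=1).item()
            correct += int(pred == label)
    return correct / len(dataset.samples)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # sign-class head
# model.load_state_dict(torch.load("resnet18_signs.pt"))   # fine-tuned weights (assumed)
model.to(DEVICE)

signs = datasets.ImageFolder("signs/")                      # assumed layout: signs/<class>/*.jpg
qr = Image.open("qr_sticker.png").convert("RGB")            # assumed sticker image

print("clean accuracy:     ", accuracy(model, signs))
print("QR-sticker accuracy:", accuracy(model, signs, sticker=qr))
```

In this sketch, a lower accuracy on the sticker-overlaid images relative to the clean images would correspond to the interference effect reported in the abstract; the same comparison could be repeated with the other sticker types.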