2024 Symposium Posters


XAI-ADS: An Explainable Artificial Intelligence Framework for Enhancing Anomaly Detection in Autonomous Driving Systems



Primary Investigator:
Research Independent

Project Members
Sazid Nazat, Lingxi Li, Mustafa Abdallah (PI)
Abstract
The advent of autonomous driving systems has given rise to pressing cybersecurity issues regarding the vulnerability of vehicular ad hoc networks (VANETs) to potential attacks. This critical security problem necessitates the application of artificial intelligence (AI) models for anomaly detection in VANETs of autonomous vehicles (AVs). However, the lack of explainability of such AI-based anomaly detection models presents challenges. This motivates an emerging research direction of utilizing explainable AI (XAI) techniques to elucidate the behaviors of anomaly detection models in AV networks. In this work, we propose an end-to-end XAI framework to interpret and visualize the anomaly detection classifications made by AI models securing VANETs. We evaluate the framework on two real-world autonomous driving datasets. The framework furnishes both global and local explanations for the black-box AI models using two XAI methods. Moreover, we introduce two novel feature selection techniques to identify the salient features contributing to anomaly detection, derived from the popular SHAP XAI method and the accuracy of six different black-box AI models. We compare our proposed feature selection approaches with six state-of-the-art feature selection techniques (including two wrapper-based methods), demonstrating superior performance on various evaluation metrics. To show that our feature selection methods generalize, we evaluate them with three independent classifiers. The novel feature selection methods effectively distill the most explanatory features, enhancing model interpretability. Finally, we assess the efficiency (how quickly the XAI models can yield explanatory findings) of each of the six black-box AI models on our two datasets, identifying the most efficient model. By furnishing explanations and visualizations of anomaly detection by AI models, our XAI framework can help enable trust and transparency in securing vehicular networks.
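
To illustrate the core idea of SHAP-driven feature selection described above, the following is a minimal sketch, not the authors' exact pipeline: it ranks features by mean absolute SHAP value from one black-box model and then validates the top-k subset with an independent classifier. The synthetic dataset, the RandomForest/LogisticRegression choices, and the k = 10 cutoff are illustrative assumptions, standing in for a labeled VANET traffic dataset and the six models used in the poster.

```python
# Sketch: SHAP-based global feature selection for anomaly detection.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Stand-in for a labeled VANET traffic dataset (benign vs. anomalous).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model whose decisions we want to explain.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global explanation: mean |SHAP value| per feature over the test set.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
if isinstance(sv, list):              # older SHAP versions: one array per class
    abs_sv = np.mean([np.abs(a) for a in sv], axis=0)
else:
    abs_sv = np.abs(sv)
    if abs_sv.ndim == 3:              # newer SHAP versions: (samples, features, classes)
        abs_sv = abs_sv.mean(axis=2)
importance = abs_sv.mean(axis=0)      # global importance score per feature

# Feature selection: keep the k features with the highest SHAP importance,
# then validate them with an independent classifier trained on that subset.
k = 10
top_k = np.argsort(importance)[::-1][:k]
validator = LogisticRegression(max_iter=1000)
validator.fit(X_train[:, top_k], y_train)
print("F1 on top-%d SHAP features: %.3f"
      % (k, f1_score(y_test, validator.predict(X_test[:, top_k]))))
```

Validating the reduced feature set with a classifier other than the one that produced the SHAP values mirrors the poster's use of independent classifiers to check that the selected features are informative in their own right rather than tuned to a single model.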