Explainable AI for Intrusion Detection Systems: A Model Development and Experts’ Evaluation

Authors: Durojaye, H. and Naiseh, M.

Journal: Lecture Notes in Networks and Systems

Volume: 1066 LNNS

Pages: 301-318

eISSN: 2367-3389

ISSN: 2367-3370

DOI: 10.1007/978-3-031-66428-1_18

Abstract:

This study sought to develop a transparent machine learning model for network intrusion detection that domain experts would trust for security decision-making. Intrusion detection systems using machine learning have shown promise but often lack interpretability, undermining user trust and deployment. A hybrid Random Forest/XGBoost classifier achieved over 99% accuracy and F1 score, outperforming results reported in previous literature. Post-hoc LIME explanations provided transparency into feature effects. Nine domain experts in technical roles then evaluated the model’s reliability, explainability, and trustworthiness through a standardised process. While over half found the model reliable, one-third expressed uncertainty. Responses on performance explanations and trustworthiness assessments also varied, suggesting opportunities to strengthen reliability communication and to consolidate diverse perspectives. To optimise user confidence and support model deployment, refinements targeting consistent explainability across audiences were proposed. Overall, high predictive performance validated the model’s effectiveness, but the varied viewpoints from the evaluation indicated a need to bolster reliability and trust explanations. With continued iterative evaluation and enhancement, this research framework holds promise for developing interpretable machine learning solutions trusted for complex security decision-making.
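
For readers unfamiliar with the pipeline the abstract describes, the following is a minimal, hypothetical Python sketch of how a hybrid Random Forest/XGBoost classifier could be paired with post-hoc LIME explanations. The soft-voting ensemble, the synthetic stand-in data, and the feature names are illustrative assumptions only; the paper’s actual dataset, preprocessing, and hybridisation strategy may differ.

# Minimal sketch (not the authors' code): a hybrid Random Forest + XGBoost
# classifier combined by soft voting, with post-hoc LIME explanations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical stand-in for labelled network traffic (0 = benign, 1 = attack);
# the paper's real intrusion-detection dataset would be used instead.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=42)
feature_names = [f"flow_feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

# Hybrid ensemble: Random Forest and XGBoost combined via soft voting
# (one plausible way to hybridise the two classifiers).
hybrid = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=42)),
    ],
    voting="soft",
)
hybrid.fit(X_train, y_train)

# Predictive performance, mirroring the accuracy and F1 metrics reported in the abstract.
y_pred = hybrid.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("F1 score:", f1_score(y_test, y_pred))

# Post-hoc LIME explanation of a single prediction: local feature effects.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["benign", "attack"],
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], hybrid.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")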

https://eprints.bournemouth.ac.uk/40308/

Source: Scopus

Explainable AI for intrusion detection systems: A model development and experts’ evaluation

Authors: Durojaye, H. and Naiseh, M.

Conference: IntelliSys 2024

Volume: 1066 LNNS

Pages: 301-318

Publisher: Springer

ISSN: 2367-3370

https://eprints.bournemouth.ac.uk/40308/

https://saiconference.com/IntelliSys

Source: BURO EPrints