Basheer, Nihala and Pranggono, Bernardi and Islam, Shareeful and Papastergiou, Spyridon and Mouratidis, Haralambos (2024) Enhancing Malware Detection Through Machine Learning Using XAI with SHAP Framework. In: AIAI 2024 IFIP International Conference on Artificial Intelligence Applications and Innovations, 2024-06-27 - 2024-06-30, Corfu, Greece.
Abstract
Malware represents a significant cyber threat that can disrupt virtually any activity within an organization, so effective proactive detection methods are needed to minimize the associated risks. This task is challenging, however, due to the ever-growing volume of malware data and the continuously evolving techniques employed by malicious actors. In this context, machine learning (ML) models offer a promising approach to identifying key malware features and enabling accurate detection, and they have recently gained widespread attention from both academia and industry. Despite their effectiveness, current research on ML models for malware detection often lacks explanations for the selection of key features. This opacity can complicate the understanding of model outputs, errors, and decision-making processes. To address this challenge, this research uses Explainable AI (XAI), particularly the SHAP framework, to enhance transparency and interpretability. By providing detailed insights into how each feature contributes to the model's conclusions, the approach also improves the model's accountability. An experiment was conducted to demonstrate the applicability of the proposed method, beginning with the training of the chosen ML models (Random Forest, AdaBoost, Support Vector Machine, and Artificial Neural Network) for malware detection, and concluding with an explanation of the decision-making process using XAI techniques. The results showed high accuracy in malware detection, along with comprehensive explanations of feature contributions that justify the outputs produced by the models.
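The core idea behind SHAP, as applied in the abstract above, is to attribute a model's prediction to its input features via Shapley values: each feature's value is its average marginal contribution across all feature coalitions. The sketch below illustrates this with an exact Shapley computation on a toy linear scorer; the feature names and weights are purely hypothetical and not from the paper (which uses the `shap` library against trained classifiers), and absent features are replaced with a baseline value, as in common SHAP variants.

```python
from itertools import combinations
from math import factorial

# Toy linear "malware scorer" over three hypothetical features
# (e.g. entropy, import count, section count -- illustrative only).
WEIGHTS = [0.6, 0.3, 0.1]

def predict(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def shapley_values(x, baseline):
    """Exact Shapley values: the weighted average marginal contribution
    of each feature over all coalitions of the remaining features,
    with features outside the coalition set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

baseline = [0.0, 0.0, 0.0]
sample = [1.0, 2.0, 3.0]
phi = shapley_values(sample, baseline)
# For a linear model, phi[i] reduces to WEIGHTS[i] * (sample[i] - baseline[i]),
# and the attributions sum to predict(sample) - predict(baseline).
```

For nonlinear models such as Random Forest or an ANN this exact enumeration is exponential in the number of features, which is why practical tools like the `shap` library use model-specific approximations (e.g. TreeSHAP, KernelSHAP).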
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Uncontrolled Keywords: | Explainable Artificial Intelligence; Cyber Security; SHAP; Malware Detection; Artificial Neural Network; Random Forest |
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
Date Deposited: | 02 Oct 2024 13:11 |
Last Modified: | 30 Oct 2024 21:24 |
URI: | http://repository.essex.ac.uk/id/eprint/39174 |
Available files
Filename: Paper ID 24_MalwareAIAI_Final.pdf
Embargo Date: 21 June 2025