Basheer, Nihala and Islam, Shareeful and Papastergiou, Spyridon and Mouratidis, Haralambos and Papagiannopoulos, Nikolaos (2025) Vulnerability Patch Prediction Using LLM Based Bert Model with Trustworthy AI Practice for Cyber Security Enhancement. In: 21st IFIP WG 12.5 International Conference, AIAI 2025, 2025-06-26 - 2025-06-29, Limassol, Cyprus.
Abstract
Regular patching of security vulnerabilities is crucial for an organization to mitigate their potential exploitation in cyber-attacks. Despite their importance, timely updates are not always guaranteed, and many vulnerabilities remain unpatched for extended periods, increasing the security risks to organizations. Organizations generally apply patches manually, which delays the mitigation of potential exploitation and requires substantial effort and resources. In this context, we propose a novel approach that uses a Large Language Model (LLM)-based CodeBERT model to predict the availability of an update or patch relevant to the vulnerabilities. The approach adopts key trustworthy AI characteristics, including bias mitigation and explainability, to operationalize trustworthy AI practice for the LLM-based CodeBERT model. The work has been evaluated on a real-world use case scenario from Athens International Airport to demonstrate the applicability of the approach, using a test environment that emulates the airport's critical operating systems. Assets from key systems such as flight information display and access control have been considered and linked with vulnerabilities. The results of the study show that an update is predicted for key vulnerabilities such as CVE-2017-8464 and CVE-2020-1472, which are linked with the Windows 7-based access control system and the Oracle-based AODB database server of the use case scenario, respectively. Model explainability is also improved through feature importance using SHAP and correlation analysis using a heatmap. The key features for the model's decision making are exploitability_score, epss, and attack_complexity. Trustworthy AI practice is further operationalized through bias-mitigation techniques such as class balancing and equalized odds to ensure fair and balanced training of the model.
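The abstract names exploitability_score, epss, and attack_complexity as the features driving the model's decisions, identified via SHAP. As an illustration of what such a feature-importance check looks like, the following is a minimal, dependency-free sketch using permutation importance as a stand-in for SHAP; the feature names follow the paper, but the toy dataset and surrogate model are assumptions, not the authors' CodeBERT pipeline.

```python
# Sketch: permutation feature importance for a patch-availability classifier.
# The dataset and scoring model below are illustrative stand-ins; only the
# feature names (exploitability_score, epss, attack_complexity) come from
# the paper. SHAP would attribute per-prediction contributions instead of
# the global accuracy-drop measure used here.
import random

random.seed(0)
FEATURES = ["exploitability_score", "epss", "attack_complexity"]

def make_dataset(n=200):
    """Generate toy vulnerability records and patch-availability labels."""
    rows, labels = [], []
    for _ in range(n):
        x = {
            "exploitability_score": random.uniform(0, 10),  # CVSS-like 0..10
            "epss": random.random(),                         # probability 0..1
            "attack_complexity": random.choice([0.0, 1.0]),  # low / high
        }
        # Illustrative ground truth: availability depends on the first two.
        y = 1 if x["exploitability_score"] + 2 * x["epss"] > 6 else 0
        rows.append(x)
        labels.append(y)
    return rows, labels

def predict(x):
    """Toy surrogate model: thresholded linear score."""
    return 1 if x["exploitability_score"] + 2 * x["epss"] > 6 else 0

def accuracy(rows, labels):
    return sum(predict(x) == y for x, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature):
    """Importance = accuracy drop when one feature's values are shuffled."""
    base = accuracy(rows, labels)
    shuffled = [x[feature] for x in rows]
    random.shuffle(shuffled)
    perturbed = [{**x, feature: v} for x, v in zip(rows, shuffled)]
    return base - accuracy(perturbed, labels)

rows, labels = make_dataset()
importances = {f: permutation_importance(rows, labels, f) for f in FEATURES}
```

In this toy setup the surrogate model ignores attack_complexity, so its importance comes out near zero, while shuffling exploitability_score visibly degrades accuracy; a real SHAP analysis over the trained CodeBERT model would rank features by per-prediction attribution instead.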
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Uncontrolled Keywords: | Asset; Bias; CodeBERT; Cybersecurity; Explainable AI; Large Language Model; Patch; Trustworthy AI; Vulnerability |
| Divisions: | Faculty of Science and Health Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 31 Mar 2026 10:13 |
| Last Modified: | 31 Mar 2026 10:13 |
| URI: | http://repository.essex.ac.uk/id/eprint/42402 |