Ahmed, Shafiq and Obaidat, Mohammad S and ANISI, Mohammad Hossein and Mahmood, Khalid (2025) Trust-Aware Reinforcement Selection for Robust Federated Learning under Adaptive Adversaries. In: 2025 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI), 2025-10-15 - 2025-10-17, Hangzhou, China.
Abstract
Federated learning (FL) has emerged as a promising framework for privacy-preserving collaborative training, yet the presence of Byzantine clients poses a critical challenge for robust aggregation. Existing defenses such as FedAvg, Krum, Trimmed Mean/Median, FLTrust, and SARA exhibit significant performance drops in dynamic or mixed-attack environments. In this paper, we propose TARS, a Trust-Aware Reinforcement Selector for robust FL aggregation under adversarial and non-IID conditions. TARS leverages a trust-regularized Q-learning strategy to dynamically select the optimal aggregation rule in each round, accounting for both model performance and trustworthiness signals. Experimental results on MNIST and CIFAR-10 with 20% Byzantine clients demonstrate that TARS consistently outperforms all baselines, achieving a final test accuracy of 97.7% on MNIST and 80.5% on CIFAR-10, surpassing FLTrust and SARA by at least 2.5% and 3.6%, respectively, as shown in Table IV. TARS also achieves the highest optimal rule selection rate (93.6% on MNIST, 89.8% on CIFAR-10), robust convergence, and resilience against both label-flipping and Gaussian attacks. These results establish TARS as a mathematically principled and empirically validated solution for trustworthy federated learning in adversarial settings.
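The abstract describes a selector that, in each FL round, uses trust-regularized Q-learning to choose among candidate robust aggregation rules. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's TARS implementation: the reward shaping (accuracy-gain proxy plus a trust term), the distance-based trust score, the epsilon-greedy schedule, and the specific rule implementations are all illustrative choices.

```python
# Minimal sketch of a trust-regularized Q-learning selector over FL aggregation rules.
# Assumptions (not from the paper): reward = accuracy-gain proxy + trust_weight * trust,
# trust = fraction of client updates close to the aggregate, epsilon-greedy selection.
import numpy as np

def agg_mean(updates):
    return updates.mean(axis=0)

def agg_median(updates):
    return np.median(updates, axis=0)

def agg_trimmed_mean(updates, trim=0.2):
    k = int(trim * len(updates))
    srt = np.sort(updates, axis=0)
    return srt[k:len(updates) - k].mean(axis=0)

def agg_krum(updates):
    # Pick the single update closest (in summed squared distance) to its nearest neighbours.
    n = len(updates)
    d = ((updates[:, None, :] - updates[None, :, :]) ** 2).sum(-1)
    scores = np.sort(d, axis=1)[:, 1:max(2, n - 2)].sum(axis=1)
    return updates[int(scores.argmin())]

RULES = [agg_mean, agg_median, agg_trimmed_mean, agg_krum]

class TrustAwareSelector:
    """Epsilon-greedy Q-learning over aggregation rules, regularised by a trust signal."""
    def __init__(self, n_rules, lr=0.1, eps=0.2, trust_weight=0.5):
        self.q = np.zeros(n_rules)
        self.lr, self.eps, self.trust_weight = lr, eps, trust_weight

    def select(self, rng):
        if rng.random() < self.eps:            # explore
            return int(rng.integers(len(self.q)))
        return int(self.q.argmax())            # exploit

    def update(self, action, acc_gain, trust):
        # Reward mixes a validation-accuracy gain proxy with a trust score in [0, 1].
        reward = acc_gain + self.trust_weight * trust
        self.q[action] += self.lr * (reward - self.q[action])

def trust_score(updates, aggregate):
    # Fraction of clients whose update stays within twice the median distance of the aggregate.
    dists = np.linalg.norm(updates - aggregate, axis=1)
    return float((dists < 2 * np.median(dists)).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_clients, n_byz = 10, 20, 4           # 20% Byzantine clients, as in the abstract
    true_grad = rng.normal(size=dim)
    selector = TrustAwareSelector(len(RULES))
    for rnd in range(50):
        honest = true_grad + 0.1 * rng.normal(size=(n_clients - n_byz, dim))
        byz = rng.normal(loc=5.0, scale=3.0, size=(n_byz, dim))   # Gaussian attack
        updates = np.vstack([honest, byz])
        a = selector.select(rng)
        agg = RULES[a](updates)
        acc_gain = -np.linalg.norm(agg - true_grad) / 10.0        # proxy for accuracy gain
        selector.update(a, acc_gain, trust_score(updates, agg))
    print("learned Q-values:", np.round(selector.q, 3))
    print("preferred rule:", RULES[int(selector.q.argmax())].__name__)
```

Run on its own, the toy loop learns to prefer the outlier-robust rules over plain averaging once the Gaussian attackers are injected; the paper's method additionally conditions this choice on per-round trust signals from real client models.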
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Uncontrolled Keywords: | Federated Learning; Poisoning Attacks; Trust Aware Reinforcement; BRAR; Machine Learning |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZR Rights Retention |
| Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 14 Nov 2025 15:02 |
| Last Modified: | 14 Nov 2025 15:04 |
| URI: | http://repository.essex.ac.uk/id/eprint/41963 |
Available files
Filename: IEEE_CCCI_Conference.pdf
Licence: Creative Commons: Attribution 4.0