Yang, Han and Gu, Dongbing and He, Jianhua (2024) A Robust and Efficient Federated Learning Algorithm Against Adaptive Model Poisoning Attacks. IEEE Internet of Things Journal, 11 (9). pp. 16289-16302. DOI https://doi.org/10.1109/jiot.2024.3351371
Abstract
Owing to their undetectable nature, adaptive model poisoning attacks can be combined with other attacks to bypass detection and compromise the availability of federated learning (FL) systems. Existing defences are vulnerable to adaptive model poisoning attacks, because the attacks are tailored to the model poisoning-related features these defences rely on, and the defences themselves compromise the accuracy of the FL model. We first present a unified reformulation of existing adaptive model poisoning attacks. By analyzing the reformulated attacks, we find that a detector must reduce the attacker's optimization cost function in order to defeat adaptive attacks. However, existing defences do not address the underlying causes: the high dimensionality of model parameters and data heterogeneity. We propose a novel robust FL algorithm, FedDet, to tackle these problems. By splitting local models into layers for robust aggregation, FedDet overcomes the high-dimensionality issue while preserving the functionality of each layer. During robust aggregation, FedDet normalizes every slice of the local models by the median norm value instead of excluding clients, which avoids deviation from the optimal model. Furthermore, we conduct a comprehensive security analysis of FedDet and an existing robust aggregation method, deriving upper bounds on the perturbations introduced by these adaptive attacks. We find that FedDet is more robust than Krum, with a smaller perturbation upper bound under attack. We evaluate FedDet and four baseline methods against these attacks on two classic datasets. The results demonstrate that FedDet significantly outperforms the compared methods against adaptive attacks, achieving 60.72% accuracy against min-max attacks.
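The abstract describes FedDet's core aggregation step: local models are split into layers, and every layer slice is rescaled to the median norm across clients rather than excluding any client. The following is a minimal sketch of that idea, assuming a simple dict-of-arrays model representation; the function name, structure, and details are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of layer-wise median-norm aggregation as described in
# the abstract. Names and data structures are assumptions for illustration.
import numpy as np

def layerwise_median_norm_aggregate(client_models):
    """Aggregate client models (each a dict: layer name -> ndarray).

    Each layer slice is rescaled so its L2 norm equals the median norm of
    that layer across clients, then the rescaled slices are averaged.
    No client is excluded, which avoids deviating from the optimal model.
    """
    aggregated = {}
    for name in client_models[0]:
        slices = [m[name].astype(np.float64) for m in client_models]
        norms = np.array([np.linalg.norm(s) for s in slices])
        median_norm = np.median(norms)
        # Rescale each client's layer slice to the median norm
        # (guarding against zero-norm slices).
        rescaled = [
            s * (median_norm / n) if n > 0 else s
            for s, n in zip(slices, norms)
        ]
        aggregated[name] = np.mean(rescaled, axis=0)
    return aggregated

if __name__ == "__main__":
    # Toy usage: three "clients", each with two layers.
    rng = np.random.default_rng(0)
    clients = [
        {"conv1": rng.normal(size=(3, 3)), "fc": rng.normal(size=(4,))}
        for _ in range(3)
    ]
    global_model = layerwise_median_norm_aggregate(clients)
    print({k: v.shape for k, v in global_model.items()})
```

Normalizing to the median (rather than clipping to a fixed bound or discarding outliers) keeps every client's update in the aggregate while limiting how far a single poisoned layer can pull the global model.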
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Federated Learning; Model poisoning attacks; Deep Learning |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Date Deposited: | 26 Feb 2024 17:58 |
| Last Modified: | 04 May 2024 11:40 |
| URI: | http://repository.essex.ac.uk/id/eprint/37557 |
Available files
Filename: A_Robust_and_Efficient_Federated_Learning_Algorithm_against_Adaptive_Model_Poisoning_Attacks__Revision_version_without_highlighted_final.pdf
Licence: Creative Commons: Attribution 4.0