Yang, Han and Gu, Dongbing and He, Jianhua (2023) DeMAC: Towards detecting model poisoning attacks in federated learning system. Internet of Things, 23. p. 100875. DOI https://doi.org/10.1016/j.iot.2023.100875
Abstract
Federated learning (FL) is an efficient distributed machine learning paradigm for the collaborative training of neural network models by many clients with the assistance of a central server. A key open challenge is that malicious clients can send poisoned model updates to the central server, leaving FL vulnerable to model poisoning attacks. In this paper, we propose a new system named DeMAC to improve the detection of and defence against model poisoning attacks by malicious clients. The main idea behind the new system is based on the observation that, because malicious clients need to reduce the learning loss of the poisoning task, there is a marked increase in the norm of their gradients. We define a metric called GradScore to measure this gradient norm for each client. Experiments show that the GradScores of malicious and benign clients are distinguishable at all training stages, so DeMAC can detect malicious clients by measuring their GradScore. Furthermore, a historical record of contributed global model updates is used to enhance DeMAC so that it can detect malicious behaviour spontaneously, without manual threshold settings. Experimental results on two benchmark datasets show that DeMAC reduces the attack success rate under various attack strategies. In addition, DeMAC can eliminate model poisoning attacks in heterogeneous environments.
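The abstract does not give the exact formula for GradScore, but the idea it describes — scoring each client by the norm of its update and flagging outliers — can be sketched as follows. The function names, the use of the plain L2 norm, and the median-absolute-deviation threshold rule are all assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def grad_score(update):
    """Hypothetical GradScore proxy: L2 norm of a client's flattened
    model update (list of per-layer arrays)."""
    flat = np.concatenate([np.ravel(u) for u in update])
    return float(np.linalg.norm(flat))

def flag_malicious(scores, k=3.0):
    """Flag clients whose score deviates from the round's median by more
    than k median-absolute-deviations. The MAD rule is an assumed stand-in
    for DeMAC's history-based, threshold-free detection."""
    scores = np.asarray(scores, dtype=float)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12  # avoid division by zero
    return np.abs(scores - med) / mad > k

# Example: four benign clients with similar norms and one client whose
# poisoned update inflates its gradient norm.
scores = [1.0, 1.1, 0.9, 1.05, 10.0]
print(flag_malicious(scores))  # only the last client is flagged
```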
Item Type: Article
Uncontrolled Keywords: Deep learning (DL); Federated learning; Backdoor attacks; Model poisoning
Divisions: Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of
SWORD Depositor: Unnamed user with email elements@essex.ac.uk
Depositing User: Unnamed user with email elements@essex.ac.uk
Date Deposited: 31 Jul 2023 14:42
Last Modified: 07 Aug 2024 15:35
URI: http://repository.essex.ac.uk/id/eprint/36043
Available files
Filename: DeMAC__Towards_Detecting_Model_Poisoning_Attacks_in_Federated_Learning_System.pdf
Licence: Creative Commons: Attribution 4.0