Chen, Zhixiong and Yi, Wenqiang and Shin, Hyundong and Nallanathan, Arumugam (2023) Adaptive Model Pruning for Communication and Computation Efficient Wireless Federated Learning. IEEE Transactions on Wireless Communications, 23 (7). pp. 7582-7598. DOI https://doi.org/10.1109/TWC.2023.3342626
Abstract
Most existing wireless federated learning (FL) studies focus on homogeneous model settings where devices train identical local models. In this setting, devices with poor communication and computation capabilities may delay the global model update and degrade the performance of FL. Moreover, in homogeneous model settings, the scale of the global model is restricted by the device with the lowest capability. To tackle these challenges, this work proposes an adaptive model pruning-based FL (AMP-FL) framework, where the edge server dynamically generates sub-models by pruning the global model for devices' local training, adapting to their heterogeneous computation capabilities and time-varying channel conditions. Since involving devices' sub-models of diverse structures in the global model update may harm training convergence, we propose compensating for the gradients of pruned model regions with devices' historical gradients. We then introduce an age of information (AoI) metric to characterize the staleness of local gradients and theoretically analyze the convergence behaviour of AMP-FL. The convergence bound suggests scheduling devices whose gradients have large AoI and pruning, for each device, the model regions with small AoI to improve learning performance. Inspired by this, we define a new objective function, the average AoI of local gradients, to transform the intractable global loss minimization problem into a tractable one for device scheduling, model pruning, and resource block (RB) allocation design. Through detailed analysis, we derive the optimal model pruning strategy and transform the RB allocation problem into an equivalent linear program that can be solved efficiently. Experimental results demonstrate the effectiveness and superiority of the proposed approaches.
The proposed AMP-FL achieves 1.9x and 1.6x speedups on the MNIST and CIFAR-10 datasets, respectively, compared with FL schemes using homogeneous model settings.
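The core idea in the abstract, pruned per-device sub-models whose missing gradient regions are filled in from historical gradients, with a per-coordinate AoI tracking staleness, can be illustrated with a toy NumPy sketch. This is not the paper's algorithm: the random pruning mask, the surrogate `local_gradient` step, and the learning rate are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_devices, rounds = 8, 3, 5

global_model = np.zeros(dim)
hist_grad = np.zeros((num_devices, dim))  # last gradient seen per device/coordinate
aoi = np.zeros((num_devices, dim))        # age of information per device/coordinate

def local_gradient(model):
    # Stand-in for a real local training step on the device's data.
    return model - rng.normal(size=model.shape)

for _ in range(rounds):
    agg = np.zeros(dim)
    for d in range(num_devices):
        # Server "prunes": device d only trains a subset (mask) of coordinates.
        mask = rng.random(dim) < 0.5
        g = local_gradient(global_model)
        # Pruned coordinates are compensated by the device's historical gradient.
        contrib = np.where(mask, g, hist_grad[d])
        agg += contrib
        # Trained coordinates refresh history and reset AoI; pruned ones age.
        hist_grad[d] = np.where(mask, g, hist_grad[d])
        aoi[d] = np.where(mask, 0.0, aoi[d] + 1.0)
    global_model -= 0.1 * agg / num_devices
```

The AoI array makes the scheduling intuition from the convergence bound concrete: coordinates a device has not trained recently accumulate age, so a scheduler would prioritize keeping high-AoI regions unpruned for that device.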
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Device scheduling; federated learning; model pruning; resource management |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 10 Jan 2024 15:55 |
| Last Modified: | 30 Oct 2024 21:33 |
| URI: | http://repository.essex.ac.uk/id/eprint/37150 |
Available files
Filename: Adaptive Model Pruning for Communication and Computation Efficient Wireless Federated Learning.pdf