Giménez, Nil Llisterri and Lee, JunKyu and Freitag, Felix and Vandierendonck, Hans (2024) The Effects of Weight Quantization on Online Federated Learning for the IoT: A Case Study. IEEE Access, 12. pp. 5490-5502. DOI https://doi.org/10.1109/access.2024.3349557
Abstract
Many weight quantization approaches have been explored to save communication bandwidth between clients and the server in federated learning on high-end computing machines. However, there is a lack of weight quantization research for online federated learning on TinyML devices, which are restricted in mini-batch size, neural network size, and communication method by their severe hardware resource constraints and power budgets. We use the term Tiny Online Federated Learning (TinyOFL) to refer to online federated learning on TinyML devices in the Internet of Things (IoT). This paper performs a comprehensive analysis of the effects of weight quantization in TinyOFL in terms of accuracy, stability, overfitting, communication efficiency, energy consumption, and delivery time, and extracts practical guidelines on how to apply weight quantization to TinyOFL. Our analysis is supported by a TinyOFL case study with three Arduino Portenta H7 boards running federated learning clients for a keyword spotting task. Our findings include that TinyOFL tolerates more aggressive weight quantization than online learning without FL, without affecting accuracy, thanks to TinyOFL’s quasi-batch training property. For example, 7-bit weights achieved accuracy equivalent to 32-bit floating-point weights while saving communication bandwidth by 4.6×. Overfitting from increasing network width rarely occurs in TinyOFL, but may occur if strong weight quantization is applied. The experiments also showed that there is a design space for TinyOFL applications in which the accuracy loss due to weight quantization is compensated by an increase in neural network size.
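The bandwidth saving the abstract reports can be illustrated with a generic uniform (linear) weight-quantization sketch. This is a minimal illustration, not the authors' exact scheme: the symmetric per-tensor scaling and the function names below are assumptions for demonstration purposes.

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantize a float32 weight tensor to signed `bits`-bit codes.

    Illustrative sketch only; the paper's actual quantization scheme may differ.
    """
    qmax = 2 ** (bits - 1) - 1           # e.g. 63 for 7-bit signed weights
    scale = np.max(np.abs(w)) / qmax     # one scale factor per tensor
    q = np.round(w / scale).astype(np.int32)  # integer codes sent over the network
    return q, scale

def dequantize_weights(q, scale):
    """Reconstruct approximate float32 weights from integer codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
q, scale = quantize_weights(w, bits=7)
w_hat = dequantize_weights(q, scale)

# Transmitting 7-bit codes instead of 32-bit floats saves bandwidth by
# 32 / 7 ≈ 4.57x, consistent with the roughly 4.6x reported in the abstract.
print(f"compression ratio: {32 / 7:.2f}x")
print(f"max reconstruction error: {np.max(np.abs(w - w_hat)):.4f}")
```

Per-tensor scaling keeps the per-round metadata overhead to a single float, which matters when every transmitted byte counts on a bandwidth-constrained IoT link.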
Item Type: | Article |
---|---|
Uncontrolled Keywords: | TinyML; approximate computing; federated learning |
Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
Depositing User: | Unnamed user with email elements@essex.ac.uk |
Date Deposited: | 24 Jan 2024 12:21 |
Last Modified: | 30 Oct 2024 21:38 |
URI: | http://repository.essex.ac.uk/id/eprint/37521 |
Available files
Filename: The_Effects_of_Weight_Quantization_on_Online_Federated_Learning_for_the_IoT_A_Case_Study.pdf
Licence: Creative Commons: Attribution 4.0