Fayaz, Muhammad and Yi, Wenqiang and Liu, Yuanwei and Thayaparan, Subramaniam and Nallanathan, Arumugam (2024) Toward Autonomous Power Control in Semi-Grant-Free NOMA Systems: A Power Pool-Based Approach. IEEE Transactions on Communications, 72 (6). pp. 3273-3289. DOI https://doi.org/10.1109/TCOMM.2024.3361535
Abstract
In this paper, we design a resource block (RB) oriented power pool (PP) for semi-grant-free non-orthogonal multiple access (SGF-NOMA) in the presence of residual errors resulting from imperfect successive interference cancellation (SIC). In the proposed method, the base station (BS) allocates one orthogonal RB to each grant-based (GB) user, determines the acceptable received power from grant-free (GF) users on that RB, and broadcasts the corresponding threshold. Each GF user, acting as an agent, tries to find the optimal transmit power and RB without affecting the quality-of-service (QoS) and ongoing transmission of the GB user. To this end, we formulate the transmit power and RB allocation problem as a stochastic Markov game to design the desired PPs and maximize the long-term system throughput. The problem is then solved using multi-agent (MA) deep reinforcement learning algorithms, namely double deep Q-networks (DDQN) and Dueling DDQN, chosen for their enhanced value estimation and policy learning, with the latter performing best in environments with large state and action spaces. The agents (GF users) take actions, specifically adjusting power levels and selecting RBs, to maximize cumulative rewards (throughput). Simulation results indicate computational scalability and minimal signaling overhead of the proposed algorithm, with notable gains in system throughput compared to existing SGF-NOMA systems. We examine the effect of SIC error levels on sum rate and user transmit power, revealing a decrease in sum rate and an increase in user transmit power as QoS requirements and error variance escalate. We demonstrate that PPs can benefit new (untrained) users joining the network and outperform conventional SGF-NOMA without PPs in spectral efficiency.
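To illustrate the learning loop the abstract describes, the sketch below uses a tabular double Q-learning agent over a discrete (RB, power level) action set — a deliberately simplified stand-in for the paper's deep networks (DDQN/Dueling DDQN). The power levels, RB count, threshold rule, and reward shape are all illustrative assumptions, not the paper's actual system model.

```python
import random

# Hypothetical discrete action space: (resource block, transmit power level).
POWER_LEVELS = [1, 2, 3, 4]   # assumed discrete power levels
NUM_RBS = 2                   # assumed number of RBs (one GB user each)
ACTIONS = [(rb, p) for rb in range(NUM_RBS) for p in POWER_LEVELS]

class DoubleQAgent:
    """Tabular double Q-learning: two value tables to reduce
    overestimation bias, the idea behind DDQN."""
    def __init__(self, eps=0.1, alpha=0.5, seed=0):
        self.qa = {a: 0.0 for a in ACTIONS}
        self.qb = {a: 0.0 for a in ACTIONS}
        self.eps, self.alpha = eps, alpha
        self.rng = random.Random(seed)

    def act(self):
        # epsilon-greedy exploration over the combined tables
        if self.rng.random() < self.eps:
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.qa[a] + self.qb[a])

    def update(self, action, reward):
        # randomly pick one table to update (double Q-learning rule,
        # stateless/bandit form for brevity)
        table = self.qa if self.rng.random() < 0.5 else self.qb
        table[action] += self.alpha * (reward - table[action])

def reward(action, threshold=3):
    """Toy power-pool rule: throughput grows with transmit power as long
    as the power stays at or below the BS-broadcast threshold; exceeding
    it violates the GB user's QoS and is penalized."""
    rb, power = action
    return float(power) if power <= threshold else -5.0

agent = DoubleQAgent()
for _ in range(500):
    a = agent.act()
    agent.update(a, reward(a))

# The greedy action should settle on the highest power below the threshold.
best_rb, best_power = max(ACTIONS, key=lambda a: agent.qa[a] + agent.qb[a])
print(best_power)
```

In the paper's setting each agent would instead observe channel states and use a neural network over a much larger state space; the tabular form only shows how the threshold-bounded reward steers agents toward admissible power levels.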
Item Type: Article
Uncontrolled Keywords: Distributed power control; Internet of things; multi-agent reinforcement learning; non-orthogonal multiple access; semi-grant-free transmission
Divisions: Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of
SWORD Depositor: Unnamed user with email elements@essex.ac.uk
Depositing User: Unnamed user with email elements@essex.ac.uk
Date Deposited: 09 Feb 2024 16:19
Last Modified: 30 Oct 2024 21:33
URI: http://repository.essex.ac.uk/id/eprint/37642
Available files
Filename: DRL_Assisted_Intelligent_Power_Control_for_IoT_Networks_with_Semi_Grant_Free_NOMA__1_.pdf
Licence: Creative Commons: Attribution 4.0