Goudarzi, Shidrokh and Anisi, Mohammad Hossein and Ahmadi, Hamed and Musavian, Leila (2021) Dynamic Resource Allocation Model for Distribution Operations using SDN. IEEE Internet of Things Journal, 8 (2). pp. 976-988. DOI https://doi.org/10.1109/jiot.2020.3010700
Abstract
In vehicular ad hoc networks, autonomous vehicles generate large amounts of data to support in-vehicle applications, so a platform with large storage and high computation capacity is needed. At the same time, performing this computation for vehicular networks on a cloud platform requires low latency. Edge computing (EC), as a new computing paradigm, has the potential to provide computation services while reducing latency and improving total utility. We propose a three-tier EC framework that sets elastic computing capacity and dynamically routes computation to suitable edge servers for real-time vehicle monitoring. The framework comprises a cloud computation layer, an EC layer, and a device layer. The resource allocation approach is formulated as an optimization problem, and we design a new reinforcement learning (RL) algorithm, assisted by cloud computation, to solve it. By integrating EC with software-defined networking (SDN), this study provides a new software-defined networking edge (SDNE) framework for resource assignment in vehicular networks. The novelty of this work is a multi-agent RL-based approach using experience replay. The proposed algorithm stores users' communication information and the state of the network tracks in real time. Simulation results under various system factors are presented to demonstrate the efficiency of the proposed framework, and we include a real-world case study.
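The abstract describes an RL algorithm with experience replay for assigning vehicle workloads to edge servers. The sketch below is not the paper's implementation; it is a minimal illustration of the general technique, using a tabular Q-learning agent with a replay buffer. The state space, the latency-based reward, and the load-transition model are all hypothetical assumptions introduced purely for illustration.

```python
# Minimal sketch (assumptions only, not the authors' method): tabular Q-learning
# with an experience-replay buffer that assigns vehicle tasks to edge servers.
import random
from collections import deque

import numpy as np

N_STATES = 16          # assumed: discretised network-load levels
N_ACTIONS = 4          # assumed: number of candidate edge servers
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1
BUFFER_SIZE, BATCH_SIZE = 1000, 32

q_table = np.zeros((N_STATES, N_ACTIONS))
replay_buffer = deque(maxlen=BUFFER_SIZE)   # stores (state, action, reward, next_state)


def choose_action(state):
    """Epsilon-greedy choice of an edge server for the current (assumed) network state."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_table[state]))


def step(state, action):
    """Hypothetical environment: reward decreases with the latency of the chosen server."""
    latency = random.uniform(1, 10) + action          # placeholder latency model
    reward = -latency
    next_state = random.randrange(N_STATES)           # placeholder load transition
    return reward, next_state


def replay_update():
    """Sample stored transitions and apply the standard Q-learning update to each."""
    if len(replay_buffer) < BATCH_SIZE:
        return
    for s, a, r, s_next in random.sample(replay_buffer, BATCH_SIZE):
        target = r + GAMMA * np.max(q_table[s_next])
        q_table[s, a] += ALPHA * (target - q_table[s, a])


state = random.randrange(N_STATES)
for _ in range(5000):                                 # training iterations
    action = choose_action(state)
    reward, next_state = step(state, action)
    replay_buffer.append((state, action, reward, next_state))
    replay_update()
    state = next_state

print("Learned server preference per state:", np.argmax(q_table, axis=1))
```

Experience replay here simply means that past transitions are stored and resampled for updates rather than being used once and discarded; the paper's multi-agent, SDN-assisted formulation is considerably richer than this single-agent toy example.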
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Cloud computing; Monitoring; Real-time systems; Vehicle dynamics; Quality of service; Task analysis; Dynamic scheduling; Reinforcement learning (RL); resource allocation; transportation |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 04 Aug 2020 11:50 |
| Last Modified: | 30 Oct 2024 20:46 |
| URI: | http://repository.essex.ac.uk/id/eprint/28411 |
Available files
Filename: IEEE IoT-1.pdf