Yang, Yi and Ma, Wenqiang and Sun, Wen and He, Jianhua and Fu, Yaru and Yuen, Chau and Zhang, Yan (2025) Diffusion-Based Multi-Agent Reinforcement Learning for Semantic Vehicular Edge Computing. IEEE Transactions on Services Computing, 18 (6). pp. 3668-3681. DOI https://doi.org/10.1109/tsc.2025.3618082
Abstract
Vehicular edge computing (VEC) is critical for the safe and efficient driving of intelligent vehicles, which can offload computation-intensive tasks (such as driving environment perception) to edge servers to overcome the limitations of onboard computational resources and to cooperate with other vehicles. A major challenge in VEC is that offloaded intelligent driving tasks generally generate large amounts of data, which can easily strain and congest vehicular communication channels. To address this challenge, we first propose a novel semantic VEC (SVEC) architecture that extracts the semantic information of tasks before offloading them to edge servers, thereby enabling reliable, efficient and adaptive communication and computation for offloaded tasks. Considering the scarce channel resources of vehicles and intelligent tasks with different priorities and modalities, we define a novel user utility model for SVEC and transform the problem of maximizing user utility into a joint optimization of semantic feature extraction, task offloading and resource allocation. Furthermore, to cope with the complexity of the optimization problem's solution space, we propose a diffusion-based multi-agent reinforcement learning algorithm, which uses the diffusion process to improve the agents' ability to explore the solution space and thereby reach optimal decisions on semantic feature extraction, task offloading and resource allocation. Simulation results show that the proposed scheme improves the overall performance of SVEC while reducing offloading latency and average system cost.
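The abstract does not specify the exact network architecture, so the following is only a minimal sketch, assuming a DDPM-style denoiser used as a per-agent policy head: starting from Gaussian noise, the agent iteratively denoises an action vector (e.g. a semantic compression ratio, an offloading fraction and a bandwidth share) conditioned on its local observation. All names, dimensions and hyperparameters (`DiffusionPolicy`, `obs_dim`, `steps`, the beta schedule) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: diffusion-based action generation for one agent (assumed
# DDPM-style denoiser; not the authors' exact architecture).
import torch
import torch.nn as nn


class DiffusionPolicy(nn.Module):
    """Denoises Gaussian noise into a continuous decision vector,
    conditioned on the agent's local observation."""

    def __init__(self, obs_dim: int, act_dim: int, steps: int = 10, hidden: int = 128):
        super().__init__()
        self.steps = steps
        self.act_dim = act_dim
        # Noise-prediction network: input = noisy action + observation + timestep.
        self.eps_net = nn.Sequential(
            nn.Linear(act_dim + obs_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, act_dim),
        )
        # Linear beta schedule and derived alpha products (assumed values).
        betas = torch.linspace(1e-4, 0.1, steps)
        alphas = 1.0 - betas
        self.register_buffer("betas", betas)
        self.register_buffer("alphas", alphas)
        self.register_buffer("alpha_bars", torch.cumprod(alphas, dim=0))

    @torch.no_grad()
    def act(self, obs: torch.Tensor) -> torch.Tensor:
        """Reverse diffusion: start from noise and denoise step by step."""
        batch = obs.shape[0]
        a = torch.randn(batch, self.act_dim, device=obs.device)
        for t in reversed(range(self.steps)):
            t_emb = torch.full((batch, 1), t / self.steps, device=obs.device)
            eps = self.eps_net(torch.cat([a, obs, t_emb], dim=-1))
            alpha, alpha_bar = self.alphas[t], self.alpha_bars[t]
            # DDPM posterior mean; add noise on all but the final step.
            a = (a - (1 - alpha) / torch.sqrt(1 - alpha_bar) * eps) / torch.sqrt(alpha)
            if t > 0:
                a = a + torch.sqrt(self.betas[t]) * torch.randn_like(a)
        # Squash into [0, 1] so entries can be read as ratios/allocations.
        return torch.sigmoid(a)


# Example: one agent with a 16-dim observation emitting a 3-dim decision
# (compression ratio, offloading fraction, bandwidth share — hypothetical).
policy = DiffusionPolicy(obs_dim=16, act_dim=3)
action = policy.act(torch.randn(4, 16))  # shape: (4, 3)
```

In a multi-agent setting, each vehicle would hold such a policy and the denoiser would be trained against a critic over the joint utility, but the reward shaping and critic design above are outside what the abstract states.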
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Vehicular edge computing (VEC), semantic communication, task offloading, resource allocation, diffusion model, deep reinforcement learning |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZR Rights Retention |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 08 Jan 2026 13:23 |
| Last Modified: | 17 Jan 2026 13:04 |
| URI: | http://repository.essex.ac.uk/id/eprint/42466 |
Available files
Filename: 20251231-Diffusion_based_Multi_agent_Reinforcement_Learning_for_Semantic_Vehicular_Edge_Computing_final.pdf
Licence: Creative Commons: Attribution 4.0