Deep Reinforcement Learning-Based Computational Offloading for Space–Air–Ground Integrated Vehicle Networks
dc.contributor.author | Xie, Wenxuan | |
dc.contributor.author | Chen, Chen | |
dc.contributor.author | Ju, Ying | |
dc.contributor.author | Shen, Jun | |
dc.contributor.author | Pei, Qingqi | |
dc.contributor.author | Song, Houbing | |
dc.date.accessioned | 2025-06-05T14:03:49Z | |
dc.date.available | 2025-06-05T14:03:49Z | |
dc.date.issued | 2025 | |
dc.description.abstract | In remote or disaster areas, where terrestrial networks are difficult to cover and Terrestrial Edge Computing (TEC) infrastructures are unavailable, computational offloading for Internet of Vehicles (IoV) scenarios is challenging. Current terrestrial networks offer high data rates, strong connectivity, and low delay, but their global coverage is limited. Space–Air–Ground Integrated Networks (SAGIN) can overcome the coverage limitations of terrestrial networks and enhance disaster resistance. However, the rising complexity and heterogeneity of networks make it difficult to find a robust and intelligent computational offloading strategy. Joint scheduling of space, air, and ground resources is therefore needed to meet the growing demand for services. In light of this, we propose an integrated network framework for Space–Air Auxiliary Vehicle Computation (SA-AVC) and build a system model to support various IoV services in remote areas. Our model aims to minimize delay, maximize fair utility, and increase the utilization of satellites and Autonomous Aerial Vehicles (AAVs). To this end, we propose a Deep Reinforcement Learning algorithm to make real-time computational offloading decisions, and we optimize it with the Rank-based Prioritization method of Prioritized Experience Replay (PER). Simulation results show that the proposed algorithm reduces the average system delay by 17.84%, 58.09%, and 58.32%, and the average variance of the task completion delay by 29.41%, 48.74%, and 49.58%, compared with the Deep Q Network (DQN), Q-learning, and RandomChoose algorithms, respectively. | |
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grant 62072360 and Grant 62172438; in part by the Key Research and Development Plan of Shaanxi Province under Grant 2021ZDLGY02-09, Grant 2023-GHZD-44, and Grant 2023-ZDLGY-54; in part by the National Key Laboratory Foundation under Grant 2023-JCJQ-LB-007; in part by the Natural Science Foundation of Guangdong Province of China under Grant 2022A1515010988; in part by the Key Project on Artificial Intelligence of Xi'an Science and Technology Plan under Grant 23ZDCYJSGG0021-2022, Grant 23ZDCYYYCJ0008, and Grant 23ZDCYJSGG0002-2023; and in part by the Proof of Concept Fund from Hangzhou Research Institute of Xidian University under Grant GNYZ2023QC0201, Grant GNYZ2024QC004, and Grant GNYZ2024QC015. | |
dc.description.uri | https://ieeexplore.ieee.org/abstract/document/10947633 | |
dc.format.extent | 12 pages | |
dc.genre | journal articles | |
dc.genre | postprints | |
dc.identifier | doi:10.13016/m2zxhj-vfqs | |
dc.identifier.citation | Xie, Wenxuan, Chen Chen, Ying Ju, Jun Shen, Qingqi Pei, and Houbing Song. “Deep Reinforcement Learning-Based Computational Offloading for Space–Air–Ground Integrated Vehicle Networks.” IEEE Transactions on Intelligent Transportation Systems, 2025, 1–12. https://doi.org/10.1109/TITS.2025.3551636. | |
dc.identifier.uri | https://doi.org/10.1109/TITS.2025.3551636 | |
dc.identifier.uri | http://hdl.handle.net/11603/38761 | |
dc.language.iso | en_US | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Information Systems Department | |
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
dc.subject | Delays | |
dc.subject | mobile edge computing (MEC) | |
dc.subject | Disasters | |
dc.subject | Low earth orbit satellites | |
dc.subject | Space-Air-Ground integrated network (SAGIN) | |
dc.subject | deep reinforcement learning | |
dc.subject | Processor scheduling | |
dc.subject | Autonomous aerial vehicles | |
dc.subject | Resource management | |
dc.subject | Computational modeling | |
dc.subject | Space-air-ground integrated networks | |
dc.subject | Satellites | |
dc.subject | Optimization | |
dc.subject | deep Q network (DQN) | |
dc.subject | UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab) | |
dc.title | Deep Reinforcement Learning-Based Computational Offloading for Space–Air–Ground Integrated Vehicle Networks | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0003-2631-9223 |
Files
Original bundle
- Name: DeepReinforcementLearningBased.pdf
- Size: 1.41 MB
- Format: Adobe Portable Document Format