Deep Reinforcement Learning-Based Computation Offloading for Space–Air–Ground Integrated Vehicle Networks

dc.contributor.authorXie, Wenxuan
dc.contributor.authorChen, Chen
dc.contributor.authorJu, Ying
dc.contributor.authorShen, Jun
dc.contributor.authorPei, Qingqi
dc.contributor.authorSong, Houbing
dc.date.accessioned2025-06-05T14:03:49Z
dc.date.available2025-06-05T14:03:49Z
dc.date.issued2025
dc.description.abstractIn remote or disaster areas, where terrestrial networks are difficult to cover and Terrestrial Edge Computing (TEC) infrastructures are unavailable, solving the computation offloading problem for Internet of Vehicles (IoV) scenarios is challenging. Current terrestrial networks offer high data rates, strong connectivity, and low delay, but their global coverage is limited. Space–Air–Ground Integrated Networks (SAGIN) can overcome the coverage limitations of terrestrial networks and enhance disaster resilience. However, the rising complexity and heterogeneity of networks make it difficult to find a robust and intelligent computation offloading strategy. Therefore, joint scheduling of space, air, and ground resources is needed to meet the growing demand for services. In light of this, we propose an integrated network framework for Space-Air Auxiliary Vehicle Computation (SA-AVC) and build a system model to support various IoV services in remote areas. Our model aims to maximize a delay-fairness utility and increase the utilization of satellites and autonomous aerial vehicles (AAVs). To this end, we propose a Deep Reinforcement Learning algorithm to achieve real-time computation offloading decisions. We utilize the Rank-based Prioritization method in Prioritized Experience Replay (PER) to optimize our algorithm. Simulation results show that, compared with the Deep Q Network (DQN), Q-learning, and RandomChoose algorithms, our proposed algorithm reduces the average system delay by 17.84%, 58.09%, and 58.32%, respectively, and reduces the average variance of the task completion delay by 29.41%, 48.74%, and 49.58%.
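The rank-based Prioritized Experience Replay mentioned in the abstract can be illustrated concretely. The sketch below is a minimal rank-based PER buffer in the spirit of Schaul et al.'s PER, not the paper's actual implementation; the class name, default capacity, and alpha exponent are illustrative assumptions.

```python
import numpy as np

class RankBasedReplayBuffer:
    """Illustrative rank-based prioritized experience replay buffer.

    Transitions are ordered by |TD error|; the transition of rank k is
    sampled with probability proportional to (1 / k) ** alpha.
    """

    def __init__(self, capacity=10_000, alpha=0.7):
        self.capacity = capacity
        self.alpha = alpha      # 0 = uniform replay, 1 = full prioritization
        self.buffer = []        # list of [abs_td_error, transition]

    def add(self, transition, td_error=1.0):
        # New transitions get a default priority so they are replayed at least once.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop()   # evict the tail (lowest priority after the last sort)
        self.buffer.append([abs(td_error), transition])

    def sample(self, batch_size):
        # Sort descending by |TD error| so index 0 holds rank 1.
        self.buffer.sort(key=lambda entry: entry[0], reverse=True)
        ranks = np.arange(1, len(self.buffer) + 1)
        probs = (1.0 / ranks) ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        batch = [self.buffer[i][1] for i in indices]
        return batch, indices, probs[indices]

    def update_priorities(self, indices, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(indices, td_errors):
            self.buffer[i][0] = abs(err)
```

Rank-based prioritization is often preferred over proportional prioritization because sampling probabilities depend only on the ordering of TD errors, which makes replay insensitive to outlier error magnitudes.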
dc.description.sponsorshipThis work was supported in part by the National Natural Science Foundation of China under Grant 62072360 and Grant 62172438; in part by the Key Research and Development Plan of Shaanxi Province under Grant 2021ZDLGY02-09, Grant 2023-GHZD-44, and Grant 2023-ZDLGY-54; in part by the National Key Laboratory Foundation under Grant 2023-JCJQ-LB-007; in part by the Natural Science Foundation of Guangdong Province of China under Grant 2022A1515010988; in part by the Key Project on Artificial Intelligence of Xi'an Science and Technology Plan under Grant 23ZDCYJSGG0021-2022, Grant 23ZDCYYYCJ0008, and Grant 23ZDCYJSGG0002-2023; and in part by the Proof of Concept Fund from Hangzhou Research Institute of Xidian University under Grant GNYZ2023QC0201, Grant GNYZ2024QC004, and Grant GNYZ2024QC015.
dc.description.urihttps://ieeexplore.ieee.org/abstract/document/10947633
dc.format.extent12 pages
dc.genrejournal articles
dc.genrepostprints
dc.identifierdoi:10.13016/m2zxhj-vfqs
dc.identifier.citationXie, Wenxuan, Chen Chen, Ying Ju, Jun Shen, Qingqi Pei, and Houbing Song. "Deep Reinforcement Learning-Based Computation Offloading for Space–Air–Ground Integrated Vehicle Networks." IEEE Transactions on Intelligent Transportation Systems, 2025, 1–12. https://doi.org/10.1109/TITS.2025.3551636.
dc.identifier.urihttps://doi.org/10.1109/TITS.2025.3551636
dc.identifier.urihttp://hdl.handle.net/11603/38761
dc.language.isoen_US
dc.relation.isAvailableAtThe University of Maryland, Baltimore County (UMBC)
dc.relation.ispartofUMBC Faculty Collection
dc.relation.ispartofUMBC Information Systems Department
dc.rights© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subjectDelays
dc.subjectmobile edge computing (MEC)
dc.subjectDisasters
dc.subjectLow earth orbit satellites
dc.subjectSpace-Air-Ground integrated network (SAGIN)
dc.subjectdeep reinforcement learning
dc.subjectProcessor scheduling
dc.subjectAutonomous aerial vehicles
dc.subjectResource management
dc.subjectComputational modeling
dc.subjectSpace-air-ground integrated networks
dc.subjectSatellites
dc.subjectOptimization
dc.subjectdeep Q network (DQN)
dc.subjectUMBC Security and Optimization for Networked Globe Laboratory (SONG Lab)
dc.titleDeep Reinforcement Learning-Based Computation Offloading for Space–Air–Ground Integrated Vehicle Networks
dc.typeText
dcterms.creatorhttps://orcid.org/0000-0003-2631-9223

Files

Original bundle
Name: DeepReinforcementLearningBased.pdf
Size: 1.41 MB
Format: Adobe Portable Document Format