Deep Reinforcement Learning-Based Computation Offloading for Space–Air–Ground Integrated Vehicle Networks
Date
2025
Citation of Original Publication
Xie, Wenxuan, Chen Chen, Ying Ju, Jun Shen, Qingqi Pei, and Houbing Song. “Deep Reinforcement Learning-Based Computation Offloading for Space–Air–Ground Integrated Vehicle Networks.” IEEE Transactions on Intelligent Transportation Systems, 2025, 1–12. https://doi.org/10.1109/TITS.2025.3551636.
Rights
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Subjects
Delays
mobile edge computing (MEC)
Disasters
Low earth orbit satellites
Space-Air-Ground integrated network (SAGIN)
deep reinforcement learning
Processor scheduling
Autonomous aerial vehicles
Resource management
Computational modeling
Space-air-ground integrated networks
Satellites
Optimization
deep Q network (DQN)
UMBC Security and Optimization for Networked Globe Laboratory (SONG Lab)
Abstract
In remote or disaster areas, where terrestrial networks are difficult to deploy and Terrestrial Edge Computing (TEC) infrastructure is unavailable, computation offloading for Internet of Vehicles (IoV) scenarios is challenging. Current terrestrial networks offer high data rates, strong connectivity, and low delay, but their global coverage is limited. Space–Air–Ground Integrated Networks (SAGIN) can overcome the coverage limitations of terrestrial networks and enhance disaster resilience. However, the growing complexity and heterogeneity of these networks make it difficult to find a robust and intelligent computation offloading strategy, so joint scheduling of space, air, and ground resources is needed to meet the growing demand for services. In light of this, we propose an integrated network framework for Space-Air Auxiliary Vehicle Computation (SA-AVC) and build a system model to support various IoV services in remote areas. Our model aims to minimize delay, maximize fairness utility, and increase the utilization of satellites and autonomous aerial vehicles (AAVs). To this end, we propose a Deep Reinforcement Learning algorithm to make real-time computation offloading decisions, and we optimize it with the rank-based prioritization method of Prioritized Experience Replay (PER). Simulation results show that our proposed algorithm reduces the average system delay by 17.84%, 58.09%, and 58.32%, and the average variance of task completion delay by 29.41%, 48.74%, and 49.58%, compared to the Deep Q Network (DQN), Q-learning, and RandomChoose algorithms, respectively.
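The abstract mentions rank-based prioritization in Prioritized Experience Replay (PER), in which stored transitions are ranked by the magnitude of their temporal-difference (TD) error and sampled with probability proportional to (1/rank)^α. The sketch below illustrates that general sampling scheme only; the class name, buffer layout, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import random

class RankBasedReplayBuffer:
    """Illustrative rank-based prioritized experience replay.
    Transitions are kept sorted by |TD error|; the sampling probability
    of the transition at rank r (starting from 1) is (1/r)**alpha,
    normalized over the buffer. All names here are hypothetical."""

    def __init__(self, capacity=10000, alpha=0.7):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []  # list of (abs_td_error, transition), sorted descending

    def add(self, transition, td_error=1.0):
        # Drop the lowest-priority entry when full (list is kept sorted).
        if len(self.buffer) >= self.capacity:
            self.buffer.pop()
        self.buffer.append((abs(td_error), transition))
        self.buffer.sort(key=lambda x: -x[0])  # highest |TD error| first

    def sample(self, batch_size):
        # P(rank r) proportional to (1/r)**alpha.
        weights = [(1.0 / (r + 1)) ** self.alpha
                   for r in range(len(self.buffer))]
        total = sum(weights)
        probs = [w / total for w in weights]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i][1] for i in idx]
```

In an offloading agent's training loop, each (state, action, reward, next state) transition would be added with its current TD error, so that surprising offloading decisions are replayed more often than routine ones.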