Deep Reinforcement Learning-based Energy Efficiency Optimization for RIS-aided Integrated Satellite-Aerial-Terrestrial Relay Networks
| dc.contributor.author | Wu, Min | |
| dc.contributor.author | Guo, Kefeng | |
| dc.contributor.author | Li, Xingwang | |
| dc.contributor.author | Lin, Zhi | |
| dc.contributor.author | Wu, Yongpeng | |
| dc.contributor.author | Tsiftsis, Theodoros A. | |
| dc.contributor.author | Song, Houbing | |
| dc.date.accessioned | 2024-03-13T17:13:49Z | |
| dc.date.available | 2024-03-13T17:13:49Z | |
| dc.date.issued | 2024-02-26 | |
| dc.description.abstract | Integrated satellite-aerial-terrestrial relay networks (ISATRNs) are regarded as a promising architecture for next-generation networks, in which the high-altitude platform (HAP) plays a pivotal role. In this paper, we introduce a novel model for HAP-based ISATRNs with a mixed FSO/RF transmission mode, which incorporates unmanned aerial vehicles (UAVs) equipped with reconfigurable intelligent surfaces (RISs) to dynamically reconfigure the propagation environment and meet the massive-access requirements of ground users. Our aim is to maximize the system ergodic rate by jointly optimizing the UAV trajectory, the RIS phase shifts, and the active transmit beamforming matrix under a UAV energy-consumption constraint. To solve this intractable problem, we propose a deep reinforcement learning (DRL)-based energy-efficient optimization scheme built on an improved long short-term memory (LSTM)-double deep Q-network (DDQN) framework. Numerical results demonstrate the superiority of the proposed algorithm over the conventional DDQN algorithm in terms of the single-step average exploration reward and other evaluation metrics. | |
| dc.description.sponsorship | This work was supported by the National Natural Science Foundation of China under Grants 62001517 and 62071202, the Research Foundation of the Key Laboratory of Spaceborne Information Intelligent Interpretation under Grant 2022-ZZKY-JJ-20-02, the Electronic Information Equipment System Research National Defense Science and Technology Key Laboratory Fund under Grants 2023-HT-04 and 2023-HT-07, in part by the Key Research and Development Project of Henan Province under Grant 231111210500, and the "Double First-Class" Discipline Creation Project of Surveying Science and Technology under Grant GCCRC202306. The work of Zhi Lin was supported by the National Natural Science Foundation of China under Grant 62201592, the Research Plan Project of NUDT under Grant ZK21-33, and in part by the Young Elite Scientist Sponsorship Program of CAST under Grant 2021-JCJQ-QT-048. The work of Y. Wu was supported in part by the Fundamental Research Funds for the Central Universities, the National Natural Science Foundation of China (NSFC) under Grants 62122052 and 62071289, the 111 Project under Grant BP0719010, and STCSM 22DZ2229005. | |
| dc.description.uri | https://ieeexplore.ieee.org/abstract/document/10445520 | |
| dc.format.extent | 16 pages | |
| dc.genre | journal articles; postprints | |
| dc.identifier | doi:10.13016/m2dbe2-bdic | |
| dc.identifier.citation | Wu, Min, Kefeng Guo, Xingwang Li, Zhi Lin, Yongpeng Wu, Theodoros A. Tsiftsis, and Houbing Song. "Deep Reinforcement Learning-Based Energy Efficiency Optimization for RIS-Aided Integrated Satellite-Aerial-Terrestrial Relay Networks." IEEE Transactions on Communications, 2024, 1-1. https://doi.org/10.1109/TCOMM.2024.3370618. | |
| dc.identifier.uri | https://doi.org/10.1109/TCOMM.2024.3370618 | |
| dc.identifier.uri | http://hdl.handle.net/11603/31989 | |
| dc.language.iso | en_US | |
| dc.publisher | IEEE | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.relation.ispartof | UMBC Information Systems Department | |
| dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
| dc.subject | Array signal processing | |
| dc.subject | Autonomous aerial vehicles | |
| dc.subject | deep reinforcement learning (DRL) | |
| dc.subject | Heuristic algorithms | |
| dc.subject | Integrated satellite-aerial-terrestrial relay networks (ISATRNs) | |
| dc.subject | mixed FSO/RF mode | |
| dc.subject | NOMA | |
| dc.subject | non-orthogonal multiple access (NOMA) | |
| dc.subject | Optimization | |
| dc.subject | reconfigurable intelligent surface (RIS) | |
| dc.subject | Relay networks | |
| dc.subject | Satellites | |
| dc.title | Deep Reinforcement Learning-based Energy Efficiency Optimization for RIS-aided Integrated Satellite-Aerial-Terrestrial Relay Networks | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0000-0003-2631-9223 |
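The abstract above refers to an improved LSTM-DDQN framework for the joint UAV-trajectory, RIS phase-shift, and beamforming optimization. As a rough orientation only, the sketch below shows a generic double-DQN update with an LSTM-based Q-network in PyTorch; the state/action dimensions, network sizes, and training details are hypothetical placeholders and do not reproduce the authors' algorithm.

```python
# Minimal, generic LSTM-DDQN sketch (PyTorch). Hypothetical dimensions and
# hyperparameters; this is NOT the paper's implementation.
import torch
import torch.nn as nn


class LSTMQNet(nn.Module):
    """Q-network with an LSTM front end so the agent can exploit temporal
    correlation in the observed state sequence (e.g., UAV positions)."""

    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, action_dim)

    def forward(self, x):                  # x: (batch, seq_len, state_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # Q-values from the last time step


def ddqn_update(online, target, batch, optimizer, gamma=0.99):
    """One double-DQN step: the online net selects the next action and the
    target net evaluates it, decoupling action selection from evaluation."""
    states, actions, rewards, next_states, dones = batch
    q = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.mse_loss(q, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setting, the reward would encode the ergodic rate under the UAV energy-consumption constraint, and the action space would index discretized UAV moves, RIS phase-shift configurations, and beamforming choices; those problem-specific mappings are defined in the article itself and are not shown here.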
Files
Original bundle
- Name: Deep_Reinforcement_Learning-based_Energy_Efficiency_Optimization_for_RIS-aided_Integrated_Satellite-Aerial-Terrestrial_Relay_Networks.pdf
- Size: 2.96 MB
- Format: Adobe Portable Document Format
