Reinforcement Learning-Based Offloading for RIS-Aided Cloud-Edge Computing in IoT Networks: Modeling, Analysis and Optimization
dc.contributor.author | Zhang, Tiantian | |
dc.contributor.author | Xu, Dongyang | |
dc.contributor.author | Tolba, Amr | |
dc.contributor.author | Yu, Keping | |
dc.contributor.author | Song, Houbing | |
dc.contributor.author | Yu, Shui | |
dc.date.accessioned | 2024-03-27T13:26:14Z | |
dc.date.available | 2024-03-27T13:26:14Z | |
dc.date.issued | 2024-03-08 | |
dc.description.abstract | The rapid advancement of wireless communication and artificial intelligence (AI) has led to a plethora of emerging applications that require exceptional connectivity, minimal latency, and substantial computing resources. The widespread adoption of cloud-edge intelligence is propelling the development of future networks capable of supporting intelligent computing. Mobile edge computing (MEC) technology facilitates the movement of computing resources and storage to the network’s edge, enabling cost-effective offloading of computational tasks for applications that demand reduced latency and improved energy efficiency. However, offloading efficiency is hindered by limited wireless transmission capacity. This paper aims to address this issue by integrating reconfigurable intelligent surfaces (RISs) into a cell-free network within an intelligent cloud-edge system. The core idea is to strategically deploy passive RISs around base stations (BSs) to reconstruct the transmission channel and improve the corresponding capacity. Subsequently, we formulate an optimization problem involving joint beamforming for the RISs and BSs, which is non-convex and complex. To tackle this challenge, we employ an alternating optimization scheme to ensure the effectiveness of joint beamforming. In particular, deep reinforcement learning (DRL) is leveraged to reduce the computational complexity involved in optimizing task offloading. Additionally, Lyapunov optimization is utilized to model the latency queue and improve the learning efficiency of the offloading framework. We conduct comprehensive evaluations of the wireless system’s capacity, average latency, and energy consumption, considering the integration of RIS with the DRL offloading framework. Experimental results demonstrate that our proposed scheme achieves superior efficiency and robustness. | |
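The abstract couples a DRL offloading policy with a Lyapunov-modeled latency queue. The sketch below is only an illustrative reading of that idea, not the authors' implementation: the device count, arrival and service rates, the trade-off weight V, and the stand-in random policy are all assumptions made for demonstration.

```python
import numpy as np

# Hypothetical parameters (not from the paper): number of IoT devices,
# per-slot task arrivals, and edge/local service rates.
NUM_DEVICES = 4
V = 10.0            # assumed Lyapunov drift-plus-penalty trade-off weight
rng = np.random.default_rng(0)

def queue_update(q, arrival, service):
    """Virtual latency-queue evolution: Q(t+1) = max(Q(t) - service, 0) + arrival."""
    return max(q - service, 0.0) + arrival

def drift_plus_penalty_reward(q, energy, latency):
    """Reward shaped by queue backlog plus a V-weighted energy penalty.

    Smaller backlog and lower cost give a larger reward, steering an
    offloading agent toward queue-stable, low-cost decisions."""
    return -(q * latency + V * energy)

# One illustrative run: each device either offloads (1) or computes
# locally (0); the costs below are placeholders, not measured values.
queues = np.zeros(NUM_DEVICES)
for t in range(100):
    arrivals = rng.poisson(2.0, NUM_DEVICES)     # tasks arriving this slot
    actions = rng.integers(0, 2, NUM_DEVICES)    # stand-in for the DRL policy
    service = np.where(actions == 1, 4.0, 1.5)   # edge serves faster than local
    energy = np.where(actions == 1, 0.8, 0.3)    # offloading costs transmit energy
    latency = arrivals / np.maximum(service, 1e-6)

    rewards = [drift_plus_penalty_reward(q, e, l)
               for q, e, l in zip(queues, energy, latency)]
    queues = np.array([queue_update(q, a, s)
                       for q, a, s in zip(queues, arrivals, service)])

print("final queue backlogs:", queues)
print("mean reward (last slot):", float(np.mean(rewards)))
```

In the paper's framework the random actions would be produced by the trained DRL agent; the drift-plus-penalty shaping is the standard way a Lyapunov latency queue is folded into the learning objective.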
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grant No. 62001368, in part by the Key Research and Development Program of Shaanxi under Grant No. 2022GY-093, in part by the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D13), and in part by the Researchers Supporting Project No. (RSPD2024R681), King Saud University, Riyadh, Saudi Arabia. (Corresponding author: Dongyang Xu.) | |
dc.description.uri | https://ieeexplore.ieee.org/abstract/document/10460315 | |
dc.format.extent | 18 pages | |
dc.genre | journal articles | |
dc.genre | postprints | |
dc.identifier | doi:10.13016/m2p76v-vkcu | |
dc.identifier.citation | Zhang, Tiantian, Dongyang Xu, Amr Tolba, Keping Yu, Houbing Song, and Shui Yu. “Reinforcement Learning-Based Offloading for RIS-Aided Cloud-Edge Computing in IoT Networks: Modeling, Analysis and Optimization.” IEEE Internet of Things Journal (08 March 2024). https://doi.org/10.1109/JIOT.2024.3367791. | |
dc.identifier.uri | https://doi.org/10.1109/JIOT.2024.3367791 | |
dc.identifier.uri | http://hdl.handle.net/11603/32685 | |
dc.language.iso | en_US | |
dc.publisher | IEEE | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Information Systems Department | |
dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
dc.subject | Optimization | |
dc.subject | Cloud computing | |
dc.subject | cloud-edge offloading | |
dc.subject | Energy consumption | |
dc.subject | Internet of Things | |
dc.subject | reconfigurable intelligent surface | |
dc.subject | Reconfigurable intelligent surfaces | |
dc.subject | Reinforcement learning | |
dc.subject | resource allocation | |
dc.subject | Task analysis | |
dc.subject | Wireless communication | |
dc.title | Reinforcement Learning-Based Offloading for RIS-Aided Cloud-Edge Computing in IoT Networks: Modeling, Analysis and Optimization | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0003-2631-9223 |
Files
Original bundle
- Name: Reinforcement_Learning-Based_Offloading_for_RIS-Aided_Cloud-Edge_Computing_in_IoT_Networks_Modeling_Analysis_and_Optimization.pdf
- Size: 1.78 MB
- Format: Adobe Portable Document Format