Reinforcement Learning-Based Offloading for RIS-Aided Cloud-Edge Computing in IoT Networks: Modeling, Analysis and Optimization
Citation of Original Publication
Zhang, Tiantian, Dongyang Xu, Amr Tolba, Keping Yu, Houbing Song, and Shui Yu. “Reinforcement Learning-Based Offloading for RIS-Aided Cloud-Edge Computing in IoT Networks: Modeling, Analysis and Optimization.” IEEE Internet of Things Journal (08 March 2024). https://doi.org/10.1109/JIOT.2024.3367791.
Rights
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
The rapid advancement of wireless communication and artificial intelligence (AI) has led to a plethora of emerging applications that require exceptional connectivity, minimal latency, and substantial computing resources. The widespread adoption of cloud-edge intelligence is propelling the development of future networks capable of supporting intelligent computing. Mobile edge computing (MEC) technology moves computing and storage resources to the network's edge, enabling cost-effective offloading of computational tasks for applications that demand reduced latency and improved energy efficiency. However, offloading efficiency is constrained by limited wireless transmission capacity. This paper addresses this issue by integrating reconfigurable intelligent surfaces (RISs) into a cell-free network within an intelligent cloud-edge system. The core idea is to strategically deploy passive RISs around base stations (BSs) to reconstruct the transmission channel and improve its capacity. We then formulate an optimization problem for joint beamforming at the RISs and BSs, which is non-convex and computationally complex. To tackle this challenge, we employ an alternating optimization scheme to ensure effective joint beamforming. In particular, deep reinforcement learning (DRL) is leveraged to reduce the computational complexity of optimizing task offloading. Additionally, Lyapunov optimization is utilized to model the latency queue and improve the learning efficiency of the offloading framework. We conduct comprehensive evaluations of the wireless system's capacity, average latency, and energy consumption under the integration of RIS with the DRL offloading framework. Experimental results demonstrate that our proposed scheme achieves superior efficiency and robustness.
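To make the abstract's core idea concrete, below is a minimal sketch of how a passive RIS "reconstructs" a channel: each element applies a phase shift so its cascaded BS-RIS-device path adds coherently with the direct link, boosting the effective channel gain. This is a standard textbook illustration under simplifying assumptions (single antenna, perfect channel knowledge, ideal continuous phase shifts), not the paper's actual joint-beamforming algorithm; all names here are illustrative.

```python
import cmath

def ris_aligned_gain(h_direct, cascades):
    """Effective channel magnitude when each RIS element's phase shift is
    chosen to co-phase its cascaded path with the direct link.

    h_direct : complex gain of the direct BS-device link
    cascades : list of complex gains of the per-element cascaded paths
    """
    # Optimal phase for element k (ideal case): theta_k = arg(h_direct) - arg(cascade_k),
    # so every reflected path arrives in phase with the direct signal.
    phases = [cmath.phase(h_direct) - cmath.phase(c) for c in cascades]
    h_eff = h_direct + sum(c * cmath.exp(1j * t) for c, t in zip(cascades, phases))
    return abs(h_eff)

def passive_gain(h_direct, cascades):
    """Effective channel magnitude with no phase control (zero phase shifts)."""
    return abs(h_direct + sum(cascades))
```

With aligned phases every cascaded term contributes its full magnitude, so the effective gain becomes |h_direct| + sum of |cascade_k|, which upper-bounds the uncontrolled case; in the paper this per-element phase choice is coupled with the BS beamformers, which is what makes the joint problem non-convex and motivates the alternating optimization.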