Reinforcement Learning Based Delay Line Design for Crosstalk Minimization
Date
2024-10-31
Citation of Original Publication
Jung, Jaeho, Younggyun Yu, and Soobum Lee. “Reinforcement Learning Based Delay Line Design for Crosstalk Minimization.” IEEE Access, 2024, 1–1. https://doi.org/10.1109/ACCESS.2024.3488717.
Rights
Attribution 4.0 International CC BY 4.0
Abstract
Reinforcement learning (RL) is an artificial intelligence technique that trains an artificial neural network to make optimal decisions. In this study, Deep Q-Network (DQN) RL is applied to the design of delay lines in electrical circuits for signal synchronization. Delay lines are usually routed densely in a confined space, which produces electrical noise known as crosstalk. The challenge of delay line design stems from the fact that the line must connect a start point and an end point with a prescribed length, without becoming entangled, in a predefined two-dimensional space. Genetic algorithms (GA) or random exploration can be used, but their learning efficiency is very low and time-consuming. We propose and implement a novel connected exploration method that significantly expedites the design process. In each state, the direction of the line rendering (left, straight, or right) is treated as an action, and the artificial intelligence agent learns how to design a delay line of the desired length. As a result, we obtain optimal designs 3,000 times faster than with the GA from our previous study. The proposed method can be applied to various routing design problems, such as circuit routing or flow path configuration, with greatly reduced design time, and can potentially lead to the discovery of new designs not reliant on human intuition.
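The state/action formulation described in the abstract can be illustrated with a minimal environment sketch. This is not the authors' implementation: the grid size, reward values, start heading, and termination rules below are all assumptions; only the action set (turn left, go straight, turn right) and the constraints (fixed target length, no self-intersection, confined 2-D space) come from the abstract.

```python
class DelayLineEnv:
    """Toy delay-line routing environment (illustrative sketch only).

    The line grows one grid cell per step. Each action rotates the
    drawing direction: 0 = turn left, 1 = straight, 2 = turn right.
    An episode succeeds only if the line reaches the end point with
    exactly the target length and never crosses itself.
    """

    # Headings as (dx, dy), ordered counterclockwise: east, north, west, south.
    HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

    def __init__(self, size=8, start=(0, 0), end=(7, 7), target_len=20):
        self.size = size
        self.start = start
        self.end = end
        self.target_len = target_len
        self.reset()

    def reset(self):
        self.pos = self.start
        self.heading = 0          # start facing east (assumption)
        self.path = [self.start]  # cells already occupied by the line
        return (self.pos, self.heading)

    def step(self, action):
        # Left turn increments the heading index (CCW), right decrements it.
        self.heading = (self.heading + 1 - action) % 4
        dx, dy = self.HEADINGS[self.heading]
        self.pos = (self.pos[0] + dx, self.pos[1] + dy)
        x, y = self.pos

        off_grid = not (0 <= x < self.size and 0 <= y < self.size)
        entangled = self.pos in self.path  # the line may not cross itself
        if off_grid or entangled:
            return (self.pos, self.heading), -1.0, True  # failed episode

        self.path.append(self.pos)
        if self.pos == self.end:
            # Reward success only at exactly the desired length (assumption).
            reward = 1.0 if len(self.path) - 1 == self.target_len else -0.5
            return (self.pos, self.heading), reward, True
        return (self.pos, self.heading), 0.0, False
```

A DQN agent would repeatedly call `reset()` and `step()`, learning Q-values over (position, heading) states; because every step extends the line from its current tip, exploration is inherently "connected" rather than sampling whole layouts at random, which is the intuition behind the speedup the abstract reports.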