Reinforcement Learning Based Delay Line Design for Crosstalk Minimization

dc.contributor.author: Jung, Jaeho
dc.contributor.author: Yu, Younggyun
dc.contributor.author: Lee, Soobum
dc.date.accessioned: 2024-12-11T17:02:07Z
dc.date.available: 2024-12-11T17:02:07Z
dc.date.issued: 2024-10-31
dc.description.abstract: Reinforcement learning (RL) is an artificial intelligence technique that trains an artificial neural network to make optimal decisions. In this study, Deep Q-Network (DQN) RL is applied to the design of delay lines in electrical circuits for signal synchronization. A delay line is usually routed densely in a confined space, which produces electrical noise known as crosstalk. The challenge of delay line design stems from the fact that the line must connect the start and end points with a given length, without becoming entangled, in a predefined two-dimensional space. Genetic algorithms (GA) or random exploration can be used, but their learning efficiency is very low and time-consuming. We propose and implement a novel connected exploration method that significantly expedites the design process. In each state, the direction of line rendering (left, straight, or right) is treated as an action, and the artificial intelligence agent learns how to design a delay line of the desired length. As a result, we obtain optimal designs 3,000 times faster than with the GA from our previous study. The proposed method can be applied to various routing design problems, such as circuit routing or flow path configuration, with greatly reduced design time, and can potentially lead to the discovery of new designs that do not rely on human intuition.
dc.description.sponsorship: This work was supported by the research grant from the Korea Atomic Energy Research Institute (KAERI) R&D Program (No. KAERI-524540-24), Chungbuk National University, and UMBC Strategic Awards for Research Transitions (START).
dc.description.uri: https://ieeexplore.ieee.org/document/10740167/
dc.format.extent: 12 pages
dc.genre: journal articles
dc.identifier: doi:10.13016/m2eehe-oeae
dc.identifier.citation: Jung, Jaeho, Younggyun Yu, and Soobum Lee. “Reinforcement Learning Based Delay Line Design for Crosstalk Minimization.” IEEE Access, 2024, 1–1. https://doi.org/10.1109/ACCESS.2024.3488717.
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2024.3488717
dc.identifier.uri: http://hdl.handle.net/11603/37030
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Mechanical Engineering Department
dc.rights: Attribution 4.0 International CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/deed.en
dc.subject: Circuit synthesis
dc.subject: Routing
dc.subject: Genetic algorithms
dc.subject: Optimization
dc.subject: Circuits
dc.subject: Delay line
dc.subject: Machine learning algorithms
dc.subject: Artificial intelligence
dc.subject: Reinforcement learning
dc.subject: UMBC Energy Harvesting & Design Optimization Lab
dc.subject: Circuit design
dc.subject: Layout
dc.subject: Crosstalk
dc.subject: Delay lines
dc.subject: Evolutionary computation
dc.title: Reinforcement Learning Based Delay Line Design for Crosstalk Minimization
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6418-7527

Files

Original bundle

Name: Reinforcement_LearningBased_Delay_Line_Design_for_Crosstalk_Minimization.pdf
Size: 1.98 MB
Format: Adobe Portable Document Format