Reinforcement Learning Based Delay Line Design for Crosstalk Minimization
dc.contributor.author | Jung, Jaeho | |
dc.contributor.author | Yu, Younggyun | |
dc.contributor.author | Lee, Soobum | |
dc.date.accessioned | 2024-12-11T17:02:07Z | |
dc.date.available | 2024-12-11T17:02:07Z | |
dc.date.issued | 2024-10-31 | |
dc.description.abstract | Reinforcement learning (RL) is an artificial intelligence technique that trains an artificial neural network to make optimal decisions. In this study, Deep Q-Network (DQN) RL is applied to the design of delay lines in electrical circuits for signal synchronization. A delay line is usually routed densely in a confined space, which produces electrical noise known as crosstalk. The challenge of delay line design stems from the fact that the line must connect the start and end points with a given length, without becoming entangled, in a predefined two-dimensional space. Genetic algorithms (GA) or random exploration can be used, but their learning efficiency is low and they are time-consuming. We propose and implement a novel connected exploration method that significantly expedites the design process. In each state, the direction of line rendering (left, straight, or right) is treated as an action, and the artificial intelligence agent learns how to design a delay line of the desired length. As a result, we obtain optimal designs 3,000 times faster than with the GA from our previous study. The proposed method can be applied to various routing design problems, such as circuit routing or flow path configuration, with greatly reduced design time, and can potentially lead to the discovery of new designs that do not rely on human intuition. | |
dc.description.sponsorship | This work was supported by the research grant from the Korea Atomic Energy Research Institute (KAERI) R&D Program (No.KAERI-524540-24), Chungbuk National University, and UMBC Strategic Awards for Research Transitions (START). | |
dc.description.uri | https://ieeexplore.ieee.org/document/10740167/ | |
dc.format.extent | 12 pages | |
dc.genre | journal articles | |
dc.identifier | doi:10.13016/m2eehe-oeae | |
dc.identifier.citation | Jung, Jaeho, Younggyun Yu, and Soobum Lee. “Reinforcement Learning Based Delay Line Design for Crosstalk Minimization.” IEEE Access, 2024, 1–1. https://doi.org/10.1109/ACCESS.2024.3488717. | |
dc.identifier.uri | https://doi.org/10.1109/ACCESS.2024.3488717 | |
dc.identifier.uri | http://hdl.handle.net/11603/37030 | |
dc.language.iso | en_US | |
dc.publisher | IEEE | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Mechanical Engineering Department | |
dc.rights | Attribution 4.0 International CC BY 4.0 | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/deed.en | |
dc.subject | Circuit synthesis | |
dc.subject | Routing | |
dc.subject | Genetic algorithms | |
dc.subject | Optimization | |
dc.subject | Circuits | |
dc.subject | Delay line | |
dc.subject | Machine learning algorithms | |
dc.subject | Artificial intelligence | |
dc.subject | Reinforcement learning | |
dc.subject | UMBC Energy Harvesting & Design Optimization Lab | |
dc.subject | Circuit design | |
dc.subject | Layout | |
dc.subject | Crosstalk | |
dc.subject | Delay lines | |
dc.subject | Evolutionary computation | |
dc.title | Reinforcement Learning Based Delay Line Design for Crosstalk Minimization | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0002-6418-7527 |
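The abstract's connected exploration scheme, where each action turns the line left, keeps it straight, or turns it right, can be illustrated with a minimal routing environment. The sketch below is an assumption-laden illustration, not the paper's implementation: the grid size, reward values, and termination rules are invented for demonstration.

```python
# Illustrative sketch of a delay-line routing environment (NOT the paper's
# code): the agent extends a line on a grid, choosing at each step to turn
# left, go straight, or turn right, and succeeds only if it reaches the end
# point with exactly the target line length without crossing itself.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

class DelayLineEnv:
    def __init__(self, width=8, height=8, start=(0, 0), end=(7, 7),
                 target_len=20):
        self.width, self.height = width, height
        self.start, self.end = start, end
        self.target_len = target_len  # desired delay-line length (assumed)
        self.reset()

    def reset(self):
        self.pos = self.start
        self.heading = 1              # start facing east (assumption)
        self.visited = {self.start}   # cells already used (no entanglement)
        self.length = 0
        return (self.pos, self.heading)

    def step(self, action):
        # action: 0 = turn left, 1 = straight, 2 = turn right
        self.heading = (self.heading + (action - 1)) % 4
        dx, dy = HEADINGS[self.heading]
        nxt = (self.pos[0] + dx, self.pos[1] + dy)
        in_bounds = 0 <= nxt[0] < self.width and 0 <= nxt[1] < self.height
        if not in_bounds or nxt in self.visited:
            # leaving the board or crossing the line ends the episode
            return (self.pos, self.heading), -1.0, True
        self.pos = nxt
        self.visited.add(nxt)
        self.length += 1
        if nxt == self.end:
            # reward only a line that arrives at the desired length
            reward = 1.0 if self.length == self.target_len else -0.5
            return (self.pos, self.heading), reward, True
        return (self.pos, self.heading), 0.0, False
```

A DQN agent would map the (position, heading) state to Q-values over the three actions; because every action extends the existing line, exploration stays connected rather than sampling disconnected layouts as a GA does.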
Files
Original bundle
- Name: Reinforcement_LearningBased_Delay_Line_Design_for_Crosstalk_Minimization.pdf
- Size: 1.98 MB
- Format: Adobe Portable Document Format