A Hardware Accelerator for Language Guided Reinforcement Learning

Citation of Original Publication

A. Shiri, A. N. Mazumder, B. Prakash, H. Homayoun, N. R. Waytowich and T. Mohsenin, "A Hardware Accelerator for Language Guided Reinforcement Learning," in IEEE Design & Test, doi: 10.1109/MDAT.2021.3063363.

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Public Domain Mark 1.0
This work was written as part of one of the author's official duties as an employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. law.

Abstract

Reinforcement learning (RL) has shown strong performance in solving sequential decision-making problems. This paper proposes a framework that trains RL agents conditioned on constraints expressed as structured language, improving training efficiency. We implement an energy-efficient hardware accelerator that receives both image and text inputs, allowing RL agents to understand human language and act in real-world environments. A scalable, parallel hardware architecture with a configurable number of processing elements is implemented on both FPGA and ASIC, providing a balance between power consumption and performance. The post-layout ASIC design consumes 9.2 mW while delivering a throughput of 361 frames per second.
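
The abstract describes a policy that conditions on both an image observation and a structured-language input. The sketch below is a minimal, hypothetical illustration of such a language-conditioned policy network in PyTorch; it is not the authors' accelerator architecture, and all layer sizes, the vocabulary size, and the action count are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's exact network):
# fuse a CNN encoding of the image observation with a GRU encoding of the
# tokenized instruction, then predict action logits.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, num_actions=6):
        super().__init__()
        # Image branch: small CNN over an 84x84 RGB observation.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Text branch: token embedding followed by a GRU over the instruction.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, 64, batch_first=True)
        # Fusion head: concatenate both encodings and output action logits.
        self.head = nn.Sequential(
            nn.Linear(32 * 9 * 9 + 64, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, image, tokens):
        img_feat = self.cnn(image)                  # (B, 2592)
        _, txt_hidden = self.gru(self.embed(tokens))
        txt_feat = txt_hidden.squeeze(0)            # (B, 64)
        return self.head(torch.cat([img_feat, txt_feat], dim=1))

# Usage: one 84x84 RGB frame and a 10-token instruction (dummy inputs).
policy = LanguageConditionedPolicy()
logits = policy(torch.zeros(1, 3, 84, 84), torch.zeros(1, 10, dtype=torch.long))
print(logits.shape)  # torch.Size([1, 6])

Such a fused image-plus-text network is the kind of fixed-point workload that a scalable array of processing elements can accelerate; the trade-off the paper reports (9.2 mW post-layout power at 361 fps) reflects balancing the number of processing elements against throughput.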