Exploring High-Level Neural Networks Architectures for Efficient Spiking Neural Networks Implementation

dc.contributor.author: Islam, Riadul
dc.contributor.author: Majurski, Patrick
dc.contributor.author: Kwon, Jun
dc.contributor.author: Tummala, Sri Ranga Sai Krishna
dc.date.accessioned: 2023-05-22T19:24:03Z
dc.date.available: 2023-05-22T19:24:03Z
dc.date.issued: 2023-03-21
dc.description.abstract: The microprocessor industry faces several challenges: total power consumption, processor speed, and increasing chip cost. Processor speeds have not improved over the last decade, saturating at roughly 2 GHz to 5 GHz. Researchers believe that brain-inspired computing has great potential to resolve these problems. The spiking neural network (SNN) exhibits excellent power performance compared to conventional designs. However, we identified several key challenges to implementing large-scale neural networks (NNs) on silicon: automated tools are nonexistent, expertise across many domains is required, and existing algorithms cannot partition and place large-scale SNN computation efficiently on hardware. In this research, we propose to develop an automated tool flow that can convert any NN to an SNN. In this process, we will develop a novel graph-partitioning algorithm and place the SNN on a network-on-chip (NoC) to enable future energy-efficient and high-performance computing.
dc.description.sponsorship: This work was supported in part by a Federal Work-Study (FWS) award, National Science Foundation (NSF) award number 2138253, Rezonent Inc. award number CORP0061, and the UMBC Startup grant.
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/10070080
dc.format.extent: 5 pages
dc.genre: journal articles
dc.genre: postprints
dc.identifier: doi:10.13016/m2wqwq-yor5
dc.identifier.citation: R. Islam, P. Majurski, J. Kwon and S. R. S. K. Tummala, "Exploring High-Level Neural Networks Architectures for Efficient Spiking Neural Networks Implementation," 2023 3rd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 2023, pp. 212-216, doi: 10.1109/ICREST57604.2023.10070080.
dc.identifier.uri: https://doi.org/10.1109/ICREST57604.2023.10070080
dc.identifier.uri: http://hdl.handle.net/11603/28052
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Exploring High-Level Neural Networks Architectures for Efficient Spiking Neural Networks Implementation
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-4649-3467
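
Note: the abstract above proposes an automated NN-to-SNN tool flow whose graph-partitioning step places the resulting SNN onto network-on-chip (NoC) tiles. The paper's actual algorithm is not reproduced in this record, so the snippet below is only a minimal, hypothetical Python sketch of the general idea (greedy, load-balanced assignment of network layers to tiles); the names Tile, partition_layers, and the toy layer list are illustrative assumptions, not the authors' method.

from dataclasses import dataclass, field

@dataclass
class Tile:
    """One hypothetical NoC tile holding a subset of the network's layers."""
    layers: list = field(default_factory=list)
    neurons: int = 0

# Toy feed-forward network: (layer name, neuron count); adjacent layers communicate.
LAYERS = [("input", 784), ("hidden1", 256), ("hidden2", 128), ("output", 10)]

def partition_layers(layers, n_tiles):
    """Greedy sketch: assign each layer, largest first, to the least-loaded tile."""
    tiles = [Tile() for _ in range(n_tiles)]
    for name, size in sorted(layers, key=lambda l: -l[1]):
        target = min(tiles, key=lambda t: t.neurons)  # currently least-loaded tile
        target.layers.append(name)
        target.neurons += size
    return tiles

if __name__ == "__main__":
    for i, tile in enumerate(partition_layers(LAYERS, n_tiles=2)):
        print(f"tile {i}: {tile.layers} ({tile.neurons} neurons)")

A real placement algorithm would also weigh inter-tile spike traffic and NoC hop distance, not just neuron-count balance; this sketch is meant only to make the partition-and-place idea concrete.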

Files

Original bundle
Name: main.pdf
Size: 803.71 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission