Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks

dc.contributor.author: Islam, Riadul
dc.contributor.author: Majurski, Patrick
dc.contributor.author: Kwon, Jun
dc.contributor.author: Sharma, Anurag
dc.contributor.author: Tummala, Sri Ranga Sai Krishna
dc.date.accessioned: 2024-03-06T18:52:18Z
dc.date.available: 2024-03-06T18:52:18Z
dc.date.issued: 2024-02-19
dc.description.abstract: Organizations managing high-performance computing systems face multiple challenges, including overall energy consumption, microprocessor clock-frequency limits, and the escalating cost of chip production. Processor speeds have plateaued over the last decade, remaining in the range of 2 GHz to 5 GHz. Brain-inspired computing holds substantial promise for mitigating these challenges, and the spiking neural network (SNN) stands out for its power efficiency compared with conventional design paradigms. Nevertheless, several key obstacles impede the implementation of large-scale neural networks (NNs) on silicon: the absence of automated tools, the need for expertise across multiple domains, and the inability of existing algorithms to efficiently partition and place large SNN computations onto hardware. In this paper, we present an automated tool flow that converts any NN into an SNN, including a novel graph-partitioning algorithm that places SNNs on a network-on-chip (NoC), paving the way for future energy-efficient and high-performance computing. The methodology transforms ANN architectures into SNNs with a marginal average error penalty of only 2.65%. The proposed graph-partitioning algorithm achieves, on average, a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, underscoring its effectiveness in optimizing NN communication pathways. Compared with a baseline graph-partitioning algorithm, the proposed approach reduces latency by 79.74% and energy consumption by 14.67% on average. Using existing NoC tools, the energy-latency product of the SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
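
The abstract describes the ANN-to-SNN conversion only at a high level. As a concrete illustration, the following is a minimal sketch of the constant-current rate-coding scheme common in the ANN-to-SNN literature, not the paper's actual tool flow; the function name, the soft-reset rule, and the assumption that weights are normalized so activations stay below the firing threshold are all illustrative choices.

import numpy as np

def ann_to_snn_forward(layers, x, T=256, v_thresh=1.0):
    """Approximate a ReLU network's forward pass with integrate-and-fire
    neurons; spike rates over T timesteps stand in for activations.
    Assumes weights are scaled so ANN activations stay below v_thresh,
    otherwise the rate code saturates at one spike per timestep."""
    v = [np.zeros(W.shape[0]) for W, _ in layers]       # membrane potentials
    counts = [np.zeros(W.shape[0]) for W, _ in layers]  # spike counters
    for _ in range(T):
        inp = x  # first layer receives the analog input as a constant current
        for i, (W, b) in enumerate(layers):
            v[i] += W @ inp + b                    # integrate synaptic current
            fired = (v[i] >= v_thresh).astype(float)
            v[i] -= fired * v_thresh               # soft reset keeps residual charge
            counts[i] += fired
            inp = fired                            # spikes drive the next layer
    return counts[-1] / T  # output firing rates approximate ReLU activations

# Toy usage: a random 4-8-3 ReLU network.
rng = np.random.default_rng(0)
layers = [(rng.normal(0, 0.3, (8, 4)), np.zeros(8)),
          (rng.normal(0, 0.3, (3, 8)), np.zeros(3))]
print(ann_to_snn_forward(layers, rng.random(4)))

Longer simulation windows (larger T) shrink the quantization error of the rate code at the cost of more spikes and latency.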
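
The abstract also reports communication and energy gains over a baseline graph-partitioning algorithm. The paper's novel algorithm is not reproduced in this record; as a point of reference, the sketch below shows how a generic Kernighan-Lin baseline could partition a synaptic-traffic graph across two NoC cores so that heavily communicating neurons share a core. The traffic-dictionary input format is invented for the example.

import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def partition_snn(traffic):
    """Bisect a synaptic-connectivity graph; edges crossing the cut
    become inter-core NoC packets, so a small cut means less
    inter-core spike traffic."""
    G = nx.Graph()
    for (src, dst), spikes in traffic.items():
        G.add_edge(src, dst, weight=spikes)  # edge weight = estimated spike count
    core_a, core_b = kernighan_lin_bisection(G, weight="weight", seed=0)
    inter_core = nx.cut_size(G, core_a, weight="weight")
    return core_a, core_b, inter_core

# Toy example: two tightly coupled neuron clusters joined by light traffic.
traffic = {(0, 1): 90, (1, 2): 80, (0, 2): 70,
           (3, 4): 95, (4, 5): 85, (3, 5): 75,
           (2, 3): 5}
print(partition_snn(traffic))  # likely separates the two triangles (cut == 5)

Minimizing the weighted cut is the standard proxy for reducing inter-core spike traffic, the kind of communication metric on which the abstract reports its average reductions.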
dc.description.sponsorship: This research was funded, in part, by the Federal Work-Study (FWS) award, the UMBC start-up grant, the National Science Foundation (NSF) (award number 2138253), and Rezonent Inc. (award number CORP0061).
dc.description.uri: https://www.mdpi.com/1424-8220/24/4/1329
dc.format.extent: 14 pages
dc.genre: journal articles
dc.identifier: doi:10.13016/m21uro-j7ya
dc.identifier.citation: Islam, Riadul, Patrick Majurski, Jun Kwon, Anurag Sharma, and Sri Ranga Sai Krishna Tummala. “Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks.” Sensors 24, no. 4 (January 2024): 1329. https://doi.org/10.3390/s24041329.
dc.identifier.uri: https://doi.org/10.3390/s24041329
dc.identifier.uri: http://hdl.handle.net/11603/31840
dc.language.iso: en_US
dc.publisher: MDPI
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Creative Commons Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: ANN
dc.subject: ANN-to-SNN conversion
dc.subject: artificial neural network
dc.subject: CNN
dc.subject: convolutional neural network
dc.subject: low energy
dc.subject: network-on-chip
dc.subject: NoC
dc.subject: SNN
dc.subject: spiking neural network
dc.title: Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-4649-3467

Files

Original bundle
Name: sensors-24-01329.pdf
Size: 956.95 KB
Format: Adobe Portable Document Format