Cyclic Sparsely Connected Architectures for Compact Deep Convolutional Neural Networks

dc.contributor.author: Hosseini, Morteza
dc.contributor.author: Manjunath, Nitheesh
dc.contributor.author: Prakash, Bharat
dc.contributor.author: Mazumder, Arnab
dc.contributor.author: Chandrareddy, Vandana
dc.contributor.author: Homayoun, Houman
dc.contributor.author: Mohsenin, Tinoosh
dc.date.accessioned: 2021-10-08T17:51:25Z
dc.date.available: 2021-10-08T17:51:25Z
dc.date.issued: 2021-09-15
dc.description: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
dc.description.abstract: In deep convolutional neural networks (DCNNs), model size and computation complexity are two important factors governing throughput and energy efficiency when deployed to hardware for inference. Recent works on compact DCNNs as well as pruning methods are effective, yet with drawbacks. For instance, more than half the size of all MobileNet models lies in their last two layers, mainly because compact separable convolution (CONV) layers are not applicable to their last fully connected (FC) layers. Also, in pruning methods, the compression is gained at the expense of irregularity in the DCNN architecture, which necessitates additional indexing memory to address nonzero weights, thereby increasing memory footprint, decompression delays, and energy consumption. In this article, we propose cyclic sparsely connected (CSC) architectures, with memory/computation complexity of O(N log N), where N is the number of nodes/channels of a given DCNN layer, that, contrary to compact depthwise separable layers, can be used as an overlay for both FC and CONV layers of O(N²) complexity. Also, contrary to pruning methods, CSC architectures are structurally sparse and require no indexing due to their cyclic nature. We show that both standard convolution and depthwise convolution layers are special cases of the CSC layers, whose mathematical function, along with FC layers, can be unified into one single formulation and whose hardware implementation can be carried out under one arithmetic logic component. We examine the efficacy of the CSC architectures for compression of LeNet, AlexNet, and MobileNet DCNNs with precision ranging from 2 to 32 bits. More specifically, we focus on the compact 8-bit quantized 0.5 MobileNet V1 and show that by compressing its last two layers with CSC architectures, the model is compressed by ∼1.5× to a size of only 873 kB with little accuracy loss. Finally, we design a configurable hardware that implements all types of DCNN layers, including FC, CONV, depthwise, CSC-FC, and CSC-CONV, indistinguishably within a unified pipeline. We implement the hardware on a tiny Xilinx field-programmable gate array (FPGA) for total on-chip processing of the compressed MobileNet that, compared to the related work, has the highest Inference/J while utilizing the smallest FPGA.
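
Note on the CSC idea sketched in the abstract: the structured, index-free sparsity it refers to can be pictured as a connectivity mask whose nonzero positions are fully determined by the node index and a small set of fixed cyclic offsets. The following minimal NumPy sketch is an illustrative assumption of such a pattern (the function name cyclic_sparse_mask, the power-of-two offsets, and the layer sizes are chosen for illustration only and are not the authors' implementation):

import numpy as np

def cyclic_sparse_mask(n_out, n_in, fan_in):
    # Structured-sparse {0,1} mask: output node i connects to `fan_in` inputs
    # selected by fixed modular (cyclic) offsets, so the nonzero positions can
    # be recomputed on the fly and need no per-weight indexing memory.
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    offsets = [(2 ** k) % n_in for k in range(fan_in)]  # illustrative offset pattern
    for i in range(n_out):
        for off in offsets:
            mask[i, (i + off) % n_in] = 1.0
    return mask

# Example: a 512 -> 512 FC layer with fan-in ~ log2(512) = 9 keeps
# 512 * 9 = 4608 weights (O(N log N)) instead of 262144 (O(N^2)).
m = cyclic_sparse_mask(512, 512, 9)
print(int(m.sum()))  # 4608

Because every nonzero position follows from the node index and the fixed offsets alone, no indexing memory is needed at inference time, which is the property the abstract contrasts with unstructured pruning.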
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/9537909
dc.format.extent: 14 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2synk-6sqz
dc.identifier.citation: Hosseini, Morteza et al.; Cyclic Sparsely Connected Architectures for Compact Deep Convolutional Neural Networks; IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume: 29, Issue: 10, 15 September, 2021; https://doi.org/10.1109/TVLSI.2021.3110250
dc.identifier.uri: https://doi.org/10.1109/TVLSI.2021.3110250
dc.identifier.uri: http://hdl.handle.net/11603/23074
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Cyclic Sparsely Connected Architectures for Compact Deep Convolutional Neural Networks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-7218-7754
dcterms.creator: https://orcid.org/0000-0003-1745-9072
dcterms.creator: https://orcid.org/0000-0002-9550-7917
dcterms.creator: https://orcid.org/0000-0001-5551-2124

Files

Original bundle
Name: Cyclic_Sparsely_Connected_Architectures_for_Compact_Deep_Convolutional_Neural_Networks (1).pdf
Size: 2.12 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission