Cyclic Sparsely Connected Architectures for Compact Deep Convolutional Neural Networks
Links to Files: https://ieeexplore.ieee.org/abstract/document/9537909
Type of Work: conference papers and proceedings (14 pages)
Citation of Original Publication: Hosseini, Morteza, et al.; Cyclic Sparsely Connected Architectures for Compact Deep Convolutional Neural Networks; IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 29, Issue 10, 15 September 2021; https://doi.org/10.1109/TVLSI.2021.3110250
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
In deep convolutional neural networks (DCNNs), model size and computation complexity are two important factors governing throughput and energy efficiency when the networks are deployed to hardware for inference. Recent works on compact DCNNs, as well as pruning methods, are effective, yet each has drawbacks. For instance, more than half the size of all MobileNet models lies in their last two layers, mainly because compact separable convolution (CONV) layers are not applicable to their last fully connected (FC) layers. In pruning methods, compression is gained at the expense of irregularity in the DCNN architecture, which necessitates additional indexing memory to address the nonzero weights, thereby increasing memory footprint, decompression delay, and energy consumption. In this article, we propose cyclic sparsely connected (CSC) architectures with memory/computation complexity of O(N log N), where N is the number of nodes/channels of a given DCNN layer, that, contrary to compact depthwise separable layers, can be used as an overlay for both FC and CONV layers of complexity O(N²). Moreover, contrary to pruning methods, CSC architectures are structurally sparse and require no indexing, owing to their cyclic nature. We show that both standard convolution and depthwise convolution layers are special cases of CSC layers, whose mathematical function, along with that of FC layers, can be unified into a single formulation and whose hardware implementation can be carried out under one arithmetic logic component. We examine the efficacy of CSC architectures for the compression of LeNet, AlexNet, and MobileNet DCNNs with precision ranging from 2 to 32 bits. More specifically, we focus on the compact 8-bit quantized 0.5 MobileNet V1 and show that by compressing its last two layers with CSC architectures, the model is compressed by ∼1.5× to a size of only 873 kB with little accuracy loss.
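The key property claimed above is that CSC sparsity is implied by structure rather than stored indices: each node's connections follow a fixed cyclic pattern, so no per-weight indexing memory is needed. The paper's full construction cascades such layers to reach O(N log N) complexity; the minimal NumPy sketch below only illustrates the indexing-free idea for a single N×N FC layer, using an assumed connectivity in which output node j connects to the fan_in inputs at cyclic offsets j, j+1, ..., j+fan_in-1 (mod N). The function name `csc_mask` and the exact offset pattern are illustrative assumptions, not the paper's precise construction.

```python
import numpy as np

def csc_mask(n: int, fan_in: int) -> np.ndarray:
    """Build an illustrative cyclic sparse mask for an n x n FC layer.

    Output node j connects to inputs (j + k) mod n for k in [0, fan_in).
    NOTE: this offset pattern is an assumption for illustration; the
    nonzero positions are implied by (j, k), so no index memory is needed.
    """
    mask = np.zeros((n, n), dtype=np.uint8)
    for j in range(n):
        for k in range(fan_in):
            mask[j, (j + k) % n] = 1
    return mask

# A CSC-FC layer is then dense weights gated by the structured mask:
rng = np.random.default_rng(0)
n, fan_in = 8, 3
w = rng.standard_normal((n, n)) * csc_mask(n, fan_in)
x = rng.standard_normal(n)
y = w @ x  # only fan_in multiplies per output node contribute

print(int(csc_mask(n, fan_in).sum()))  # n * fan_in = 24 nonzeros, vs n**2 = 64 dense
```

Because every row of the mask is a cyclic shift of the previous row, hardware can generate the connectivity on the fly from a counter, which is why CSC layers avoid the indexing overhead that unstructured pruning incurs.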
Finally, we design configurable hardware that implements all types of DCNN layers, including FC, CONV, depthwise, CSC-FC, and CSC-CONV, indistinguishably within a unified pipeline. We implement the hardware on a tiny Xilinx field-programmable gate array (FPGA) for total on-chip processing of the compressed MobileNet, which, compared with related work, achieves the highest Inference/J while utilizing the smallest FPGA.