Amenable Sparse Network Investigator

dc.contributor.author: Damadi, Saeed
dc.contributor.author: Nouri, Erfan
dc.contributor.author: Pirsiavash, Hamed
dc.date.accessioned: 2022-03-15T16:12:18Z
dc.date.available: 2022-03-15T16:12:18Z
dc.date.issued: 2023-09-01
dc.description: Computing Conference 2023; London, United Kingdom; 22-23 June 2023
dc.description.abstract: As the optimization problem of pruning a neural network is nonconvex and pruning strategies are only guaranteed to find local solutions, a good initialization becomes paramount. To this end, we present the Amenable Sparse Network Investigator (ASNI) algorithm, which learns a sparse network whose initialization is compressed. The sparse structure found by ASNI is amenable because its corresponding initialization, also learned by ASNI, consists of only 2L numbers, where L is the number of layers; requiring just a few numbers to initialize the parameters of the learned sparse network is what makes it amenable. The learned initialization set consists of L signed pairs that act as the centroids of the parameter values of each layer. These centroids are learned by ASNI after only a single round of training. We show experimentally that the learned centroids are sufficient to initialize the nonzero parameters of the learned sparse structure and achieve approximately the accuracy of the non-sparse network. We also show empirically that learning the centroids requires pruning the network globally and gradually. Hence, for parameter pruning we propose a novel strategy based on a sigmoid function that specifies the sparsity percentage across the network globally; pruning is then done magnitude-wise after each epoch of training. We have performed a series of experiments on ResNet, VGG-style, small convolutional, and fully connected networks using the ImageNet, CIFAR10, and MNIST datasets.
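The abstract describes three mechanisms: a sigmoid schedule for the global sparsity percentage, per-epoch global magnitude pruning, and per-layer signed centroid pairs (2L numbers for L layers). A minimal NumPy sketch of these ideas follows; the sigmoid parameterization (the `steepness` constant and its midpoint) is an assumption for illustration, not the paper's exact schedule.

```python
import numpy as np

def sigmoid_sparsity_schedule(epoch, total_epochs, final_sparsity, steepness=10.0):
    """Hypothetical sigmoid schedule: the pruned fraction grows from near 0
    toward final_sparsity as training progresses. The exact parameterization
    in the paper may differ; 'steepness' here is an assumed constant."""
    t = epoch / total_epochs                       # normalized progress in [0, 1]
    s = 1.0 / (1.0 + np.exp(-steepness * (t - 0.5)))
    return final_sparsity * s

def global_magnitude_prune(layer_params, sparsity):
    """Prune the smallest-magnitude parameters *globally* across all layers:
    one threshold is computed over the concatenated magnitudes, so layers
    are not pruned at a fixed per-layer rate."""
    all_mags = np.concatenate([np.abs(p).ravel() for p in layer_params])
    k = int(sparsity * all_mags.size)              # number of weights to zero
    threshold = np.sort(all_mags)[k] if k > 0 else 0.0
    return [np.where(np.abs(p) >= threshold, p, 0.0) for p in layer_params]

def layer_centroids(layer_params):
    """One signed pair per layer (2L numbers total): the mean of the
    surviving positive parameters and the mean of the surviving negative
    parameters, used to re-initialize the nonzero entries of the mask."""
    pairs = []
    for p in layer_params:
        pos = p[p > 0]
        neg = p[p < 0]
        pairs.append((pos.mean() if pos.size else 0.0,
                      neg.mean() if neg.size else 0.0))
    return pairs
```

In this sketch, re-initializing the sparse network amounts to replacing every surviving positive weight in a layer with that layer's positive centroid and every surviving negative weight with its negative centroid, so the whole initialization is described by the 2L centroid values plus the binary mask.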
dc.description.uri: https://link.springer.com/chapter/10.1007/978-3-031-37717-4_27
dc.format.extent: 20 pages
dc.genre: book chapters
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2bmpe-k7pg
dc.identifier.citation: Damadi, Saeed, Erfan Nouri, and Hamed Pirsiavash. “Amenable Sparse Network Investigator.” In Intelligent Computing, edited by Kohei Arai, 408–27. Lecture Notes in Networks and Systems. Cham: Springer Nature Switzerland, 2023. https://doi.org/10.1007/978-3-031-37717-4_27.
dc.identifier.uri: https://doi.org/10.1007/978-3-031-37717-4_27
dc.identifier.uri: http://hdl.handle.net/11603/24390
dc.language.iso: en_US
dc.publisher: Springer
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Access to this item will begin 09-01-2024.
dc.title: Amenable Sparse Network Investigator
dc.type: Text

Files

Original bundle

Name: 2202.09284.pdf
Size: 1.78 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission