Compression of deep neural networks
dc.contributor.advisor | Pirsiavash, Hamed | |
dc.contributor.author | Damadi, Seyyed Mohammad Saeed | |
dc.contributor.department | Computer Science and Electrical Engineering | |
dc.contributor.program | Engineering, Electrical | |
dc.date.accessioned | 2024-01-10T20:04:10Z | |
dc.date.available | 2024-01-10T20:04:10Z | |
dc.date.issued | 2021-01-01 | |
dc.description.abstract | Compression of deep neural networks aims at finding a sparse network that performs as well as a dense network but with significantly fewer parameters. This compression is called pruning. As a result of pruning, energy consumption is reduced, hardware requirements are relaxed, and responses to queries become faster. The pruning problem yields a constrained, stochastic, nonconvex, and non-differentiable optimization problem of very large size. All these barriers can be bypassed by solving an approximate problem. To do so, we present the "Amenable Sparse Network Investigator" (ASNI) algorithm, which utilizes a novel pruning strategy based on a sigmoid function that induces the sparsity level globally over the course of a single round of training. The ASNI algorithm fulfills both tasks of which current state-of-the-art strategies can accomplish only one. The algorithm has two subalgorithms: 1) ASNI-I and 2) ASNI-II. ASNI-I learns an accurate sparse off-the-shelf network in only a single round of training. ASNI-II learns a sparse network together with an initialization that is quantized, compressed, and from which the sparse network is trainable. | |
dc.format | application/pdf | |
dc.genre | thesis | |
dc.identifier.other | 12480 | |
dc.identifier.uri | http://hdl.handle.net/11603/31258 | |
dc.language | en | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Theses and Dissertations Collection | |
dc.relation.ispartof | UMBC Graduate School Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu | |
dc.source | Original File Name: DAMADI_umbc_0434M_12480.pdf | |
dc.subject | ASNI | |
dc.subject | Compressed quantized initialization | |
dc.subject | Compression of deep neural networks | |
dc.subject | Deep learning | |
dc.subject | Pruning | |
dc.title | Compression of deep neural networks | |
dc.type | Text | |
dcterms.accessRights | Distribution Rights granted to UMBC by the author. |