Title: PRANC: Pseudo RAndom Networks for Compacting deep models
Authors: Nooralinejad, Parsa; Abbasi, Ali; Kolouri, Soheil; Pirsiavash, Hamed
Dates: 2022-11-14; 2022-11-14; 2022-06-16
URI: http://hdl.handle.net/11603/26316
Conference: International Conference on Computer Vision; Paris, France; October 2-6, 2023
Abstract: Communication becomes a bottleneck in various distributed machine learning settings. Here, we propose a novel training framework that leads to highly efficient communication of models between agents. In short, we train our network to be a linear combination of many pseudo-randomly generated frozen models. For communication, the source agent transmits only the 'seed' scalar used to generate the pseudo-random 'basis' networks along with the learned linear mixture coefficients. Our method, denoted as PRANC, learns almost 100× fewer parameters than a deep model and still performs well on several datasets and architectures. PRANC enables 1) efficient communication of models between agents, 2) efficient model storage, and 3) accelerated inference by generating layer-wise weights on the fly. We test PRANC on CIFAR-10, CIFAR-100, tinyImageNet, and ImageNet-100 with various architectures, including AlexNet, LeNet, ResNet18, ResNet20, and ResNet56, and demonstrate a massive reduction in the number of parameters while providing satisfactory performance on these benchmark datasets. The code is available at https://github.com/UCDvision/PRANC
Extent: 11 pages
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Type: Text
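
Note: The abstract describes reconstructing model weights as a linear combination of frozen pseudo-random basis networks, so that only a seed scalar and the learned coefficients need to be communicated. The following is a minimal PyTorch sketch of that idea, not the authors' actual implementation (see the linked repository for that); the function names, shapes, and the single-seed layout are illustrative assumptions.

    # Minimal sketch: weights theta = sum_i alpha_i * b_i, where the basis
    # vectors b_i are regenerated deterministically from a scalar seed.
    # Names and shapes are illustrative, not the authors' API.
    import torch

    def generate_basis(seed: int, num_basis: int, num_params: int) -> torch.Tensor:
        """Regenerate the frozen pseudo-random basis from a single scalar seed."""
        gen = torch.Generator().manual_seed(seed)
        # Each row is one flattened 'basis network' with as many entries as the model.
        return torch.randn(num_basis, num_params, generator=gen)

    def reconstruct_weights(seed: int, alpha: torch.Tensor, num_params: int) -> torch.Tensor:
        """Rebuild the flattened model weights as a linear mixture of the basis."""
        basis = generate_basis(seed, alpha.numel(), num_params)
        return alpha @ basis  # (num_basis,) @ (num_basis, num_params) -> (num_params,)

    # A 'sender' only transmits (seed, alpha); the receiver regenerates the same
    # basis from the seed and recovers identical weights.
    seed, num_basis, num_params = 42, 100, 10_000
    alpha = torch.randn(num_basis)  # stand-in for the learned mixture coefficients
    theta_sender = reconstruct_weights(seed, alpha, num_params)
    theta_receiver = reconstruct_weights(seed, alpha, num_params)
    assert torch.equal(theta_sender, theta_receiver)

In practice the basis would be generated chunk by chunk (e.g. per layer) rather than materialized all at once, which is consistent with the abstract's note about generating layer-wise weights on the fly.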