ViT-Reg: Regression-Focused Hardware-Aware Fine-Tuning for ViT on tinyML Platforms

dc.contributor.author: Shaharear, Md Ragib
dc.contributor.author: Mazumder, Arnab Neelim
dc.contributor.author: Mohsenin, Tinoosh
dc.date.accessioned: 2025-01-31T18:24:14Z
dc.date.available: 2025-01-31T18:24:14Z
dc.date.issued: 2024-12-23
dc.description.abstract: Vision Transformers (ViTs) have demonstrated significant improvements in image classification tasks. However, deploying them on resource-constrained tinyML platforms presents considerable challenges due to their high computational demands and dynamic power consumption. Current methods rely heavily on computationally intensive architecture search techniques to identify optimal configurations, which are not well suited for tinyML devices. This paper introduces ViT-Reg, a regression-based hardware-aware fine-tuning approach that identifies suitable ViT architectures for tinyML platforms. The proposed method enables efficient exploration of the configuration space, drastically reducing the computational overhead typically associated with architecture searches. ViT-Reg is hardware-aware, using polynomial regression to narrow the search space while treating accuracy as a constraint. In experiments on the CIFAR-10 and Tiny-ImageNet datasets, ViT-Reg deployed on the NVIDIA Jetson Nano achieved 55.6% and 37.4% reductions in dynamic power consumption, along with 65% and 60% improvements in energy efficiency, compared to baseline ViT models. Finally, ViT-Reg provides an 8× improvement in energy efficiency relative to recent hardware implementations of the VGG model.
dc.description.sponsorship: This work was supported by the U.S. Army Contracting Command under Cooperative Agreement W911NF24-2-0222. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
dc.description.uri: https://ieeexplore.ieee.org/document/10811941
dc.format.extent: 8 pages
dc.genre: journal articles
dc.genre: postprints
dc.identifier: doi:10.13016/m2hzna-k10r
dc.identifier.citation: Shaharear, Md Ragib, Arnab Neelim Mazumder, and Tinoosh Mohsenin. "ViT-Reg: Regression-Focused Hardware-Aware Fine-Tuning for ViT on tinyML Platforms." IEEE Design & Test (December 23, 2024): 1–1. https://doi.org/10.1109/MDAT.2024.3521320.
dc.identifier.uri: https://doi.org/10.1109/MDAT.2024.3521320
dc.identifier.uri: http://hdl.handle.net/11603/37574
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: real-time and energy-efficient deployment
dc.subject: Computer vision
dc.subject: Computer architecture
dc.subject: Hardware
dc.subject: tinyML Hardware
dc.subject: Polynomials
dc.subject: Head
dc.subject: Tiny machine learning
dc.subject: Power demand
dc.subject: Accuracy
dc.subject: Regression
dc.subject: Vision Transformer
dc.subject: Training
dc.subject: Transformers
dc.title: ViT-Reg: Regression-Focused Hardware-Aware Fine-Tuning for ViT on tinyML Platforms
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-9550-7917

Files

Original bundle

Name: ViTReg_RegressionFocused_HardwareAware_FineTuning_for_ViT_on_tinyML_Platforms.pdf
Size: 4.08 MB
Format: Adobe Portable Document Format