ViT-Reg: Regression-Focused Hardware-Aware Fine-Tuning for ViT on tinyML Platforms


Citation of Original Publication

Shaharear, Md Ragib, Arnab Neelim Mazumder, and Tinoosh Mohsenin. "ViT-Reg: Regression-Focused Hardware-Aware Fine-Tuning for ViT on tinyML Platforms." IEEE Design & Test (December 23, 2024): 1–1. https://doi.org/10.1109/MDAT.2024.3521320.

Rights

© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract

Vision Transformers (ViTs) have demonstrated significant improvements in image classification tasks. However, deploying them on resource-constrained tinyML platforms presents considerable challenges due to their high computational demands and dynamic power consumption. Current methods rely heavily on computationally intensive architecture search techniques to identify optimal configurations, which are not well-suited for tinyML devices. This paper introduces ViT-Reg, a regression-based hardware-aware fine-tuning approach that identifies suitable ViT architectures for tinyML platforms. The proposed method enables efficient exploration of the configuration space, drastically reducing the computational overhead typically associated with architecture searches. ViT-Reg is hardware-aware, utilizing polynomial regression to narrow the search space while treating accuracy as a constraint. In experiments conducted on the CIFAR-10 and Tiny-ImageNet datasets, ViT-Reg deployed on Nvidia Jetson Nano achieved a 55.6% and 37.4% reduction in dynamic power consumption, along with a 65% and 60% improvement in energy efficiency compared to baseline ViT models. Finally, ViT-Reg provides an 8× improvement in energy efficiency relative to recent hardware implementations of the VGG model.
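The abstract describes using polynomial regression to narrow the ViT configuration search space while treating accuracy as a constraint. The sketch below is a minimal, hypothetical illustration of that general idea (it is not the paper's actual method): fit low-degree polynomial models that predict power and accuracy from a single configuration knob, then keep only candidates whose predicted accuracy meets a threshold and select the lowest-power one. All configuration values, measurements, and the 0.85 accuracy threshold are invented for illustration.

```python
import numpy as np

# Hypothetical profiled configurations: embedding dimension vs. measured
# dynamic power (mW) and top-1 accuracy. Numbers are made up.
dims = np.array([64.0, 96.0, 128.0, 192.0, 256.0])
power_mw = np.array([310.0, 420.0, 560.0, 830.0, 1150.0])
accuracy = np.array([0.78, 0.84, 0.88, 0.90, 0.91])

# Degree-2 polynomial regression for each metric.
power_fit = np.polyfit(dims, power_mw, 2)
acc_fit = np.polyfit(dims, accuracy, 2)

# Evaluate a finer grid of candidate configurations with the cheap
# regression models instead of training/profiling each one.
candidates = np.arange(64.0, 257.0, 8.0)
pred_power = np.polyval(power_fit, candidates)
pred_acc = np.polyval(acc_fit, candidates)

# Accuracy is treated as a constraint; power is minimized among the
# feasible candidates.
feasible = candidates[pred_acc >= 0.85]
best = feasible[np.argmin(pred_power[pred_acc >= 0.85])]
print(f"selected embedding dim: {int(best)}")
```

Because the regression models are evaluated analytically, the search over the candidate grid costs essentially nothing compared with training or profiling each configuration, which is the source of the computational savings the abstract claims.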