Rethinking Latency-Aware DNN Design With GPU Tail Effect Analysis
Citation of Original Publication
Yu, Fuxun, Zirui Xu, Longfei Shangguan, Di Wang, Dimitrios Stamoulis, Rishi Madhok, Nikolaos Karianakis, et al. “Rethinking Latency-Aware DNN Design With GPU Tail Effect Analysis.” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024. https://doi.org/10.1109/TCAD.2024.3404413.
Rights
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
As the size of Deep Neural Networks (DNNs) continues to grow, their runtime latency scales accordingly. While model pruning and Neural Architecture Search (NAS) can effectively reduce the computation workload, this reduction does not consistently translate into lower runtime latency. In this paper, we identify the root cause of the mismatch between workload reduction and latency reduction: the GPU tail effect, a classic system issue caused by resource under-utilization in the last processing wave of the GPU. We conduct detailed DNN workload characterization, demonstrate the prevalence of the tail effect across different DNN architectures, and reveal that the deep structure and lightweight per-layer workload of DNNs exacerbate the tail effect during inference. We then propose a tail-aware design space enhancement and a DNN optimization algorithm that improve existing NAS and pruning designs, achieving better runtime latency and model accuracy. Extensive experiments show an 11%-27% latency reduction over state-of-the-art (SOTA) DNN pruning and NAS methods.
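For intuition only (not taken from the paper), the tail effect described in the abstract can be illustrated with a simple wave-quantization estimate: a kernel's thread blocks are scheduled in waves across the GPU's streaming multiprocessors (SMs), so latency scales roughly with the number of waves rather than with the raw workload, and a nearly empty last wave leaves most of the GPU idle. The sketch below uses hypothetical values for the SM count and blocks-per-SM occupancy.

```python
import math

def wave_stats(num_blocks: int, num_sms: int = 80, blocks_per_sm: int = 2):
    """Rough wave-quantization estimate of GPU kernel scheduling.

    num_blocks:     thread blocks launched by the kernel
    num_sms:        streaming multiprocessors on the GPU (hypothetical value)
    blocks_per_sm:  blocks that can run concurrently per SM (occupancy-dependent)
    """
    per_wave = num_sms * blocks_per_sm           # blocks processed per full wave
    waves = math.ceil(num_blocks / per_wave)     # latency scales roughly with this
    tail_blocks = num_blocks - (waves - 1) * per_wave
    tail_util = tail_blocks / per_wave           # utilization of the last (tail) wave
    return waves, tail_util

# Example: shrinking a layer from 170 to 161 blocks still needs 2 waves on this
# hypothetical GPU (160 blocks per wave), so the workload reduction yields almost
# no latency reduction -- the mismatch the abstract attributes to the tail effect.
print(wave_stats(170))   # (2, ~0.06)
print(wave_stats(161))   # (2, ~0.006)
print(wave_stats(160))   # (1, 1.0)  -- one fewer block removes an entire wave
```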
