A Survey of Large-Scale Deep Learning Serving System Optimization: Challenges and Opportunities
Date
2021-11-28
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Attribution 4.0 International (CC BY 4.0)
Abstract
Deep Learning (DL) models have achieved superior performance in many application domains, including vision, language, medicine, commercial advertising, and entertainment. With this rapid development, both DL applications and the underlying serving hardware have demonstrated strong scaling trends, i.e., Model Scaling and Compute Scaling: for example, recent pre-trained models with hundreds of billions of parameters and ∼TB-level memory consumption, as well as the newest GPU accelerators providing hundreds of TFLOPS. Under both scaling trends, new problems and challenges emerge in DL inference serving, which is gradually evolving towards the Large-scale Deep learning Serving system (LDS). This survey aims to summarize and categorize the emerging challenges and optimization opportunities for large-scale deep learning serving systems. By providing a novel taxonomy, summarizing the computing paradigms, and elaborating on recent technical advances, we hope this survey can shed light on new optimization perspectives and motivate novel work in large-scale deep learning system optimization.