Heterogeneous Scheduling of Deep Neural Networks

Author/Creator

Author/Creator ORCID

Date

2021-01-01

Department

Computer Science and Electrical Engineering

Program

Engineering, Computer

Citation of Original Publication

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.

Abstract

Deep neural networks (DNNs) have become the readiest answer to a range of application challenges, including image recognition, stock analysis, natural language processing, and biomedical applications, outperforming prior leading solutions that relied heavily on hand-engineered techniques. However, deploying these networks often demands substantial computation and memory, which makes it challenging to deploy DNNs in embedded, real-time, low-power applications where classic architectures, CPUs and GPUs, still impose significant power burdens. Systems-on-chip with FPGAs can improve performance and allow finer-grained control of resources than CPUs or GPUs, but it is challenging to find the optimal balance between hardware and software to improve DNN efficiency, and few solutions in the current research literature address optimized hardware and software deployments of DNNs. To address the computational restrictions and low-power needs of deploying these networks, we describe and implement a domain-specific metric model for optimizing task deployment on differing platforms, hardware and software. Next, we discuss our DNN hardware accelerator, SCALENet: a SCalable Low-power AccELerator for real-time DNNs that includes multithreaded software workers. Contained within the framework is a heterogeneous-aware scheduler that uses DNN-specific metric models, software-optimized kernels, and the SCALENet accelerator to allocate each task to a resource by solving a numerical cost function over a series of domain objectives. To demonstrate the applicability of our contributions, we deploy nine modern deep network architectures, each containing a different number of parameters, within the context of two neural network applications: image processing and biomedical seizure detection.
Utilizing the metric modeling techniques integrated into the heterogeneous-aware scheduler, we show the ability to meet computational requirements, adapt to multiple architectures, and lower power by providing an optimized task-to-resource allocation. Our heterogeneous-aware scheduler decreases total power consumption by 10%, does not affect the accuracy of the networks, and meets real-time deadlines. We achieve parity with or exceed the energy efficiency of NVIDIA GPUs when evaluated against the Jetson TK1 with its embedded GPU SoC, with a 4x power savings in a power envelope of 2.0 Watts. When evaluated with the CIFAR-10 dataset and a batch size of 1 against the NVIDIA Jetson TX1 and TX2, SCALENet achieves throughput improvements of 2.2x and 1.3x over the TX1 and TX2 respectively, while improving energy efficiency by 3.7x and 1.9x. Compared to existing FPGA-based accelerators, SCALENet's accelerator and heterogeneous-aware scheduler achieve a 1.3x improvement in energy efficiency.
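The abstract describes the heterogeneous-aware scheduler as allocating each task to a resource by solving a numerical cost over a set of domain objectives. The following is a minimal illustrative sketch of that idea, not the thesis's actual implementation: all names (`schedule`, `fpga_accel`, `cpu_worker`, the objective names, and the cost values) are hypothetical, and the per-task, per-objective costs would in practice come from the DNN-specific metric models.

```python
def schedule(tasks, resources, weights):
    """Assign each task to the resource whose weighted cost
    across the domain objectives (e.g. latency, power) is lowest."""
    allocation = {}
    for task in tasks:
        def total_cost(res):
            # Weighted sum of this resource's per-objective costs for the task.
            return sum(w * res["cost"][task][obj] for obj, w in weights.items())
        allocation[task] = min(resources, key=total_cost)["name"]
    return allocation

# Illustrative inputs: two tasks, two candidate resources with
# hypothetical per-objective costs (lower is better).
tasks = ["conv1", "fc1"]
resources = [
    {"name": "fpga_accel",
     "cost": {"conv1": {"latency": 1.0, "power": 0.5},
              "fc1":   {"latency": 4.0, "power": 0.6}}},
    {"name": "cpu_worker",
     "cost": {"conv1": {"latency": 3.0, "power": 1.5},
              "fc1":   {"latency": 1.5, "power": 1.0}}},
]
weights = {"latency": 0.7, "power": 0.3}

print(schedule(tasks, resources, weights))
# → {'conv1': 'fpga_accel', 'fc1': 'cpu_worker'}
```

With these assumed weights, the convolution lands on the accelerator and the fully connected layer stays on a software worker; shifting the weights toward the power objective would change the allocation, which is the behavior the scheduler exploits to trade throughput against total power.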