Software-Defined Design Space Exploration for an Efficient DNN Accelerator Architecture
Author(s): Yu, Ye; Li, Yingmin; Che, Shuai; Jha, Niraj K; Zhang, Weifeng
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1t727g2d
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yu, Ye | - |
dc.contributor.author | Li, Yingmin | - |
dc.contributor.author | Che, Shuai | - |
dc.contributor.author | Jha, Niraj K | - |
dc.contributor.author | Zhang, Weifeng | - |
dc.date.accessioned | 2024-01-07T03:32:02Z | - |
dc.date.available | 2024-01-07T03:32:02Z | - |
dc.date.issued | 2020-04-20 | en_US |
dc.identifier.citation | Yu, Ye, Li, Yingmin, Che, Shuai, Jha, Niraj K, Zhang, Weifeng. (2021). Software-Defined Design Space Exploration for an Efficient DNN Accelerator Architecture. IEEE Transactions on Computers, 70 (1), 45 - 56. doi:10.1109/tc.2020.2983694 | en_US |
dc.identifier.issn | 0018-9340 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1t727g2d | - |
dc.description.abstract | Deep neural networks (DNNs) have been shown to outperform conventional machine learning algorithms across a wide range of applications, e.g., image recognition, object detection, robotics, and natural language processing. However, the high computational complexity of DNNs often necessitates extremely fast and efficient hardware. The problem gets worse as the size of neural networks grows exponentially. As a result, customized hardware accelerators have been developed to accelerate DNN processing without sacrificing model accuracy. However, previous accelerator design studies have not fully considered the characteristics of the target applications, which may lead to sub-optimal architecture designs. On the other hand, new DNN models have been developed for better accuracy, but their compatibility with the underlying hardware accelerator is often overlooked. In this article, we propose an application-driven framework for architectural design space exploration of DNN accelerators. This framework is based on a hardware analytical model of individual DNN operations. It models the accelerator design task as a multi-dimensional optimization problem. We demonstrate that it can be efficaciously used in application-driven accelerator architecture design: we use the framework to optimize the accelerator configurations for eight representative DNNs and select the configuration with the highest geometric mean performance. The geometric mean performance improvement of the selected DNN configuration relative to the architectural configuration optimized only for each individual DNN ranges from 12.0 to 117.9 percent. Given a target DNN, the framework can generate efficient accelerator design solutions with optimized performance and area. Furthermore, we explore the opportunity to use the framework for accelerator configuration optimization under simultaneous diverse DNN applications. The framework is also capable of improving neural network models to best fit the underlying hardware resources. We demonstrate that it can be used to analyze the relationship between the operations of the target DNNs and the corresponding accelerator configurations, based on which the DNNs can be tuned for better processing efficiency on the given accelerator without sacrificing accuracy. | en_US |
dc.format.extent | 45 - 56 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | IEEE Transactions on Computers | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Software-Defined Design Space Exploration for an Efficient DNN Accelerator Architecture | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | doi:10.1109/tc.2020.2983694 | - |
dc.identifier.eissn | 1557-9956 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
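The abstract describes selecting, from several candidate accelerator configurations, the one with the highest geometric mean performance across eight DNNs. As an illustration only (not the authors' code; configuration names and throughput numbers below are hypothetical), that selection step can be sketched as:

```python
import math

# Hypothetical per-DNN throughput (arbitrary units) for two candidate
# accelerator configurations; the paper evaluates eight DNNs per config.
perf = {
    "config_A": [1.0, 2.0, 4.0],
    "config_B": [1.5, 1.5, 3.0],
}

def geomean(xs):
    """Geometric mean of positive performance numbers, via log-sum-exp
    to avoid overflow when multiplying many values."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Pick the configuration whose geometric mean across all DNNs is highest.
best = max(perf, key=lambda c: geomean(perf[c]))
```

The geometric mean (rather than the arithmetic mean) rewards configurations that perform consistently well across all target DNNs instead of excelling on one while collapsing on another.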
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1903.07676.pdf | | 10.15 MB | Adobe PDF | View/Download |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.