
SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference

Author(s): Yu, Ye; Jha, Niraj K

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1k93162m
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yu, Ye | -
dc.contributor.author | Jha, Niraj K | -
dc.date.accessioned | 2023-12-24T15:57:29Z | -
dc.date.available | 2023-12-24T15:57:29Z | -
dc.date.issued | 2020-06-18 | en_US
dc.identifier.citation | Yu, Ye; Jha, Niraj K. (2022). SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference. IEEE Transactions on Emerging Topics in Computing, 10(1), 237-249. doi:10.1109/tetc.2020.3003328 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1k93162m | -
dc.description.abstract | Convolutional neural networks (CNNs) outperform traditional machine learning algorithms across a wide range of applications, such as object recognition, image segmentation, and autonomous driving. However, their ever-growing computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring various dataflow styles and designs that exploit computational parallelism. However, the potential speedup from sparsity has not been adequately addressed: the computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited during network evaluation. To further improve performance and energy efficiency, some accelerators evaluate CNNs with limited precision. However, this is limited to the inference phase, since reduced precision sacrifices network accuracy if used in training. In addition, CNN evaluation is usually memory-intensive, especially during training; the performance bottleneck arises because the memory cannot feed the computational units enough data, leaving those units idle and utilization low. In this article, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING uses a binary mask scheme to encode the sparsity of activations and weights, and the stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially during training, SPRING uses an efficient monolithic 3D nonvolatile memory interface to increase memory bandwidth. Compared to an Nvidia GeForce GTX 1080 Ti, SPRING achieves a 15.6× speedup, a 4.2× power reduction, and a 66.0× improvement in energy efficiency for CNN training, and a 15.5× speedup, a 4.5× power reduction, and a 69.1× improvement in energy efficiency for inference. | en_US
dc.format.extent | 237-249 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Transactions on Emerging Topics in Computing | en_US
dc.rights | Author's manuscript | en_US
dc.title | SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/tetc.2020.3003328 | -
dc.identifier.eissn | 2168-6750 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
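
The abstract above names two techniques that are simple enough to illustrate in isolation. First, the binary mask scheme: sparsity in activations and weights is encoded by storing one bit per element (nonzero or not) alongside a densely packed array of the nonzero values. Below is a minimal NumPy sketch of that idea; the function names, flat scan order, and round-trip test are illustrative assumptions, not details from the paper.

    import numpy as np

    def encode_binary_mask(tensor):
        """Encode a sparse tensor as (packed bitmask, nonzero values, shape).

        One bit per element marks whether it is nonzero; the nonzero
        values themselves are stored densely in flat scan order.
        """
        flat = tensor.ravel()
        mask = flat != 0
        return np.packbits(mask), flat[mask], tensor.shape

    def decode_binary_mask(packed_mask, values, shape):
        """Scatter the packed nonzero values back into a dense tensor."""
        n = int(np.prod(shape))
        mask = np.unpackbits(packed_mask, count=n).astype(bool)
        flat = np.zeros(n, dtype=values.dtype)
        flat[mask] = values
        return flat.reshape(shape)

    # Round-trip check on a ReLU-style activation map that is mostly zeros.
    acts = np.maximum(np.random.randn(4, 8).astype(np.float32) - 0.5, 0)
    mask_bits, nonzeros, shape = encode_binary_mask(acts)
    assert np.array_equal(decode_binary_mask(mask_bits, nonzeros, shape), acts)

Because ReLU activations are typically majority-zero, one bit per element plus the packed nonzero values occupies far less memory than the dense tensor, which is where the compute and footprint savings claimed in the abstract come from.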
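
Second, stochastic rounding: a value is rounded up to the next representable number with probability equal to its fractional distance to the lower one, so the rounding error is zero in expectation. A minimal sketch follows, assuming a plain fixed-point grid with a configurable number of fractional bits; SPRING's actual number format is not specified in the abstract.

    import numpy as np

    def stochastic_round(x, num_frac_bits=8, rng=None):
        """Round x onto a fixed-point grid with spacing 2**-num_frac_bits.

        Each value is rounded up with probability equal to its fractional
        distance to the lower grid point, so E[rounded] == x (unbiased).
        """
        rng = np.random.default_rng() if rng is None else rng
        scale = 2.0 ** num_frac_bits
        scaled = np.asarray(x, dtype=np.float64) * scale
        lower = np.floor(scaled)
        prob_up = scaled - lower                     # in [0, 1)
        rounded = lower + (rng.random(scaled.shape) < prob_up)
        return rounded / scale

    # Unbiasedness check: averaging many stochastic roundings recovers x
    # even though the grid step (1/16 here) is much coarser than x.
    x = np.full(100_000, 0.3712)
    print(stochastic_round(x, num_frac_bits=4).mean())  # ~0.3712

The unbiasedness is the key property for training: deterministic round-to-nearest discards weight updates smaller than half a grid step, whereas stochastic rounding preserves them on average, which is why reduced-precision training can avoid accuracy loss.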

Files in This Item:

File | Description | Size | Format
1909.00557.pdf |  | 6.66 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.