
SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference

Author(s): Yu, Ye; Jha, Niraj K.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1k93162m
Abstract: Convolutional neural networks (CNNs) outperform traditional machine learning algorithms across a wide range of applications, such as object recognition, image segmentation, and autonomous driving. However, their ever-growing computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring various dataflow styles and designs that exploit computational parallelism. However, the potential speedup from sparsity has not been adequately addressed: the computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited during network evaluation. To further improve performance and energy efficiency, some accelerators evaluate CNNs with limited precision. However, this is limited to the inference phase, since reduced precision sacrifices network accuracy when used in training. In addition, CNN evaluation is usually memory-intensive, especially during training. The performance bottleneck arises because the memory cannot feed the computational units enough data, so these units idle and their utilization stays low. In this article, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING uses a binary mask scheme to encode the sparsity of activations and weights, and it uses the stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially during training, SPRING uses an efficient monolithic 3D nonvolatile memory interface to increase memory bandwidth. Compared to the Nvidia GeForce GTX 1080 Ti, SPRING achieves a 15.6× speedup, 4.2× power reduction, and 66.0× energy-efficiency improvement for CNN training, and a 15.5× speedup, 4.5× power reduction, and 69.1× energy-efficiency improvement for inference.
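
As a concrete illustration of the binary mask scheme the abstract mentions, the sketch below encodes a sparse tensor as a one-bit-per-element mask plus a packed list of nonzero values. This is a minimal NumPy illustration of the general idea, not SPRING's actual hardware encoding; the function names are hypothetical.

```python
import numpy as np

def encode_binary_mask(tensor):
    # One bit per element marks which entries are nonzero; only the
    # nonzero values themselves are stored. (Hypothetical sketch, not
    # SPRING's actual hardware format.)
    mask = tensor != 0
    values = tensor[mask]
    return mask, values

def decode_binary_mask(mask, values):
    # Rebuild the dense tensor by scattering values back where mask is set.
    dense = np.zeros(mask.shape, dtype=values.dtype)
    dense[mask] = values
    return dense

# ReLU activation maps are typically highly sparse, so the packed
# representation is much smaller than the dense tensor.
acts = np.maximum(np.random.randn(4, 4).astype(np.float32), 0)
mask, vals = encode_binary_mask(acts)
assert np.array_equal(decode_binary_mask(mask, vals), acts)
```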
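The abstract also relies on stochastic rounding for reduced-precision training. A minimal sketch, assuming a simple fixed-point grid (the exact format and hardware RNG SPRING uses are not specified here): rounding up with probability equal to the fractional distance makes the rounding error zero in expectation, which keeps small gradient updates from being systematically flushed to zero.

```python
import numpy as np

def stochastic_round(x, frac_bits=8, rng=None):
    # Quantize x to a fixed-point grid with spacing 2**-frac_bits,
    # rounding up with probability equal to the fractional remainder,
    # so E[stochastic_round(x)] == x. (Illustrative sketch only.)
    rng = rng or np.random.default_rng()
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    floor = np.floor(scaled)
    frac = scaled - floor                      # distance above the lower grid point
    round_up = rng.random(scaled.shape) < frac
    return (floor + round_up) / scale

# Tiny gradient updates survive in expectation instead of being
# lost to round-to-nearest at low precision.
grads = np.array([0.0012, -0.0007, 0.0300])
print(stochastic_round(grads))
```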
Publication Date: 18-Jun-2020
Citation: Yu, Ye; Jha, Niraj K. (2022). SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference. IEEE Transactions on Emerging Topics in Computing, 10(1), 237–249. doi:10.1109/tetc.2020.3003328
DOI: 10.1109/tetc.2020.3003328
EISSN: 2168-6750
Pages: 237–249
Type of Material: Journal Article
Journal/Proceeding Title: IEEE Transactions on Emerging Topics in Computing
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.