SCANN: Synthesis of Compact and Accurate Neural Networks

Author(s): Hassantabar, Shayan; Wang, Zeyu; Jha, Niraj K

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1ng4gr88
Full metadata record
DC Field | Value | Language
dc.contributor.author | Hassantabar, Shayan | -
dc.contributor.author | Wang, Zeyu | -
dc.contributor.author | Jha, Niraj K | -
dc.date.accessioned | 2024-01-07T15:44:41Z | -
dc.date.available | 2024-01-07T15:44:41Z | -
dc.date.issued | 2021-09-29 | en_US
dc.identifier.citation | Hassantabar, Shayan, Wang, Zeyu, Jha, Niraj K. (2022). SCANN: Synthesis of Compact and Accurate Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41 (9), 3012 - 3025. doi:10.1109/tcad.2021.3116470 | en_US
dc.identifier.issn | 0278-0070 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1ng4gr88 | -
dc.description.abstract | Deep neural networks (DNNs) have become the driving force behind recent artificial intelligence (AI) research. With the help of a vast amount of training data, neural networks can outperform traditional machine learning algorithms in many applications. An important problem in implementing a neural network is the design of its architecture. Typically, such an architecture is obtained manually by exploring its hyperparameter space and is kept fixed during training. This approach is both time-consuming and inefficient. Another issue is that modern neural networks often contain millions of parameters, whereas many applications require small inference models due to imposed resource constraints, such as energy constraints on battery-operated devices. However, efforts to migrate DNNs to such devices typically entail a significant loss of classification accuracy. To address these challenges, we propose a two-step neural network synthesis methodology, called DR+SCANN, that combines two complementary approaches to design compact and accurate DNNs. At the core of our framework is the SCANN methodology, which uses three basic architecture-changing operations, namely connection growth, neuron growth, and connection pruning, to synthesize feed-forward architectures with arbitrary structure. These neural networks are not limited to the multilayer perceptron structure. SCANN encapsulates three synthesis methodologies that apply a repeated grow-and-prune paradigm to three architectural starting points. DR+SCANN combines the SCANN methodology with dataset dimensionality reduction to alleviate the curse of dimensionality. We demonstrate the efficacy of SCANN and DR+SCANN on various image and non-image datasets. We evaluate SCANN on the MNIST and ImageNet benchmarks. Without any loss in accuracy, SCANN generates a network 46.3× smaller than the LeNet-5 Caffe model. We also compare SCANN-synthesized networks with a state-of-the-art fully connected feed-forward model for MNIST, and show a 20× (19.9×) reduction in the number of parameters (floating-point operations) with little drop in accuracy. On the ImageNet dataset, for the VGG-16 and MobileNetV2 architectures, we reduce the network parameters by 8.0× and 1.3×, respectively, while matching or improving on their baselines. We also evaluate the efficacy of using dimensionality reduction alongside SCANN (DR+SCANN) on nine small- to medium-size datasets. This methodology enables us to reduce the number of connections in the network by up to 5078.7× (geometric mean: 82.1×), with little to no drop in accuracy. We also show that our synthesis methodology yields neural networks that are much better at navigating the accuracy vs. energy efficiency space. This would enable neural network-based inference even on Internet-of-Things sensors. | en_US
dc.format.extent | 3012 - 3025 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | en_US
dc.rights | Author's manuscript | en_US
dc.title | SCANN: Synthesis of Compact and Accurate Neural Networks | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/tcad.2021.3116470 | -
dc.identifier.eissn | 1937-4151 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
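
The core mechanism named in the abstract above is a repeated grow-and-prune loop over three architecture-changing operations: connection growth, neuron growth, and connection pruning. Below is a minimal NumPy sketch of how such a loop could be organized, assuming a masked-weight representation of connectivity. The names (MaskedLayer, grow_and_prune, train_step), the random growth rule, and the magnitude-based pruning rule are illustrative assumptions, not the paper's actual criteria, which this record does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

class MaskedLayer:
    """Feed-forward layer whose connectivity is an explicit 0/1 mask,
    so architecture changes are mask edits rather than graph surgery."""

    def __init__(self, n_in, n_out):
        self.w = rng.normal(0.0, 0.1, (n_in, n_out))
        self.mask = (rng.random((n_in, n_out)) < 0.5).astype(float)

    def connection_growth(self, frac=0.05):
        # Activate a fraction of the currently inactive connections
        # (random choice here; the paper may use a different criterion).
        inactive = np.argwhere(self.mask == 0)
        k = int(frac * len(inactive))
        for i, j in inactive[rng.permutation(len(inactive))[:k]]:
            self.mask[i, j] = 1.0
            self.w[i, j] = rng.normal(0.0, 0.1)

    def neuron_growth(self, n_new=1):
        # Append output neurons with sparse, randomly initialized fan-in.
        self.w = np.hstack([self.w, rng.normal(0.0, 0.1, (self.w.shape[0], n_new))])
        new_mask = (rng.random((self.mask.shape[0], n_new)) < 0.5).astype(float)
        self.mask = np.hstack([self.mask, new_mask])

    def connection_pruning(self, frac=0.05):
        # Deactivate the smallest-magnitude active connections.
        active = np.argwhere(self.mask == 1)
        mags = np.abs(self.w[active[:, 0], active[:, 1]])
        k = int(frac * len(active))
        for i, j in active[np.argsort(mags)[:k]]:
            self.mask[i, j] = 0.0

def grow_and_prune(layer, rounds, train_step):
    # Repeated grow-and-prune paradigm: alternate architecture changes
    # with weight training; train_step stands in for any gradient-based
    # trainer applied to the effective weights w * mask.
    for _ in range(rounds):
        layer.connection_growth()
        layer.neuron_growth()
        train_step(layer)
        layer.connection_pruning()
        train_step(layer)
    return layer

# Example: three rounds with a no-op trainer, just to exercise the loop.
layer = grow_and_prune(MaskedLayer(8, 4), rounds=3, train_step=lambda l: None)
print(layer.mask.sum(), "active connections,", layer.w.shape[1], "output neurons")
```

In DR+SCANN, a dataset dimensionality-reduction step (the abstract does not name the specific method) would shrink the input features before a loop like this runs, reducing the fan-in of the first layer and thereby the parameter count from the start.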

Files in This Item:
File | Description | Size | Format
1904.09090.pdf | | 4.39 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.