Fully Dynamic Inference with Deep Neural Networks
Author(s): Xia, Wenhan; Yin, Hongxu; Dai, Xiaoliang; Jha, Niraj K
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1m90232q
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xia, Wenhan | - |
dc.contributor.author | Yin, Hongxu | - |
dc.contributor.author | Dai, Xiaoliang | - |
dc.contributor.author | Jha, Niraj K | - |
dc.date.accessioned | 2023-12-24T15:12:50Z | - |
dc.date.available | 2023-12-24T15:12:50Z | - |
dc.date.issued | 2021-02-03 | en_US |
dc.identifier.citation | Xia, Wenhan, Yin, Hongxu, Dai, Xiaoliang, Jha, Niraj K. (2021). Fully Dynamic Inference with Deep Neural Networks. IEEE Transactions on Emerging Topics in Computing, 962 - 972. doi:10.1109/tetc.2021.3056031 | en_US |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1m90232q | - |
dc.description.abstract | Modern deep neural networks are powerful and widely applicable models that extract task-relevant information through multi-level abstraction. Their cross-domain success, however, often comes at the expense of high computational cost, memory bandwidth, and inference latency, which prevents their deployment in resource-constrained and time-sensitive scenarios, such as edge-side inference and self-driving cars. While recently developed methods for creating efficient deep neural networks make real-world deployment more feasible by reducing model size, they do not fully exploit input properties on a per-instance basis to maximize computational efficiency and task accuracy. In particular, most existing methods use a one-size-fits-all approach that processes all inputs identically. Motivated by the fact that different images require different feature embeddings to be classified accurately, we propose a fully dynamic paradigm that endows deep convolutional neural networks with hierarchical inference dynamics at the level of layers and individual convolutional filters/channels. Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance basis which layers or filters/channels are redundant and should therefore be skipped. L-Net and C-Net also learn how to scale retained computation outputs to maximize task accuracy. By integrating L-Net and C-Net into a joint design framework, called LC-Net, we consistently outperform state-of-the-art dynamic frameworks in both efficiency and classification accuracy. On the CIFAR-10 dataset, LC-Net results in up to 11.9× fewer floating-point operations (FLOPs) and up to 3.3 percent higher accuracy than other dynamic inference methods. On the ImageNet dataset, LC-Net achieves up to 1.4× fewer FLOPs and up to 4.6 percent higher Top-1 accuracy than the other methods. (A minimal code sketch of this per-instance gating idea follows the metadata table below.) | en_US |
dc.format.extent | 962 - 972 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | IEEE Transactions on Emerging Topics in Computing | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Fully Dynamic Inference with Deep Neural Networks | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | doi:10.1109/tetc.2021.3056031 | - |
dc.identifier.eissn | 2168-6750 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
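
The abstract describes L-Net and C-Net only at a high level. As a point of reference, below is a minimal PyTorch sketch of per-instance layer gating in the spirit of L-Net: a compact controller pools the input features and predicts, for each block in a stage, a binary keep/skip decision plus a scale for retained outputs. Everything here (the `LayerGate` and `DynamicStage` names, the single-linear-layer controller, the hard 0.5 threshold) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayerGate(nn.Module):
    """Hypothetical compact controller (L-Net-style): from pooled input
    features, predict one keep/skip decision and one scale per block."""
    def __init__(self, in_channels: int, num_blocks: int):
        super().__init__()
        # One linear layer maps pooled features to a gate logit and a
        # scale per block; a real controller could be deeper.
        self.fc = nn.Linear(in_channels, 2 * num_blocks)

    def forward(self, x):
        z = x.mean(dim=(2, 3))                        # global average pool: (B, C)
        gates, scales = self.fc(z).chunk(2, dim=1)    # (B, num_blocks) each
        keep = (torch.sigmoid(gates) > 0.5).float()   # hard keep/skip per block
        scale = torch.relu(scales)                    # non-negative output scaling
        return keep, scale

class DynamicStage(nn.Module):
    """Stack of residual blocks whose execution is gated per instance."""
    def __init__(self, channels: int, num_blocks: int):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_blocks)
        )
        self.gate = LayerGate(channels, num_blocks)

    def forward(self, x):
        keep, scale = self.gate(x)
        for i, block in enumerate(self.blocks):
            k = keep[:, i].view(-1, 1, 1, 1)
            s = scale[:, i].view(-1, 1, 1, 1)
            # Skipped blocks contribute nothing; retained blocks are scaled.
            x = x + k * s * block(x)
        return x

if __name__ == "__main__":
    stage = DynamicStage(channels=16, num_blocks=4)
    out = stage(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```

Two caveats: the hard threshold is not differentiable, so training such a controller end to end would require a relaxation such as Gumbel-Softmax; and this sketch still executes every block and merely masks skipped outputs, whereas an actual FLOP-saving implementation would branch on the gate to avoid the computation entirely. Channel-level gating in the spirit of C-Net would follow the same pattern with one gate per filter instead of per block.
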
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2007.15151.pdf | | 1.87 MB | Adobe PDF |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.