
Towards Understanding Hierarchical Learning: Benefits of Neural Representations

Author(s): Chen, Minshuo; Bai, Yu; Lee, Jason D.; Zhao, Tuo; Wang, Huan; Xiong, Caiming; Socher, Richard

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pg4k
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, Minshuo | -
dc.contributor.author | Bai, Yu | -
dc.contributor.author | Lee, Jason D | -
dc.contributor.author | Zhao, Tuo | -
dc.contributor.author | Wang, Huan | -
dc.contributor.author | Xiong, Caiming | -
dc.contributor.author | Socher, Richard | -
dc.date.accessioned | 2021-10-08T19:51:29Z | -
dc.date.available | 2021-10-08T19:51:29Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Chen, Minshuo, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, and Richard Socher. "Towards Understanding Hierarchical Learning: Benefits of Neural Representations." Advances in Neural Information Processing Systems (2020): pp. 22134–22145. | en_US
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | https://proceedings.neurips.cc/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-Paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1pg4k | -
dc.description.abstract | Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate \emph{neural representations} add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representation can achieve improved sample complexities compared with the raw input: for learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimensions, neural representation requires only $\tilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\tilde{O}(d^{p-1})$. We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite width limit) when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning. | en_US
dc.format.extent | 22134 - 22145 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Advances in Neural Information Processing Systems | en_US
dc.rights | Final published version. Article is made available in OAR by the publisher's permission or policy. | en_US
dc.title | Towards Understanding Hierarchical Learning: Benefits of Neural Representations | en_US
dc.type | Conference Article | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
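
As a rough illustration of the setup described in the abstract (a fixed, randomly initialized network used as a representation fed into a separate trainable model), here is a minimal sketch in Python/NumPy. The dimensions, the toy target polynomial, and the linear head standing in for the quadratic Taylor model analyzed in the paper are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n = 20, 256, 2000          # input dim, representation width, sample size (assumed)

# Fixed neural representation: h(x) = relu(W x), with W frozen after random init.
W = rng.normal(size=(width, d)) / np.sqrt(d)
def representation(X):
    return np.maximum(X @ W.T, 0.0)

# Toy low-rank degree-4 polynomial target: y = (v . x)^4 for a fixed direction v.
v = rng.normal(size=d)
v /= np.linalg.norm(v)
X = rng.normal(size=(n, d))
y = (X @ v) ** 4

# Trainable head fit on the neural representation; a simple least-squares linear
# model here, standing in for the quadratic Taylor model of a wide two-layer network.
H = representation(X)
theta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Evaluate on fresh samples.
X_test = rng.normal(size=(500, d))
y_test = (X_test @ v) ** 4
pred = representation(X_test) @ theta
print("test MSE:", float(np.mean((pred - y_test) ** 2)))
```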

Files in This Item:
File | Description | Size | Format
HierarchicalLearning.pdf | | 733.9 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.