
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks

Author(s): Arora, Sanjeev; Du, Simon; Hu, Wei; Li, Zhiyuan; Wang, Ruosong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1bg29
Full metadata record
DC Field | Value | Language
dc.contributor.author | Arora, Sanjeev | -
dc.contributor.author | Du, Simon | -
dc.contributor.author | Hu, Wei | -
dc.contributor.author | Li, Zhiyuan | -
dc.contributor.author | Wang, Ruosong | -
dc.date.accessioned | 2021-10-08T19:50:56Z | -
dc.date.available | 2021-10-08T19:50:56Z | -
dc.date.issued | 2019 | en_US
dc.identifier.citation | Arora, Sanjeev, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. "Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks." In Proceedings of the 36th International Conference on Machine Learning (2019): pp. 322-332. | en_US
dc.identifier.uri | http://proceedings.mlr.press/v97/arora19a/arora19a.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1bg29 | -
dc.description.abstract | Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al. ICLR'17]. (ii) Generalization bound independent of network size, using a data-dependent complexity measure. Our measure distinguishes clearly between random labels and true labels on MNIST and CIFAR, as shown by experiments. Moreover, recent papers require sample complexity to increase (slowly) with the size, while our sample complexity is completely independent of the network size. (iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent. The key idea is to track dynamics of training and generalization via properties of a related kernel. | en_US
dc.format.extent | 322 - 332 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of the 36th International Conference on Machine Learning | en_US
dc.rights | Final published version. Article is made available in OAR by the publisher's permission or policy. | en_US
dc.title | Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks | en_US
dc.type | Conference Article | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
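
The data-dependent complexity measure mentioned in the abstract can be sketched numerically. The following is a minimal illustrative sketch, not the authors' code: assuming unit-norm inputs and +/-1 labels, it forms the infinite-width NTK-style Gram matrix H_inf of a two-layer ReLU net, H_inf[i,j] = (x_i.x_j) * (pi - arccos(x_i.x_j)) / (2*pi), and evaluates the quantity sqrt(2 * y^T (H_inf)^{-1} y / n) used as the generalization measure in the paper. The function names, the synthetic data, and the small ridge term added for numerical stability are assumptions made here for illustration.

import numpy as np

def ntk_gram(X):
    """Gram matrix H_inf for a two-layer ReLU net, assuming unit-norm rows of X."""
    S = np.clip(X @ X.T, -1.0, 1.0)          # pairwise inner products x_i . x_j
    return S * (np.pi - np.arccos(S)) / (2 * np.pi)

def complexity_measure(X, y, reg=1e-8):
    """Data-dependent complexity sqrt(2 * y^T (H_inf)^{-1} y / n)."""
    n = len(y)
    H = ntk_gram(X) + reg * np.eye(n)        # small ridge term (illustrative) for stability
    return np.sqrt(2.0 * y @ np.linalg.solve(H, y) / n)

# Toy usage: compare true labels vs. random labels on synthetic unit-norm data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y_true = np.sign(X[:, 0])                    # labels correlated with the data
y_rand = rng.choice([-1.0, 1.0], size=200)   # random labels
print(complexity_measure(X, y_true), complexity_measure(X, y_rand))

On data of this kind the measure is typically smaller for correlated labels than for random labels, mirroring the random-label versus true-label separation the abstract reports on MNIST and CIFAR.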

Files in This Item:
File | Size | Format
OptGeneralOverparameterizedNets.pdf | 1.14 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.