
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks

Author(s): Arora, Sanjeev; Cohen, Nadav; Golowich, Noah; Hu, Wei

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr17k2d
Full metadata record
DC Field | Value | Language
dc.contributor.author | Arora, Sanjeev | -
dc.contributor.author | Cohen, Nadav | -
dc.contributor.author | Golowich, Noah | -
dc.contributor.author | Hu, Wei | -
dc.date.accessioned | 2021-10-08T19:51:08Z | -
dc.date.available | 2021-10-08T19:51:08Z | -
dc.date.issued | 2019 | en_US
dc.identifier.citation | Arora, Sanjeev, Nadav Cohen, Noah Golowich, and Wei Hu. "A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks." In International Conference on Learning Representations (2019). | en_US
dc.identifier.uri | https://openreview.net/pdf?id=SkMQg3C5K7 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr17k2d | -
dc.description.abstract | We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018). | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | International Conference on Learning Representations | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks | en_US
dc.type | Conference Article | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
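The sketch below is a minimal numerical illustration of the setting described in the abstract: gradient descent on a deep linear network x -> W_N ... W_1 x, minimizing the L2 loss over whitened data, started from an exactly balanced initialization in the scalar-regression case (output dimension 1). It is not the authors' code; the depth, widths, step size, and initialization scale are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of gradient descent on a deep linear
# network with whitened data and a balanced initialization.  All numeric
# values below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out, depth, hidden = 200, 5, 1, 3, 5   # d_out = 1: scalar regression
lr, steps, init_scale = 0.05, 2000, 1.0

# Whitened inputs: empirical second-moment matrix equal to the identity.
X = rng.standard_normal((n, d_in))
L = np.linalg.cholesky((X.T @ X) / n)
X = X @ np.linalg.inv(L).T
Y = X @ rng.standard_normal((d_in, d_out))

# Balanced initialization for the scalar-output case: factor the end-to-end
# row vector init_scale * u^T through rank-one layers so that
# W_{j+1}^T W_{j+1} = W_j W_j^T holds exactly (condition (ii)).  Whether the
# initial loss also beats every rank-deficient (here: zero) map, condition
# (iii), depends on the random draw of u and on init_scale.
u = rng.standard_normal(d_in)
u /= np.linalg.norm(u)
e1 = np.zeros(hidden)
e1[0] = 1.0
c = init_scale ** (1.0 / depth)
Ws = [c * np.outer(e1, u)]                               # W_1: hidden x d_in
Ws += [c * np.outer(e1, e1) for _ in range(depth - 2)]   # middle layers: hidden x hidden
Ws += [c * e1[None, :]]                                  # W_N: d_out x hidden

def end_to_end(layers):
    """Product W_k ... W_1 of the given layers, in order."""
    W = layers[0]
    for Wj in layers[1:]:
        W = Wj @ W
    return W

for t in range(steps):
    W = end_to_end(Ws)                                   # d_out x d_in
    resid = X @ W.T - Y                                  # n x d_out
    loss = 0.5 * np.mean(np.sum(resid ** 2, axis=1))
    dW = (resid.T @ X) / n                               # gradient w.r.t. end-to-end matrix
    grads = []
    for j in range(depth):
        left = end_to_end(Ws[j + 1:]) if j + 1 < depth else np.eye(d_out)
        right = end_to_end(Ws[:j]) if j > 0 else np.eye(d_in)
        grads.append(left.T @ dW @ right.T)              # chain rule through the matrix product
    for Wj, g in zip(Ws, grads):
        Wj -= lr * g                                     # simultaneous gradient step on all layers
    if t % 500 == 0:
        print(f"step {t:4d}  loss {loss:.6f}")
```

Under the assumed hyperparameters the printed loss should decrease toward zero, consistent with the linear-rate convergence the paper proves when conditions (i)-(iii) hold.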

Files in This Item:
File | Description | Size | Format
ConvergenceAnalysis.pdf | | 575.6 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.