Implicit Regularization in Deep Matrix Factorization

Author(s): Arora, Sanjeev; Cohen, Nadav; Hu, Wei; Luo, Yuping

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1qs0p
Abstract: Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity." We study the implicit regularization of gradient descent over deep linear neural networks for matrix completion and sensing, a model referred to as deep matrix factorization. Our first finding, supported by theory and experiments, is that adding depth to a matrix factorization enhances an implicit tendency towards low-rank solutions, oftentimes leading to more accurate recovery. Secondly, we present theoretical and empirical arguments questioning a nascent view by which implicit regularization in matrix factorization can be captured using simple mathematical norms. Our results point to the possibility that the language of standard regularizers may not be rich enough to fully encompass the implicit regularization brought forth by gradient-based optimization.
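Illustrative sketch (not the authors' code): the deep matrix factorization described in the abstract parameterizes the recovered matrix as a product of several factor matrices and runs plain gradient descent on the squared loss over observed entries, starting from a small near-identity initialization. The dimensions, depth, step size, and initialization scale below are hypothetical choices for demonstration only and may need tuning.

```python
# Minimal sketch of deep matrix factorization for matrix completion.
# The matrix is parameterized as W = W_N ... W_1; gradient descent on the
# observed-entry loss with small initialization tends toward low-rank solutions.
import numpy as np

rng = np.random.default_rng(0)
d, rank, depth = 20, 3, 3                                   # size, ground-truth rank, factorization depth
M = rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))  # low-rank target
mask = rng.random((d, d)) < 0.5                             # observed entries (matrix completion)

alpha, lr, steps = 0.1, 1e-2, 30000                         # illustrative hyperparameters
Ws = [alpha * np.eye(d) for _ in range(depth)]              # small near-identity initialization


def chain(mats):
    """Return mats[-1] @ ... @ mats[0]; identity if the list is empty."""
    out = np.eye(d)
    for A in mats:
        out = A @ out
    return out


for _ in range(steps):
    resid = mask * (chain(Ws) - M)                          # gradient of the loss w.r.t. the product
    # Chain rule through the product: dL/dW_j = (W_N..W_{j+1})^T R (W_{j-1}..W_1)^T
    grads = [chain(Ws[j + 1:]).T @ resid @ chain(Ws[:j]).T for j in range(depth)]
    for j in range(depth):
        Ws[j] -= lr * grads[j]

W_hat = chain(Ws)
print("observed-entry error :", np.linalg.norm(mask * (W_hat - M)))
print("unobserved-entry error:", np.linalg.norm(~mask * (W_hat - M)))
print("top singular values   :", np.round(np.linalg.svd(W_hat, compute_uv=False)[:5], 3))
```

With depth set to 1 this reduces to ordinary (shallow) matrix factorization; increasing the depth strengthens the implicit bias toward low-rank solutions that the abstract describes, which can be seen in the decay of the printed singular values.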
Publication Date: 2019
Citation: Arora, Sanjeev, Nadav Cohen, Wei Hu, and Yuping Luo. "Implicit Regularization in Deep Matrix Factorization." Advances in Neural Information Processing Systems 32 (2019).
ISSN: 1049-5258
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.