Provable bounds for learning some deep representations
Author(s): Arora, Sanjeev; Bhaskara, A; Ge, R; Ma, T
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr18x5s
Abstract: | We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an n-node multilayer network that has degree at most n^γ for some γ < 1, and each edge has a random edge weight in [-1,1]. Our algorithm learns almost all networks in this class in polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural nets with random edge weights. |
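The core idea in the abstract — observed features that share a hidden parent are correlated, so pairwise correlations reveal edge structure — can be illustrated with a toy sketch. This is not the paper's actual algorithm (which handles weighted multilayer nets with a global graph recovery step); it is a minimal single-layer illustration with hypothetical parameters (sparse binary hidden units, OR-style observed units, an assumed correlation threshold):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: m hidden units, n observed units,
# each observed unit reads 2 randomly chosen hidden parents.
m, n, p, samples = 6, 8, 0.2, 20000
parents = [rng.choice(m, size=2, replace=False) for _ in range(n)]

# Sparse hidden activations; an observed unit fires if any parent fires.
h = rng.random((samples, m)) < p
y = np.stack([h[:, parents[i]].any(axis=1) for i in range(n)], axis=1)

# Pairwise correlations among observed features.
corr = np.corrcoef(y.astype(float).T)

# Infer that two observed units share a hidden parent when their
# correlation exceeds a threshold (0.1 is an assumed value for this toy).
threshold = 0.1
inferred = {(i, j) for i in range(n) for j in range(i + 1, n)
            if corr[i, j] > threshold}
true_shared = {(i, j) for i in range(n) for j in range(i + 1, n)
               if set(parents[i]) & set(parents[j])}
```

In this toy setting, `inferred` matches `true_shared`: pairs with a common parent have correlation around 0.4, while pairs with disjoint parents have correlation near 0, so a simple threshold separates them. The paper's contribution is making this kind of inference provable for deep, weighted, random networks.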
Publication Date: | 2014 |
Citation: | Arora, S., Bhaskara, A., Ge, R., Ma, T. (2014). Provable bounds for learning some deep representations. 31st International Conference on Machine Learning, 1, 883 - 891. |
Pages: | 883 - 891 |
Type of Material: | Conference Article |
Journal/Proceeding Title: | 31st International Conference on Machine Learning |
Version: | Author's manuscript |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.