Unsupervised learning by competing hidden units.

Author(s): Krotov, Dmitry; Hopfield, John J

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1rr3w
Abstract: It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in the early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of the network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule that has a degree of biological plausibility and is motivated by Hebb's idea that changes in synapse strength should be local, i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can then be used to train higher-layer weights in the usual supervised way, so that on simple tasks the performance of the full network is comparable to that of standard feedforward networks trained end-to-end with backpropagation.
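
As a rough illustration of the kind of update the abstract describes (local, depending only on pre- and postsynaptic activity, with competition among hidden units supplying the global inhibition), the following Python sketch applies a simple winner-take-all Oja-style Hebbian rule. It is not the paper's exact algorithm, only a minimal stand-in for the idea; the layer sizes, learning rate, and epoch count are illustrative assumptions.

    # Minimal sketch of a competitive, local Hebbian update in the spirit of
    # the abstract; NOT the paper's exact rule. Assumptions: p = 2 geometry,
    # plain winner-take-all competition, illustrative hyperparameters.
    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_hidden = 784, 100   # e.g. flattened 28x28 images (assumption)
    lr = 0.02                       # illustrative learning rate
    W = rng.normal(size=(n_hidden, n_inputs))

    def local_update(W, v, lr):
        """One unsupervised step: hidden units compete via their input
        currents; only the winner's synapses change, using pre- and
        postsynaptic activity alone (no backpropagated error signal)."""
        currents = W @ v                  # postsynaptic drive of each unit
        winner = np.argmax(currents)      # global inhibition: one unit wins
        h = currents[winner]
        # Oja-style Hebbian update: h * v grows weights along the input,
        # while the h**2 decay term keeps the weight vector bounded.
        W[winner] += lr * (h * v - h * h * W[winner])
        return W

    # Usage: loop over an unlabeled dataset X of shape (n_samples, n_inputs),
    # calling local_update for each sample; afterwards freeze W and train a
    # supervised readout on the hidden activations, as the abstract describes.
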
Publication Date: Apr-2019
Electronic Publication Date: 29-Mar-2019
Citation: Krotov, Dmitry, Hopfield, John J. (2019). Unsupervised learning by competing hidden units. Proceedings of the National Academy of Sciences of the United States of America, 116 (16), 7723-7731. doi:10.1073/pnas.1820458116
DOI: 10.1073/pnas.1820458116
ISSN: 0027-8424
EISSN: 1091-6490
Pages: 1 - 9
Language: eng
Type of Material: Journal Article
Journal/Proceeding Title: Proceedings of the National Academy of Sciences of the United States of America
Version: Final published version. This is an open access article.

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.