# Unsupervised learning by a "softened" correlation game: duality and convergence

## Author(s): Luther, Kyle L; Yang, Runzhe; Seung, H Sebastian

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pg1b
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Luther, Kyle L | - |
| dc.contributor.author | Yang, Runzhe | - |
| dc.contributor.author | Seung, H Sebastian | - |
| dc.date.accessioned | 2021-10-08T19:45:04Z | - |
| dc.date.available | 2021-10-08T19:45:04Z | - |
| dc.date.issued | 2019 | en_US |
| dc.identifier.citation | Luther, Kyle L., Runzhe Yang, and H. Sebastian Seung. "Unsupervised learning by a 'softened' correlation game: duality and convergence." In 2019 53rd Asilomar Conference on Signals, Systems, and Computers (2019), pp. 876-883. doi:10.1109/IEEECONF44664.2019.9048957 | en_US |
| dc.identifier.issn | 1058-6393 | - |
| dc.identifier.uri | https://www.cs.princeton.edu/~runzhey/demo/asilomar2019.pdf | - |
| dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1pg1b | - |
| dc.description.abstract | Neural networks with Hebbian excitation and anti-Hebbian inhibition form an interesting class of biologically plausible unsupervised learning algorithms. It has recently been shown that such networks can be regarded as online gradient descent-ascent algorithms for solving min-max problems that are dual to unsupervised learning principles formulated with no explicit reference to neural networks. Here we generalize one such formulation, the correlation game, by replacing a hard constraint with a soft penalty function. Our "softened" correlation game contains the nonnegative similarity matching principle as a special case. For solving the primal problem, we derive a projected gradient ascent algorithm that achieves speed through sorting. For solving the dual problem, we derive a projected gradient descent-ascent algorithm, the stochastic online variant of which can be interpreted as a neural network algorithm. We prove strong duality when the inhibitory connection matrix is positive definite, a condition that also prohibits multistability of neural activity dynamics. We show empirically that the neural net algorithm can converge when inhibitory plasticity is faster than excitatory plasticity, and may fail to converge in the opposing case. This is intuitively interpreted using the structure of the min-max problem. | en_US |
| dc.format.extent | 876 - 883 | en_US |
| dc.language.iso | en_US | en_US |
| dc.relation.ispartof | 2019 53rd Asilomar Conference on Signals, Systems, and Computers | en_US |
| dc.rights | Author's manuscript | en_US |
| dc.title | Unsupervised learning by a "softened" correlation game: duality and convergence | en_US |
| dc.type | Conference Article | en_US |
| dc.identifier.doi | 10.1109/IEEECONF44664.2019.9048957 | - |
| dc.identifier.eissn | 2576-2303 | - |
| pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
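The abstract describes a projected gradient descent-ascent algorithm for a min-max problem, with convergence aided when the ascent variable (inhibitory plasticity) adapts faster than the descent variable. The toy sketch below illustrates that general scheme only: the quadratic objective, the step sizes `eta_x`/`eta_y`, and the function name are all illustrative assumptions, not the paper's correlation-game objective or its neural network dynamics.

```python
# Toy projected gradient descent-ascent (GDA) on a convex-concave saddle
# problem: min over x >= 0, max over y >= 0 of
#   f(x, y) = 0.5*(x - 1)**2 + x*y - 0.5*(y - 0.5)**2,
# whose saddle point is (x*, y*) = (0.25, 0.75).
# NOTE: this objective is made up for illustration; eta_x and eta_y loosely
# stand in for excitatory vs. inhibitory plasticity rates in the abstract.

def projected_gda(eta_x=0.05, eta_y=0.2, steps=2000):
    x, y = 0.0, 0.0
    for _ in range(steps):
        grad_x = (x - 1.0) + y            # df/dx
        grad_y = x - (y - 0.5)            # df/dy
        x = max(0.0, x - eta_x * grad_x)  # descent step, projected onto x >= 0
        y = max(0.0, y + eta_y * grad_y)  # ascent step, projected onto y >= 0
    return x, y

x, y = projected_gda()
print(x, y)  # approaches the saddle point (0.25, 0.75)
```

Here the ascent step is larger than the descent step, mirroring the abstract's "inhibitory plasticity faster than excitatory" regime; because this toy objective is strongly convex-concave, both orderings happen to converge, so the analogy to the paper's convergence/divergence observation is loose.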
