
Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes

Author(s): Li, CJ; Wang, Z; Liu, H

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1g862
Full metadata record
DC Field | Value | Language
dc.contributor.author | Li, CJ | -
dc.contributor.author | Wang, Z | -
dc.contributor.author | Liu, H | -
dc.date.accessioned | 2021-10-11T14:16:52Z | -
dc.date.available | 2021-10-11T14:16:52Z | -
dc.date.issued | 2016 | en_US
dc.identifier.citation | Li, Chris Junchi, Zhaoran Wang, and Han Liu. "Online ICA: Understanding global dynamics of nonconvex optimization via diffusion processes." In Advances in Neural Information Processing Systems 29 (2016), pp. 4967-4975. | en_US
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | http://papers.nips.cc/paper/6305-online-ica-understanding-global-dynamics-of-nonconvex-optimization-via-diffusion-processes | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1g862 | -
dc.description.abstract | Solving statistical learning problems often involves nonconvex optimization. Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory. In this paper, we propose a new analytic paradigm based on diffusion processes to characterize the global dynamics of nonconvex statistical optimization. As a concrete example, we study stochastic gradient descent (SGD) for the tensor decomposition formulation of independent component analysis. In particular, we cast different phases of SGD into diffusion processes, i.e., solutions to stochastic differential equations. Initialized from an unstable equilibrium, the global dynamics of SGD transit over three consecutive phases: (i) an unstable Ornstein-Uhlenbeck process slowly departing from the initialization, (ii) the solution to an ordinary differential equation, which quickly evolves towards the desirable local minimum, and (iii) a stable Ornstein-Uhlenbeck process oscillating around the desirable local minimum. Our proof techniques are based upon Stroock and Varadhan's weak convergence of Markov chains to diffusion processes, which are of independent interest. | en_US
dc.format.extent | 4967 - 4975 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Advances in Neural Information Processing Systems | en_US
dc.rights | Author's manuscript | en_US
dc.title | Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes | en_US
dc.type | Conference Article | en_US
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
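
The abstract describes SGD on the fourth-order-cumulant (tensor decomposition) formulation of ICA, initialized at an unstable equilibrium and passing through three phases. The following Python sketch illustrates that setup; it is not the authors' code, and the dimension, step size, orthogonal mixing matrix, and uniform sources (chosen for their negative excess kurtosis) are all assumptions made for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10          # ambient dimension (assumed for illustration)
    eta = 5e-4      # SGD step size (assumed)
    n_steps = 300_000

    # Orthogonal mixing matrix; sources are uniform on [-sqrt(3), sqrt(3)],
    # hence unit variance with negative excess kurtosis (-6/5).
    A = np.linalg.qr(rng.standard_normal((d, d)))[0]

    # Initialize at an unstable equilibrium: equal overlap with every column of A.
    u = A @ (np.ones(d) / np.sqrt(d))

    for k in range(1, n_steps + 1):
        s = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=d)  # one fresh sample per step
        x = A @ s
        g = 4.0 * (u @ x) ** 3 * x   # stochastic gradient of (u^T x)^4
        u -= eta * g                 # descent; with negative kurtosis this pulls u toward a column of A
        u /= np.linalg.norm(u)       # project back onto the unit sphere
        if k % 50_000 == 0:
            print(k, round(float(np.max(np.abs(A.T @ u))), 3))

The printed overlap max_i |(A^T u)_i| traces the abstract's three phases qualitatively: a slow departure from the initial value 1/sqrt(d) (the unstable Ornstein-Uhlenbeck phase), a fast traversal toward 1 (the ODE phase), and small fluctuations around 1 thereafter (the stable Ornstein-Uhlenbeck phase).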

Files in This Item:
File | Description | Size | Format
DynamicsOptimizDiffuzionProcesses.pdf | - | 1.62 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.