
Pathwise coordinate optimization for sparse learning: Algorithm and theory

Author(s): Zhao, T; Liu, H; Zhang, T

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr16s2b
Full metadata record
dc.contributor.author: Zhao, T
dc.contributor.author: Liu, H
dc.contributor.author: Zhang, T
dc.date.accessioned: 2021-10-11T14:16:53Z
dc.date.available: 2021-10-11T14:16:53Z
dc.date.issued: 2018-02-22
dc.identifier.citation: Zhao, Tuo, Han Liu, and Tong Zhang. "Pathwise coordinate optimization for sparse learning: Algorithm and theory." The Annals of Statistics 46, no. 1 (2018): 180-218. doi:10.1214/17-AOS1547. https://projecteuclid.org/euclid.aos/1519268428
dc.identifier.issn: 0090-5364
dc.identifier.uri: https://arxiv.org/abs/1412.7477
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr16s2b
dc.description.abstract: Pathwise coordinate optimization is one of the most important computational frameworks for high dimensional convex and nonconvex sparse learning problems. It differs from classical coordinate optimization algorithms in three salient features: warm start initialization, active set updating, and a strong rule for coordinate preselection. Such a complex algorithmic structure grants superior empirical performance, but also poses significant challenges for theoretical analysis. To tackle this long-standing problem, we develop a new theory showing that these three features play pivotal roles in guaranteeing the outstanding statistical and computational performance of the pathwise coordinate optimization framework. In particular, we analyze the existing pathwise coordinate optimization algorithms and provide new theoretical insights into them. The obtained insights further motivate the development of several modifications to improve the pathwise coordinate optimization framework, which guarantee linear convergence to a unique sparse local optimum with optimal statistical properties in parameter estimation and support recovery. This is the first result on the computational and statistical guarantees of the pathwise coordinate optimization framework in high dimensions. Thorough numerical experiments are provided to support our theory.
dc.format.extent: 180 - 218
dc.language.iso: en_US
dc.relation.ispartof: The Annals of Statistics
dc.rights: Author's manuscript
dc.title: Pathwise coordinate optimization for sparse learning: Algorithm and theory
dc.type: Journal Article
dc.identifier.doi: doi:10.1214/17-AOS1547
dc.identifier.eissn: 2168-8966
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article
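
The abstract above names three algorithmic features of pathwise coordinate optimization: warm start initialization along a decreasing regularization path, active set updating, and a strong rule for coordinate preselection. As an illustration only, the sketch below applies those three ideas to the plain convex Lasso objective (1/(2n))||y - Xb||^2 + lambda*||b||_1; it is not the authors' modified algorithm from the paper (which also covers nonconvex regularizers), and the function names (lasso_path_cd, soft_threshold), tolerances, and iteration caps are hypothetical choices. Columns of X are assumed standardized (nonzero norm).

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path_cd(X, y, lambdas, tol=1e-6, max_iter=1000):
    """Illustrative pathwise coordinate descent for the Lasso
    (1/(2n))||y - Xb||^2 + lambda*||b||_1, combining warm starts,
    sequential strong-rule screening, and active-set updates."""
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0) / n      # per-coordinate curvature x_j^T x_j / n
    beta = np.zeros(p)                      # warm start: each solution seeds the next lambda
    path = []
    lam_prev = lambdas[0]
    for lam in lambdas:                     # lambdas assumed decreasing
        resid = y - X @ beta
        # Strong rule: preselect coordinates whose gradient at the previous
        # solution exceeds 2*lambda_k - lambda_{k-1}
        grad = np.abs(X.T @ resid) / n
        candidates = np.where(grad >= 2 * lam - lam_prev)[0]
        active = set(np.nonzero(beta)[0]) | set(candidates)
        for _ in range(max_iter):
            max_change = 0.0
            for j in sorted(active):        # sweep only the active set
                old = beta[j]
                rho = (X[:, j] @ resid) / n + col_sq[j] * old
                beta[j] = soft_threshold(rho, lam) / col_sq[j]
                if beta[j] != old:
                    resid += X[:, j] * (old - beta[j])
                    max_change = max(max_change, abs(beta[j] - old))
            if max_change < tol:
                # KKT check over all coordinates; add violators to the active set
                grad = np.abs(X.T @ resid) / n
                violators = set(np.where(grad > lam + tol)[0]) - active
                if not violators:
                    break
                active |= violators
        path.append(beta.copy())
        lam_prev = lam
    return np.array(path)
```

A typical call would pass a geometrically decreasing grid, e.g. lasso_path_cd(X, y, lam_max * 0.9 ** np.arange(50)), so that each solution warm-starts the next and the strong rule keeps most coordinate sweeps restricted to a small active set.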

Files in This Item:
File: PathwiseCoordinateOptimization.pdf (1.88 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.