
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems

Author(s): Wang, Zhaoran; Liu, Han; Zhang, Tong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1sg51
Full metadata record
DC Field: Value (Language)

dc.contributor.author: Wang, Zhaoran
dc.contributor.author: Liu, Han
dc.contributor.author: Zhang, Tong
dc.date.accessioned: 2021-10-11T14:16:58Z
dc.date.available: 2021-10-11T14:16:58Z
dc.date.issued: 2014 (en_US)
dc.identifier.citation: Wang, Zhaoran; Liu, Han; Zhang, Tong. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Ann. Statist. 42 (2014), no. 6, 2164-2201. doi:10.1214/14-AOS1238. https://projecteuclid.org/euclid.aos/1413810725 (en_US)
dc.identifier.issn: 0090-5364
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1sg51
dc.description.abstract: We provide theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods that only attain geometric rates of convergence for one single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the usage of nonconvex penalty. (en_US)
dc.format.extent: 2164 - 2201 (en_US)
dc.language.iso: en_US (en_US)
dc.relation.ispartof: The Annals of Statistics (en_US)
dc.rights: Final published version. Article is made available in OAR by the publisher's permission or policy. (en_US)
dc.title: Optimal computational and statistical rates of convergence for sparse nonconvex learning problems (en_US)
dc.type: Journal Article (en_US)
dc.identifier.doi: doi:10.1214/14-AOS1238
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article (en_US)
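
The abstract above describes a regularization path-following scheme: solve a sequence of penalized problems over a decreasing grid of regularization parameters, warm-starting each stage from the solution of the previous one. The Python sketch below is only a hypothetical, simplified illustration of that general idea; it uses a plain l1 (soft-thresholding) proximal gradient step for least squares rather than the nonconvex penalties, approximate stopping rules, and local-solution analysis developed in the paper, and the function names (soft_threshold, path_following_lasso) and parameter choices are invented for illustration.

import numpy as np

def soft_threshold(z, t):
    # Elementwise proximal operator of t * ||.||_1 (illustrative l1 stand-in,
    # not the nonconvex penalty analyzed in the paper).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def path_following_lasso(X, y, lambdas, n_iters=200):
    # Proximal gradient descent run over a decreasing lambda grid,
    # warm-starting each stage from the previous stage's solution.
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(d)
    path = []
    for lam in lambdas:                    # lambdas assumed sorted from large to small
        for _ in range(n_iters):
            grad = X.T @ (X @ beta - y) / n
            beta = soft_threshold(beta - step * grad, step * lam)
        path.append(beta.copy())           # beta carries over as the next warm start
    return path

# Toy usage on a sparse linear model with Gaussian noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
lambdas = np.geomspace(1.0, 0.01, num=10)
estimates = path_following_lasso(X, y, lambdas)
print(np.round(estimates[-1][:5], 2))

The last entry of the returned path corresponds to the smallest regularization parameter; in the paper's setting it is the analogous final-stage estimator that is shown to attain the oracle property.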

Files in This Item:
File: OptimalRatesConvergenceLearning.pdf (646.94 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.