I-LAMM FOR SPARSE LEARNING: SIMULTANEOUS CONTROL OF ALGORITHMIC COMPLEXITY AND STATISTICAL ERROR.

Author(s): Fan, Jianqing; Liu, Han; Sun, Qiang; Zhang, Tong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1fs2r
Full metadata record
DC Field | Value | Language
dc.contributor.author | Fan, Jianqing | -
dc.contributor.author | Liu, Han | -
dc.contributor.author | Sun, Qiang | -
dc.contributor.author | Zhang, Tong | -
dc.date.accessioned | 2021-10-11T14:17:39Z | -
dc.date.available | 2021-10-11T14:17:39Z | -
dc.date.issued | 2018-04-03 | en_US
dc.identifier.citation | Fan, Jianqing, Liu, Han, Sun, Qiang, Zhang, Tong. (2018). I-LAMM FOR SPARSE LEARNING: SIMULTANEOUS CONTROL OF ALGORITHMIC COMPLEXITY AND STATISTICAL ERROR. Annals of statistics, 46 (2), 814 - 841. doi:10.1214/17-aos1568 | en_US
dc.identifier.issn | 0090-5364 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1fs2r | -
dc.description.abstract | We propose a computational framework named iterative local adaptive majorize-minimization (I-LAMM) to simultaneously control algorithmic complexity and statistical error when fitting high dimensional models. I-LAMM is a two-stage algorithmic implementation of the local linear approximation to a family of folded concave penalized quasi-likelihood. The first stage solves a convex program with a crude precision tolerance to obtain a coarse initial estimator, which is further refined in the second stage by iteratively solving a sequence of convex programs with smaller precision tolerances. Theoretically, we establish a phase transition: the first stage has a sublinear iteration complexity, while the second stage achieves an improved linear rate of convergence. Though this framework is completely algorithmic, it provides solutions with optimal statistical performances and controlled algorithmic complexity for a large family of nonconvex optimization problems. The iteration effects on statistical errors are clearly demonstrated via a contraction property. Our theory relies on a localized version of the sparse/restricted eigenvalue condition, which allows us to analyze a large family of loss and penalty functions and provide optimality guarantees under very weak assumptions (for example, I-LAMM requires much weaker minimal signal strength than other procedures). Thorough numerical results are provided to support the obtained theory. | en_US
dc.format.extent | 814 - 841 | en_US
dc.language | eng | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Annals of statistics | en_US
dc.rights | Author's manuscript | en_US
dc.title | I-LAMM FOR SPARSE LEARNING: SIMULTANEOUS CONTROL OF ALGORITHMIC COMPLEXITY AND STATISTICAL ERROR. | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1214/17-aos1568 | -
dc.identifier.eissn | 2168-8966 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
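
The abstract above describes a two-stage scheme: a first convex subproblem solved to a crude tolerance, followed by a sequence of reweighted convex subproblems solved to tighter tolerances, each obtained from a local linear approximation of a folded concave penalty. The plain-Python sketch below is only an illustration of that structure under simplifying assumptions: it uses a least-squares loss with a SCAD penalty, a stopping rule on successive iterates, and hypothetical names and constants (ilamm, lamm_weighted_lasso, eps_coarse, eps_tight, n_lla, a = 3.7). It is not the authors' implementation, and the paper's precise stopping criteria and tuning differ.

import numpy as np

def scad_derivative(t, lam, a=3.7):
    # Derivative of the SCAD penalty (a folded concave penalty); a = 3.7 is a common default.
    t = np.abs(t)
    return np.where(t <= lam, lam,
                    np.where(t <= a * lam,
                             np.maximum(a * lam - t, 0.0) / (a - 1.0),
                             0.0))

def soft_threshold(z, w):
    # Componentwise soft-thresholding: proximal map of the weighted L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

def lamm_weighted_lasso(X, y, weights, beta0, eps, phi0=1.0, gamma=2.0, max_iter=5000):
    # Sketch of a LAMM-style solver for 0.5/n * ||y - X b||^2 + sum_j weights_j |b_j|:
    # proximal gradient steps whose isotropic quadratic parameter phi is inflated
    # until the surrogate majorizes the loss at the new iterate.
    n = X.shape[0]
    beta = beta0.copy()
    loss = lambda b: 0.5 * np.sum((y - X @ b) ** 2) / n
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        phi = phi0
        while True:
            beta_new = soft_threshold(beta - grad / phi, weights / phi)
            diff = beta_new - beta
            # Check that the quadratic surrogate dominates the loss (majorization condition).
            if loss(beta_new) <= loss(beta) + grad @ diff + 0.5 * phi * (diff @ diff):
                break
            phi *= gamma
        # Stop once successive iterates are within the prescribed tolerance (simplified rule).
        if np.max(np.abs(beta_new - beta)) <= eps:
            return beta_new
        beta = beta_new
    return beta

def ilamm(X, y, lam, eps_coarse=1e-2, eps_tight=1e-5, n_lla=5):
    # Two-stage structure: a coarse lasso-type stage, then local linear approximation
    # refinements of the SCAD-penalized problem solved to a tight tolerance.
    p = X.shape[1]
    # Stage 1: crude tolerance, all penalty weights equal to lambda (plain lasso).
    beta = lamm_weighted_lasso(X, y, lam * np.ones(p), np.zeros(p), eps_coarse)
    # Stage 2: reweight via the penalty derivative and solve each subproblem tightly.
    for _ in range(n_lla):
        weights = scad_derivative(beta, lam)
        beta = lamm_weighted_lasso(X, y, weights, beta, eps_tight)
    return beta

A small synthetic run, with lambda on the usual sqrt(log p / n) scale (again an illustrative choice, not the paper's tuning):

rng = np.random.default_rng(0)
n, p, s = 200, 400, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
lam = 2.0 * np.sqrt(np.log(p) / n)
beta_hat = ilamm(X, y, lam)
print(np.flatnonzero(np.abs(beta_hat) > 1e-6))  # recovered support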

Files in This Item:
File | Size | Format
I-LAMM for sparse learning Simultaneous control of algorithmic complexity and statistical error.pdf | 900.17 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.