Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization
Author(s): Chen, Minshuo; Yang, Lin F.; Wang, Mengdi; Zhao, Tuo
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1mv27
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Minshuo | - |
dc.contributor.author | Yang, Lin F. | - |
dc.contributor.author | Wang, Mengdi | - |
dc.contributor.author | Zhao, Tuo | - |
dc.date.accessioned | 2020-03-02T17:40:09Z | - |
dc.date.available | 2020-03-02T17:40:09Z | - |
dc.date.issued | 2018 | en_US |
dc.identifier.citation | Chen, Minshuo, Lin Yang, Mengdi Wang, and Tuo Zhao. "Dimensionality reduction for stationary time series via stochastic nonconvex optimization." In Advances in Neural Information Processing Systems, (2018): 3496-3506. https://papers.nips.cc/paper/7609-dimensionality-reduction-for-stationary-time-series-via-stochastic-nonconvex-optimization.pdf | en_US |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | https://papers.nips.cc/paper/7609-dimensionality-reduction-for-stationary-time-series-via-stochastic-nonconvex-optimization.pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1mv27 | - |
dc.description.abstract | Stochastic optimization naturally arises in machine learning. Efficient algorithms with provable guarantees, however, are still largely missing when the objective function is nonconvex and the data points are dependent. This paper studies this fundamental challenge through a streaming PCA problem for stationary time series data. Specifically, our goal is to estimate the principal component of time series data with respect to the covariance matrix of the stationary distribution. Computationally, we propose a variant of Oja's algorithm combined with downsampling to control the bias of the stochastic gradient caused by the data dependency. Theoretically, we quantify the uncertainty of our proposed stochastic algorithm based on diffusion approximations. This allows us to prove the asymptotic rate of convergence and further implies near-optimal asymptotic sample complexity. Numerical experiments are provided to support our analysis. | en_US |
dc.format.extent | 3496 - 3506 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | Advances in Neural Information Processing Systems | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization | en_US |
dc.type | Conference Article | en_US |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
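The abstract describes combining Oja's algorithm with downsampling: because consecutive time series observations are dependent, the stochastic gradient is biased, and keeping only every k-th sample weakens that dependence. The sketch below illustrates this idea on the leading principal component; it is a minimal illustrative version, not the authors' exact algorithm (the step size, downsampling gap, and data model are assumptions for demonstration).

```python
import numpy as np

def downsampled_oja(stream, d, eta=0.01, gap=5):
    """Estimate the top principal component of the stationary covariance
    from a dependent data stream, using Oja-style updates on every
    `gap`-th sample (illustrative sketch, not the paper's exact method)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)              # start on the unit sphere
    for t, x in enumerate(stream):
        if t % gap != 0:                # downsampling: skip correlated samples
            continue                    # to reduce stochastic-gradient bias
        w = w + eta * x * (x @ w)       # Oja update: w += eta * x x^T w
        w /= np.linalg.norm(w)          # renormalize back to the sphere
    return w

# Example: an AR(1) time series x_t = 0.5 x_{t-1} + z_t whose stationary
# covariance is diagonal with a dominant first coordinate, so the true
# leading principal component is the first standard basis vector.
rng = np.random.default_rng(1)
d = 5
noise_scale = np.array([5.0, 1.0, 1.0, 1.0, 1.0]) ** 0.5
x = np.zeros(d)
stream = []
for _ in range(20000):
    x = 0.5 * x + noise_scale * rng.standard_normal(d)
    stream.append(x.copy())

w_hat = downsampled_oja(stream, d, eta=0.01, gap=5)
```

With a large eigengap in the stationary covariance, the estimate `w_hat` aligns (up to sign) with the dominant direction; in the paper, the downsampling gap is chosen relative to the mixing time of the process rather than fixed by hand.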
Files in This Item:
File | Description | Size | Format
---|---|---|---
OA_DimensionalityReductionStationaryTimeSeriesStochasticNonconvexOptimization.pdf | | 997.18 kB | Adobe PDF
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.