Time-Space Lower Bounds for Two-Pass Learning
Author(s): Garg, Sumegha; Raz, Ran; Tal, Avishay
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1r825
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Garg, Sumegha | - |
dc.contributor.author | Raz, Ran | - |
dc.contributor.author | Tal, Avishay | - |
dc.date.accessioned | 2021-10-08T19:45:12Z | - |
dc.date.available | 2021-10-08T19:45:12Z | - |
dc.date.issued | 2019 | en_US |
dc.identifier.citation | Garg, Sumegha, Ran Raz, and Avishay Tal. "Time-space lower bounds for two-pass learning." In 34th Computational Complexity Conference (CCC 2019), LIPIcs vol. 137, pp. 22:1-22:39. doi:10.4230/LIPIcs.CCC.2019.22 | en_US |
dc.identifier.issn | 1868-8969 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1r825 | - |
dc.description.abstract | A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [11, 7, 12, 9, 2, 5]. For example, any algorithm for learning parities of size n requires either a memory of size Ω(n^2) or an exponential number of samples [11]. All these works modeled the learner as a one-pass branching program, allowing only one pass over the stream of samples. In this work, we prove the first memory-samples lower bounds (with a super-linear lower bound on the memory size and super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Ω(n^{1.5}) or at least 2^{Ω(√n)} samples. More generally, a matrix M : A × X → {−1, 1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, ℓ, r are such that any submatrix of M of at least 2^{−k} · \|A\| rows and at least 2^{−ℓ} · \|X\| columns has a bias of at most 2^{−r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · min{k, √ℓ}), or at least 2^{Ω(min{k, √ℓ, r})} samples. (An illustrative sketch of this sample-stream model appears after the metadata table.) | en_US |
dc.format.extent | 22:1 - 22:39 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | 34th Computational Complexity Conference (CCC) | en_US |
dc.rights | Final published version. This is an open access article. | en_US |
dc.title | Time-Space Lower Bounds for Two-Pass Learning | en_US |
dc.type | Conference Article | en_US |
dc.identifier.doi | 10.4230/LIPIcs.CCC.2019.22 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
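
To make the streaming model in the abstract concrete, here is a minimal Python sketch of the sample stream for the parity learning example: the matrix is M(a, x) = (−1)^{⟨a, x⟩ mod 2} over {0,1}^n, with each row index a_i drawn uniformly at random. This is an illustration under those assumptions, not code from the paper; all names are hypothetical.

```python
import random

def parity_sample_stream(n, num_samples, seed=0):
    """Yield samples (a_i, b_i) for the parity learning problem:
    an unknown x in {0,1}^n is fixed uniformly at random, and each
    sample reveals b_i = M(a_i, x) = (-1)^(<a_i, x> mod 2) for a
    uniformly random a_i in {0,1}^n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]  # hidden parity, unknown to the learner
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]         # uniformly random a_i in A = {0,1}^n
        inner = sum(ai * xi for ai, xi in zip(a, x)) % 2  # <a_i, x> over GF(2)
        b = 1 - 2 * inner                                 # map {0,1} -> {+1,-1}
        yield a, b

# A two-pass learner reads such a stream twice; the paper shows it
# needs memory Omega(n^1.5) or 2^{Omega(sqrt(n))} samples to recover x.
for a, b in parity_sample_stream(n=8, num_samples=3):
    print(a, b)
```
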
Files in This Item:
File | Description | Size | Format
---|---|---|---
TimeSpaceLowerBoundsTwoPassLearning.pdf | | 721.1 kB | Adobe PDF