Time-Space Lower Bounds for Two-Pass Learning

Author(s): Garg, Sumegha; Raz, Ran; Tal, Avishay

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1r825
Full metadata record
DC Field | Value | Language
dc.contributor.author | Garg, Sumegha | -
dc.contributor.author | Raz, Ran | -
dc.contributor.author | Tal, Avishay | -
dc.date.accessioned | 2021-10-08T19:45:12Z | -
dc.date.available | 2021-10-08T19:45:12Z | -
dc.date.issued | 2019 | en_US
dc.identifier.citation | Garg, Sumegha, Ran Raz, and Avishay Tal. "Time-space lower bounds for two-pass learning." 34th Computational Complexity Conference (CCC) 137: pp. 22:1-22:39. doi:10.4230/LIPIcs.CCC.2019.22 | en_US
dc.identifier.issn | 1868-8969 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1r825 | -
dc.description.abstract | A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [11, 7, 12, 9, 2, 5]. For example, any algorithm for learning parities of size n requires either a memory of size Ω(n^2) or an exponential number of samples [11]. All these works modeled the learner as a one-pass branching program, allowing only one pass over the stream of samples. In this work, we prove the first memory-samples lower bounds (with a super-linear lower bound on the memory size and a super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Ω(n^{1.5}) or at least 2^{Ω(√n)} samples. More generally, a matrix M : A × X → {−1, 1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, ℓ, r are such that any submatrix of M of at least 2^{−k} · |A| rows and at least 2^{−ℓ} · |X| columns has a bias of at most 2^{−r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · min{k, √ℓ}), or at least 2^{Ω(min{k, √ℓ, r})} samples. | en_US
dc.format.extent | 22:1 - 22:39 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | 34th Computational Complexity Conference (CCC) | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | Time-Space Lower Bounds for Two-Pass Learning | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.4230/LIPIcs.CCC.2019.22 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
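
To make the streaming model in the abstract concrete, the following minimal Python sketch (not from the paper; the function name and parameters are hypothetical) generates the sample stream for the parity learning problem, where the matrix M is given by M(a, x) = (−1)^{⟨a, x⟩ mod 2} and each sample reveals the inner product of a uniformly random a_i with the hidden x.

import random

def parity_stream(n, num_samples, seed=0):
    """Illustrative sketch (not from the paper): generate the sample stream
    for the parity learning problem described in the abstract.

    An unknown x in {0,1}^n is chosen uniformly at random; each sample is
    (a_i, b_i) with a_i uniform in {0,1}^n and b_i = M(a_i, x), where
    M(a, x) = (-1)^(<a, x> mod 2) takes values in {-1, +1}.
    """
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]      # hidden element x, chosen uniformly
    samples = []
    for _ in range(num_samples):
        a = [rng.randrange(2) for _ in range(n)]  # uniformly random a_i
        inner = sum(ai * xi for ai, xi in zip(a, x)) % 2
        b = -1 if inner else 1                    # b_i = M(a_i, x)
        samples.append((a, b))
    return x, samples

if __name__ == "__main__":
    x, stream = parity_stream(n=8, num_samples=5)
    for a, b in stream:
        print(a, b)

A learner would process such a stream while keeping only a bounded memory state; the paper's result states that any two-pass learner using o(n^{1.5}) bits of memory needs 2^{Ω(√n)} samples.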

Files in This Item:
File | Description | Size | Format
TimeSpaceLowerBoundsTwoPassLearning.pdf | - | 721.1 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.