Extractor-based time-space lower bounds for learning

Author(s): Garg, Sumegha; Raz, Ran; Tal, Avishay

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1fr61
Full metadata record
DC Field | Value | Language
dc.contributor.author | Garg, Sumegha | -
dc.contributor.author | Raz, Ran | -
dc.contributor.author | Tal, Avishay | -
dc.date.accessioned | 2021-10-08T19:45:17Z | -
dc.date.available | 2021-10-08T19:45:17Z | -
dc.date.issued | 2018-06 | en_US
dc.identifier.citation | Garg, Sumegha, Ran Raz, and Avishay Tal. "Extractor-based time-space lower bounds for learning." Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (2018), pp. 990-1002. doi:10.1145/3188745.3188962 | en_US
dc.identifier.uri | https://arxiv.org/pdf/1708.02639.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1fr61 | -
dc.description.abstract | A matrix M: A × X → {−1, 1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples (a₁, b₁), (a₂, b₂), …, where for every i, aᵢ ∈ A is chosen uniformly at random and bᵢ = M(aᵢ, x). Assume that k, l, r are such that any submatrix of M with at least 2^(−k)·|A| rows and at least 2^(−l)·|X| columns has bias at most 2^(−r). We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k·l), or at least 2^(Ω(r)) samples. The result holds even if the learner has an exponentially small success probability (of 2^(−Ω(r))). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log|X|)·(log|A|)) or an exponential number of samples, achieving a tight Ω((log|X|)·(log|A|)) lower bound on the size of the memory, rather than the bound of Ω(min{(log|X|)², (log|A|)²}) obtained in previous works by Raz [FOCS'17] and Moshkovitz and Moshkovitz [ITCS'18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS'17], which gave a general technique for proving memory-samples lower bounds. | en_US
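The streaming model in the abstract can be made concrete with a small sketch. Below is a minimal illustration (not code from the paper) using the standard parity-learning matrix M(a, x) = (−1)^(⟨a, x⟩ mod 2), the instance studied by Raz [FOCS'17], together with a memory-unbounded learner that keeps every candidate x consistent with the stream. The lower bound says that a learner restricted to o(k·l) bits of memory cannot do this with fewer than 2^(Ω(r)) samples; the brute-force learner here deliberately ignores that restriction.

```python
import itertools
import random

def M(a, x):
    # Parity matrix: M(a, x) = (-1)^(<a, x> mod 2), a standard instance
    # of the learning problems covered by the bound.
    return 1 - 2 * (sum(ai & xi for ai, xi in zip(a, x)) % 2)

def sample_stream(x, n, rng):
    # Stream of samples (a_i, b_i): a_i uniform in {0,1}^n, b_i = M(a_i, x).
    while True:
        a = tuple(rng.randint(0, 1) for _ in range(n))
        yield a, M(a, x)

def brute_force_learner(stream, n, num_samples):
    # Memory-unbounded learner: keeps the full set of candidates x
    # consistent with every sample seen so far.
    candidates = set(itertools.product((0, 1), repeat=n))
    for a, b in itertools.islice(stream, num_samples):
        candidates = {x for x in candidates if M(a, x) == b}
    return candidates

rng = random.Random(0)
n = 6
secret = tuple(rng.randint(0, 1) for _ in range(n))
remaining = brute_force_learner(sample_stream(secret, n, rng), n, 40)
# The secret always survives the filtering; with 40 random parity
# constraints on 6 bits it is almost surely the only survivor.
print(secret in remaining, len(remaining))
```

The function names (`sample_stream`, `brute_force_learner`) and the parameter choices are illustrative, not taken from the paper.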
dc.format.extent | 990 - 1002 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing | en_US
dc.relation.ispartofseries | STOC 2018; | -
dc.rights | Author's manuscript | en_US
dc.title | Extractor-based time-space lower bounds for learning | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1145/3188745.3188962 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US

Files in This Item:
ExtractorBasedTimeSpaceLowerBoundsLearning.pdf (358.54 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.