Extractor-based time-space lower bounds for learning
Author(s): Garg, Sumegha; Raz, Ran; Tal, Avishay
To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr1fr61
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Garg, Sumegha | - |
dc.contributor.author | Raz, Ran | - |
dc.contributor.author | Tal, Avishay | - |
dc.date.accessioned | 2021-10-08T19:45:17Z | - |
dc.date.available | 2021-10-08T19:45:17Z | - |
dc.date.issued | 2018-06 | en_US |
dc.identifier.citation | Garg, Sumegha, Ran Raz, and Avishay Tal. "Extractor-based time-space lower bounds for learning." Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (2018), pp. 990-1002. doi:10.1145/3188745.3188962 | en_US |
dc.identifier.uri | https://arxiv.org/pdf/1708.02639.pdf | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1fr61 | - |
dc.description.abstract | A matrix M : A × X → {−1, 1} corresponds to the following learning problem: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, l, r are such that any submatrix of M with at least 2^{−k} · |A| rows and at least 2^{−l} · |X| columns has bias at most 2^{−r}. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · l), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log |X|) · (log |A|)) or an exponential number of samples, achieving a tight Ω((log |X|) · (log |A|)) lower bound on the size of the memory, rather than the bound of Ω(min{(log |X|)^2, (log |A|)^2}) obtained in previous works by Raz [FOCS'17] and Moshkovitz and Moshkovitz [ITCS'18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS'17], which gave a general technique for proving memory-samples lower bounds. (A toy sketch of this streaming setup follows the metadata table.) | en_US |
dc.format.extent | 990 - 1002 | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing | en_US |
dc.relation.ispartofseries | STOC 2018; | - |
dc.rights | Author's manuscript | en_US |
dc.title | Extractor-based time-space lower bounds for learning | en_US |
dc.type | Conference Article | en_US |
dc.identifier.doi | 10.1145/3188745.3188962 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
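As a concrete illustration of the streaming setup in the abstract above, here is a minimal Python sketch, not taken from the paper: it instantiates the canonical parity-learning matrix M(a, x) = (−1)^{⟨a, x⟩ mod 2} over A = X = {0,1}^n (the problem studied by Raz [FOCS'17]) and runs a memory-unbounded brute-force learner on the sample stream. All names (`M`, `sample_stream`, `brute_force_learner`) and the choice n = 8 are illustrative. For this M, log|X| = log|A| = n, so the paper's theorem says any learner needs either Ω(n^2) bits of memory or 2^{Ω(n)} samples; the sketch sidesteps the bound by storing all 2^n candidates.

```python
import itertools
import random

# Illustrative parameters, not from the paper: A = X = {0,1}^n.
n = 8

def M(a, x):
    """Parity matrix entry: (-1)^(<a, x> mod 2), taking values in {-1, 1}."""
    return -1 if sum(ai & xi for ai, xi in zip(a, x)) % 2 else 1

def sample_stream(x, num_samples, rng):
    """Yield samples (a_i, b_i) with a_i uniform over A and b_i = M(a_i, x)."""
    for _ in range(num_samples):
        a = tuple(rng.randint(0, 1) for _ in range(n))
        yield a, M(a, x)

def brute_force_learner(stream):
    """Hypothetical memory-unbounded learner: track every x' in X still
    consistent with the stream. It stores up to 2^n candidates, far above
    the Omega(n^2) memory the paper shows is necessary for any
    sample-efficient learner of this M."""
    candidates = set(itertools.product((0, 1), repeat=n))
    for a, b in stream:
        candidates = {x for x in candidates if M(a, x) == b}
        if len(candidates) == 1:
            return next(iter(candidates))
    return None

rng = random.Random(0)
secret = tuple(rng.randint(0, 1) for _ in range(n))  # x chosen uniformly at random
guess = brute_force_learner(sample_stream(secret, 10 * n, rng))
print("recovered x:", guess == secret)
```

With O(n) random linear equations over GF(2) the candidate set almost surely collapses to {x}, so few samples suffice here; the paper's point is that no algorithm can match this sample efficiency with substantially fewer than n^2 bits of working memory.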
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
ExtractorBasedTimeSpaceLowerBoundsLearning.pdf | | 358.54 kB | Adobe PDF |
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.