
A Time-Space Lower Bound for a Large Class of Learning Problems

Author(s): Raz, Ran

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1kg0w
Abstract: We prove a general memory-samples lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. A matrix M : A × X → {-1, 1} corresponds to the following learning problem: An unknown element x ∈ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Let σ_max be the largest singular value of M and note that always σ_max ≤ |A|^{1/2} · |X|^{1/2}. We show that if σ_max ≤ |A|^{1/2} · |X|^{1/2 - ε}, then any learning algorithm for the corresponding learning problem requires either a memory of size at least Ω((εn)^2) or at least 2^{Ω(εn)} samples, where n = log_2 |X|. As a special case, this gives a new proof of the memory-samples lower bound for parity learning [14].
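
To make the matrix formulation concrete, the following minimal Python sketch (illustrative only, not part of the record) sets up the parity-learning special case, where A = X = {0,1}^n and M(a, x) = (-1)^{⟨a,x⟩ mod 2}. The value of n, the random seed, and the helper name `sample` are arbitrary choices for illustration; the sketch checks that σ_max is far below the trivial bound |A|^{1/2}·|X|^{1/2} and simulates the sample stream seen by the learner.

    import itertools
    import numpy as np

    # Parity-learning instance: A = X = {0,1}^n, M(a, x) = (-1)^(<a, x> mod 2).
    n = 3
    points = list(itertools.product([0, 1], repeat=n))
    M = np.array([[(-1) ** (np.dot(a, x) % 2) for x in points] for a in points])

    # Largest singular value: here sigma_max = 2^(n/2), well below the trivial
    # bound |A|^(1/2) * |X|^(1/2) = 2^n, so the theorem applies with epsilon = 1/2.
    sigma_max = np.linalg.svd(M, compute_uv=False)[0]
    print(sigma_max, 2 ** (n / 2), 2 ** n)

    # Sample stream seen by the learner: a_i uniform in A, b_i = M(a_i, x).
    rng = np.random.default_rng(0)
    x = points[rng.integers(len(points))]       # hidden element, chosen uniformly at random
    def sample():
        a = points[rng.integers(len(points))]   # a_i chosen uniformly at random
        b = (-1) ** (np.dot(a, x) % 2)          # b_i = M(a_i, x)
        return a, b

    print([sample() for _ in range(4)])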
Publication Date: 2017
Citation: Raz, Ran. "A Time-Space Lower Bound for a Large Class of Learning Problems." IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) (2017): pp. 732-742. doi:10.1109/FOCS.2017.73
DOI: 10.1109/FOCS.2017.73
ISSN: 0272-5428
Pages: 732 - 742
Type of Material: Conference Article
Journal/Proceeding Title: IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.