Time-space hardness of learning sparse parities

Author(s): Kol, Gillat; Raz, Ran; Tal, Avishay

Abstract: We define a concept class ℱ to be time-space hard (or memory-samples hard) if any learning algorithm for ℱ requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS'16]. Building on [Raz, FOCS'16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / log log n). Consequently, linear-size DNF formulas, linear-size decision trees and logarithmic-size juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(n·k) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k = n [Raz, FOCS'16].
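To make the concept class concrete, the following minimal sketch (an illustration, not part of the paper) shows a sparse parity of Hamming weight ℓ: the label of a sample x ∈ {0,1}^n is the XOR of the bits of x at a hidden set S of ℓ positions. The learner sees labeled samples and must identify S; the names `sparse_parity` and `sample` are illustrative.

```python
import random

def sparse_parity(S, x):
    """Label of x under the parity function indexed by S:
    the XOR (parity) of the bits of x at the positions in S."""
    return sum(x[i] for i in S) % 2

def sample(n, S, rng):
    """Draw one labeled sample (x, f_S(x)) with x uniform in {0,1}^n."""
    x = [rng.randint(0, 1) for _ in range(n)]
    return x, sparse_parity(S, x)

# A sparse parity of Hamming weight ell = 3 on n = 16 bits.
rng = random.Random(0)
n, S = 16, {1, 4, 7}
x, y = sample(n, S, rng)
assert y == x[1] ^ x[4] ^ x[7]
```

With unbounded memory, S is recoverable from O(n) samples by Gaussian elimination over GF(2); the paper's lower bound says that with only o(ℓ·n) memory bits (for ℓ ≥ ω(log n / log log n)), no algorithm can learn S from polynomially many samples.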
Publication Date: Jun-2017
Citation: Kol, Gillat, Ran Raz, and Avishay Tal. "Time-space hardness of learning sparse parities." Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (2017): pp. 1067-1080. doi:10.1145/3055399.3055430
DOI: 10.1145/3055399.3055430
Pages: 1067 - 1080
Type of Material: Conference Article
Series/Report no.: STOC 2017
Journal/Proceeding Title: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.