
Chaining mutual information and tightening generalization bounds

Author(s): Asadi, Amir R.; Abbe, Emmanuel; Verdú, Sergio

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pc2m
Abstract: Bounding the generalization error of learning algorithms has a long history, which yet falls short in explaining various generalization successes, including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, and (ii) exploiting the dependence between the algorithm's input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method by Russo and Zou '15. Yet, these two methods are currently disjoint. In this paper, we introduce a technique to combine the chaining and mutual information methods, obtaining a generalization bound that is algorithm-dependent and exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley's inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.
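For context, the two ingredients combined in the paper can be sketched as follows (standard statements from the literature; the paper's exact conditions and constants may differ). The mutual information bound, in the form of Xu and Raginsky building on Russo and Zou, controls the expected generalization gap of an algorithm that maps an input dataset $S$ of $n$ i.i.d. samples to an output hypothesis $W$, assuming the loss is $\sigma$-subgaussian:

$$\bigl|\,\mathbb{E}\!\left[L_\mu(W) - L_S(W)\right]\bigr| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},$$

where $L_\mu$ denotes the population risk and $L_S$ the empirical risk on $S$. Dudley's inequality instead exploits the geometry of the hypothesis class: for a separable process $\{X_t\}_{t \in T}$ whose increments $X_t - X_s$ are $d(t,s)$-subgaussian, with covering numbers $N(T, d, \varepsilon)$,

$$\mathbb{E}\Bigl[\sup_{t \in T} X_t\Bigr] \;\le\; C \int_0^\infty \sqrt{\log N(T, d, \varepsilon)}\, d\varepsilon$$

for an absolute constant $C$. Schematically, the chained mutual information bound replaces the metric-entropy terms of the chaining decomposition with mutual informations $I([W]_k; S)$ between successively finer quantizations $[W]_k$ of the algorithm's output and the data, which is what makes the bound small when the algorithm concentrates on a small subset of hypotheses with high probability.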
Publication Date: 2018
Citation: Asadi, A. R., Abbe, E., & Verdú, S. (2018). Chaining mutual information and tightening generalization bounds. Advances in Neural Information Processing Systems, 7234-7243.
Pages: 7234 - 7243
Type of Material: Conference Article
Journal/Proceeding Title: Advances in Neural Information Processing Systems
Version: Author's manuscript


