
Calibration, Entropy Rates, and Memory in Language Models

Author(s): Braverman, M.; Chen, X.; Kakade, S. M.; Narasimhan, K.; Zhang, C.; Zhang, Y.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr17z5t
Abstract: Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are miscalibrated: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.
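The entropy-rate drift the abstract refers to can be sketched roughly as follows (this is not the authors' code; the GPT-2 checkpoint, the Hugging Face transformers API, and the 64-token comparison windows are illustrative assumptions): sample a continuation from the model, re-score it, and compare the model's average next-token entropy in an early window against a late window. A clear upward gap between the two corresponds to the miscalibration phenomenon described above.

# Minimal sketch, assuming GPT-2 via Hugging Face transformers; window sizes are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The history of natural language processing"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a long continuation from the model itself.
with torch.no_grad():
    generated = model.generate(
        input_ids, do_sample=True, max_length=512, top_k=0,
        pad_token_id=tokenizer.eos_token_id,
    )

# Re-score the generated sequence and compute the entropy (in nats) of the
# model's next-token distribution at every position.
with torch.no_grad():
    logits = model(generated).logits          # shape: (1, seq_len, vocab)
log_probs = torch.log_softmax(logits, dim=-1)
entropies = -(log_probs.exp() * log_probs).sum(dim=-1).squeeze(0)

# Average entropy early vs. late in the generation; a large upward gap is the
# "entropy rate drift" the abstract describes.
early = entropies[:64].mean().item()
late = entropies[-64:].mean().item()
print(f"early-window entropy: {early:.3f} nats/token")
print(f"late-window entropy:  {late:.3f} nats/token")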
Publication Date: 1-Jun-2019
Citation: Braverman, M., Chen, X., Kakade, S. M., Narasimhan, K., Zhang, C., & Zhang, Y. (2019). Calibration, Entropy Rates, and Memory in Language Models. arXiv preprint arXiv:1906.05664.
Type of Material: Journal Article
Journal/Proceeding Title: arXiv preprint arXiv:1906.05664
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.