Calibration, Entropy Rates, and Memory in Language Models
Author(s): Braverman, Mark; Chen, Xinyi; Kakade, Sham; Narasimhan, Karthik; Zhang, Cyril; Zhang, Yi
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1f859
Abstract: Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are miscalibrated: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.
Publication Date: 2020
Citation: Braverman, Mark, Xinyi Chen, Sham Kakade, Karthik Narasimhan, Cyril Zhang, and Yi Zhang. "Calibration, Entropy Rates, and Memory in Language Models." In Proceedings of the 37th International Conference on Machine Learning (2020): pp. 1089-1099.
Pages: 1089-1099
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 37th International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
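The entropy-rate drift described in the abstract can be checked empirically with a short script. The sketch below is an illustration only, not the authors' protocol: the "gpt2" checkpoint, the prompt, the 200-step continuation length, and the early/late window sizes are all assumptions made here for demonstration. It samples a continuation token by token and records the Shannon entropy of the model's next-token distribution at each step; a late-window average well above the early-window average is the upward drift the paper identifies.

```python
# Minimal sketch (assumed setup, not the paper's exact protocol): sample a
# continuation from a pretrained LM and track the entropy of its next-token
# distribution at each step to look for upward drift over time.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The meaning of life is"  # illustrative prompt, not from the paper
ids = tokenizer(prompt, return_tensors="pt").input_ids
num_steps = 200                    # illustrative continuation length

entropies = []
with torch.no_grad():
    for _ in range(num_steps):
        logits = model(ids).logits[0, -1]        # next-token logits
        probs = torch.softmax(logits, dim=-1)
        # Shannon entropy (in nats) of the model's predictive distribution
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
        entropies.append(entropy.item())
        # Sample the next token and append it to the context
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=-1)

early = sum(entropies[:50]) / 50
late = sum(entropies[-50:]) / 50
print(f"mean predictive entropy, steps 1-50:    {early:.3f} nats")
print(f"mean predictive entropy, steps 151-200: {late:.3f} nats")
# A markedly higher late average is the miscalibration symptom described above.
```

The sketch only measures the drift; the paper goes further and gives provable calibration-based corrections that mitigate it and uses the same lens to quantify how much memory a model uses for prediction.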