Learning to Prove Theorems via Interacting with Proof Assistants

Author(s): Yang, Kaiyu; Deng, Jia

Abstract: Humans prove theorems by relying on substantial high-level reasoning and problem-specific insights. Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics. However, human experts have to construct proofs manually by entering tactics into the proof assistant. In this paper, we study the problem of using machine learning to automate the interaction with proof assistants. We construct CoqGym, a large-scale dataset and learning environment containing 71K human-written proofs from 123 projects developed with the Coq proof assistant. We develop ASTactic, a deep learning-based model that generates tactics as programs in the form of abstract syntax trees (ASTs). Experiments show that ASTactic trained on CoqGym can generate effective tactics and can be used to prove new theorems not previously provable by automated methods. Code is available at
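To illustrate the kind of tactic-based proof the abstract describes, here is a minimal sketch in Lean 4 (the paper itself uses Coq; Lean is shown only as a comparable proof assistant, and this example is not from the paper): each line after `by` is a tactic, and a model like ASTactic would generate such tactics as ASTs rather than raw text.

```lean
-- Commutativity of natural-number addition, proved interactively via tactics.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp                                   -- base case: 0 + b = b + 0
  | succ n ih => rw [Nat.succ_add, ih, Nat.add_succ]  -- inductive step uses hypothesis ih
```

Each tactic (`induction`, `simp`, `rw`) transforms the current proof goal; a human or an automated agent picks the next tactic by inspecting that goal, which is the interaction loop the paper automates.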
Publication Date: 2019
Citation: Yang, Kaiyu, and Jia Deng. "Learning to Prove Theorems via Interacting with Proof Assistants." Proceedings of the 36th International Conference on Machine Learning 97 (2019): 6984-6994.
ISSN: 2640-3498
Pages: 6984 - 6994
Type of Material: Conference Article
Journal/Proceeding Title: Proceedings of the 36th International Conference on Machine Learning
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.