Graph Sequence Learning for Premise Selection

Edvard K. Holden, Konstantin Korovin

Research output: Contribution to journal › Article › peer-review

Abstract

Premise selection is crucial for large theory reasoning with automated theorem provers as the sheer size of the problems quickly leads to resource exhaustion. This paper proposes a premise selection method inspired by the machine learning domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given conjecture. In our axiom captioning approach, a pre-trained graph neural network is combined with a language model via transfer learning to encapsulate both the inter-axiom and conjecture-axiom relationships. We evaluate different configurations of our method and experience a 14% improvement in the number of solved problems over a baseline.
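The captioning-style pipeline the abstract describes can be sketched in a few lines: an encoder maps a conjecture to a vector, and a decoder greedily emits a sequence of candidate axioms until nothing relevant remains. The encoder, scoring function, and names below are illustrative assumptions for exposition only, not the paper's pre-trained graph neural network or language model.

```python
# Toy sketch of captioning-style premise selection (assumed stand-ins,
# not the paper's actual GNN encoder or language-model decoder).
from collections import Counter

def encode(conjecture_symbols):
    # Stand-in encoder: a bag-of-symbols vector for the conjecture.
    return Counter(conjecture_symbols)

def score(embedding, axiom_symbols):
    # Relevance of an axiom: overlap with the conjecture embedding.
    return sum(embedding[s] for s in axiom_symbols)

def caption_axioms(conjecture_symbols, axioms, max_len=3):
    # Greedy decoding: repeatedly emit the best-scoring unused axiom,
    # stopping when no remaining axiom scores above zero.
    emb = encode(conjecture_symbols)
    chosen, remaining = [], dict(axioms)
    for _ in range(max_len):
        best = max(remaining, key=lambda a: score(emb, remaining[a]), default=None)
        if best is None or score(emb, remaining[best]) <= 0:
            break
        chosen.append(best)
        del remaining[best]
    return chosen

# Hypothetical axiom pool, each axiom given as its symbol multiset.
axioms = {
    "ax_assoc": ["mul", "mul", "eq"],
    "ax_comm": ["mul", "eq"],
    "ax_unrelated": ["union", "subset"],
}
selected = caption_axioms(["mul", "eq", "mul"], axioms)
print(selected)  # ['ax_assoc', 'ax_comm']
```

In the paper's actual method, the bag-of-symbols encoder is replaced by a pre-trained graph neural network over the formula graph, and the greedy scorer by a learned sequence model, but the generate-axioms-as-a-caption control flow is the same.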
Original language: English
Article number: 102376
Journal: Journal of Symbolic Computation
Early online date: 27 Aug 2024
DOIs
Publication status: Published - 27 Aug 2024

Keywords

  • Automated Theorem Proving
  • Machine Learning
  • Premise Selection
  • Sequence Learning
  • Graph Neural Network
