Exploring the Limits of Fine-grained LLM-based Physics Inference via Premise Removal Interventions

Jordan Meadows, Tamsin James, Andre Freitas

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

Language models (LMs) can hallucinate when performing complex mathematical reasoning. Physics provides a rich domain for assessing their mathematical capabilities, where physical context requires that any symbolic manipulation satisfies complex semantics (e.g., units, tensorial order). In this work, we systematically remove crucial context from prompts to force instances where model inference may be algebraically coherent, yet unphysical. We assess LM capabilities in this domain using a curated dataset encompassing multiple notations and Physics subdomains. Further, we improve zero-shot scores using synthetic in-context examples, and demonstrate non-linear degradation of derivation quality with perturbation strength via the progressive omission of supporting premises. We find that the models' mathematical reasoning is not physics-informed in this setting, where physical context is predominantly ignored in favour of reverse-engineering solutions.
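To make the intervention concrete, the sketch below shows one way a progressive premise-removal perturbation could be implemented, where perturbation strength is the number of premises dropped from the prompt. This is a minimal illustration only; the premise strings, function names, and prompt template here are hypothetical and are not taken from the paper's dataset or code.

    import random

    def remove_premises(premises, strength, seed=0):
        """Drop `strength` premises uniformly at random, modelling one
        perturbation level of a progressive premise-removal intervention."""
        rng = random.Random(seed)
        kept = list(premises)
        for _ in range(min(strength, len(kept))):
            kept.pop(rng.randrange(len(kept)))
        return kept

    def build_prompt(premises, goal):
        """Assemble a derivation prompt from the surviving premises."""
        context = "\n".join(f"Premise: {p}" for p in premises)
        return f"{context}\nDerive: {goal}"

    # Hypothetical physical premises; at higher strengths the prompt loses
    # the context (units, meanings of symbols) needed for a physical answer.
    premises = [
        "E = m * c**2",
        "c is the speed of light in vacuum (units: m/s)",
        "m is the rest mass (units: kg)",
    ]
    for strength in range(len(premises) + 1):
        prompt = build_prompt(remove_premises(premises, strength), "the units of E")
        print(f"--- perturbation strength {strength} ---\n{prompt}\n")

Under this reading, strength 0 is the unperturbed prompt and the maximum strength leaves only the derivation goal, the setting in which an algebraically coherent but unphysical answer is most likely.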
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2024
Publisher: Association for Computational Linguistics
Pages: 6487–6502
DOIs:
Publication status: E-pub ahead of print - 1 Nov 2024

