HeGTa: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding

Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min, Sheng Bi

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

Table understanding (TU) has achieved promising advancements, but it faces two challenges: the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HeGTa, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and it deals with complex tables through a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HeGTa, showing that it outperforms the state of the art for few-shot complex TU on several benchmarks.
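The abstract outlines the pipeline without implementation detail. As a purely illustrative sketch (the function name, node/edge types, and use of networkx are assumptions here, not the paper's actual code), a table can be encoded as a heterogeneous graph with distinct cell, row, and column node types, the kind of structure an HG encoder could consume before its embeddings are injected into an LLM prompt as soft tokens:

```python
# Minimal sketch, assuming a table given as a list of rows of cell strings.
# Node and edge type names ("cell", "row", "col", "in_row", "in_col") are
# illustrative choices, not taken from the paper.
import networkx as nx

def table_to_hetero_graph(table):
    g = nx.Graph()
    for i, row in enumerate(table):
        g.add_node(("row", i), ntype="row")
        for j, text in enumerate(row):
            if ("col", j) not in g:
                g.add_node(("col", j), ntype="col")
            cell = ("cell", i, j)
            g.add_node(cell, ntype="cell", text=text)
            # Structural edges record row/column membership, letting a
            # graph encoder represent layouts beyond flat relational tables.
            g.add_edge(cell, ("row", i), etype="in_row")
            g.add_edge(cell, ("col", j), etype="in_col")
    return g

g = table_to_hetero_graph([["Name", "Age"], ["Ada", "36"]])
print(g.number_of_nodes(), g.number_of_edges())  # 8 nodes, 8 edges
```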
Original language: English
Title of host publication: Proceedings of the AAAI Conference on Artificial Intelligence
Publication status: Accepted/In press - 15 Dec 2024
Event: The 39th Annual AAAI Conference on Artificial Intelligence - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
https://aaai.org/conference/aaai/aaai-25/

Conference

Conference: The 39th Annual AAAI Conference on Artificial Intelligence
Abbreviated title: AAAI-2025
Country/Territory: United States
City: Philadelphia
Period: 25/02/25 - 04/03/25
Internet address: https://aaai.org/conference/aaai/aaai-25/
