Team:PULSAR at ProbSum 2023: PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients’ Problems and Data Augmentation with Black-box Large Language Models

Hao Li, Yuping Wu, Viktor Schlegel, Riza Batista-Navarro, Thanh-Tung Nguyen, Abhinav Ramesh kashyap, Xiao-Jun Zeng, Daniel Beck, Stefan Winkler, Goran Nenadic

Research output: Contribution to conference › Paper › peer-review

Abstract

Medical progress notes play a crucial role in documenting a patient’s hospital journey, including their condition, treatment plan, and any updates for healthcare providers. Automatic summarisation of a patient’s problems in the form of a “problem list” can aid stakeholders in understanding the patient’s condition, reducing workload and cognitive bias. BioNLP 2023 Shared Task 1A focuses on generating a list of diagnoses and problems from the provider’s progress notes during hospitalisation. In this paper, we introduce our proposed approach to this task, which integrates two complementary components. One component employs large language models (LLMs) for data augmentation; the other is an abstractive summarisation LLM with a novel pre-training objective for generating the patients’ problems summarised as a list. Our approach was ranked second among all submissions to the shared task. The performance of our model on the development and test datasets shows that our approach is more robust on unseen data, with an improvement of up to 3.1 points over a larger model of the same size.
Original language: English
Pages: 503-509
Number of pages: 7
Publication status: Published - 2023
Event: The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks - Toronto, Canada
Duration: 1 Jul 2023 - 1 Jul 2023


