Radiology Report Generation Using Transformers Conditioned with Non-imaging Data

Nurbanu Aksoy, Nishant Ravikumar, Alejandro F. Frangi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Medical image interpretation is central to many clinical applications, such as disease diagnosis, treatment planning, and prognostication. In clinical practice, radiologists examine medical images (e.g. chest X-rays, computed tomography scans) and manually compile their findings into reports, which can be a time-consuming process. Automated approaches to radiology report generation can therefore reduce radiologist workload and improve efficiency in the clinical pathway. While recent deep-learning approaches to automated report generation from medical images have seen some success, most studies have relied on image-derived features alone, ignoring non-imaging patient data. Although a few studies have incorporated word-level context alongside the image, the use of patient demographics remains unexplored. Moreover, prior approaches to this task commonly use encoder-decoder frameworks that pair a convolutional vision model with a recurrent language model. Although recurrent text generators have achieved noteworthy results, they suffer from a limited reference window and attend to only one part of the image while generating the next word. This paper proposes a novel multi-modal transformer network that integrates chest X-ray (CXR) images and associated patient demographic information to synthesise patient-specific radiology reports. The proposed network uses a convolutional neural network (CNN) to extract visual features from CXRs, and a transformer-based encoder-decoder network that combines these visual features with semantic text embeddings of patient demographic information to synthesise full-text radiology reports. The designed network not only alleviates the limitations of recurrent models but also improves the encoding and generative processes by incorporating more context. Data from two public databases were used to train and evaluate the proposed approach.
CXRs and reports were extracted from the MIMIC-CXR database and combined with corresponding patient data (gender, age, and ethnicity) from MIMIC-IV. Based on the evaluation metrics used (BLEU-1 to BLEU-4 and BERTScore), including patient demographic information was found to improve the quality of the generated reports relative to a baseline network trained on CXRs alone. The proposed approach shows potential for enhancing radiology report generation by leveraging rich patient metadata and combining the semantic text embeddings derived therefrom with medical image-derived visual features.
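The fusion strategy described above — CNN-derived visual features concatenated with embeddings of patient demographics, feeding a transformer encoder-decoder that generates report tokens — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration only: the layer sizes, the tiny CNN backbone, and the integer-coded demographic inputs are assumptions for clarity, not the authors' actual configuration.

```python
import torch
import torch.nn as nn


class MultiModalReportGenerator(nn.Module):
    """Hypothetical sketch: visual features from a CNN are concatenated with
    demographic embeddings and passed to a transformer encoder-decoder that
    predicts report tokens autoregressively."""

    def __init__(self, vocab_size=1000, d_model=128):
        super().__init__()
        # Small stand-in for the visual backbone (the paper uses a CNN feature extractor).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # 16 spatial feature vectors
        )
        # Assumed integer-coded demographics (e.g. gender, age bin, ethnicity).
        self.demo_embed = nn.Embedding(64, d_model)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image, demo_ids, report_tokens):
        # image: (B, 1, H, W); demo_ids: (B, 3); report_tokens: (B, T)
        vis = self.cnn(image).flatten(2).transpose(1, 2)   # (B, 16, d_model)
        demo = self.demo_embed(demo_ids)                   # (B, 3, d_model)
        src = torch.cat([vis, demo], dim=1)                # fused encoder input
        tgt = self.token_embed(report_tokens)              # (B, T, d_model)
        causal = nn.Transformer.generate_square_subsequent_mask(report_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(hidden)                            # (B, T, vocab_size)


model = MultiModalReportGenerator()
logits = model(
    torch.randn(2, 1, 64, 64),            # batch of 2 CXR images
    torch.randint(0, 64, (2, 3)),         # 3 demographic codes per patient
    torch.randint(0, 1000, (2, 5)),       # 5 report tokens so far
)
```

Conditioning the encoder on demographic embeddings in this way lets every decoder step attend to patient context as well as image regions, which is the mechanism the abstract credits for the improved report quality.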

Original language: English
Title of host publication: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Subtitle of host publication: Imaging Informatics for Healthcare, Research, and Applications
Editors: Brian J. Park, Hiroyuki Yoshida
Publisher: SPIE
ISBN (Electronic): 9781510660434
DOIs
Publication status: Published - 10 Apr 2023
Event: Medical Imaging 2023: Imaging Informatics for Healthcare, Research, and Applications - San Diego, United States
Duration: 19 Feb 2023 – 21 Feb 2023

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12469
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2023: Imaging Informatics for Healthcare, Research, and Applications
Country/Territory: United States
City: San Diego
Period: 19/02/23 – 21/02/23

Keywords

  • Radiology Report Generation
  • Self-Attention
  • Transformer

