Deep learning with visual explanation for radiotherapy-induced toxicity prediction

Behnaz Elhaminia, Alexandra Gilbert, Alejandro F. Frangi, Andrew Scarsbrook, John Lilley, Ane Appelt, Ali Gooya

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep learning models are widely studied for radiotherapy toxicity prediction; however, a major challenge is that these models are complex and difficult to interpret. To aid in the creation of optimal dose treatment plans, it is critical to understand the mechanism and reasoning behind a network's prediction, as well as the specific anatomical regions involved in toxicity. In this work, we propose a convolutional neural network that predicts toxicity after pelvic radiotherapy and is able to explain its predictions. The proposed model analyses the dose treatment plan using multiple instance learning and convolutional encoders. A dataset of 315 patients was included in the study, and both quantitative and qualitative experiments were conducted to assess the network's performance.
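The abstract pairs multiple instance learning with an explanation of which regions drive the prediction. The paper itself gives no code, but a common way to obtain such instance-level explanations is attention-based MIL pooling, where learned attention weights over instances (here, hypothetically, patches of the dose plan) both form the bag representation and indicate which regions mattered. A minimal sketch of that pooling step, with all function and parameter names hypothetical and unrelated to the authors' implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_attention_pool(instance_feats, V, w):
    """Attention-based MIL pooling (sketch).

    instance_feats : (n, d) array - embeddings of n instances,
                     e.g. encoded patches of a dose treatment plan.
    V              : (d, h) projection into a hidden attention space.
    w              : (h,)   scoring vector.

    Returns the bag embedding and the per-instance attention
    weights, which can be mapped back onto anatomy as a visual
    explanation of the prediction.
    """
    scores = np.tanh(instance_feats @ V) @ w   # (n,) raw attention scores
    attn = softmax(scores)                     # weights sum to 1
    bag = attn @ instance_feats                # (d,) weighted bag embedding
    return bag, attn

# Toy usage: 8 instances with 4-dim embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
bag, attn = mil_attention_pool(X, rng.normal(size=(4, 3)), rng.normal(size=(3,)))
```

The bag embedding would feed a toxicity classifier, while `attn` highlights the instances (anatomical regions) with the greatest influence on the output.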

Original language: English
Title of host publication: Medical Imaging 2023
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Khan M. Iftekharuddin, Weijie Chen
Publisher: SPIE
ISBN (Electronic): 9781510660359
DOIs
Publication status: Published - 2023
Event: Medical Imaging 2023: Computer-Aided Diagnosis - San Diego, United States
Duration: 19 Feb 2023 - 23 Feb 2023

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12465
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2023: Computer-Aided Diagnosis
Country/Territory: United States
City: San Diego
Period: 19/02/23 - 23/02/23

Keywords

  • deep learning
  • explainable model
  • multiple instance learning
  • toxicity prediction
