Attention-guided hierarchical fusion U-Net for uncertainty-driven medical image segmentation

Afsana Ahmed Munia, Moloud Abdar*, Mehedi Hasan, Mohammad S. Jalali, Biplab Banerjee, Abbas Khosravi, Ibrahim Hossain, Huazhu Fu, Alejandro F. Frangi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Small inaccuracies in system components or artificial intelligence (AI) models for medical imaging can have significant consequences, potentially endangering lives. To mitigate these risks, one must consider not only the precision of the image analysis outcomes (e.g., image segmentation) but also the confidence in the underlying model predictions. U-shaped architectures, based on the convolutional encoder–decoder, have established themselves as a critical component of many AI-enabled diagnostic imaging systems. However, most existing methods focus on producing accurate diagnostic predictions without assessing the uncertainty associated with those predictions or with the techniques that produce them. Uncertainty maps highlight areas in the predicted segmentation results where the model is uncertain or less confident. Such maps can direct radiologists' attention to regions that require closer review, helping to ensure patient safety and paving the way for trustworthy AI applications. In this paper, we therefore propose the Attention-guided Hierarchical Fusion U-Net (AHF-U-Net) for medical image segmentation. We then introduce its uncertainty-aware version, UA-AHF-U-Net, which provides an uncertainty map alongside the predicted segmentation map. The network is designed by integrating the Encoder Attention Fusion (EAF) module and the Decoder Attention Fusion (DAF) module on the encoder and decoder sides of the U-Net architecture, respectively. The EAF and DAF modules utilize spatial and channel attention to capture relevant spatial information and to indicate which channels are most informative for a given image. Furthermore, an enhanced skip connection, named the Hierarchical Attention-Enhanced (HAE) skip connection, is introduced. We evaluated the effectiveness of our model by comparing it with eleven well-established methods on three popular medical image segmentation datasets consisting of coarse-grained images with unclear boundaries.
Based on the quantitative and qualitative results, the proposed method ranks first on two datasets and second on the third. The code can be accessed at: https://github.com/AfsanaAhmedMunia/AHF-Fusion-U-Net.
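The abstract notes that the EAF and DAF modules combine spatial and channel attention: channel attention reweights feature channels by their relevance, while spatial attention reweights locations in the feature map. The paper's exact module design is not given here, so the following is only a minimal numpy sketch of that general idea (CBAM-style ordering of channel-then-spatial gating, with sigmoid gates); the function names and the parameter-free pooling-based gates are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Global average pooling per channel gives one score
    # per channel; a sigmoid turns it into a gate in (0, 1) that scales
    # that channel (illustrative stand-in for a learned channel gate).
    pooled = feat.mean(axis=(1, 2))            # (C,)
    gate = sigmoid(pooled)                     # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Mean over channels gives an (H, W) saliency map; a sigmoid turns it
    # into a per-location gate (illustrative stand-in for a learned one).
    saliency = feat.mean(axis=0)               # (H, W)
    gate = sigmoid(saliency)                   # (H, W)
    return feat * gate[None, :, :]

def attention_fusion(feat):
    # Apply channel attention first, then spatial attention, preserving
    # the input shape so the result can flow through a U-Net stage.
    return spatial_attention(channel_attention(feat))

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = attention_fusion(feat)
print(out.shape)  # (4, 8, 8)
```

Because both gates lie in (0, 1), the fused output never amplifies a feature, only attenuates less relevant channels and locations; in a trained module the gates would instead come from learned convolutions and pooling combinations.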

Original language: English
Article number: 102719
Journal: Information Fusion
Volume: 115
DOIs
Publication status: Published - Mar 2025

Keywords

  • Deep learning
  • Feature fusion
  • Medical image segmentation
  • Uncertainty quantification
