Towards an interpretable model for automatic classification of endoscopy images

Rogelio García-Aguirre, Luis Torres Treviño, Eva Navarro Lopez, José Alberto González-González

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Deep learning strategies have become the mainstream for developing computer-assisted diagnosis tools, since they outperform other machine learning techniques. However, these systems cannot reach their full potential: the lack of understanding of their operation and their questionable generalizability provoke mistrust from users, limiting their application. In this paper, we generate a Convolutional Neural Network (CNN) using a genetic algorithm for hyperparameter optimization. Our CNN achieves state-of-the-art classification performance, delivering higher evaluation metrics than other recent papers that use AI models to classify images from the same dataset. We provide visual explanations of the classifications made by our model by implementing Grad-CAM, and we use this technique to analyze the behavior of our model on misclassifications.
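The abstract's core idea — evolving CNN hyperparameters with a genetic algorithm — can be sketched generically. The search space, fitness function, and operator choices below are illustrative assumptions, not the paper's actual configuration (the abstract does not list them); in a real run the fitness call would train a CNN with the candidate hyperparameters and return its validation accuracy.

```python
import random

# Hypothetical hyperparameter search space; the paper's actual
# hyperparameters are not given in the abstract.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "n_filters": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def random_individual(rng):
    """Sample one candidate hyperparameter set."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    """Stand-in objective. A real implementation would train the CNN
    with these hyperparameters and return its validation accuracy."""
    return 1.0 / (1 + abs(ind["n_filters"] - 32)) + ind["learning_rate"]

def crossover(a, b, rng):
    """Uniform crossover: each gene is inherited from either parent."""
    return {k: rng.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rng, rate=0.2):
    """Resample each gene from the search space with probability `rate`."""
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(generations=10, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection: keep top half
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With a trained-model accuracy as the fitness function, the same loop searches the discrete hyperparameter grid without exhaustively training every combination.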
Original language: English
Title of host publication: Proceedings of the 21st Mexican International Conference on Artificial Intelligence (MICAI 2022)
Publisher: Springer Berlin
Number of pages: 12
Volume: 13612
ISBN (Print): 9783031194948
Publication status: Published - 31 Dec 2022

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer
Volume: 13612
ISSN (Print): 0302-9743

Keywords

  • Medical imaging
  • Artificial intelligence
  • Deep learning
  • Explainable AI

