EnnCore: End-to-End Conceptual Guarding of Neural Architectures

Edoardo Manino, Danilo Carvalho, Yi Dong, Julia Rozanova, Xidan Song, Mustafa A. Mustafa, Andre Freitas, Gavin Brown, Mikel Luján, Xiaowei Huang, Lucas Cordeiro

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

EnnCore addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behavior and attacks. In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. This paper describes the limitations of existing studies, our research objectives, current achievements, and future trends towards this goal. In particular, we describe the development and evaluation of new methods, algorithms, and tools to achieve fully verifiable intelligent systems, which are explainable, whose correct behavior is guaranteed, and which are robust against attacks. We also describe how EnnCore will be validated on two diverse and high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.
Original language: English
Title of host publication: AAAI's Workshops on Artificial Intelligence Safety (SafeAI)
Publication status: Accepted/In press - 7 Dec 2021

Research Beacons, Institutes and Platforms

  • Thomas Ashton Institute
