Adversarial vulnerability bounds for Gaussian process classification

Mike T Smith, K Grosse, M Backes, MA Alvarez

Research output: Contribution to journal › Article › peer-review

Abstract

Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is an attacker perturbing a confidently classified input to produce a confident misclassification. In this paper we consider the L0 attack, in which a small number of inputs can be perturbed by the attacker at test-time. To quantify the risk of this form of attack we devise a formal guarantee in the form of an adversarial bound (AB) for a binary Gaussian process classifier using the EQ kernel. This bound holds over the entire input domain, bounding the potential of any future adversarial attack to cause a confident misclassification. We explore how to extend the bound to other kernels and investigate how to maximise it by altering the classifier (for example by using sparse approximations). We test the bound on a variety of datasets and show that it produces relevant and practical bounds for many of them.
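
As a rough illustration of the setting described in the abstract (a binary Gaussian process classifier with an EQ kernel, attacked by perturbing a small number of input dimensions), the sketch below uses scikit-learn's GaussianProcessClassifier with an RBF kernel as a stand-in. The toy dataset, the perturbed coordinate, and all parameter choices are illustrative assumptions; the paper's adversarial bound itself is not reproduced here.

```python
# Minimal sketch of the classification setting only, assuming scikit-learn's
# GaussianProcessClassifier as a stand-in for the binary GP classifier
# analysed in the paper. It does NOT compute the paper's adversarial bound.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy binary dataset: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# EQ (exponentiated quadratic / RBF) kernel, matching the kernel family
# named in the abstract.
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)

# A confidently classified test point...
x = np.array([[1.2, 1.1]])
p_clean = gpc.predict_proba(x)[0, 1]

# ...and an L0-style perturbation: change a small number of input
# dimensions (here a single coordinate) and observe how the predicted
# class probability moves.
x_adv = x.copy()
x_adv[0, 0] = -1.5
p_adv = gpc.predict_proba(x_adv)[0, 1]

print(f"p(class 1 | clean) = {p_clean:.3f}, "
      f"p(class 1 | perturbed) = {p_adv:.3f}")
```

The paper's contribution is a bound, valid over the whole input domain, on how far such sparse perturbations can push a confident prediction towards a confident misclassification; the snippet only shows the kind of classifier and perturbation being bounded.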

Original language: English
Pages (from-to): 971-1009
Number of pages: 39
Journal: Machine Learning
Volume: 112
Issue number: 3
Early online date: 8 Sept 2022
DOIs
Publication status: Published - 1 Mar 2023

Keywords

  • Adversarial example
  • Bound
  • Classification
  • Gaussian process
  • Gaussian process classification
  • Machine learning
