VPN: Verification of Poisoning in Neural Networks

Youcheng Sun, Muhammad Usman, Divya Gopinath, Corina Păsăreanu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



Neural networks are successfully used in a variety of applications, many of which have safety and security concerns. As a result, researchers have proposed formal verification techniques for verifying neural network properties. While previous efforts have mainly focused on checking local robustness in neural networks, we instead study another neural network security issue, namely model poisoning. In this case an attacker inserts a trigger into a subset of the training data, in such a way that at test time, this trigger in an input causes the trained model to misclassify to some target class. We show how to formulate the check for model poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum, where counterexamples of failed checks constitute the triggers. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks.
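The core idea of the abstract can be illustrated with a toy sketch (this is not the paper's Marabou/nnenum encoding, just a hypothetical analogue): the no-poisoning property states that no patch, stamped at a fixed input location, forces every input to a chosen target class; a counterexample to this property is exactly a trigger. Below, a brute-force search over candidate patch values stands in for the verifier's counterexample search, using an invented linear classifier and data.

```python
import numpy as np

def classify(model_w, x):
    """Toy linear classifier: scores = W @ x, predict argmax."""
    return int(np.argmax(model_w @ x))

def find_trigger(model_w, inputs, patch_idx, target, values):
    """Search for a patch value that flips every input to `target`.
    Plays the role of the verifier's counterexample search: returning a
    value means the no-poisoning property fails and the value is a trigger."""
    for v in values:
        stamped = inputs.copy()
        stamped[:, patch_idx] = v  # stamp the candidate trigger pixel
        if all(classify(model_w, x) == target for x in stamped):
            return v  # counterexample to the no-poisoning property
    return None

# Hypothetical poisoned model: a large weight on pixel 3 favors class 1,
# so setting that pixel high enough overrides the clean prediction.
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])
X = np.array([[1.0, 1.0, 0.0, 0.0],
              [2.0, 0.5, 0.0, 0.0]])  # both inputs classify as class 0 when clean

trigger = find_trigger(W, X, patch_idx=3, target=1,
                       values=np.linspace(0.0, 1.0, 11))
```

Here the search finds a pixel value (just above 0.5) that sends both clean inputs to the target class; in the paper's setting, the verifier performs this existence check symbolically over the real-valued input space rather than by enumeration.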
Original language: English
Title of host publication: 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems, affiliated with FLoC 2022
Publication status: Accepted/In press - 17 Jun 2022

