An overview of structural coverage metrics for testing neural networks

Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca Manolache, Corina S. Păsăreanu

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios. In this article, we provide an overview of structural coverage metrics for testing DNN models, including neuron coverage, k-multisection neuron coverage, top-k neuron coverage, neuron boundary coverage, strong neuron activation coverage, and modified condition/decision coverage. We evaluate these metrics on realistic DNN models used for perception tasks (LeNet-1, LeNet-4, LeNet-5, ResNet20), including networks used in autonomy (TaxiNet). We also provide a tool, DNNCov, which measures testing coverage for all of these metrics. DNNCov outputs an informative coverage report to enable researchers and practitioners to assess the adequacy of DNN testing, to compare different coverage measures, and to more conveniently inspect the model’s internals during testing.
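To make the simplest of these metrics concrete, the sketch below shows one way neuron coverage (the fraction of neurons activated above a threshold by at least one test input) might be computed. It uses a toy fully-connected ReLU network with illustrative weights and function names; it is an assumption-laden illustration of the metric only, not DNNCov's implementation, which operates on real trained models.

```python
# Minimal sketch of neuron coverage on a hypothetical 2-layer ReLU MLP.
# Weight shapes, names, and the threshold value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((20, 10)), rng.standard_normal(20)
W2, b2 = rng.standard_normal((5, 20)), rng.standard_normal(5)

def hidden_activations(x):
    """Return the activation values of all hidden neurons for one input x."""
    h1 = np.maximum(0.0, W1 @ x + b1)   # layer 1 (ReLU)
    h2 = np.maximum(0.0, W2 @ h1 + b2)  # layer 2 (ReLU)
    return np.concatenate([h1, h2])

def neuron_coverage(test_inputs, threshold=0.0):
    """Fraction of neurons whose activation exceeds `threshold`
    on at least one input in the test suite."""
    covered = None
    for x in test_inputs:
        hit = hidden_activations(x) > threshold
        covered = hit if covered is None else (covered | hit)
    return covered.mean()

# Example: coverage achieved by a suite of 100 random test inputs.
suite = [rng.standard_normal(10) for _ in range(100)]
print(f"Neuron coverage: {neuron_coverage(suite, threshold=0.5):.2%}")
```

The other metrics surveyed in the article (k-multisection, top-k, boundary, strong activation, MC/DC) refine this idea by tracking activation ranges, rankings, or sign combinations per neuron rather than a single on/off bit.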

Original language: English
Journal: International Journal on Software Tools for Technology Transfer
Publication status: Accepted/In press - 2022

Keywords

  • Coverage
  • Neural networks
  • Testing
