QNNRepair: Quantized Neural Network Repair

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



We present QNNRepair, the first method in the literature for repairing quantized neural networks (QNNs). QNNRepair aims to improve the accuracy of a neural network model after quantization. It accepts the full-precision and weight-quantized neural networks, together with a repair dataset of passing and failing tests. First, QNNRepair applies a software fault localization method to identify the neurons that cause performance degradation during neural network quantization. It then formulates the repair problem as a linear programming problem over the neuron weight parameters, which corrects the QNN's performance on failing tests while not compromising its performance on passing tests. We evaluate QNNRepair with widely used neural network architectures such as MobileNetV2, ResNet, and VGGNet on popular datasets, including high-resolution images. We also compare QNNRepair with the state-of-the-art data-free quantization method SQuant. According to the experimental results, we conclude that QNNRepair is effective in improving the quantized model's performance in most cases. Its repaired models achieve 24% higher accuracy than SQuant's on the independent validation set, especially for the ImageNet dataset.
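The repair step described above can be illustrated with a small sketch: given a neuron's weights and a set of passing and failing test inputs, solve a linear program for a minimal weight correction that makes the neuron behave correctly on all tests. This is only a minimal illustration under simplifying assumptions (a single neuron whose pre-activation should exceed a margin on every test input, an L1-minimal correction, and SciPy's `linprog` as the solver); the function name and margin parameter are hypothetical and do not reflect the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

def repair_neuron(w, passing, failing, margin=0.1):
    """Find a minimal correction delta to the weights w so that the
    neuron's pre-activation (w + delta) . x reaches `margin` on the
    failing inputs while remaining at least `margin` on passing inputs.
    Minimizes the L1 norm of delta via auxiliary variables t >= |delta|.
    Illustrative sketch only, not the paper's implementation."""
    n = len(w)
    # Decision vector: [delta_0..delta_{n-1}, t_0..t_{n-1}]; minimize sum(t).
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub, b_ub = [], []
    # Encode t_k >= |delta_k| as delta_k - t_k <= 0 and -delta_k - t_k <= 0.
    for k in range(n):
        row = np.zeros(2 * n); row[k] = 1.0; row[n + k] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(2 * n); row[k] = -1.0; row[n + k] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    # (w + delta) . x >= margin, rewritten as -delta . x <= w . x - margin.
    for x in list(passing) + list(failing):
        x = np.asarray(x, dtype=float)
        A_ub.append(np.concatenate([-x, np.zeros(n)]))
        b_ub.append(float(np.dot(w, x)) - margin)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return w + res.x[:n] if res.success else None
```

For example, with `w = [1.0, -1.0]`, a passing input `[1, 0]` and a failing input `[0, 1]`, the solver lifts the second weight just enough to satisfy the margin on the failing test while leaving the passing test correct. The real QNNRepair problem constrains many neurons and both test outcomes jointly, but the shape of the program (corrections as variables, tests as linear constraints) is the same.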
Original language: English
Title of host publication: 21st International Conference on Software Engineering and Formal Methods
Publication status: Accepted/In press - 18 Aug 2023


Keywords

  • neural network repair
  • quantization
  • fault localization
  • constraint solving


