Formalising the Robustness of Counterfactual Explanations for Neural Networks

Authors

  • Junqi Jiang, Imperial College London
  • Francesco Leofante, Imperial College London
  • Antonio Rago, Imperial College London
  • Francesca Toni, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v37i12.26740

Keywords:

General

Abstract

The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose ∆-robustness, the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that they all exhibit significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.
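To illustrate the core idea behind the interval abstraction, the sketch below (a minimal illustration only, not the authors' implementation; the function names, the elementwise ±delta perturbation model, and the example network are assumptions made here) propagates a counterfactual input through a ReLU network whose weights and biases are each allowed to vary within an interval of width 2∆ around their trained values, and checks whether the counterfactual's predicted class is preserved by every model in that set.

import numpy as np

def interval_affine(lx, ux, W, b, delta):
    # Parameter intervals: [W - delta, W + delta] and [b - delta, b + delta].
    Wl, Wu = W - delta, W + delta
    bl, bu = b - delta, b + delta
    # Interval matrix-vector product: take the elementwise min/max over the
    # four endpoint products, then sum along the input dimension.
    cands = np.stack([Wl * lx, Wl * ux, Wu * lx, Wu * ux])
    lo = cands.min(axis=0).sum(axis=1) + bl
    hi = cands.max(axis=0).sum(axis=1) + bu
    return lo, hi

def interval_forward(x, weights, biases, delta):
    # Propagate the point input x through every layer as an interval.
    lo, hi = x.astype(float).copy(), x.astype(float).copy()
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(lo, hi, W, b, delta)
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def is_delta_robust(x_cfx, weights, biases, delta, target_class):
    # The CFX is certified robust (in this sketch) if the lower bound of the
    # target-class logit exceeds the upper bound of every other logit for all
    # weight/bias perturbations within ±delta.
    lo, hi = interval_forward(x_cfx, weights, biases, delta)
    others = np.delete(hi, target_class)
    return bool(lo[target_class] > others.max())

# Example with a tiny 2-2-2 network and hypothetical trained parameters.
weights = [np.array([[1.0, -0.5], [0.3, 0.8]]),
           np.array([[0.7, -1.2], [-0.4, 0.9]])]
biases = [np.array([0.1, -0.2]), np.array([0.0, 0.05])]
x_cfx = np.array([0.6, 0.4])
print(is_delta_robust(x_cfx, weights, biases, delta=0.05, target_class=1))

A return value of True certifies robustness over the entire interval of perturbed models; False is inconclusive, since interval propagation over-approximates the set of reachable outputs.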


Published

2023-06-26

How to Cite

Jiang, J., Leofante, F., Rago, A., & Toni, F. (2023). Formalising the Robustness of Counterfactual Explanations for Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14901-14909. https://doi.org/10.1609/aaai.v37i12.26740

Issue

Vol. 37 No. 12 (2023)

Section

AAAI Special Track on Safe and Robust AI