Our paper titled "Countering Overfitting with Counterfactual Examples" has been accepted at KDD 2026

Dec 4, 2025 · Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei · 1 min read

In this work, we connect two key properties of machine learning models: generalization and explainability. Specifically, we show that the more a model overfits its training data, the easier it becomes, on average, to find counterfactual examples (i.e., small input perturbations that change the model’s prediction).

Leveraging this observation, we propose CF-Reg, a new regularization term that mitigates overfitting by enforcing a sufficient margin between training samples and their counterfactuals.
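The exact CF-Reg objective is defined in the paper; as a rough illustration of the idea, one simple way to keep counterfactuals at least a margin away from a training point is to search for a prediction flip inside a ball around that point and penalize the worst case found. The PyTorch snippet below sketches this with a PGD-style inner search; the function names, the choice of L2 distance, and every hyperparameter (`margin`, `steps`, `lr`, `lambda_cf`) are assumptions of this sketch, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def cf_margin_penalty(model, x, y, margin=1.0, steps=5, lr=0.3):
    """PGD-style search for a prediction flip within an L2 ball of
    radius `margin`; penalizing the worst-case loss found there pushes
    counterfactuals beyond the margin. Illustrative only, not the
    paper's exact CF-Reg term."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Ascend the loss to find the most prediction-flipping perturbation.
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = delta + lr * grad
        # Project back into the L2 ball of radius `margin`.
        norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
        scale = (margin / norms).clamp(max=1.0)
        delta = (delta * scale.view(-1, *([1] * (x.dim() - 1)))).detach().requires_grad_(True)
    # If the prediction is stable everywhere inside the ball, no
    # counterfactual closer than `margin` exists for this sample.
    return F.cross_entropy(model(x + delta.detach()), y)

def training_step(model, optimizer, x, y, lambda_cf=0.1, margin=1.0):
    """One training step: the standard loss plus the counterfactual-margin
    penalty, weighted by `lambda_cf` (a hypothetical trade-off knob)."""
    loss = F.cross_entropy(model(x), y) + lambda_cf * cf_margin_penalty(model, x, y, margin)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```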

Our experiments indicate that this strategy not only improves generalization but also yields counterfactual explanations as a natural by-product of training.

🔗 The preprint is available at the following link.

Authors

Fabiano Veglianti
PhD Student in Data Science

Flavio Giorgi
PhD Student in Computer Science

Fabrizio Silvestri
Full Professor of Computer Science

Gabriele Tolomei
Associate Professor of Computer Science