Our paper titled "Countering Overfitting with Counterfactual Examples" has been accepted at KDD 2026
Dec 4, 2025 · 1 min read
Fabiano Veglianti
Flavio Giorgi
Fabrizio Silvestri
Gabriele Tolomei

In this work, we connect two key properties of machine learning models: generalization and explainability. Specifically, we show that the more a model overfits its training data, the easier it becomes, on average, to find counterfactual examples (i.e., small input perturbations that change the model’s prediction).
Leveraging this observation, we propose CF-Reg, a new regularization term that mitigates overfitting by enforcing a sufficient margin between training samples and their counterfactuals.
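To make the idea concrete, here is a minimal PyTorch sketch in the spirit of CF-Reg. It is an illustration under simplifying assumptions, not the paper's implementation: the counterfactual search is a standard PGD-style loop with an L2 projection, and the names (`find_counterfactual`, `cf_reg_loss`) and hyperparameters (`margin`, `lam`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def find_counterfactual(model, x, y, margin=1.0, steps=10, lr=0.1):
    """PGD-style search (hypothetical, for illustration): perturb a batch x
    to flip the model's prediction, keeping each perturbation inside an L2
    ball of radius `margin`. No parameter gradients are accumulated here."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            # Project back into the margin ball: we only care whether a
            # counterfactual exists *closer* than the desired margin.
            norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            scale = (margin / norms).clamp(max=1.0)
            delta *= scale.view(-1, *([1] * (delta.dim() - 1)))
    return (x + delta).detach()

def cf_reg_loss(model, x, y, margin=1.0, lam=0.1):
    """Task loss plus a penalty that is large whenever the prediction can be
    flipped within `margin` of a training sample; minimizing it pushes
    counterfactuals at least `margin` away."""
    x_cf = find_counterfactual(model, x, y, margin=margin)
    return F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(x_cf), y)
```

Minimizing the second term keeps the model's decision stable inside the margin ball around each training point, which is one way to read "enforcing a sufficient margin between training samples and their counterfactuals."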
Our experiments indicate that this strategy not only improves generalization but also produces counterfactual explanations naturally as a by-product of the training process.
🔗 The preprint is available at the following link.