Our paper "Countering Overfitting with Counterfactual Examples" has been accepted at KDD 2026.
We connect two key properties of machine learning models: generalization and explainability.
Human-Explainable, Robust, and COllaborative LEarning.
At the HERCOLE Lab, our research mission is to advance the development of next-generation machine learning and artificial intelligence systems that are not only powerful, but also understandable, resilient, and decentralized.
We strive to make AI systems more interpretable and transparent to human users, enabling greater trust and collaboration. We design algorithms that are robust to adversarial attacks, ensuring reliability in real-world conditions.
Moreover, we design intelligent systems that can operate efficiently on edge devices, enabling local decision-making, enhancing privacy, and reducing reliance on centralized infrastructures.
Through interdisciplinary research and innovation, we aim to shape AI that is more human-centered, secure, and scalable.
The article “Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation” received the Best Paper Award at the HCAI 2025 Workshop (ACM CIKM 2025).
This survey provides the first systematic review of LLM-generated explanation narratives, focusing on systems in which LLMs act as post-hoc narrative translators rather than autonomous explainers.