Large Language Models

eXplainable AI Narratives

Introduction

The field of Explainable Artificial Intelligence (XAI) has flourished, producing a vast array of methods to elucidate the internal mechanisms and decision-making rationales of opaque models. Classical XAI techniques (feature attribution, saliency mapping, rule extraction, and counterfactual reasoning) seek to expose the logic underlying a model's predictions. However, despite their algorithmic sophistication, the outputs of these explainers are often intricate, highly mathematical, and difficult for non-technical stakeholders to interpret. This mismatch between technical transparency and human interpretability remains a central barrier to the practical adoption of explainable AI.
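As a concrete illustration of the narrative-translation idea pursued here, the sketch below takes the numeric output of a hypothetical feature-attribution explainer and builds a prompt asking an LLM to restate it in plain language; the LLM's response then serves as the post-hoc narrative shown to the stakeholder. The feature names, attribution values, prompt wording, and the attribution_to_prompt helper are illustrative assumptions only, not the interface of any of the systems announced below.

# Illustrative sketch: turning a feature-attribution explanation into a prompt
# that asks an LLM to produce a plain-language narrative. All names and values
# below are hypothetical placeholders.

def attribution_to_prompt(prediction: str, attributions: dict[str, float]) -> str:
    """Build a narrative-generation prompt from per-feature contributions."""
    # Rank features by absolute contribution so the most influential come first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.3f}" for name, value in ranked]
    return (
        "You are explaining a machine-learning prediction to a non-technical reader.\n"
        f"The model predicted: {prediction}.\n"
        "Each feature's signed contribution to this prediction is listed below\n"
        "(positive values push toward the prediction, negative values push against it):\n"
        + "\n".join(lines)
        + "\nWrite a short, faithful explanation in plain language, "
        "without inventing information that is not in the list."
    )


if __name__ == "__main__":
    # Hypothetical explainer output for a loan-denial model.
    example_attributions = {
        "debt_to_income_ratio": 0.42,
        "number_of_late_payments": 0.31,
        "years_of_credit_history": -0.18,
        "annual_income": -0.05,
    }
    prompt = attribution_to_prompt("loan application denied", example_attributions)
    print(prompt)
    # In a full pipeline, `prompt` would be sent to an open-source LLM, whose
    # response is the natural-language narrative presented to the user.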

Dec 4, 2025

We received the Best Paper Award at the HCAI Workshop (ACM CIKM 2025)

The article "Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation" received the Best Paper Award at the HCAI 2025 Workshop (ACM CIKM 2025).

Nov 14, 2025

Our new work "A Survey on Explainable AI Narratives based on Large Language Models" is available online

The survey provides the first systematic review of LLM-based XAI narratives, focusing on systems in which LLMs act as post-hoc narrative translators rather than autonomous explainers.

Nov 11, 2025

Our paper "Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation" has been accepted at CIKM 2025 "Human-Centric AI - From Explainability and Trustworthiness to Actionable Ethics"

We enhance LLM-generated XAI narratives through multi-narrative refinement and knowledge distillation, building on our approach that translates counterfactual explanations for GNNs into natural language descriptions.

Oct 13, 2025

Our paper "Natural Language Counterfactual Explanations for Graphs Using Large Language Models" has been accepted at AISTATS 2025

We introduce a novel approach that leverages open-source Large Language Models (LLMs) to transform counterfactual explanations for GNNs into natural language descriptions.

Jan 27, 2025