Our new work "A Survey on Explainable AI Narratives based on Large Language Models" is available online

Nov 11, 2025 · 1 min read

Matteo Silvestri, Vittoria Vineis, Edoardo Gabrielli, Fabiano Veglianti, Flavio Giorgi, Fabrizio Silvestri, Gabriele Tolomei

Explainable Artificial Intelligence (XAI) seeks to elucidate the inner logic of machine learning models, yet its outputs often remain difficult for non-technical users to understand. The emerging paradigm of XAI Narratives leverages Large Language Models (LLMs) to translate technical explanations into coherent, human-readable accounts. This survey provides the first systematic review of this approach, focusing on systems in which LLMs act as post-hoc narrative translators rather than autonomous explainers. We formalize this task as the Narrative Generation Problem, examine its integration with classical XAI methods such as feature attribution and counterfactual explanations across multiple data modalities, and introduce a taxonomy for narrative evaluation spanning three core dimensions. Finally, we analyze prompting strategies and outline open challenges and future directions for advancing reliable, interpretable, and context-aware XAI Narratives.
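To make the translator role concrete, here is a minimal sketch, not taken from the survey, of how feature-attribution output (e.g., SHAP-style scores) might be wrapped into a prompt for an LLM acting as a post-hoc narrative translator. All function and variable names are illustrative assumptions:

```python
# Minimal sketch of an LLM as a post-hoc narrative translator.
# All names here are illustrative assumptions, not the survey's API.

def build_narrative_prompt(prediction: str, attributions: dict[str, float]) -> str:
    """Turn raw feature-attribution scores into a prompt asking an
    LLM to produce a plain-language explanation (an XAI Narrative)."""
    # Sort features by absolute importance so the narrative leads
    # with the most influential ones.
    ranked = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"- {name}: {score:+.3f}" for name, score in ranked]
    return (
        f"A machine learning model predicted: {prediction}.\n"
        "Feature attribution scores (positive pushes toward the "
        "prediction, negative against it):\n"
        + "\n".join(lines)
        + "\nWrite a short, faithful explanation for a non-technical "
        "reader. Do not invent factors beyond those listed."
    )

if __name__ == "__main__":
    prompt = build_narrative_prompt(
        prediction="loan denied",
        attributions={"income": -0.42, "credit_history": -0.31, "age": +0.05},
    )
    print(prompt)  # This prompt would then be sent to any LLM API.
```

The key design point reflected here is the one the abstract emphasizes: the LLM narrates an explanation produced by a classical XAI method rather than acting as an autonomous explainer of the model itself.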

🔗 Read the full survey at the following link

Authors

Matteo Silvestri
PhD Student in Computer Science

Vittoria Vineis
PhD Student in Data Science

Edoardo Gabrielli
PhD Student in Cybersecurity

Fabiano Veglianti
PhD Student in Data Science

Flavio Giorgi
PhD Student in Computer Science

Fabrizio Silvestri
Full Professor of Computer Science

Gabriele Tolomei
Associate Professor of Computer Science