Our new work, "A Survey on Explainable AI Narratives based on Large Language Models", is now available online.

Explainable Artificial Intelligence (XAI) seeks to elucidate the inner logic of machine learning models, yet its outputs often remain difficult for non-technical users to understand. The emerging paradigm of XAI Narratives leverages Large Language Models (LLMs) to translate technical explanations into coherent, human-readable accounts. This survey provides the first systematic review of this approach, focusing on systems in which LLMs act as post-hoc narrative translators rather than autonomous explainers. We formalize this task as the Narrative Generation Problem, examine its integration with classical XAI methods such as feature attribution and counterfactual explanations across multiple data modalities, and introduce a taxonomy for narrative evaluation spanning three core dimensions. Finally, we analyze prompting strategies and outline open challenges and future directions for advancing reliable, interpretable, and context-aware XAI Narratives.
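To make the "post-hoc narrative translator" setup concrete, here is a minimal sketch (not taken from the survey; the helper name, attribution values, and prompt wording are all illustrative) of how feature-attribution output might be packaged into a prompt for an LLM to narrate:

```python
# Illustrative sketch of the "LLM as post-hoc narrative translator" idea.
# The function, attribution scores, and prompt template are hypothetical
# placeholders, not the survey's actual method.

def build_narrative_prompt(prediction: str, attributions: dict[str, float]) -> str:
    """Turn feature-attribution scores into a prompt asking an LLM
    to explain the model's decision for a non-technical reader."""
    # Rank features by magnitude of influence, largest first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {score:+.2f}" for name, score in ranked]
    return (
        f"The model predicted: {prediction}.\n"
        "Feature attribution scores (positive values push toward the prediction):\n"
        + "\n".join(lines)
        + "\nExplain this decision in plain language for a non-expert."
    )

# Hypothetical output from a SHAP-style feature-attribution explainer.
prompt = build_narrative_prompt(
    "loan denied",
    {"income": -0.42, "credit_history_length": -0.18, "existing_debt": +0.55},
)
print(prompt)  # this prompt would then be sent to an LLM to generate the narrative
```

The LLM here only rephrases the attribution output it is given, which is what distinguishes the narrative-translator role from an autonomous explainer.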
🔗 Read the full survey at the following link: