<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>XAI2026 | HERCOLE Lab</title><link>https://hercolelab.netlify.app/tags/xai2026/</link><atom:link href="https://hercolelab.netlify.app/tags/xai2026/index.xml" rel="self" type="application/rss+xml"/><description>XAI2026</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 06 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>https://hercolelab.netlify.app/media/icon_hu12080033123456965919.png</url><title>XAI2026</title><link>https://hercolelab.netlify.app/tags/xai2026/</link></image><item><title>Two Papers Accepted at the XAI 2026 Conference</title><link>https://hercolelab.netlify.app/news/26xai/</link><pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate><guid>https://hercolelab.netlify.app/news/26xai/</guid><description>&lt;p>We are delighted to share that two papers from HERCOLE Lab have been accepted at the 4th World Conference on eXplainable Artificial Intelligence. This is a great recognition of our lab&amp;rsquo;s ongoing efforts to advance transparency, trust, and human-centered AI.&lt;/p>
&lt;h2 id="1-ponte-personalized-orchestration-for-natural-language-trustworthy-explanations">1. PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations&lt;/h2>
&lt;p>&lt;strong>Authors:&lt;/strong> Vittoria Vineis, Matteo Silvestri, Lorenzo Antonelli, Filippo Betello, and Gabriele Tolomei&lt;/p>
&lt;p>This work introduces &lt;strong>PONTE&lt;/strong>, a human-in-the-loop framework for generating personalized, trustworthy natural-language explanations of AI systems. Instead of relying on static prompts, PONTE models personalization as a closed-loop process, combining preference-aware generation with verification modules that enforce faithfulness, completeness, and stylistic alignment. Experiments and human evaluations show that this verification-refinement loop significantly improves the quality and reliability of AI-generated explanations across domains such as healthcare and finance.&lt;/p>
&lt;p>Pre-print: &lt;a href="https://arxiv.org/abs/2603.06485" target="_blank" rel="noopener">https://arxiv.org/abs/2603.06485&lt;/a>&lt;/p>
&lt;h2 id="2-demystifying-sequential-recommendations-counterfactual-explanations-via-genetic-algorithms">2. Demystifying Sequential Recommendations: Counterfactual Explanations via Genetic Algorithms&lt;/h2>
&lt;p>&lt;strong>Authors:&lt;/strong> Filippo Betello, Domiziano Scarcelli, Giuseppe Perelli, Fabrizio Silvestri, and Gabriele Tolomei&lt;/p>
&lt;p>This paper proposes the first counterfactual explanation framework for Sequential Recommender Systems (SRSs). By leveraging a genetic algorithm specialized for discrete sequences, the method answers the question: &amp;ldquo;What minimal changes in a user&amp;rsquo;s interaction history would lead to different recommendations?&amp;rdquo; The work also proves that generating such explanations is NP-complete, and demonstrates through extensive experiments that meaningful counterfactual explanations can be generated while preserving model fidelity.&lt;/p>
&lt;p>Pre-print: &lt;a href="https://arxiv.org/abs/2508.03606" target="_blank" rel="noopener">https://arxiv.org/abs/2508.03606&lt;/a>&lt;/p>
&lt;p>Congratulations to all the authors for this excellent work and contribution to the field of Explainable AI.&lt;/p></description></item></channel></rss>