This section outlines our current research focus. For each topic, you will find a brief overview, our related work, and available thesis projects.
Introduction

The field of Explainable Artificial Intelligence (XAI) has flourished, producing a vast array of methods to elucidate the internal mechanisms and decision-making rationales of opaque models. Classical XAI techniques, such as feature attribution, saliency mapping, rule extraction, and counterfactual reasoning, seek to expose the logic underlying a model's predictions. However, despite their algorithmic sophistication, the outputs of these explainers are often intricate, mathematical, and difficult for non-technical stakeholders to interpret. This mismatch between technical transparency and human interpretability remains a central obstacle to the practical adoption of explainable AI.
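To make the kind of output at issue concrete, the sketch below runs one classical feature-attribution technique, permutation importance, on a toy problem. The dataset, model, and scikit-learn usage are illustrative assumptions, not a description of our own methods; the point is that even a simple explainer yields a raw vector of scores that still needs translation for non-technical audiences.

```python
# A minimal feature-attribution sketch using permutation importance,
# one of the classical XAI techniques named above. Dataset, model,
# and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Attribution: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The raw output is a vector of per-feature scores -- exactly the kind
# of numeric explanation that is hard to interpret without further
# translation for non-technical stakeholders.
top5 = sorted(zip(data.feature_names, result.importances_mean),
              key=lambda t: -t[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.4f}")
```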
Introduction

Community detection is a fundamental problem in network science, in which algorithms attempt to uncover groups of nodes, called communities, that are more densely connected internally than to the rest of the network. While this task is widely applied in areas such as social networks, recommendation systems, and biological data analysis, it raises significant privacy concerns. Being identified as a member of a community can expose sensitive affiliations, such as political, religious, or professional memberships, that individuals may wish to keep private.
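As a concrete illustration, the sketch below detects communities with greedy modularity maximization on a classic benchmark graph; the algorithm choice and the networkx usage are illustrative assumptions rather than our own approach. Note how the output itself discloses every node's group membership, which is precisely the privacy exposure described above.

```python
# A minimal community-detection sketch on a classic benchmark graph.
# Greedy modularity maximization is an illustrative choice of algorithm.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()  # 34 members of a karate club

# Each detected community is a set of node ids.
communities = greedy_modularity_communities(G)

# Merely publishing this output reveals every node's membership --
# the privacy concern discussed above.
for i, members in enumerate(communities):
    print(f"Community {i}: {sorted(members)}")
```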