Explanation Methods for Sequential Data Models

19 February 2025

Abstract

Sequential data is fundamental in critical fields such as healthcare, finance, and mobility, where reliable and interpretable predictions are essential for informed decision-making. However, many state-of-the-art predictive models function as complex black boxes, making their outputs difficult to understand and trust. Explainable Artificial Intelligence (XAI) addresses this challenge by increasing the transparency and interpretability of AI systems. This seminar explores XAI for sequential data from three perspectives: data, models, and explanations. By analyzing different types of sequential data, including time series, trajectories, and textual documents, we will discuss innovative methods to enhance interpretability. Covering classification and regression tasks on univariate and multivariate time series, trajectories, and text data, the seminar offers new insights into building trustworthy AI models. The final goal is a deeper understanding of techniques that make AI-driven predictions more transparent, ultimately improving confidence in AI-assisted decision-making.

Speakers

  • Francesco Spinnato
    Università di Pisa

    Francesco Spinnato is a researcher at the University of Pisa specializing in Explainable Artificial Intelligence (XAI) for sequential data, with a particular focus on interpreting black-box models for univariate and multivariate time series. He earned a Bachelor's degree in Economics and Management from the University of Padua in 2017, a Master's degree in Data Science and Business Informatics from the University of Pisa in 2020, and a Ph.D. in Data Science from the Scuola Normale Superiore in 2024. He is currently part of the KDDLab (Knowledge Discovery and Data Mining Laboratory), a joint initiative of the University of Pisa and the "A. Faedo" Institute of Information Science and Technologies at the CNR.