Abstract
Sequential data is fundamental in critical fields such as healthcare, finance, and mobility, where reliable and interpretable predictions are essential for informed decision-making. However, many state-of-the-art predictive models operate as complex black boxes, making their outputs difficult to understand and trust. Explainable Artificial Intelligence (XAI) addresses this challenge by increasing the transparency and interpretability of AI systems. This seminar explores XAI for sequential data from three perspectives: data, models, and explanations. We examine classification and regression tasks on several types of sequential data, including univariate and multivariate time series, trajectories, and textual documents, and discuss methods for enhancing the interpretability of models trained on them. The goal is a deeper understanding of techniques that make AI-driven predictions more transparent, ultimately improving confidence in AI-assisted decision-making.