Explainable Artificial Intelligence in the Life Sciences

27 February 2026

Abstract

Deep learning has become a widely used tool in chemoinformatics and bioinformatics. For instance, graph-based models such as graph neural networks can be applied to molecular graphs to predict chemical properties or biochemical activity with high accuracy. However, deep learning models lack transparency, which is undesirable in applications such as drug design, where model outputs must be interpretable to be trusted. To address this limitation, explainable artificial intelligence strategies have been developed and applied. This seminar will discuss how neural networks can be used and explained in the context of the life sciences and biomedicine. We will also examine whether such models are capable of extracting and learning meaningful biochemical knowledge from data, or whether they mainly rely on memorizing statistical patterns, in order to assess their applicability to biomedical tasks such as drug design.

Speakers
  • Andrea Mastropietro
    Lamarr Institute at the University of Bonn, Nara Institute of Science and Technology

    Andrea Mastropietro is a Junior Research Group Leader at the Lamarr Institute at the University of Bonn (Germany) and an Assistant Professor at the Nara Institute of Science and Technology (Japan). His research interests lie in graph learning and explainable artificial intelligence in the life sciences, with applications in chemoinformatics and bioinformatics. Previously, he was a Postdoctoral Researcher at the University of Bonn and a fellowship awardee of the Lamarr Stipendium Program. He received his Ph.D. (cum laude) from Sapienza University of Rome, and his thesis received the Best Ph.D. Thesis Award on Big Data & Data Science from the CINI National Lab on Data Science at the ITADATA 2025 conference.