Dear all,
On Monday, August 29, we are organising a mini workshop at IDSIA USI-SUPSI on explainable
AI and scientific understanding in data-driven science.
The meeting will be held in room D1.06, Sector D, first floor, Campus Est, USI-SUPSI,
Lugano-Viganello.
The programme is as follows:
10:30-11:15 : Talk by Florian J. Boge (Wuppertal University)
*Title: Deep Learning Robustness and Scientific Discovery: The Case of Anomaly Detection
*Abstract: Anomalies have long been a driving force behind scientific progress. In recent
years, particle physicists have begun to employ unsupervised deep learning for the sake of
discovering potential anomalies, indicative of new physics. In my talk, I will present
joint work with two particle physicists that, in a first step, distinguishes two senses of
deep learning robustness: feature and performance robustness. I will then argue that
state-of-the-art deep learning can succeed by violating feature robustness, but that it
needs to obey performance robustness. At present, however, this still represents a major
hurdle to the successful employment of deep learning for scientific discovery, as can be
shown in the case of anomaly detection.
11:15-11:45 : Q&A / Discussion
*** Lunch break
14:00-14:45 : Talk by Emanuele Ratti (Johannes Kepler University Linz)
*Title: A coherentist approach to explainable AI in scientific research
*Abstract: In the past few years, there has been an explosion of literature concerned with
the opacity of data science tools. The problem with opacity, it is said, is that it
makes the epistemic warrants and the moral accountability of AI tools problematic. If we
cannot understand how and why a tool has arrived at certain conclusions, how do we know if
this tool is reliable and/or trustworthy? Recently, a field called Explainable AI (XAI)
has advanced various solutions to ‘open’ the black-box of opaque algorithmic systems.
Finding the right way to ‘explain’ AI models (e.g. data science models) or the processes
leading to them, it is said, is what can ensure the epistemic and moral accountability of
AI. But despite the richness of XAI proposals, it has been noticed that this emerging
field suffers from several problems. First, it is not clear what the ultimate goals of XAI
tools are, whether they are about trustworthiness or reliability, which are both equally
problematic goals. Second, it is not clear what XAI tools are supposed to explain: are the
explanations about data-generating processes, or about the models themselves? Third, there
are many ways of thinking about explanations, and it is not clear how to evaluate which
one is best in a given context.
In this talk, I start from the assumption that these concerns are well-motivated, and that
XAI is a promising field in need of a clearer goal. By limiting myself to the context of
scientific research, I propose that XAI, despite the name, does not have an explanatory
purpose; rather, I formulate a new conceptualization of XAI tools that I call
‘coherentist’. The notion of ‘coherence’ is taken from Hasok Chang’s work on science as a
system of practices (SoP). A SoP is a network of epistemic activities, scientific objects,
and agents; these components have to stay in a relation of coherence (defined in various
ways) in order to ensure the optimal functioning of the overall SoP of a given scientific
project. Through Chang’s lens, AI tools should not be seen as isolated entities which
fully determine scientific decisions. Rather, AI tools are just one component of a dense
network constituting a given SoP. In this context, the role of XAI is not to explain what
AI tools do: the role of XAI is to facilitate the integration of AI tools into a given
scientific project, and to make sure that AI tools themselves are in a relation of
‘coherence’ with the other components of a given SoP. Through a case study of biomedical
data science, I will delineate (1) the idea of SoP, (2) the different ways in which
‘coherence’ acts as a ‘glue’ among different components of a given SoP, and (3) the
special coherentist role that XAI plays in integrating AI tools in scientific practice.
14:45-15:15 : Q&A / Discussion
*************
Information on the speakers:
*Florian J. Boge is a postdoctoral researcher at Wuppertal University in the
interdisciplinary DFG/FWF research unit "The Epistemology of the Large Hadron
Collider". He received his PhD from the University of Cologne in 2017 for a thesis on
the interpretation of Quantum Mechanics. He was recently granted the leadership of an Emmy
Noether junior research group studying the impact of deep learning on scientific
understanding. From October, he will also hold a temporary, part-time interim
professorship.
*Emanuele Ratti is a postdoc at the Institute of Philosophy and Scientific Method at
Johannes Kepler University Linz. He has a PhD in Ethics and Foundations of the Life
Sciences from the European School of Molecular Medicine (SEMM) in Milan, and he has worked
for almost five years at the University of Notre Dame. He has research interests in
philosophy of biology and philosophy of AI.