Dear all,
On Thursday, December 21, we are organising a mini pre-Christmas workshop on the
Philosophy of Machine Learning at IDSIA USI-SUPSI.
The meeting will be held in room C2.09, Sector C, second floor, Campus Est, USI-SUPSI,
Lugano-Viganello.
The programme is as follows:
10.00 - 11.00: Talk by Juan Duran (TU Delft)
*Title: Justification, reliabilism, and machine learning
*Abstract: In this talk, I aim to explore the significance of justification in machine
learning (ML). To begin, I'll briefly touch upon two promising epistemologies for
ML—transparency and computational reliabilism (CR). However, my focus will be on defending
the latter, which requires a more in-depth discussion. I'll dedicate some time to elucidating
how CR operates and which assumptions are built in. Next, I plan to illustrate how CR
works in the context of Forensic ML. This emerging field sparks debates about
justification due to the inherent challenges in achieving both explanation and
understanding, which are crucial for judicial and forensic purposes. Lastly, I'll
address two objections against CR: i) the concern that, under CR, statistically
insignificant yet serious errors can compromise the reliability of AI algorithms; and ii)
the argument that CR, being a reliabilist epistemology, demands a high frequency of
success, ultimately posing an issue of high predictive accuracy. I'll present
arguments to counter these objections, advocating for computational reliabilism as a
promising epistemology for ML.
11.00 - 12.00: Joint talk by Emanuele Ratti (Bristol) and Alberto Termine (IDSIA
USI-SUPSI).
*Title: On the mediating role of XAI in scientific research
*Abstract: In recent years, philosophers have devoted increasing attention to the eXplainable AI
(XAI) research programme. Surprisingly, their analyses have focused more on the typologies and
formats of explanations delivered by XAI models, while little has been said about the
potential epistemic roles of XAI in scientific research. In this talk, we will provide a
novel framework to understand XAI in scientific research as a class of models that, rather
than explaining, mediate. We call this framework ‘XAI-as-mediator’, and we will build on
the literature on ‘models as mediators’ to pinpoint its characteristics. In particular, we
argue that XAI models mediate between opaque models generated by ML algorithms and the
domain/theoretical knowledge of the field to which these models are applied.
In support of our thesis, we will analyse two examples of how XAI models can play the role
of mediators. The first example focuses on the use of a specific family of XAI tools,
namely feature selection methods, as support tools for post-hoc evaluation of ML models by
domain experts. The second example, on the other hand, examines the use of XAI, and
specifically counterfactual explanation methods, as tools to support hypothesis
formulation during exploratory research. The talk is based on joint work with Alessandro
Facchini (IDSIA USI-SUPSI).
*** Lunch break (and panettonata @ IDSIA)
14.00 - 14.45: Talk by Andrea Ferrario (ETHZ)
*Title: The quest for justification in AI - where do we stand?
*Abstract: Establishing well-grounded beliefs about artificial intelligence (AI) and its capabilities
is crucial, considering its widespread applications. However, existing research lacks a
unified approach to the justification of these beliefs. I briefly review the current
literature, identify challenges, and suggest a few insights to advance the field.
14.45 - 15.45: Talk by Chiara Manganini (UniMI)
*Title: On the Ontology of Machine Learning Systems and its consequences for the taxonomy
of Miscomputation
*Abstract: When compared to "traditional" computational artefacts, ML systems
show a crucial difference in the role played by their function, which is discovered through
the training process rather than fixed from the beginning. This has deep implications for the
notions of correctness and, consequently, of miscomputation in ML. By adapting the
Ontology of the Levels of Abstraction, a revised framework is proposed to accommodate the
essential features of ML systems. The result is a complex ontology composed of three
artefacts: the Training Sample, the Training Engine, and the Machine Learning Model. This
new ontological framework is then used to develop systematic insights into the types of ML
errors, their relationship with non-ML miscomputations, and with fairness and
explainability. The talk is based on joint work with Alberto Termine (IDSIA USI-SUPSI)
and Giuseppe Primiero (UniMI).
For more information, please write to alessandro.facchini @ idsia.ch
*** Information on the speakers:
*Juan Duran is Assistant Professor at the Faculty of Technology, Policy and Management, TU
Delft. His research focuses on the philosophy of science and ethics of computer-based
science and engineering (computer simulations, AI, and Big Data).
He is the 2019 recipient of the Herbert A. Simon Award for outstanding research in
computing and philosophy.
*Andrea Ferrario is the Scientific Director of the Mobiliar Lab for Analytics at ETH and a
postdoc at ETH Zurich. His research interests lie at the intersection of philosophy and
technology, with a focus on the philosophy of AI and health interventions.
*Chiara Manganini is a PhD student at the University of Milan, Department of Philosophy. She
is part of the Logic, Uncertainty, Computation and Information Group, where she studies
the logical and philosophical aspects of the problem of bias in machine learning.
*Emanuele Ratti is a Lecturer (i.e. tenure-track) in the Department of Philosophy at the
University of Bristol. His areas of specialisation are the History and Philosophy of
Science and Technology (molecular biology, genomics, and AI), and Ethics of Science and
Technology (including virtue ethics).
*Alberto Termine is an Assistant Researcher at IDSIA USI-SUPSI. He obtained his PhD at the
Logic, Uncertainty, Computation and Information Lab, Department of Philosophy, University
of Milan. His current research ranges from causal and counterfactual methods in Explainable
Artificial Intelligence to the metaphysics and epistemology of Machine Learning.