Riccardo Meucci - 40 Years of Chaos in Lasers and its Control
by IDSIA Announcements of talks@IDSIA
Dear Colleagues,
On Friday, September the 16th at 11:30 in Room A1.02 (East Campus, via La Santa 1, Lugano), Riccardo Meucci will give a talk titled:
40 Years of Chaos in Lasers and its Control
Abstract
We revisit the laser model with cavity loss modulation, with which evidence of chaos and generalized multistability was first obtained in 1982 [1]. Multistability refers to the coexistence of two or more attractors in a nonlinear dynamical system. Despite its relative simplicity, the model shows how multistability depends on the dissipation of the system. The model is then tested under the action of a secondary sinusoidal perturbation, which can remove bistability when a suitable relative phase is chosen [2]. This control strategy is universally known as “phase control”; it was first implemented in the same physical system, but at different resonance frequencies [3].
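For orientation, a minimal sketch of a loss-modulated class-B laser model of the kind discussed in the talk (notation and normalization are assumed here for illustration and need not match Refs. [1, 2]):

\[
\dot{I} = I\left[\,G N - k_0\,(1 + m\cos\Omega t)\,\right], \qquad
\dot{N} = \gamma_{\parallel}\,(N_0 - N) - G N I ,
\]

where I is the field intensity, N the population inversion, G the gain coefficient, k_0 the unmodulated cavity loss, N_0 the pump parameter, and m and Ω the depth and frequency of the loss modulation. Sweeping m and Ω in a model of this kind is what reveals the subharmonic bifurcations, chaos, and coexisting attractors mentioned above.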
The potential of this control technique has been validated on other paradigmatic chaotic systems such as the Duffing oscillator. In this particular case, it has been verified that the control is more sensitive when applied to the cubic term of the nonlinearity [4].
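As a rough illustration of the phase-control idea applied to the Duffing oscillator, the sketch below (Python, with assumed parameter values that are not taken from Ref. [4]) parametrically modulates the cubic term with a weak harmonic perturbation of adjustable relative phase phi:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed for this sketch, not those of Ref. [4])
delta, gamma, omega = 0.25, 0.30, 1.0   # damping, forcing amplitude, forcing frequency
eps, phi = 0.05, np.pi / 2              # control amplitude and relative phase

def duffing(t, state):
    x, v = state
    # x'' + delta*x' - x + (1 + eps*cos(omega*t + phi)) * x**3 = gamma*cos(omega*t)
    cubic = (1.0 + eps * np.cos(omega * t + phi)) * x**3
    return [v, -delta * v + x - cubic + gamma * np.cos(omega * t)]

sol = solve_ivp(duffing, (0, 500), [0.1, 0.0], max_step=0.01)
x_ss = sol.y[0][sol.t > 400]             # keep only the post-transient part
print(f"steady-state x range: [{x_ss.min():.3f}, {x_ss.max():.3f}]")

Sweeping phi over [0, 2π) and checking whether the post-transient response remains periodic or becomes chaotic gives a feel for how the relative phase of the weak perturbation acts as the control knob.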
We recently demonstrated that phase control, which is classified as a nonfeedback method, can be converted into a closed-loop (feedback) control when a suitable adaptive filter is applied to the chaotic signal to be processed [5].
References
[1] F. T. Arecchi, R. Meucci, G. Puccioni, and J. Tredicce, “Experimental evidence of subharmonic bifurcations, multistability, and turbulence in a Q-switched gas laser,” Phys. Rev. Lett. 49(17), 1217–1220 (1982).
[2] R. Meucci, J. M. Ginoux, M. Mehrabbeik, S. Jafari, and J. C. Sprott, “Generalized multistability and its control in a laser,” Chaos 32, 083111 (2022); doi: 10.1063/5.0093727.
[3] R. Meucci, W. Gadomski, M. Ciofini, and F. T. Arecchi, “Experimental control of chaos by means of weak parametric perturbations,” Phys. Rev. E 49(4), R2528–R2531 (1994).
[4] R. Meucci, S. Euzzor, E. Pugliese, S. Zambrano, M. R. Gallas, and J. A. C. Gallas, “Optimal phase-control strategy for damped-driven Duffing oscillators,” Phys. Rev. Lett. 116(4), 044101 (2016).
[5] R. Meucci, S. Euzzor, M. Ciofini, A. Lapucci, and S. Zambrano, “Demonstrating filtered feedback control near a boundary crisis,” IEEE Trans. Circuits Syst. I 68(7), 3023 (2021).
The speaker
Riccardo Meucci received the doctoral degree in Physics in 1982 and the specialization degree in Optics (PhD), both from the University of Florence, Italy. From 1984 to 1987, he was a research fellow at the Istituto di Cibernetica of the National Research Council (CNR) of Italy. In 1987, he joined the National Institute of Optics, Florence, Italy, where he holds the position of Research Director. He is also a contract professor of physical optics and of mathematical methods for optics at the University of Florence. He has been an associate editor of the International Journal of Bifurcation and Chaos (IJBC) since 1 January 2018.
His research interests include nonlinear dynamics, chaos, control of chaos, synchronization and infrared digital holography.
He has been an IEEE Senior Member since 17 November 2018.
Mini workshop on explainable AI and scientific understanding in data-driven science, Monday, August 29, room D1.06, Campus Est, USI-SUPSI, Lugano-Viganello
by IDSIA Announcements of talks@IDSIA
Dear all,
On Monday, August 29, we are organising at IDSIA USI-SUPSI a mini workshop on explainable AI and scientific understanding in data-driven science.
The meeting will be held in room D1.06, Sector D, first floor, Campus Est, USI-SUPSI, Lugano-Viganello.
The programme is the following:
10:30-11:15 : Talk by Florian J. Boge (Wuppertal University)
*Title: Deep Learning Robustness and Scientific Discovery: The Case of Anomaly Detection
*Abstract: Anomalies have long been a driving force behind scientific progress. In recent years, particle physicists have begun to employ unsupervised deep learning for the sake of discovering potential anomalies, indicative of new physics. In my talk, I will present joint work with two particle physicists that, in a first step, distinguishes two senses of deep learning robustness: feature and performance robustness. I will then argue that state-of-the-art deep learning can succeed by violating feature robustness but that it needs to obey performance robustness. However, at present, this still represents a major hurdle to the successful employment of deep learning for the sake of scientific discovery, as can be shown in the case of anomaly detection.
11:15-11:45 : Q&A / Discussion
*** Lunch break
14:00-14:45 : Talk by Emanuele Ratti (Johannes Kepler University Linz)
*Title: A coherentist approach to explainable AI in scientific research
*Abstract: In the past few years, there has been an explosion of literature concerned with the opacity of data science tools. The problem with opacity, it is said, is that it makes the epistemic warrants and the moral accountability of AI tools problematic. If we cannot understand how and why a tool has arrived at certain conclusions, how do we know if this tool is reliable and/or trustworthy? Recently, a field called Explainable AI (XAI) has advanced various solutions to ‘open’ the black box of opaque algorithmic systems. Finding the right way to ‘explain’ AI models (e.g. data science models) or the processes leading to them, it is said, is what can ensure the epistemic and moral accountability of AI. But despite the richness of XAI proposals, it has been noticed that this emerging field suffers from several problems. First, it is not clear what the ultimate goals of XAI tools are, whether they are about trustworthiness or reliability, which are both equally problematic goals. Second, it is not clear what XAI tools are supposed to explain: are the explanations about data-generating processes, or about the models themselves? Third, there are many ways of thinking about explanations, and it is not clear how to evaluate which one is the best given a certain context.
In this talk, I start from the assumption that these concerns are well-motivated, and that XAI is a promising field in need of a clearer goal. By limiting myself to the context of scientific research, I propose that XAI, despite the name, does not have an explanatory purpose; rather, I formulate a new conceptualization of XAI tools that I call ‘coherentist’. The notion of ‘coherence’ is taken from Hasok Chang’s work on science as a system of practices (SoP). A SoP is a network of epistemic activities, scientific objects, and agents; these components have to stay in a relation of coherence (defined in various ways) in order to ensure the optimal functioning of the overall SoP of a given scientific project. Through Chang’s lens, AI tools should not be seen as isolated entities which fully determine scientific decisions. Rather, AI tools are just one component of a dense network constituting a given SoP. In this context, the role of XAI is not to explain what AI tools do: the role of XAI is to facilitate the integration of AI tools into a given scientific project, and to make sure that AI tools themselves are in a relation of ‘coherence’ with the other components of a given SoP. Through a case study of biomedical data science, I will delineate (1) the idea of SoP, (2) the different ways in which ‘coherence’ acts as a ‘glue’ among different components of a given SoP, and (3) the special coherentist role that XAI plays in integrating AI tools in scientific practice.
14:45-15:15 : Q&A / Discussion
*************
Information on the speakers:
*Florian J. Boge is a postdoctoral researcher at Wuppertal University in the interdisciplinary DFG/FWF research unit "The Epistemology of the Large Hadron Collider". He received his PhD from the University of Cologne in 2017 for a thesis on the interpretation of quantum mechanics. He was recently granted an Emmy Noether junior research group for studying the impact of deep learning on scientific understanding. From October on, he will also fill a temporary, part-time interim professorship.
*Emanuele Ratti is a postdoc at the Institute of Philosophy and Scientific Method at Johannes Kepler University Linz. He has a PhD in Ethics and Foundations of the Life Sciences from the European School of Molecular Medicine (SEMM) in Milan, and he has worked for almost five years at the University of Notre Dame. He has research interests in philosophy of biology and philosophy of AI.