AI and Data Talk Series

We are thrilled to announce the AI and Data Talk Series, an engaging lecture series presented by the ACE AI and Data Grand Challenge Research Committee. This series aims to raise awareness of, and foster dialogue about, the vital intersection of control theory and AI technologies among a diverse audience of established researchers, industry professionals, and early-career scholars.

Event Details

Start Date: Mid-April
Frequency: Monthly Presentations (Each seminar is 1 hour long)
Location: Online (Zoom links provided upon registration)

Join us for a deep dive into the challenges and innovations within AI and control theory, from theoretical foundations to real-world applications, presented by renowned academics from around the globe.

Register Now to secure your spot and receive event reminders, Zoom links, and post-event materials.

Programme Schedule


Prof. Sebastien Gros

 YouTube Recording

Head of Department, Dept. of Eng. Cybernetics, NTNU, Trondheim, Norway

 

Reinforcement Learning and Model Predictive Control: what did we learn on AI for decision making

Abstract

The combination of Reinforcement Learning (RL) and Model Predictive Control (MPC) has been extensively studied in the recent literature. It has been shown that MPC (even with inaccurate models) can be a universal approximator for RL, with the benefits over standard machine learning tools of being structured, providing rational decisions, being able to use system knowledge, and being equipped with a rich theory of stability and safety. Research on RL and MPC has also made it possible to identify the class of problems where “classical” approaches (building the MPC model to fit the data) work well and, by implication, the problems where they do not necessarily work, and hence where RL for MPC is effective. This research also suggests directions to explore for a new paradigm of AI for decision making. In this talk, we will briefly introduce these elements.
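The idea that an MPC design built on an inaccurate model can still be tuned toward closed-loop optimality can be sketched in a few lines. The following toy example is our own illustration, not code from the talk: a scalar LQR-style gain is designed from a deliberately wrong model, and the design cost parameter q is then tuned with an RL-flavoured finite-difference update using only closed-loop cost measurements from the true plant.

```python
import numpy as np

# True plant (unknown to the controller): x+ = a*x + b*u
a_true, b_true = 0.9, 0.5
# Deliberately inaccurate model used for the controller design
a_model, b_model = 0.7, 0.5

def gain(q, r=1.0, a=a_model, b=b_model):
    """Infinite-horizon LQR gain for the *model*, stage cost q*x^2 + r*u^2."""
    P = q
    for _ in range(500):  # fixed-point iteration on the scalar Riccati equation
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)

def closed_loop_cost(q, x0=1.0, T=50):
    """Cost actually incurred on the true plant, with true stage cost x^2 + u^2."""
    x, J, K = x0, 0.0, gain(q)
    for _ in range(T):
        u = -K * x
        J += x * x + u * u
        x = a_true * x + b_true * u
    return J

# RL-style tuning: adjust the design parameter q from closed-loop data only
q, lr, eps = 1.0, 0.2, 1e-3
J0 = closed_loop_cost(q)
for _ in range(200):
    g = (closed_loop_cost(q + eps) - closed_loop_cost(q - eps)) / (2 * eps)
    q = max(q - lr * g, 1e-3)
J1 = closed_loop_cost(q)
assert J1 <= J0  # tuning recovers performance despite the model error
```

Despite the model mismatch (the design uses a = 0.7 while the true plant has a = 0.9), tuning the single design parameter improves the true closed-loop cost, which loosely mirrors the "RL for MPC" mechanism the abstract refers to.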

Bio

Sebastien Gros received his PhD degree in 2008 at the Automatic Control Lab, EPFL. After a bike trip in full autonomy from Switzerland to the Everest base camp, he worked in the wind industry in 2010-2011. He joined the Optimal Control group at KU Leuven in 2011 as a postdoc, where he worked on numerical optimization methods and NMPC for complex mechanical applications. He then joined Chalmers University of Technology in 2013 as an Assistant Professor, where he worked on distributed optimization methods, autonomous driving, vehicle control and traffic optimisation. He was promoted to Associate Professor in 2017. He joined the Dept. of Engineering Cybernetics at the Norwegian University of Science and Technology (NTNU) in 2019 as a full Professor, and became Head of Department in 2022. He has been working on learning methods for MPC since 2018, focusing on the combination of Reinforcement Learning and MPC with closed-loop optimality as the target.

Prof. Raphael Jungers

 YouTube Recording

UCLouvain

 

On the Links Between Binary-Classification Learning and the Scenario Approach

Abstract

We establish connections between the problem of binary classification, where one has to classify unseen inputs based on observed labelled inputs, and risk-aware decision making, a surrogate of the Scenario Approach, where one has to find the best decision satisfying constraints with high probability based on observed constraints.

For that, we first propose a notion of PAC-learnability (Probably Approximately Correct) that applies to algorithms rather than hypothesis classes. We also extend the notion of PAC-learnability to account for restricted classes of ground-truth labelling functions. We show that decision making is an instance of binary classification where the ground-truth labelling function is assumed to be one almost everywhere. We then analyze the PAC-learnability of algorithms for decision making. We show that, unlike in the binary-classification case, a finite VC-dimension (Vapnik-Chervonenkis dimension) or a finite compression size are not necessary for PAC-learnability of decision-making algorithms.

Finally, we propose a sufficient condition inspired by the VC-dimension, and we formulate the conjecture that this condition is also necessary.
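A minimal numerical illustration of the scenario-approach flavour of decision making discussed above (a one-dimensional toy of our own, not the speaker's setup): the decision is the smallest bound satisfying all N observed constraint samples, and its violation probability on fresh, unseen constraints concentrates around 1/(N+1).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 199        # number of observed constraint samples (scenarios)
trials = 2000  # repetitions to estimate the average violation probability

violations = []
for _ in range(trials):
    samples = rng.uniform(size=N)   # observed uncertainty samples on [0, 1]
    decision = samples.max()        # smallest bound satisfying all observed constraints
    # probability that a fresh uniform sample violates the decision
    violations.append(1.0 - decision)

# Scenario-approach theory: the expected violation probability equals 1/(N+1)
emp = np.mean(violations)
assert abs(emp - 1 / (N + 1)) < 2e-3
```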

Bio

Raphael Jungers is a Professor at UCLouvain, Belgium. His main interests lie in the fields of Computer Science, Graph Theory, Optimization and Control. He received a Ph.D. in Mathematical Engineering from UCLouvain (2008) and M.Sc. degrees in Applied Mathematics from the Ecole Centrale Paris (2004) and from UCLouvain (2005).

He has held various invited positions, at the Université Libre de Bruxelles (2008-2009), at the Laboratory for Information and Decision Systems of the Massachusetts Institute of Technology (2009-2010), at the University of L'Aquila (2011, 2013, 2016), at the University of California Los Angeles (2016-2017), and at the University of Oxford (2022-2023).

He is a FNRS, BAEF, and Fulbright fellow. He has been an Editor-at-Large for the IEEE CDC, an Associate Editor for the IEEE CSS Conference Editorial Board, and an Associate Editor for the journals NAHS (2015-2016), Systems and Control Letters (2016-2017), IEEE Transactions on Automatic Control (2015-2020), and Automatica (2020-). He is currently serving as a Senior Editor for NAHS. He was the recipient of the IBM Belgium 2009 award and a finalist of the ERCIM Cor Baayen award 2011. He was the co-recipient of the SICON best paper award 2013-2014, the HSCC 2020 best paper award, and the NAHS 2020-2022 best paper award, and he is the recipient of an ERC 2019 award. He is an IEEE Fellow (class of 2025).

Prof. Pete Seiler

 YouTube Recording

University of Michigan, Dept. of Electrical Engineering and Computer Science, USA

 

Extending Traditional Robust Control Methods for V&V of Nontraditional Controllers

Abstract

This talk will start with a short tutorial on disk margins. These are robust stability measures that account for simultaneous gain and phase perturbations in a feedback system. This provides a generalization of classical (gain-only and phase-only) stability margins. Disk margins are the starting point for other rigorous uncertainty analysis tools developed in robust control. The remainder of the talk will discuss the issues associated with analyzing non-traditional control architectures including neural networks, real-time optimization, and adaptive control. Some initial results will be given for these problems. For example, existing robust control methods can be used to assess stability and robustness of feedback loops that use neural networks. Possible extensions to other non-traditional control architectures will also be discussed.

Bio

Peter Seiler is an Associate Professor in Electrical Engineering and Computer Science at the University of Michigan. He is an IEEE Fellow. He is also the recipient of the O. Hugo Schuck Award in 2003, the Brockett-Willems Outstanding Paper Award in 2020, and an NSF CAREER award in 2021. His research focuses on robust control theory, which addresses the impact of model uncertainty on systems design. He has been a contributor to the Robust Control Toolbox in MATLAB since 2001. He was a Principal Scientist from 2004 to 2008 in the Aerospace Electronic Systems group at Honeywell Labs. During that time, he worked on the redundancy management system for the Boeing 787, sensor fusion algorithms for automotive active safety systems, and re-entry flight control laws for NASA's Orion vehicle.

Dr. Karthik Elamvazhuthi 

 YouTube Recording

Los Alamos National Laboratory

 

Control of Nonlinear Systems Using Denoising Diffusion Models

Abstract

In this talk, we view classical control and planning problems from the point of view of sampling from a probability distribution. In contrast to classical sampling, the samples in this scenario have to respect constraints imposed by the control system. Denoising Diffusion Probabilistic Models (DDPMs) are a recently introduced machine learning method for sampling-based image generation. One of the theoretical workhorses of the method is the idea of time reversals of the Fokker-Planck equation (FPE). We extend DDPMs to control problems, investigating the existence of such time reversals in this constrained setting based on non-holonomic versions of the Fokker-Planck equation. We then test the method numerically on examples such as path planning in a non-convex environment, designing bistable control laws for linear systems, and stabilization of driftless nonlinear systems.
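The time-reversal idea underlying DDPMs can be demonstrated in one dimension with an analytic score (an unconstrained toy of our own; the talk's contribution concerns the constrained, non-holonomic setting). The forward SDE noises a data point into an approximately standard Gaussian, and simulating the reverse-time SDE regenerates samples near the data point:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, T, n_steps, n_paths = 2.0, 3.0, 300, 5000
dt = T / n_steps

# Forward (noising) SDE: dx = -x dt + sqrt(2) dW, so p_t = N(x0 e^-t, 1 - e^-2t)
def score(x, t):
    """Analytic score grad log p_t(x) of the forward marginal."""
    m, v = x0 * np.exp(-t), 1 - np.exp(-2 * t)
    return -(x - m) / v

# Reverse-time SDE: dx = [x + 2 * score(x, t)] dtau + sqrt(2) dW-bar,
# integrated from t = T (near-stationary N(0,1)) back toward t = 0
x = rng.standard_normal(n_paths)
for k in range(n_steps):
    t = T - k * dt
    x += (x + 2 * score(x, t)) * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)

print(f"mean of generated samples: {x.mean():.2f} (target {x0})")
```

In the image-generation setting the analytic score is replaced by a learned neural network; the talk asks when such reversals exist at all once the diffusion must respect the constraints of a control system.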

Bio

Karthik Elamvazhuthi received the B.Tech. degree in mechatronics engineering from the Manipal Institute of Technology, Manipal, India, in 2011. He received the M.S. and Ph.D. degrees in mechanical engineering from Arizona State University, Tempe, AZ, in 2014 and 2019, respectively. He was a CAM Assistant Adjunct Professor in the Department of Mathematics, University of California, Los Angeles from 2019 to 2022 and a postdoctoral scholar in the Department of Mechanical Engineering, University of California, Riverside from 2022 to 2024. He is currently a postdoctoral scholar at the Los Alamos National Laboratory, in the Applied Mathematics and Plasma Physics Group. His research interests include modelling and control of robotic swarms using methods from partial differential equations and stochastic processes.

Dr. James Anderson

 YouTube Recording

Columbia University, USA

 

Collaborative Learning for Control

Abstract

In this talk I will describe the model-free federated linear quadratic regulator (LQR) problem, where $M$ agents with unknown, distinct yet similar dynamics collaboratively learn an optimal policy to minimize an average quadratic cost while keeping their data private. To exploit the similarity of the agents' dynamics, we propose to use federated learning (FL) to allow the agents to periodically communicate with a central server and train policies by leveraging a larger dataset from all the agents. With this setup, we seek to understand the following questions: (i) Is the learned common policy stabilizing for all agents? (ii) How close is the learned common policy to each agent's own optimal policy? (iii) Can each agent learn its own optimal policy faster by leveraging data from all agents? If time permits, extensions of this work to meta-learning and task selection will be discussed.
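A minimal sketch of the federated setup, with our own simplifications (scalar dynamics and finite-difference gradient estimates standing in for the talk's model-free policy-gradient machinery): each agent estimates the gradient of its own closed-loop cost locally, and the server averages the estimates to update a single common gain.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5
a = 0.8 + 0.05 * rng.standard_normal(M)   # distinct yet similar agent dynamics
b = 1.0

def cost(k, a_i, x0=1.0, T=40):
    """Closed-loop cost of policy u = -k x on agent i, stage cost x^2 + u^2."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += x * x + u * u
        x = a_i * x + b * u
    return J

k, lr, eps = 0.1, 0.01, 1e-3
J_start = np.mean([cost(k, ai) for ai in a])
for _ in range(200):
    # each agent estimates its own gradient locally (model-free, finite differences)
    grads = [(cost(k + eps, ai) - cost(k - eps, ai)) / (2 * eps) for ai in a]
    k -= lr * np.mean(grads)   # the server averages and updates the common policy
J_end = np.mean([cost(k, ai) for ai in a])

assert J_end < J_start                       # the average cost improves
assert all(abs(ai - b * k) < 1 for ai in a)  # the common gain stabilizes every agent
```

In this toy run the common gain stabilizes all agents and lowers the average cost, which corresponds to questions (i) and (iii) above; the talk addresses when such guarantees hold in general.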

Bio

James Anderson is an Associate Professor in the Department of Electrical Engineering at Columbia University, where he is also a member of the Data Science Institute. From 2016 to 2019 he was a senior postdoctoral scholar in the Computing + Mathematical Sciences department at the California Institute of Technology. Prior to Caltech, he held a Junior Research Fellowship at St John's College, University of Oxford, and was also affiliated with the Department of Engineering Science. He was awarded a DPhil (PhD) from Oxford in 2012 and the BSc and MSc degrees from the University of Reading in 2005 and 2006, respectively. Through work with his students and collaborators he has won several awards, including the IEEE Transactions on Control of Network Systems outstanding paper award and the L4DC best paper award.

Dr. Luca Furieri

 YouTube Recording

University of Oxford, UK

 

Closing the loop between optimal nonlinear control and learning-based optimization

Abstract

The increasing complexity of modern engineering systems demands new approaches to control and optimisation. Traditional methods provide fundamental theoretical guarantees but often struggle with scalability and adaptation to real-world uncertainties. Conversely, machine learning–based techniques achieve remarkable empirical performance but typically lack formal guarantees of stability and convergence. This talk introduces a recent unified approach to (1) neural-network control with stability guarantees and (2) learning linearly convergent algorithms for convex and non-convex optimisation. In the first part, we present a parametrisation of all and only those control policies that can stabilise a given time-varying nonlinear system. The main insight is that we can learn over a stable neural-network operator, thereby capturing exclusively the stabilising nonlinear control policies for a wide class of nonlinear systems, even under classes of model uncertainty. In the second part, we turn to convex and non-convex optimisation. While systems theory has established optimal worst-case linear convergence rates for convex functions, a recent trend in machine learning, Learning to Optimize (L2O), uses neural networks to discover update rules that excel even in non-convex scenarios. The catch is that formal convergence guarantees are generally not available. We bridge these two paradigms by developing a constructive characterisation of all linearly and asymptotically convergent algorithms for classes of smooth and non-smooth convex and non-convex functions. We illustrate the developed methods on optimal control benchmarks inspired by collision-avoidance problems and on optimisation benchmarks arising in image classification, ill-conditioned least-squares, and MPC.

Bio

Luca Furieri is an Associate Professor in the Department of Engineering Science at the University of Oxford, where he began in June 2025. His research focuses on optimal control and optimisation for distributed decision-making and large-scale cyber-physical systems. He received the Swiss National Science Foundation (SNSF) Ambizione career grant in 2022, the IEEE Transactions on Control of Network Systems Best Paper Award in 2022, and the American Control Conference O. Hugo Schuck Best Paper Award in 2018.

Dr. Giordano Scarciotti

 YouTube Recording

Imperial College London, UK

 

One Equation to Rule Them All: Direct Data-Driven Cascade Stabilisation, Regulation, and Reduction

Abstract

In this seminar we present a framework for direct data-driven control of general problems involving interconnections of dynamical systems. We first develop a method to determine the solution of a Sylvester equation from data. This solution is used to describe a subspace that plays a role in a large variety of problems. We then provide an error analysis of the impact that noise has on this solution. Thanks to the interconnection approach developed, we are able to track how the noise propagates at each stage, and thereby provide bounds on the final designs. Among the many potential problems that can be solved with this framework, we focus on three representatives: cascade stabilisation, output regulation, and model order reduction.

Part I: https://arxiv.org/abs/2508.17248v1

Part II: https://arxiv.org/abs/2508.17251v1
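For orientation, the model-based baseline that the data-driven method replaces is a standard Sylvester equation AX − XS = −Q, solvable with off-the-shelf tools (the matrices below are our own toy choices; determining the solution directly from data is the talk's contribution):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable system matrix (toy choice)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])    # exosystem: harmonic oscillator
Q = np.array([[1.0, 0.0], [0.0, 1.0]])

# scipy solves A X + X B = C, so pass B = -S and C = -Q to obtain A X - X S = -Q
X = solve_sylvester(A, -S, -Q)
assert np.allclose(A @ X - X @ S, -Q)  # unique: spec(A) and spec(S) are disjoint
```

The solution is unique because A and S have no common eigenvalues; the seminar's error analysis quantifies how noise in the data perturbs this solution and propagates through the interconnected designs.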

Bio

Giordano Scarciotti received his B.Sc. and M.Sc. degrees in Automation Engineering from the University of Rome “Tor Vergata”, Italy, in 2010 and 2012, respectively. In 2012 he joined the Control and Power Group, Imperial College London, UK, where he obtained a Ph.D. degree in 2016. He also received an M.Sc. in Applied Mathematics from Imperial in 2020. He is currently an Associate Professor at Imperial. He was a visiting scholar at New York University in 2015 and at the University of California Santa Barbara in 2016, and a Visiting Fellow of Shanghai University in 2021-2022. He is the recipient of an Imperial College Junior Research Fellowship (2016), the IET Control & Automation PhD Award (2016), the Eryl Cadwaladr Davies Prize (2017), an ItalyMadeMe award (2017) and the IEEE Transactions on Control Systems Technology Outstanding Paper Award (2023). He is a member of the EUCA Conference Editorial Board and of the IFAC and IEEE CSS Technical Committees on Nonlinear Control Systems, and has served on the International Programme Committees of multiple conferences. He is an Associate Editor of Automatica. He was the National Organising Committee Chair for the EUCA European Control Conference (ECC) 2022 and for the 7th IFAC Conference on Analysis and Control of Nonlinear Dynamics and Chaos 2024, and the Invited Session Chair and Editor for the IFAC Symposium on Nonlinear Control Systems 2022 and 2025, respectively. He is the General Co-Chair of ECC 2029.

Prof. Chris Freeman

  

University of Southampton, UK

12th November, 1:00 PM – 2:00 PM

AI in Next Generation Stroke Rehabilitation

Abstract

As progress in smart textiles, soft robotics and pervasive sensing networks gathers pace, the prospect of truly wearable technology for stroke rehabilitation is fast emerging. However, the ability of control systems and machine learning to support this vision is lagging far behind, limiting the technology's potential to restore independence and dignity to people with long-term conditions. An overview of current research into advanced control and machine learning strategies for electrical stimulation and robotics will be presented, showing their effectiveness for upper limb stroke rehabilitation. A discussion will then be given of the open control, identification, sensing and motor learning problems that must be solved to maximise the technology's effectiveness and allow it to reach the end user.

Bio

Chris Freeman is a Professor in Electronics and Computer Science at the University of Southampton, UK. His research interests include the development, application and assessment of iterative learning, repetitive and adaptive controllers for both biomedical and industrial systems. Over the last fifteen years he has developed new healthcare technologies combining robotics and electrical stimulation to enable people with upper limb impairments to perform functional tasks. Over this time he has worked closely with clinicians, patients and carers, including on five clinical trials using technology he has helped develop, as well as numerous smaller studies and user-led design sessions. His focus has been to understand and define clinical problems from an engineering perspective and translate this into usable solutions. His research covers the spectrum from control theory and control applications to rehabilitation engineering, biomechanics, clinical studies and user perspectives.


Registration

We offer flexible registration options:

  • Full Series Registration: Sign up once and gain access to all seminars.
  • Individual Session Registration: Select and register only for the sessions that interest you most.

Register Now to secure your spot and receive event reminders, Zoom links, and post-event materials.

Don’t miss this opportunity to connect with leading experts and enhance your understanding of the synergy between AI and control theory.

For any inquiries, please contact us at info@ukcontrol.org. 

© ACE 2024