
Automatic Control Engineering: Pioneering Safe and Intelligent AI
Control theory emerged at the dawn of the Industrial Revolution, when data was scarce and computational resources were limited. Its principal application, automating early industrial machines, made safety, resilience, and reliability of utmost importance. Within this environment, the fundamental concepts of control theory (controllability, observability, and stability) were rigorously formalized, laying the foundation for the many modern model-based techniques that continue to shape engineering and automation practice today. As shown in Figure 1, control theory generally starts with a model, synthesizes a controller, and analyses performance by collecting data from practical implementation. In contrast, most machine learning-based methods move in the opposite direction, starting from data and then designing models or controllers.
Machine learning diverged from control theory to emerge as a distinct field, propelled by abundant modern datasets and ample computational resources. In static prediction tasks, machine learning has proven remarkably effective, interpolating vast datasets in ways beyond human comprehension and enabling accurate predictions across domains from image classification to next-word prediction (as exemplified by ChatGPT). However, machine learning encounters significant challenges in dynamic problems, where states evolve over time. AlphaGo illustrates that machine learning excels in closed games whose dynamics can be precisely simulated: offline algorithms can amass extensive datasets, learn from past mistakes, and uncover novel strategies. Yet in open games, characterized by uncertainty and randomness in the dynamics, machine learning algorithms struggle. They cannot generate exhaustive datasets and thus cannot fully explore the vast decision space. Moreover, a trial-and-error approach may prove impractical, especially in scenarios involving safety-critical decisions.
It is in dynamic, uncertain, safety-critical problems that machine learning arguably struggles to generalize and extrapolate patterns from limited datasets. In stark contrast, control theory, grounded in rigorous mathematical principles and system modelling, offers a robust framework that ensures predictable system responses and behaviour under a wide range of conditions and uncertainties. The Automatic Control Engineering (ACE) Network is at the forefront of driving innovation in AI. By merging control theory with cutting-edge AI methods, ACE serves as a critical enabler of safe, reliable, and high-performing intelligent systems. The ACE network is currently focusing on the following themes:
Safe and Trustworthy AI for Feedback Control
Feedback control dynamically measures and corrects errors in real time, enabling robust performance, disturbance rejection, and long-term reliability and stability. In contrast, open-loop control lacks the ability to adapt during operation and thus relies on near-perfect predictions to achieve similar outcomes. However, poorly designed feedback systems can unintentionally introduce instability, with delays, noise, and measurement errors amplifying oscillations and degrading performance. Advances in AI have opened new avenues for developing feedback synthesis algorithms, but it is crucial to incorporate lessons from classical control to ensure data-driven controllers do not exhibit unsafe behaviour during deployment. The ACE network is at the forefront of this effort, working to harness control theory to certify that AI-derived feedback controllers are both safe and trustworthy.
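The contrast between feedback and open-loop control can be made concrete with a toy example. The sketch below (all constants are illustrative assumptions, not drawn from the text) compares an open-loop plan against a simple proportional feedback law on a scalar linear system subject to an unknown constant disturbance:

```python
# Minimal sketch: open-loop vs. proportional feedback control of the
# scalar system x[k+1] = a*x[k] + u[k] + d, where the constant
# disturbance d is unknown to the controller. Illustrative values only.

def simulate(a=0.9, d=0.2, x0=1.0, steps=50, gain=None):
    """Drive x toward 0. gain=None uses inputs planned on the nominal
    (d = 0) model; otherwise u = -gain * x uses live measurements."""
    x, x_nom = x0, x0
    for _ in range(steps):
        if gain is None:
            u = -a * x_nom         # planned from the disturbance-free model
            x_nom = a * x_nom + u  # nominal prediction (reaches 0, stays)
        else:
            u = -gain * x          # feedback corrects the measured error
        x = a * x + u + d          # true system with unknown disturbance
    return x

x_open = simulate(gain=None)   # settles far from 0: plan ignores d
x_fb = simulate(gain=0.8)      # feedback rejects most of the disturbance
```

The open-loop plan is perfect for the nominal model but drifts to the disturbed equilibrium, while the feedback law keeps the state near the target despite never knowing `d`.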
Neural and Data-Driven Control Design
Data-driven control removes the need for explicit models derived from physical systems, allowing for the control of complex and poorly understood systems. Many data-driven approaches rely on neural networks, which have a unique structure that enables them to process and manage high-dimensional inputs and outputs, such as those found in sensor-rich environments that are challenging for traditional controllers. This capability allows for end-to-end control, where raw sensor data (e.g., images, audio, or time-series data) is directly mapped to control actions, eliminating the need for intermediate steps like feature extraction or explicit state estimation. The ACE network is leveraging neural and data-driven control design to tackle practical problems where traditional methods have struggled to succeed.
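As an illustration of the end-to-end idea, the sketch below maps a raw flattened-image observation directly to a bounded control action, with the hidden layer playing the role of learned feature extraction. The policy is hypothetical and untrained, and all sizes and weights are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of an end-to-end neural policy: a raw high-dimensional
# observation (e.g. a flattened 64x64 image) is mapped directly to a
# control action, with no hand-crafted state estimator in between.
# Sizes and random weights are illustrative, not a trained controller.

rng = np.random.default_rng(0)

class NeuralPolicy:
    def __init__(self, obs_dim, hidden, act_dim):
        self.W1 = rng.standard_normal((hidden, obs_dim)) * 0.01
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((act_dim, hidden)) * 0.01
        self.b2 = np.zeros(act_dim)

    def __call__(self, obs):
        h = np.tanh(self.W1 @ obs + self.b1)   # learned feature extraction
        return np.tanh(self.W2 @ h + self.b2)  # bounded control action

policy = NeuralPolicy(obs_dim=64 * 64, hidden=32, act_dim=2)
obs = rng.standard_normal(64 * 64)   # stand-in for a raw sensor frame
action = policy(obs)                 # shape (2,), each entry in [-1, 1]
```

The final `tanh` keeps actions bounded, a simple way such policies respect actuator limits without an explicit intermediate state estimate.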
Control Theory for Machine Learning Optimization
The vast majority of optimization problems encountered in practical applications lack closed-form solutions. In their absence, approximate iterative algorithms are typically employed. These algorithms can be reinterpreted through the lens of dynamical systems: within this framework, asymptotic stability of an equilibrium corresponds to convergence of the algorithm toward a local or global optimum. The ACE network is interested in leveraging control-theoretic techniques to enhance the robustness and stability of these algorithms, thereby improving upon standard iterative methods such as gradient descent and Newton’s method. These enhanced algorithms have the potential to revolutionise machine learning; for example, any improvement to the backpropagation algorithm would be a major advance in the training of neural networks, on which many fields now rely.
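The dynamical-systems view can be illustrated with gradient descent on a scalar quadratic, where the iteration is a linear discrete-time system and the classical step-size condition is exactly an asymptotic-stability condition (constants below are illustrative):

```python
# Gradient descent on f(x) = 0.5 * c * x**2 viewed as the discrete
# dynamical system x[k+1] = (1 - eta*c) * x[k]. The minimiser x* = 0
# is asymptotically stable iff |1 - eta*c| < 1, i.e. 0 < eta < 2/c.

def gradient_descent(c, eta, x0=1.0, steps=100):
    x = x0
    for _ in range(steps):
        x = x - eta * c * x   # x - eta * grad f(x)
    return x

x_stable = gradient_descent(c=4.0, eta=0.25)  # eta < 2/c = 0.5: converges
x_unstable = gradient_descent(c=4.0, eta=0.6) # eta > 2/c: diverges
```

Here the step size plays the role of a controller gain: choosing it inside the stability region guarantees convergence, and choosing it outside makes the "system" diverge.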
Multi-Agent Systems and Reinforcement Learning
Richard Bellman remains a towering figure in modern control theory, renowned for his pioneering work on Dynamic Programming (DP), a framework for determining the optimal policy or feedback controller that drives a system to maximize rewards or minimize costs. The equations derived from DP are foundational to reinforcement learning, underpinning popular algorithms such as value iteration and Q-learning. Traditionally, these multi-stage optimization problems were treated as single-player games, but advances in machine learning, such as distributed optimization and scalable neural networks, have recently made multi-agent versions computationally tractable in specific cases. The ACE network seeks to advance this frontier, enabling decentralized decision-making for applications ranging from autonomous vehicles to industrial robotics.
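For a single agent, the Bellman recursion that underlies value iteration can be sketched on a toy two-state MDP (the states, rewards, and transition matrices below are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of value iteration (the Bellman DP recursion) on a
# toy two-state MDP. P[a][s, s'] is the transition probability under
# action a; R[a][s] is the immediate reward for taking a in state s.

gamma = 0.9  # discount factor
P = {0: np.array([[1.0, 0.0], [0.0, 1.0]]),   # action 0: stay put
     1: np.array([[0.0, 1.0], [1.0, 0.0]])}   # action 1: switch state
R = {0: np.array([0.0, 1.0]),
     1: np.array([0.5, 0.0])}

V = np.zeros(2)
for _ in range(200):
    # Bellman optimality backup: V(s) = max_a [ R(s,a) + gamma * E[V(s')] ]
    V = np.max([R[a] + gamma * P[a] @ V for a in (0, 1)], axis=0)

# Greedy policy extracted from the converged values.
policy = np.argmax([R[a] + gamma * P[a] @ V for a in (0, 1)], axis=0)
```

The backup is a contraction for `gamma < 1`, so the iteration converges to the unique optimal value function; here the greedy policy switches out of state 0 and stays in the rewarding state 1.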
The AI and Data committee will initiate discussions on these significant research domains and encourage engagement through various channels, including a series of talks, a white paper, a comprehensive roadmap of research initiatives, and workshops aimed at developing research proposals. These efforts are intended to enhance the understanding of academics, industry professionals, and policymakers regarding the critical nature of these topics. The committee will cover applications spanning Smart Cities, Energy and Sustainability, Autonomous Vehicles, Manufacturing, and Automation, as well as Healthcare Systems.