Data Driven Control

Reinforcement Learning versus Classical Control for Control Systems

By: Ben Ruijsch van Dugteren, Nathan Wells, Nicholas Cristaudo

Supervised by: Krupa Prag


Abstract

This study examines the application of deep reinforcement learning to data-driven control and benchmarks it against classical proportional–integral–derivative (PID) control. Two system classes are considered: mechanical systems (inverted pendulum and cart-pole) and electrical systems (buck converter and inverted buck–boost converter). Using the Stable-Baselines3 library, we implement six algorithms (Advantage Actor–Critic, Proximal Policy Optimization, Deep Q-Network, Deep Deterministic Policy Gradient, Twin Delayed Deep Deterministic Policy Gradient, and Soft Actor–Critic) within a MATLAB–Python co-simulation and in pure Python implementations. Performance is evaluated on four criteria: stabilisation time, steady-state error, training efficiency, and robustness to sensor noise. The results show that deep reinforcement learning controllers can surpass PID control in specific regimes, particularly under strong nonlinearities and measurement noise, while requiring substantially more training time and training steps. We also identify systematic trade-offs between discrete-action methods and continuous-action actor–critic methods, with the latter often yielding smoother control and improved noise robustness at higher computational cost. These findings indicate the regimes in which deep reinforcement learning is practically advantageous for real-world, data-driven control applications.
