Category Archives: Control Engineering

A nice summary of reinforcement learning in control (Adaptive Dynamic Programming) and of the use of Q-learning with neural-network approximators for solving a control problem in a game-theoretic framework.

Kyriakos G. Vamvoudakis, Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems, Automatica, Volume 61, November 2015, Pages 274-281, ISSN 0005-1098, DOI: 10.1016/j.automatica.2015.08.017.

This work proposes a novel Q-learning algorithm to solve the problem of non-zero sum Nash games of linear time-invariant systems with N players (control inputs) and centralized uncertain/unknown dynamics. We first formulate the Q-function of each player as a parametrization of the state and of the control inputs of all the players. An integral reinforcement learning approach is used to develop a model-free structure of N actors and N critics that estimates the parameters of the N coupled Q-functions online, while also guaranteeing closed-loop stability and convergence of the control policies to a Nash equilibrium. A fourth-order simulation example with five players is presented to show the efficacy of the proposed approach.
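For intuition, here is a minimal sketch of the Q-function parametrization idea in the simplest possible setting: a single player, discrete-time LQR, a quadratic Q-function estimated by least squares from data, and a greedy policy improvement step. This is not the paper's N-player, continuous-time integral reinforcement learning scheme; the plant, cost weights and step counts below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Plant x_{k+1} = A x_k + B u_k; used only to generate data, never by the learner.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)           # stage cost x'Qc x + u'Rc u
n, m = 2, 1

def features(x, u):
    """Kronecker features of z = [x; u]; w . features(x, u) is quadratic in z."""
    z = np.concatenate([x, u])
    return np.kron(z, z)

K = np.zeros((m, n))                    # initial (stabilizing) policy u = -K x
for it in range(8):                     # policy iteration: evaluate Q_K, then improve
    Phi, y = [], []
    x = rng.normal(size=n)
    for k in range(300):
        u = -K @ x + 0.3 * rng.normal(size=m)     # exploratory input
        cost = x @ Qc @ x + u @ Rc @ u
        x_next = A @ x + B @ u
        u_next = -K @ x_next                      # next action under the current policy
        # Exact Bellman identity for deterministic dynamics:
        #   Q_K(x, u) - Q_K(x', -K x') = cost(x, u)
        Phi.append(features(x, u) - features(x_next, u_next))
        y.append(cost)
        x = x_next
        if k % 30 == 29:                          # occasional reset for excitation
            x = rng.normal(size=n)
    w, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = w.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)                           # symmetric Q-function matrix
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux)                 # greedy policy improvement

P = solve_discrete_are(A, B, Qc, Rc)
K_opt = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print("learned K:", K, "\noptimal K:", K_opt)
```

The learned gain should approach the Riccati-optimal one, which in this single-player special case plays the role of the Nash equilibrium policy.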

Reinforcement learning for tuning the parameters of a control law built on the physical model of a system

S.P. Nageshrao, G.A.D. Lopes, D. Jeltsema, R. Babuška, Passivity-based reinforcement learning control of a 2-DOF manipulator arm, Mechatronics, Volume 24, Issue 8, December 2014, Pages 1001-1007, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2014.10.005.

Passivity-based control (PBC) is commonly used for the stabilization of port-Hamiltonian (PH) systems. The PH framework is suitable for multi-domain systems, for example mechatronic devices or micro-electro-mechanical systems. Passivity-based control synthesis for PH systems involves solving partial differential equations, which can be cumbersome. Rather than explicitly solving these equations, in our approach the control law is parameterized and the unknown parameter vector is learned using an actor–critic reinforcement learning algorithm. The key advantages of combining learning with PBC are: (i) the complexity of the control design procedure is reduced, (ii) prior knowledge about the system, given in the form of a PH model, speeds up the learning process, (iii) physical meaning can be attributed to the learned control law. In this paper we extend the learning-based PBC method to a regulation problem and present the experimental results for a two-degree-of-freedom manipulator. We show that the learning algorithm is capable of achieving feedback regulation in the presence of model uncertainties.
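A minimal sketch of the idea, with loud caveats: instead of the paper's 2-DOF arm and actor–critic algorithm, a single pendulum in port-Hamiltonian form and a plain random-search tuner stand in; what carries over is the parameterized passivity-based law (energy shaping plus damping injection), whose few physically meaningful gains are the only thing being learned. All numbers are assumptions for illustration.

```python
import numpy as np

m, l, g, b = 1.0, 1.0, 9.81, 0.1           # pendulum mass, length, gravity, friction
q_ref = np.pi / 3                          # regulation target
dt, T = 0.01, 400                          # Euler step and episode length

def control(q, qdot, theta):
    """Energy-shaping + damping-injection law with learnable gains theta = (kp, kd)."""
    kp, kd = theta
    return m * g * l * np.sin(q) - kp * (q - q_ref) - kd * qdot

def episode_cost(theta):
    """Accumulated regulation cost of one simulated episode of the PH pendulum."""
    q, p = 0.0, 0.0                        # start at rest, hanging down
    cost = 0.0
    for _ in range(T):
        qdot = p / (m * l**2)
        u = control(q, qdot, theta)
        pdot = -m * g * l * np.sin(q) - b * qdot + u
        q, p = q + dt * qdot, p + dt * pdot
        cost += dt * ((q - q_ref)**2 + 1e-3 * u**2)
    return cost

rng = np.random.default_rng(1)
theta = np.array([1.0, 1.0])               # initial gains (kp, kd)
best = episode_cost(theta)
for it in range(200):                      # learning layer: perturb gains, keep improvements
    candidate = theta + 0.5 * rng.normal(size=2)
    if np.all(candidate > 0):              # positive gains keep the damping injection passive
        c = episode_cost(candidate)
        if c < best:
            theta, best = candidate, c
print("learned gains (kp, kd):", theta, "episode cost:", best)
```

The point of the structure is the same as in the paper: the learner only searches over a couple of gains whose physical meaning (shaped stiffness and injected damping) is fixed in advance by the PH model.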

A reinforcement learning controller to tune sub-controllers

Kevin Van Vaerenbergh, Peter Vrancx, Yann-Michaël De Hauwere, Ann Nowé, Erik Hostens, Christophe Lauwerys, Tuning hydrostatic two-output drive-train controllers using reinforcement learning, Mechatronics, Volume 24, Issue 8, December 2014, Pages 975-985, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2014.07.005.

When controlling a complex system consisting of several subsystems, a simple divide-and-conquer approach is to design a controller for each subsystem separately. However, this does not necessarily result in good overall control behavior. Especially when there are strong interactions between the subsystems, the selfish behavior of one controller might deteriorate the performance of the other subsystems. An alternative approach is to design a global controller for the entire mechatronic system. Such a design procedure might result in better overall behavior; however, it requires much more effort, especially when the interactions between the different subsystems cannot be modeled exactly or when the number of parameters is large.
In this paper we present a hybrid approach that overcomes the problems encountered when using several independently designed subsystem controllers. Starting from such a system with individual subsystem controllers, we add a global layer which uses reinforcement learning to simultaneously tune the lower-level controllers. While each subsystem still has its own individual controller, the reinforcement learning layer tunes these controllers in order to optimize global system behavior. This mitigates the problem of subsystems behaving selfishly without the added complexity of designing a global controller for the entire system. Our approach is validated on a hydrostatic drive train.
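The layered structure is easy to mock up. In the sketch below, two coupled first-order subsystems each keep their own proportional controller, and a global tuning layer searches the two local gains jointly against a system-wide cost. The plant, the sub-controllers and the use of a cross-entropy search in place of the paper's reinforcement learning algorithm are all illustrative assumptions, not the hydrostatic drive-train setup.

```python
import numpy as np

dt, T = 0.02, 500
r1, r2 = 1.0, -1.0                          # local setpoints of the two subsystems

def simulate(gains):
    """Two coupled first-order subsystems, each with its own local P controller."""
    k1, k2 = gains
    x1 = x2 = 0.0
    cost = 0.0
    for _ in range(T):
        u1 = k1 * (r1 - x1)                 # local controller of subsystem 1
        u2 = k2 * (r2 - x2)                 # local controller of subsystem 2
        # Coupled dynamics: each subsystem disturbs the other
        dx1 = -x1 + u1 + 0.5 * x2
        dx2 = -x2 + u2 + 0.5 * x1
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        cost += dt * ((r1 - x1)**2 + (r2 - x2)**2 + 1e-3 * (u1**2 + u2**2))
    return cost

rng = np.random.default_rng(2)
mean, std = np.array([1.0, 1.0]), np.array([1.0, 1.0])
for gen in range(30):                       # global tuning layer (cross-entropy search)
    pop = mean + std * rng.normal(size=(32, 2))
    pop = np.clip(pop, 0.1, 50.0)           # keep the gains in a sane range
    costs = np.array([simulate(g) for g in pop])
    elites = pop[np.argsort(costs)[:8]]     # keep the best joint gain candidates
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
print("jointly tuned gains:", mean, "global cost:", simulate(mean))
```

Because the coupling term feeds each subsystem's state into the other, gains tuned for each loop in isolation are not globally optimal; the point of the global layer is that it sees only the system-wide cost and still leaves the local controller structure untouched.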