Category Archives: Control Engineering

Cognitive control: a nice bunch of definitions and a state-of-the-art overview

S. Haykin, M. Fatemi, P. Setoodeh and Y. Xue, Cognitive Control, Proceedings of the IEEE, vol. 100, no. 12, pp. 3156-3169, Dec. 2012, DOI: 10.1109/JPROC.2012.2215773.

This paper is inspired by how cognitive control manifests itself in the human brain, and does so in a remarkable way. It addresses the many facets involved in the control of directed information flow in a dynamic system, culminating in the notion of the information gap, defined as the difference between relevant information (the useful part of what is extracted from the incoming measurements) and sufficient information, which represents the information needed for achieving minimal risk. The notion of the information gap leads naturally to how cognitive control can itself be defined. Another important idea is then described, namely the two-state model, in which one state is the system's state and the other is the entropic state, which provides an essential metric for quantifying the information gap. The entropic state is computed in the perceptual part (i.e., the perceptor) of the dynamic system and sent directly to the controller as feedback information. This feedback provides the cognitive controller with the information it needs about the environment and the system to bring reinforcement learning into play; reinforcement learning (RL), incorporating planning as an integral part, is at the very heart of cognitive control. The stage is then set for a computational experiment involving cognitive radar, wherein the cognitive controller is enabled to control the receiver via the environment. The experiment demonstrates how RL provides the mechanism for improved utilization of computational resources, and yet is able to deliver good performance through the use of planning. The paper finishes with concluding remarks.
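
To make the perception-action cycle described in the abstract more concrete, here is a minimal Python sketch (my own toy construction, not the authors' cognitive-radar system): a perceptor reports an entropic state computed from its estimation-error covariance, and a Q-learning controller is rewarded for shrinking it. The classes, constants and the single-state Q-table are all illustrative assumptions.

```python
import numpy as np

# Toy perceptor: it tracks an estimation-error covariance and reports its
# log-determinant as the "entropic state" (an entropy proxy for the information gap).
class Perceptor:
    def __init__(self, dim=2):
        self.P = np.eye(dim)                      # estimation-error covariance

    def entropic_state(self):
        return np.linalg.slogdet(self.P)[1]      # log-det of the covariance

    def step(self, action_gain):
        # A chosen action (e.g., a transmitted waveform) determines how much
        # the next measurement update shrinks the covariance.
        self.P = self.P * 1.05                    # process noise inflates uncertainty
        self.P = self.P / (1.0 + action_gain)     # measurement update reduces it
        return self.entropic_state()

def cognitive_control(episodes=200, actions=(0.1, 0.5, 1.0),
                      alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Single-state Q-learning; reward = reduction of the entropic state."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(actions))
    perceptor = Perceptor()
    h_prev = perceptor.entropic_state()
    for _ in range(episodes):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q))
        h = perceptor.step(actions[a])
        reward = h_prev - h                       # shrinking the information gap is rewarded
        q[a] += alpha * (reward + gamma * np.max(q) - q[a])
        h_prev = h
    return q

print(cognitive_control())   # with no resource cost modeled, the most informative action wins
```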

Incremental (hierarchical) search for the optimal policy on Markov decision processes

Vu Anh Huynh, Sertac Karaman, and Emilio Frazzoli, An incremental sampling-based algorithm for stochastic optimal control, The International Journal of Robotics Research, vol. 35, pp. 305-333, April 2016, DOI: 10.1177/0278364915616866.

In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Using the Markov chain approximation method and recent advances in sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process to incrementally compute control policies that approximate arbitrarily well an optimal policy in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise.
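
The incremental idea (sample more states, rebuild a finite MDP over the samples, refine its value function with a few asynchronous value-iteration sweeps) can be sketched as below. This is only a 1-D toy under assumptions of mine (goal region, unit stage cost, nearest-neighbour transitions), not the authors' iMDP implementation.

```python
import numpy as np

# 1-D toy: states are samples in [0, 1], the goal region is around x = 1,
# transitions go to the nearest sampled neighbour of the noisy successor state,
# and each refinement adds one sample plus a few asynchronous VI sweeps.
def refine_once(samples, values, goal=1.0, actions=(-0.1, 0.1),
                noise=0.05, gamma=0.95, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    samples = np.append(samples, rng.uniform(0.0, 1.0))   # sample one more state
    values = np.append(values, 0.0)
    for _ in range(3):                                     # asynchronous value-iteration sweeps
        for i in rng.permutation(len(samples)):
            if abs(samples[i] - goal) < 0.05:              # goal region: zero cost-to-go
                values[i] = 0.0
                continue
            best = np.inf
            for a in actions:
                nxt = samples[i] + a + rng.normal(0.0, noise)
                j = int(np.argmin(np.abs(samples - nxt)))  # nearest sampled state
                best = min(best, 1.0 + gamma * values[j])  # unit stage cost per step
            values[i] = best
    return samples, values

samples, values = np.array([0.0]), np.array([0.0])
for _ in range(100):                                       # anytime refinement loop
    samples, values = refine_once(samples, values)
print(len(samples), float(values.min()), float(values.max()))
```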

Reinforcement learning in the automatic control area

Yu Jiang and Zhong-Ping Jiang, Global Adaptive Dynamic Programming for Continuous-Time Nonlinear Systems, IEEE Transactions on Automatic Control, vol. 60, no. 11, pp. 2917-2929, Nov. 2015, DOI: 10.1109/TAC.2015.2414811.

This paper presents a novel method of global adaptive dynamic programming (ADP) for the adaptive optimal control of nonlinear polynomial systems. The strategy consists of relaxing the problem of solving the Hamilton-Jacobi-Bellman (HJB) equation to an optimization problem, which is solved via a new policy iteration method. The proposed method is distinguished from previously known nonlinear ADP methods in that neural network approximation is avoided, giving rise to a significant computational improvement. Instead of being semiglobally or locally stabilizing, the resultant control policy is globally stabilizing for a general class of nonlinear polynomial systems. Furthermore, in the absence of a priori knowledge of the system dynamics, an online learning method is devised to implement the proposed policy iteration technique by generalizing the current ADP theory. Finally, three numerical examples are provided to validate the effectiveness of the proposed method.
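
The flavour of policy iteration without neural networks can be illustrated with a much-simplified sketch for the scalar polynomial system dx/dt = -x^3 + u with cost integral of (x^2 + u^2): policy evaluation fits a small polynomial value-function basis by least squares along simulated trajectories, and policy improvement takes u = -(1/2) dV/dx. This is only an illustration of the idea; the paper's method relies on a sum-of-squares relaxation of the HJB equation, which is not reproduced here.

```python
import numpy as np

# Scalar system: dx/dt = -x**3 + u, running cost x**2 + u**2.
def basis(x):                      # even polynomial basis: V(x) = w1*x^2 + w2*x^4
    return np.array([x**2, x**4])

def dbasis(x):                     # dV/dx in the same basis
    return np.array([2*x, 4*x**3])

def policy(x, w):                  # policy improvement: u = -(1/2) * dV/dx  (R = g = 1)
    return -0.5 * np.dot(w, dbasis(x))

w = np.zeros(2)                    # value-function weights (no neural network)
dt = 0.01
for _ in range(10):                # policy iteration
    rows, targets = [], []
    for x0 in np.linspace(-2.0, 2.0, 21):          # rollouts from several initial states
        x = x0
        for _ in range(300):
            u = policy(x, w)
            x_next = x + (-x**3 + u) * dt
            # policy evaluation: V(x) - V(x_next) should match the running cost
            rows.append(basis(x) - basis(x_next))
            targets.append((x**2 + u**2) * dt)
            x = x_next
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
print("learned value-function weights:", w)
```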

Nice summary of reinforcement learning in control (Adaptive Dynamic Programming) and of the use of Q-learning plus NN approximators for solving a control problem under a game-theoretic framework

Kyriakos G. Vamvoudakis, Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems, Automatica, Volume 61, November 2015, Pages 274-281, ISSN 0005-1098, DOI: 10.1016/j.automatica.2015.08.017.

This work proposes a novel Q-learning algorithm to solve the problem of non-zero sum Nash games of linear time-invariant systems with N players (control inputs) and centralized uncertain/unknown dynamics. We first formulate the Q-function of each player as a parametrization of the state and of all the other players' control inputs. An integral reinforcement learning approach is used to develop a model-free structure of N actors/N critics to estimate the parameters of the N coupled Q-functions online, while also guaranteeing closed-loop stability and convergence of the control policies to a Nash equilibrium. A fourth-order simulation example with five players is presented to show the efficacy of the proposed approach.
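
A highly simplified, discrete analogue of the "one critic per player over the joint action" idea is sketched below: two players repeatedly play a static game with a known pure Nash equilibrium, each maintaining its own Q-table over the joint action and best-responding to the other. The cost matrices and the tabular updates are stand-ins of mine; the paper works in continuous time with linear dynamics and integral reinforcement learning.

```python
import numpy as np

n = 3                                             # actions per player
# Stage costs built so that (a1, a2) = (2, 0) is a pure Nash equilibrium:
# each player mainly wants its own target action, plus a small coupling cost.
cost1 = np.array([[(a1 - 2)**2 + 0.2 * abs(a1 - a2) for a2 in range(n)]
                  for a1 in range(n)], dtype=float)
cost2 = np.array([[(a2 - 0)**2 + 0.2 * abs(a1 - a2) for a2 in range(n)]
                  for a1 in range(n)], dtype=float)

q1 = np.zeros((n, n))                             # player 1's critic over the joint action
q2 = np.zeros((n, n))                             # player 2's critic over the joint action
rng = np.random.default_rng(1)
a1, a2 = 0, 0
for _ in range(5000):
    eps = 0.2
    # actors: each player best-responds to the other's last action, with exploration
    a1 = rng.integers(n) if rng.random() < eps else int(np.argmin(q1[:, a2]))
    a2 = rng.integers(n) if rng.random() < eps else int(np.argmin(q2[a1, :]))
    # critics: each player tracks its own cost for the observed joint action
    q1[a1, a2] += 0.1 * (cost1[a1, a2] - q1[a1, a2])
    q2[a1, a2] += 0.1 * (cost2[a1, a2] - q2[a1, a2])
print("learned joint action:", int(np.argmin(q1[:, a2])), int(np.argmin(q2[a1, :])))
```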

Reinforcement learning for tuning the parameters of a control law derived from the physical model of a system

S.P. Nageshrao, G.A.D. Lopes, D. Jeltsema, R. Babuška, Passivity-based reinforcement learning control of a 2-DOF manipulator arm, Mechatronics, Volume 24, Issue 8, December 2014, Pages 1001-1007, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2014.10.005.

Passivity-based control (PBC) is commonly used for the stabilization of port-Hamiltonian (PH) systems. The PH framework is suitable for multi-domain systems, for example mechatronic devices or micro-electro-mechanical systems. Passivity-based control synthesis for PH systems involves solving partial differential equations, which can be cumbersome. Rather than explicitly solving these equations, in our approach the control law is parameterized and the unknown parameter vector is learned using an actor-critic reinforcement learning algorithm. The key advantages of combining learning with PBC are: (i) the complexity of the control design procedure is reduced, (ii) prior knowledge about the system, given in the form of a PH model, speeds up the learning process, and (iii) physical meaning can be attributed to the learned control law. In this paper we extend the learning-based PBC method to a regulation problem and present experimental results for a two-degree-of-freedom manipulator. We show that the learning algorithm is capable of achieving feedback regulation in the presence of model uncertainties.
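
The core idea, fixing a physically structured control law and letting learning fill in its unknown parameters, can be sketched as follows: a pendulum is regulated with an energy-shaping-plus-damping law u = -kp*(q - q_ref) - kd*dq, and the gains are tuned episodically by simple random-search hill climbing, which stands in here for the actor-critic algorithm actually used in the paper. All constants and names are illustrative assumptions.

```python
import numpy as np

# Unit-mass, unit-length pendulum regulated to q_ref with the structured law
# u = -kp*(q - q_ref) - kd*dq; the learning layer only adjusts (kp, kd).
def rollout(kp, kd, q_ref=np.pi / 4, dt=0.01, steps=500):
    q, dq, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = -kp * (q - q_ref) - kd * dq           # parameterized control law
        ddq = -9.81 * np.sin(q) + u               # gravity plus control torque
        dq += ddq * dt
        q += dq * dt
        cost += ((q - q_ref)**2 + 0.01 * u**2) * dt
    return cost

rng = np.random.default_rng(0)
params = np.array([1.0, 1.0])                     # initial (kp, kd)
best = rollout(*params)
for _ in range(200):                              # episodic parameter learning
    candidate = params + rng.normal(0.0, 0.5, 2)  # perturb the gains
    if candidate.min() > 0:
        c = rollout(*candidate)
        if c < best:                              # keep only improving parameters
            params, best = candidate, c
print("learned gains (kp, kd):", params, "cost:", best)
```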

A reinforcement learning controller to tune sub-controllers

Kevin Van Vaerenbergh, Peter Vrancx, Yann-Michaël De Hauwere, Ann Nowé, Erik Hostens, Christophe Lauwerys, Tuning hydrostatic two-output drive-train controllers using reinforcement learning, Mechatronics, Volume 24, Issue 8, December 2014, Pages 975-985, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2014.07.005.

When controlling a complex system consisting of several subsystems, a simple divide-and-conquer approach is to design a controller for each subsystem separately. However, this does not necessarily result in good overall control behavior. Especially when there are strong interactions between the subsystems, the selfish behavior of one controller might deteriorate the performance of the other subsystems. An alternative approach is to design a global controller for the entire mechatronic system. Such a design procedure might result in behavior closer to the optimum; however, it requires considerably more effort, especially when the interactions between the different subsystems cannot be modeled exactly or when the number of parameters is large.
In this paper we present a hybrid approach to this problem that overcomes the difficulties encountered when using several independent subsystems. Starting from such a system with individual subsystem controllers, we add a global layer which uses reinforcement learning to simultaneously tune the lower-level controllers. While each subsystem still has its own individual controller, the reinforcement learning layer is used to tune these controllers in order to optimize global system behavior. This mitigates the problem of subsystems behaving selfishly without the added complexity of designing a global controller for the entire system. Our approach is validated on a hydrostatic drive train.
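
A toy version of this layered idea might look like the sketch below: two coupled first-order subsystems each keep their own local proportional controller, while a global tuning layer adjusts both gains from a single global cost using a finite-difference gradient estimate. The plant, the coupling and the update rule are stand-ins of mine, not the hydrostatic drive train or the learning algorithm used in the paper.

```python
import numpy as np

# Two coupled first-order subsystems, each with a local proportional controller;
# the global layer tunes both gains using only the overall tracking cost.
def global_cost(gains, dt=0.01, steps=400):
    k1, k2 = gains
    x = np.zeros(2)
    ref = np.array([1.0, -1.0])                   # set-points for the two subsystems
    cost = 0.0
    for _ in range(steps):
        u = np.array([-k1 * (x[0] - ref[0]),      # local controller of subsystem 1
                      -k2 * (x[1] - ref[1])])     # local controller of subsystem 2
        coupling = 0.5 * np.array([x[1], x[0]])   # interaction between the subsystems
        x = x + (-x + coupling + u) * dt
        cost += np.sum((x - ref)**2) * dt
    return cost

gains = np.array([0.5, 0.5])
lr, delta = 0.5, 0.05
for _ in range(100):                              # global tuning layer
    grad = np.zeros(2)
    for i in range(2):                            # finite-difference gradient of the global cost
        e = np.zeros(2)
        e[i] = delta
        grad[i] = (global_cost(gains + e) - global_cost(gains - e)) / (2 * delta)
    gains = np.maximum(gains - lr * grad, 0.0)
print("tuned gains:", gains, "final cost:", global_cost(gains))
```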