Tag Archives: Q-learning

Q-learning with a variation of ε-greedy to learn optimal energy management in autonomous vehicle navigation

Mojgan Fayyazi, Monireh Abdoos, Duong Phan, Mohsen Golafrouz, Mahdi Jalili, Reza N. Jazar, Reza Langari, Hamid Khayyam, Real-time self-adaptive Q-learning controller for energy management of conventional autonomous vehicles, Expert Systems with Applications, Volume 222, 2023, DOI: 10.1016/j.eswa.2023.119770.

Reducing emissions and energy consumption of autonomous vehicles is critical in the modern era. This paper presents an intelligent energy management system based on Reinforcement Learning (RL) for conventional autonomous vehicles. Furthermore, to improve efficiency, a new exploration strategy is proposed to replace the traditional decayed ε-greedy strategy in the Q-learning algorithm. Unlike traditional Q-learning algorithms, the proposed self-adaptive Q-learning (SAQ-learning) can be applied in real-time. The learning capability of the controllers can help the vehicle deal with unknown situations in real-time. Numerical simulations show that, compared to other controllers, Q-learning and SAQ-learning controllers can generate the desired engine torque based on the vehicle road power demand and control the air/fuel ratio by changing the throttle angle efficiently in real-time. Also, the proposed real-time SAQ-learning is shown to improve the operational time by 23% compared to standard Q-learning. Our simulations reveal the effectiveness of the proposed control system compared to other methods, namely dynamic programming and fuzzy logic methods.
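
To make the exploration idea concrete, here is a minimal tabular Q-learning sketch in which the exploration rate is adapted online from the observed TD error rather than following a fixed decay schedule. This is only an illustration of the general principle, not the paper's SAQ-learning controller: the state/action sizes, constants and the adaptation rule are all assumptions.

import random

n_states, n_actions = 10, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.1, 0.95
epsilon = 0.5                                  # starts high, adapted online

def select_action(s):
    if random.random() < epsilon:
        return random.randrange(n_actions)                     # explore
    return max(range(n_actions), key=lambda a: Q[s][a])        # exploit

def update(s, a, r, s_next):
    global epsilon
    td_error = r + gamma * max(Q[s_next]) - Q[s][a]
    Q[s][a] += alpha * td_error
    # Hypothetical self-adaptive rule: keep exploring while value estimates
    # are still changing, reduce exploration once TD errors shrink.
    epsilon = min(0.5, max(0.01, 0.9 * epsilon + 0.1 * min(1.0, abs(td_error))))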

Dealing with continuous spaces in Q-learning by maintaining several spaces, each one corresponding to a particular time-step

Joao Pedro Araujo, Mario A.T. Figueiredo, Miguel Ayala Botto, Control with adaptive Q-learning: A comparison for two classical control problems, Engineering Applications of Artificial Intelligence, Volume 112, 2022, DOI: 10.1016/j.engappai.2022.104797.

This paper evaluates adaptive Q-learning (AQL) and single-partition adaptive Q-learning (SPAQL), two algorithms for efficient model-free episodic reinforcement learning (RL), in two classical control problems (Pendulum and CartPole). AQL adaptively partitions the state–action space of a Markov decision process (MDP) while learning the control policy, i.e., the mapping from states to actions. The main difference between AQL and SPAQL is that the latter learns time-invariant policies, where the mapping from states to actions does not depend explicitly on the time step. This paper also proposes SPAQL with terminal state (SPAQL-TS), an improved version of SPAQL tailored to the design of regulators for control problems. The time-invariant policies are shown to result in better performance than the time-variant ones in both problems studied. These algorithms are particularly suited to RL problems where the action space is finite, as is the case with the CartPole problem. SPAQL-TS solves the OpenAI Gym CartPole problem while also displaying higher sample efficiency than trust region policy optimization (TRPO), a standard RL algorithm for solving control tasks. Moreover, the policies learned by SPAQL are interpretable, while TRPO policies are typically encoded as neural networks and are therefore hard to interpret. Yielding interpretable, sample-efficient policies is the major advantage of SPAQL. The code for the experiments is available at https://github.com/jaraujo98/SinglePartitionAdaptiveQLearning.
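
The adaptive-partitioning idea can be sketched in a few lines: cells of a (here one-dimensional) state space are split into finer cells as they accumulate visits, so resolution grows only where the agent actually spends time. This is a simplified illustration of the general mechanism, not the authors' implementation; the splitting rule and thresholds are assumptions.

class Cell:
    """One cell of an adaptively refined 1-D state partition."""
    def __init__(self, lo, hi, n_actions):
        self.lo, self.hi = lo, hi
        self.q = [0.0] * n_actions      # one Q-value per action in this cell
        self.visits = 0
        self.children = None            # two sub-cells after splitting

    def locate(self, x):
        if self.children is None:
            return self
        mid = (self.lo + self.hi) / 2.0
        return self.children[0 if x < mid else 1].locate(x)

    def maybe_split(self, threshold=50):
        if self.children is None and self.visits >= threshold:
            mid = (self.lo + self.hi) / 2.0
            self.children = (Cell(self.lo, mid, len(self.q)),
                             Cell(mid, self.hi, len(self.q)))
            for c in self.children:     # children inherit the parent's values
                c.q = list(self.q)

root = Cell(-1.0, 1.0, n_actions=2)     # e.g. a normalised pole angle
cell = root.locate(0.3)                 # find the cell containing the state
cell.visits += 1
cell.maybe_split()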

Modifications of Q-learning for better learning of robot navigation

Ee Soong Low, Pauline Ong, Cheng Yee Low, Rosli Omar, Modified Q-learning with distance metric and virtual target on path planning of mobile robot, Expert Systems with Applications, Volume 199, 2022, DOI: 10.1016/j.eswa.2022.117191.

Path planning is an essential element of mobile robot navigation. One of the popular path planners is Q-learning – a type of reinforcement learning that learns with little or no prior knowledge of the environment. Despite the successful implementation of Q-learning reported in numerous studies, its slow convergence, associated with the curse of dimensionality, may limit its performance in practice. To solve this problem, an Improved Q-learning (IQL) with three modifications is introduced in this study. First, a distance metric is added to Q-learning to guide the agent towards the target. Second, the Q function of Q-learning is modified to overcome dead-ends more effectively. Lastly, the virtual target concept is introduced in Q-learning to bypass dead-ends. Experimental results across twenty types of navigation maps show that the proposed strategies accelerate the learning speed of IQL in comparison with standard Q-learning. Besides, a performance comparison with seven well-known path planners indicates its efficiency in terms of path smoothness, time taken, shortest distance, and total distance used.
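
The distance-metric modification can be illustrated with a small grid-world helper: during exploration the agent prefers moves that reduce the Euclidean distance to the target. This shows only one of the three modifications and is not the paper's exact formulation; the grid, action set and fallback rule are assumptions.

import math, random

target = (9, 9)
actions = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}     # grid moves

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def guided_explore(state):
    # Prefer actions that bring the agent closer to the target; fall back to
    # a uniform random action if none do (e.g. when facing a dead-end).
    closer = [a for a, d in actions.items()
              if distance((state[0] + d[0], state[1] + d[1]), target)
              < distance(state, target)]
    return random.choice(closer) if closer else random.choice(list(actions))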

A fitness-distance balance selection mechanism to improve a hybrid swarm metaheuristic (teaching-learning-based artificial bee colony) for global optimization and solar photovoltaic parameter estimation

Serhat Duman, Hamdi Tolga Kahraman, Yusuf Sonmez, Ugur Guvenc, Mehmet Kati, Sefa Aras, A powerful meta-heuristic search algorithm for solving global optimization and real-world solar photovoltaic parameter estimation problems, Engineering Applications of Artificial Intelligence, Volume 111, 2022, DOI: 10.1016/j.engappai.2022.104763.

The teaching-learning-based artificial bee colony (TLABC) is a new hybrid swarm-based metaheuristic search algorithm. It combines the exploitation of teaching-learning-based optimization (TLBO) with the exploration of the artificial bee colony (ABC). Through the hybridization of these two nature-inspired swarm intelligence algorithms, a robust method has been proposed to solve global optimization problems. However, as with other swarm-based algorithms, effectively simulating the selection process remains a great challenge for TLABC. Fitness-distance balance (FDB) is a powerful, recently developed method that effectively imitates the selection process in nature. In this study, the three search phases of the TLABC algorithm were redesigned using the FDB method. In this way, the FDB-TLABC algorithm, which imitates nature more effectively and has robust search performance, was developed. To investigate the exploitation, exploration, and balanced search capabilities of the proposed algorithm, it was tested on standard and complex benchmark suites (Classic, IEEE CEC 2014, IEEE CEC 2017, and IEEE CEC 2020). To verify the performance of the proposed FDB-TLABC on global optimization problems and on the photovoltaic parameter estimation problem (a constrained real-world engineering problem), a comprehensive experimental study was carried out according to IEEE CEC standards. Statistical analysis results confirmed that the proposed FDB-TLABC provided the best optimum solution and yielded superior performance compared to other optimization methods.
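
The fitness-distance balance idea can be sketched as a selection score that rewards candidates which are both good and far from the current best, balancing exploitation and diversity. The weighting and normalisation below are assumptions for a minimisation problem, not the exact formulation used in FDB-TLABC.

import numpy as np

def fdb_scores(population, fitness):
    """population: (n, dim) array; fitness: (n,) array, lower is better."""
    best = population[np.argmin(fitness)]
    dist = np.linalg.norm(population - best, axis=1)
    # Normalise so that better fitness and larger distance both raise the score.
    norm_f = (fitness.max() - fitness) / (np.ptp(fitness) + 1e-12)
    norm_d = dist / (dist.max() + 1e-12)
    return norm_f + norm_d

pop = np.random.uniform(-5, 5, size=(20, 2))
fit = np.sum(pop ** 2, axis=1)                 # toy sphere objective
guide = pop[np.argmax(fdb_scores(pop, fit))]   # candidate used to guide the search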

Incremental learning (i.e., non-stationary environments, online -live- learning, task adaptation, life-long learning,…) for robots with Q-learning

Y. Hu, D. Li, Y. He and J. Han, Incremental Learning Framework for Autonomous Robots Based on Q-Learning and the Adaptive Kernel Linear Model, IEEE Transactions on Cognitive and Developmental Systems, vol. 14, no. 1, pp. 64-74, March 2022, DOI: 10.1109/TCDS.2019.2962228.

The performance of autonomous robots in varying environments needs to be improved. For such incremental improvement, here we propose an incremental learning framework based on Q-learning and the adaptive kernel linear (AKL) model. The AKL model is used for storing behavioral policies that are learned by Q-learning. Both the structure and parameters of the AKL model can be trained using a novel L2-norm kernel recursive least squares (L2-KRLS) algorithm. The AKL model initially has no nodes and gradually accumulates content. The proposed framework allows the robot to learn new behaviors without forgetting previous ones. A novel local ε-greedy policy is proposed to speed up the convergence of Q-learning: it calculates the exploration probability of each state in order to generate and select more important training samples. The performance of our incremental learning framework was validated in two experiments. A curve-fitting example shows that the L2-KRLS-based AKL model is suitable for incremental learning. The second experiment is based on robot learning tasks. The results show that our framework can incrementally learn behaviors in varying environments. Local ε-greedy policy-based Q-learning is faster than existing Q-learning algorithms.
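
A state-local exploration rate can be sketched by letting ε decay with the visit count of the current state, so rarely visited states are explored more. The paper defines its local ε-greedy policy differently; the decay rule and constants here are assumptions.

import random
from collections import defaultdict

Q = defaultdict(lambda: [0.0, 0.0, 0.0])    # Q[state] -> action values
visits = defaultdict(int)

def local_epsilon_greedy(state, eps0=1.0, decay=0.05):
    visits[state] += 1
    eps = eps0 / (1.0 + decay * visits[state])      # per-state exploration rate
    if random.random() < eps:
        return random.randrange(len(Q[state]))
    return max(range(len(Q[state])), key=lambda a: Q[state][a])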

Discrete Q-learning used, along with a deep CNN for localization, for mobile robot navigation

Amirhossein Shantia, Rik Timmers, Yiebo Chong, Cornel Kuiper, Francesco Bidoia, Lambert Schomaker, Marco Wiering, Two-stage visual navigation by deep neural networks and multi-goal reinforcement learning, Robotics and Autonomous Systems, Volume 138, 2021, DOI: 10.1016/j.robot.2021.103731.

In this paper, we propose a two-stage learning framework for visual navigation in which the experience of the agent during exploration of one goal is shared to learn to navigate to other goals. We train a deep neural network to estimate the robot's position in the environment using ground-truth information provided by a classical localization and mapping approach. The second, simpler stage, a multi-goal Q-function, learns to traverse the environment using the provided discretized map. Transfer learning is applied to the multi-goal Q-function from a maze structure to a 2D simulator, and it is finally deployed in a 3D simulator where the robot uses the estimated locations from the position-estimator deep network. In the experiments, we first compare different architectures to select the best deep network for location estimation, and then compare the effects of the multi-goal reinforcement learning method to traditional reinforcement learning. The results show a significant improvement when multi-goal reinforcement learning is used. Furthermore, the results of the location estimator show that a deep network can learn and generalize in different environments using camera images, with high accuracy in both position and orientation.
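
The multi-goal part can be sketched with a single tabular Q-function indexed by (state, goal, action), so that one transition can update the values for every goal, which is one way experience gathered while navigating to one goal can be shared with the others. This is a generic illustration of multi-goal tabular Q-learning, not the paper's network-based implementation; the relabelling reward is an assumption.

from collections import defaultdict

Q = defaultdict(float)                       # key: (state, goal, action)
alpha, gamma, actions = 0.1, 0.95, range(4)

def update(state, goal, action, reward, next_state):
    best_next = max(Q[(next_state, goal, a)] for a in actions)
    td = reward + gamma * best_next - Q[(state, goal, action)]
    Q[(state, goal, action)] += alpha * td

def update_all_goals(state, action, next_state, goals, reached):
    # Reuse one transition for every goal: reward 1 only for the goal reached.
    for g in goals:
        update(state, g, action, 1.0 if g == reached else 0.0, next_state)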

Learning the parameters of a robot navigator through Q-learning

Chang, L., Shan, L., Jiang, C. et al., Reinforcement based mobile robot path planning with improved dynamic window approach in unknown environment, Auton Robot 45, 51–76 (2021), DOI: 10.1007/s10514-020-09947-4.

Mobile robot path planning in an unknown environment is a fundamental and challenging problem in the field of robotics. The dynamic window approach (DWA) is an effective method of local path planning; however, some of its evaluation functions are inadequate and a principled way of choosing the weights of these functions is lacking, which makes it highly dependent on the global reference and prone to failure in an unknown environment. In this paper, an improved DWA based on Q-learning is proposed. First, the original evaluation functions are modified and extended by adding two new evaluation functions to enhance the performance of global navigation. Then, considering the balance between effectiveness and speed, we define the state space, action space and reward function of the adopted Q-learning algorithm for robot motion planning. After that, the parameters of the proposed DWA are adaptively learned by Q-learning and a trained agent is obtained to adapt to the unknown environment. Finally, a series of comparative simulations shows that the proposed method achieves higher navigation efficiency and success rate in complex unknown environments. The proposed method is also validated in experiments on an XQ-4 Pro robot to verify its navigation capability in both static and dynamic environments.
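
The weight-learning step can be pictured as tabular Q-learning over a small set of candidate weight vectors for the DWA evaluation function. The state, action and reward definitions below are placeholders, not the ones designed in the paper.

import random

# Each action picks one candidate weight vector (heading, clearance, velocity).
weight_sets = [(0.8, 0.1, 0.1), (0.4, 0.4, 0.2), (0.2, 0.6, 0.2), (0.3, 0.2, 0.5)]
Q = {}                                       # Q[(state, action_index)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_weights(state):
    if random.random() < epsilon:
        return random.randrange(len(weight_sets))
    return max(range(len(weight_sets)), key=lambda i: Q.get((state, i), 0.0))

def update(state, i, reward, next_state):
    best_next = max(Q.get((next_state, j), 0.0) for j in range(len(weight_sets)))
    Q[(state, i)] = Q.get((state, i), 0.0) + alpha * (
        reward + gamma * best_next - Q.get((state, i), 0.0))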

Mixing Monte-Carlo Tree Search with Q-learning for robot learning

Francesco Riccio, Roberto Capobianco, Daniele Nardi, LoOP: Iterative learning for optimistic planning on robots, Robotics and Autonomous Systems, Volume 136, 2021, DOI: 10.1016/j.robot.2020.103693.

Efficient robotic behaviors require robustness and adaptation to dynamic changes of the environment, whose characteristics rapidly vary during robot operation. To generate effective robot action policies, planning and learning techniques have shown the most promising results. However, if considered individually, they present different limitations. Planning techniques lack generalization among similar states and require experts to define behavioral routines at different levels of abstraction. Conversely, learning methods usually require a considerable number of training samples and iterations of the algorithm. To overcome these issues, and to efficiently generate robot behaviors, we introduce LoOP, an iterative learning algorithm for optimistic planning that combines state-of-the-art planning and learning techniques to generate action policies. The main contribution of LoOP is the combination of Monte-Carlo Search Planning and Q-learning, which enables focused exploration during policy refinement in different robotic applications. We demonstrate the robustness and flexibility of LoOP in various domains and multiple robotic platforms, by validating the proposed approach with an extensive experimental evaluation.
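
One simple way to picture the combination is a UCB-style node score that mixes the tree's own rollout statistics with a learned Q-value used as a prior. LoOP's actual formulation differs; the mixing weight and constants here are assumptions.

import math

def uct_score(node_value, node_visits, parent_visits, q_estimate,
              c=1.4, q_weight=0.5):
    if node_visits == 0:
        return float("inf")                  # expand unvisited children first
    exploit = (1 - q_weight) * (node_value / node_visits) + q_weight * q_estimate
    explore = c * math.sqrt(math.log(parent_visits) / node_visits)
    return exploit + explore                 # pick the child with the highest score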

On the effects of large variances in the transition function for Q-learning

D. Lee and W. B. Powell, Bias-Corrected Q-Learning With Multistate Extension, IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4011-4023, 2019, DOI: 10.1109/TAC.2019.2912443.

Q-learning is a sample-based model-free algorithm that solves Markov decision problems asymptotically, but in finite time it can perform poorly when random rewards and transitions result in large variance of value estimates. We pinpoint its cause to be the estimation bias due to the maximum operator in the Q-learning algorithm, and present evidence of max-operator bias in its Q-value estimates. We then present an asymptotically optimal bias-correction strategy and extend the bias-corrected Q-learning algorithm to multistate Markov decision processes, with asymptotic convergence properties as strong as those of Q-learning. We report the empirical performance of the bias-corrected Q-learning algorithm with multistate extension in two model problems: a multi-armed bandit version of roulette and an electricity storage control simulation. The bias-corrected Q-learning algorithm with multistate extension is shown to control max-operator bias effectively, and the bias resistance can be tuned predictably by adjusting a correction parameter.
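
The max-operator bias itself is easy to reproduce numerically: even when every action has true value zero, the maximum over noisy sample means is positive in expectation, so plain Q-learning tends to overestimate. The small experiment below only illustrates that effect; it is not the paper's bias-correction algorithm.

import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples, n_trials = 10, 20, 10_000
bias = np.mean([
    np.max(rng.normal(0.0, 1.0, size=(n_actions, n_samples)).mean(axis=1))
    for _ in range(n_trials)
])
print(f"average max of noisy zero-mean estimates: {bias:.3f}")   # clearly > 0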

Nice summary of reinforcement learning in control (Adaptive Dynamic Programming) and the use of Q-learning plus NN approximators for solving a control problem under a game theory framework

Kyriakos G. Vamvoudakis, Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems, Automatica, Volume 61, November 2015, Pages 274-281, ISSN 0005-1098, DOI: 10.1016/j.automatica.2015.08.017.

This work proposes a novel Q-learning algorithm to solve the problem of non-zero-sum Nash games of linear time-invariant systems with N players (control inputs) and centralized uncertain/unknown dynamics. We first formulate the Q-function of each player as a parametrization of the state and the control inputs of all the players. An integral reinforcement learning approach is used to develop a model-free structure of N actors / N critics to estimate the parameters of the N coupled Q-functions online, while also guaranteeing closed-loop stability and convergence of the control policies to a Nash equilibrium. A fourth-order simulation example with five players is presented to show the efficacy of the proposed approach.
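
A much-simplified way to see the quadratic Q-function parametrization is the single-player, discrete-time case, where Q(x, u) = z'Hz with z = [x; u] and the entries of H are estimated by least squares from sampled data; the paper treats N-player continuous-time Nash games with an actor-critic structure and stability guarantees, which this sketch does not capture. All numbers below are illustrative.

import numpy as np

def quad_features(z):
    # Upper-triangular features so that z' H z = phi(z) . theta
    outer = np.outer(z, z)
    idx = np.triu_indices(len(z))
    scale = np.where(idx[0] == idx[1], 1.0, 2.0)   # off-diagonal terms appear twice
    return scale * outer[idx]

rng = np.random.default_rng(1)
H_true = np.array([[2.0, 0.5], [0.5, 1.0]])        # unknown to the learner
Z = rng.normal(size=(200, 2))                      # sampled [x; u] pairs
y = np.einsum("ni,ij,nj->n", Z, H_true, Z)         # observed Q targets
Phi = np.array([quad_features(z) for z in Z])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # recovers H's upper-triangular entries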