Tag Archives: Deep Reinforcement Learning

Generating intrinsic rewards to address the sparse reward problem of RL

Z. Gao et al., Self-Supervised Exploration via Temporal Inconsistency in Reinforcement Learning, IEEE Transactions on Artificial Intelligence, vol. 5, no. 11, pp. 5530-5539, Nov. 2024, DOI: 10.1109/TAI.2024.3413692.

In sparse extrinsic reward settings, reinforcement learning remains a challenge despite increasing interest in this field. Existing approaches suggest that intrinsic rewards can alleviate issues caused by reward sparsity. However, many studies overlook the critical role of temporal information, essential for human curiosity. This article introduces a novel intrinsic reward mechanism inspired by human learning processes, where curiosity is evaluated by comparing current observations with historical knowledge. Our method involves training a self-supervised prediction model, periodically saving snapshots of the model parameters, and employing the nuclear norm to assess the temporal inconsistency between predictions from different snapshots as intrinsic rewards. Additionally, we propose a variational weighting mechanism to adaptively assign weights to the snapshots, enhancing the model’s robustness and performance. Experimental results across various benchmark environments demonstrate the efficacy of our approach, which outperforms other state-of-the-art methods without incurring additional training costs and exhibits higher noise tolerance. Our findings indicate that leveraging temporal information in intrinsic rewards can significantly improve exploration performance, motivating future research to develop more robust and accurate reward systems for reinforcement learning.
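
A minimal sketch of how such a snapshot-based intrinsic reward could be computed is shown below; the snapshot interval, the number of retained snapshots, and the predictor architecture are illustrative assumptions rather than the authors' exact design, and the variational weighting of snapshots is omitted:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical self-supervised predictor mapping an observation embedding
# to a predicted next-state embedding (architecture is an assumption).
predictor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
snapshots = []  # periodically saved frozen copies of the predictor

def maybe_snapshot(step, every=1000, max_keep=5):
    """Save a frozen copy of the predictor parameters every `every` steps."""
    if step % every == 0:
        snapshots.append(copy.deepcopy(predictor).eval())
        if len(snapshots) > max_keep:
            snapshots.pop(0)

def intrinsic_reward(obs_embedding):
    """Temporal inconsistency as the nuclear norm of the stacked differences
    between the current predictor's output and each snapshot's output."""
    if not snapshots:
        return 0.0
    with torch.no_grad():
        current = predictor(obs_embedding)
        diffs = torch.stack([current - snap(obs_embedding) for snap in snapshots])
        # Nuclear norm = sum of singular values of the (snapshots x features) matrix.
        return torch.linalg.matrix_norm(diffs, ord='nuc').item()
```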

Improving sample efficiency under sparse rewards and large continuous action spaces through predictive control in RL

Antonyshyn, L., Givigi, S., Deep Model-Based Reinforcement Learning for Predictive Control of Robotic Systems with Dense and Sparse Rewards, J Intell Robot Syst 110, 100 (2024) DOI: 10.1007/s10846-024-02118-y.

Sparse rewards and sample efficiency are open areas of research in the field of reinforcement learning. These problems are especially important when considering applications of reinforcement learning to robotics and other cyber-physical systems, because in these domains many tasks are goal-based and naturally expressed with binary successes and failures, action spaces are large and continuous, and real interactions with the environment are limited. In this work, we propose Deep Value-and-Predictive-Model Control (DVPMC), a model-based predictive reinforcement learning algorithm for continuous control that uses system identification, value function approximation and sampling-based optimization to select actions. The algorithm is evaluated on a dense reward and a sparse reward task. We show that it can match the performance of a predictive control approach on the dense reward problem, and outperforms model-free and model-based learning algorithms on the sparse reward task in terms of sample efficiency and performance. We verify the performance of an agent trained in simulation using DVPMC on a real robot playing the reach-avoid game. Video of the experiment can be found here: https://youtu.be/0Q274kcfn4c.
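
The action-selection loop of a predictive-control approach of this kind can be pictured roughly as follows; `dynamics_model`, `reward_fn`, and `value_fn` stand for the learned system-identification model, task reward, and value approximation, and the random-shooting optimizer and horizon are assumptions, since the paper's exact sampling-based optimizer may differ:

```python
import numpy as np

def select_action(state, dynamics_model, reward_fn, value_fn,
                  action_dim, horizon=10, num_samples=500, rng=np.random):
    """Sampling-based predictive control: sample candidate action sequences,
    roll them out through the learned dynamics model, and score each rollout
    with the accumulated reward plus the value estimate of the final state."""
    best_score, best_action = -np.inf, None
    for _ in range(num_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))  # candidate plan
        s, score = state, 0.0
        for a in seq:
            s = dynamics_model(s, a)   # predicted next state
            score += reward_fn(s, a)   # accumulated (possibly sparse) reward
        score += value_fn(s)           # bootstrap beyond the planning horizon
        if score > best_score:
            best_score, best_action = score, seq[0]
    return best_action                 # execute only the first action (MPC-style)
```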

Reducing the need of samples in RL through evolutionary techniques

Onori, G., Shahid, A.A., Braghin, F., et al., Adaptive Optimization of Hyper-Parameters for Robotic Manipulation through Evolutionary Reinforcement Learning, J Intell Robot Syst 110, 108 (2024) DOI: 10.1007/s10846-024-02138-8.

Deep Reinforcement Learning applications are growing thanks to their capability of teaching an agent any task autonomously and generalizing the learning. However, this comes at the cost of a large number of samples and interactions with the environment. Moreover, the robustness of learned policies is usually achieved by a tedious tuning of hyper-parameters and reward functions. To address this issue, this paper proposes an evolutionary RL algorithm for the adaptive optimization of hyper-parameters. The policy is trained using an on-policy algorithm, Proximal Policy Optimization (PPO), coupled with an evolutionary algorithm. The achieved results demonstrate an improvement in the sample efficiency of the RL training on a robotic grasping task. In particular, learning is improved with respect to the baseline case of a non-evolutionary agent: the evolutionary agent needs % fewer samples to completely learn the grasping task, enabled by the adaptive transfer of knowledge between the agents through the evolutionary algorithm. The proposed approach also demonstrates the possibility of updating reward parameters during training, potentially providing a general approach to creating reward functions.
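
One way to picture the coupling of PPO with an evolutionary outer loop is a population-based scheme like the sketch below; the population size, mutation ranges, and the choice of which hyper-parameters (and reward parameters) evolve are assumptions for illustration only:

```python
import random

# One hyper-parameter set per PPO agent in the population; "score" is filled
# in after each agent's PPO training round is evaluated.
population = [{"lr": 3e-4, "clip": 0.2, "reward_scale": 1.0, "score": 0.0}
              for _ in range(8)]

def mutate(hp):
    """Perturb hyper-parameters (and reward parameters) multiplicatively."""
    child = {k: v for k, v in hp.items() if k != "score"}
    child["lr"] *= random.choice([0.8, 1.2])
    child["clip"] = min(0.4, max(0.05, hp["clip"] * random.choice([0.8, 1.2])))
    child["reward_scale"] *= random.choice([0.9, 1.1])
    return child

def evolve(population):
    """After each PPO training round, copy hyper-parameters (and, not shown,
    network weights) from the best agents into the worst ones, with mutation."""
    ranked = sorted(population, key=lambda hp: hp["score"], reverse=True)
    top, bottom = ranked[: len(ranked) // 4], ranked[-(len(ranked) // 4):]
    for loser in bottom:
        loser.update(mutate(random.choice(top)))
```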

Improving explainability of deep RL in Robotics

Mehran Taghian, Shotaro Miwa, Yoshihiro Mitsuka, Johannes Günther, Shadan Golestan, Osmar Zaiane, Explainability of deep reinforcement learning algorithms in robotic domains by using Layer-wise Relevance Propagation, Engineering Applications of Artificial Intelligence, Volume 137, Part A, 2024 DOI: 10.1016/j.engappai.2024.109131.

A key component to the recent success of reinforcement learning is the introduction of neural networks for representation learning. Doing so allows for solving challenging problems in several domains, one of which is robotics. However, a major criticism of deep reinforcement learning (DRL) algorithms is their lack of explainability and interpretability. This problem is even exacerbated in robotics as they oftentimes cohabitate space with humans, making it imperative to be able to reason about their behavior. In this paper, we propose to analyze the learned representation in a robotic setting by utilizing Graph Networks (GNs). Using the GN and Layer-wise Relevance Propagation (LRP), we represent the observations as an entity-relationship to allow us to interpret the learned policy. We evaluate our approach in two environments in MuJoCo. These two environments were delicately designed to effectively measure the value of knowledge gained by our approach to analyzing learned representations. This approach allows us to analyze not only how different parts of the observation space contribute to the decision-making process but also differentiate between policies and their differences in performance. This difference in performance also allows for reasoning about the agent’s recovery from faults. These insights are key contributions to explainable deep reinforcement learning in robotic settings.
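
As a rough illustration of the relevance propagation the authors rely on, the epsilon-rule of LRP for a single fully connected layer looks like the following; the entity-relationship graph network itself is omitted, and this only shows how relevance flows backwards through one linear layer:

```python
import numpy as np

def lrp_epsilon_linear(activations, weights, relevance_out, eps=1e-6):
    """LRP epsilon-rule for one linear layer.

    activations:   (n_in,)        inputs a_j to the layer
    weights:       (n_in, n_out)  weight matrix w_jk
    relevance_out: (n_out,)       relevance R_k assigned to the layer's outputs
    returns:       (n_in,)        relevance R_j redistributed to the inputs
    """
    z = activations @ weights                    # pre-activations z_k = sum_j a_j w_jk
    z = np.where(z >= 0, z + eps, z - eps)       # stabilizer avoids division by ~0
    s = relevance_out / z                        # relevance per unit of pre-activation
    return activations * (weights @ s)           # R_j = a_j * sum_k w_jk * s_k
```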

A relatively simple way of reducing the sampling cost of DQN

Hossein Hassani, Soodeh Nikan, Abdallah Shami, Traffic navigation via reinforcement learning with episodic-guided prioritized experience replay, Engineering Applications of Artificial Intelligence, Volume 137, Part A, 2024, DOI: 10.1016/j.engappai.2024.109147.

Deep Reinforcement Learning (DRL) models play a fundamental role in autonomous driving applications; however, they typically suffer from sample inefficiency because they often require many interactions with the environment to learn effective policies. This makes the training process time-consuming. To address this shortcoming, Prioritized Experience Replay (PER) has proven to be effective by prioritizing samples with high Temporal-Difference (TD) error for learning. In this context, this study contributes to artificial intelligence by proposing a sample-efficient DRL algorithm called Episodic-Guided Prioritized Experience Replay (EPER). The core innovation of EPER lies in the utilization of an episodic memory, dedicated to storing successful training episodes. Within this memory, expected returns for each state–action pair are extracted. These returns, combined with TD error-based prioritization, form a novel objective function for deep Q-network training. To prevent excessive determinism, EPER introduces exploration into the learning process by incorporating a regularization term into the objective function that allows exploration of state-space regions with diverse Q-values. The proposed EPER algorithm is suitable for training a DRL agent on episodic tasks and can be integrated into off-policy DRL models. EPER is employed for traffic navigation through scenarios such as highway driving, merging, roundabouts, and intersections to showcase its application in engineering. The results show that, compared with PER and an additional state-of-the-art training technique, EPER is superior in expediting the training of the agent and in learning a better policy, leading to lower collision rates within the constructed navigation scenarios.
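
A very rough sketch of how an episodic memory of successful episodes might be combined with TD-error prioritization in the Q-learning objective is given below; the state discretization, the mixing weight, and the blended target are illustrative assumptions rather than the exact EPER formulation:

```python
import numpy as np

episodic_memory = {}  # key: discretized (state, action) -> best observed return-to-go

def store_successful_episode(episode, gamma=0.99):
    """After a successful episode of (state, action, reward) tuples, record the
    highest return-to-go seen for each (state, action) pair."""
    g = 0.0
    for state, action, reward in reversed(episode):
        g = reward + gamma * g
        key = (tuple(np.round(state, 1)), action)   # coarse discretization (assumption)
        episodic_memory[key] = max(episodic_memory.get(key, -np.inf), g)

def eper_target_and_priority(q_value, td_target, state, action, lam=0.3):
    """Blend the usual TD target with the episodic return when one exists;
    the |TD error| is still used as the replay priority."""
    key = (tuple(np.round(state, 1)), action)
    episodic_return = episodic_memory.get(key)
    if episodic_return is None:
        target = td_target
    else:
        target = (1 - lam) * td_target + lam * episodic_return
    priority = abs(td_target - q_value)
    return target, priority
```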

A good survey and taxonomy for DRL in robotics

Chen Tang, Ben Abbatematteo, Jiaheng Hu, Rohan Chandra, Roberto Martín-Martín, Peter Stone, Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes, arXiv:2408.03539 [cs.RO] https://www.arxiv.org/abs/2408.03539.

Reinforcement learning (RL), particularly its combination with deep neural networks referred to as deep RL (DRL), has shown tremendous promise across a wide range of applications, suggesting its potential for enabling the development of sophisticated robotic behaviors. Robotics problems, however, pose fundamental difficulties for the application of RL, stemming from the complexity and cost of interacting with the physical world. This article provides a modern survey of DRL for robotics, with a particular focus on evaluating the real-world successes achieved with DRL in realizing several key robotic competencies. Our analysis aims to identify the key factors underlying those exciting successes, reveal underexplored areas, and provide an overall characterization of the status of DRL in robotics. We highlight several important avenues for future work, emphasizing the need for stable and sample-efficient real-world RL paradigms, holistic approaches for discovering and integrating various competencies to tackle complex long-horizon, open-world tasks, and principled development and evaluation procedures. This survey is designed to offer insights for both RL practitioners and roboticists toward harnessing RL’s power to create generally capable real-world robotic systems.

Using physical models to guide Deep RL in robotics

X. Li, W. Shang and S. Cong, Offline Reinforcement Learning of Robotic Control Using Deep Kinematics and Dynamics, IEEE/ASME Transactions on Mechatronics, vol. 29, no. 4, pp. 2428-2439, Aug. 2024 DOI: 10.1109/TMECH.2023.3336316.

With the rapid development of deep learning, model-free reinforcement learning algorithms have achieved remarkable results in many fields. However, their high sample complexity and the potential for causing damage to environments and robots pose severe challenges for their application in real-world environments. Model-based reinforcement learning algorithms are often used to reduce the sample complexity. One limitation of these algorithms is the inevitable modeling errors. While the black-box model can fit complex state transition models, it ignores the existing knowledge of physics and robotics, especially studies of kinematic and dynamic models of the robotic manipulator. Compared with black-box models, physics-inspired deep models do not require specific knowledge of each system to obtain interpretable kinematic and dynamic models. In model-based reinforcement learning, these models can simulate the motion and, because they share the same form as traditional models, can be combined with classical controllers, leading to higher-precision tracking results. In this work, we utilize physics-inspired deep models to learn the kinematics and dynamics of a robotic manipulator. We propose a model-based offline reinforcement learning algorithm for controller parameter learning, combined with the traditional computed-torque controller. Experiments on trajectory tracking control of the Baxter manipulator, both in joint and operational space, are conducted in simulation and real environments. Experimental results demonstrate that our algorithm can significantly improve tracking accuracy and exhibits strong generalization and robustness.
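
The way a learned physics-inspired model plugs into a computed-torque controller can be sketched as follows; `model.inertia`, `model.coriolis`, and `model.gravity` stand for the inertia matrix, Coriolis/centrifugal term, and gravity vector produced by the learned model, and `Kp`, `Kd` are the diagonal gain matrices that an offline learning procedure would tune (all names here are illustrative):

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, model, Kp, Kd):
    """Classical computed-torque law using a learned dynamics model:
        tau = M_hat(q) (qdd_des + Kd (qd_des - qd) + Kp (q_des - q))
              + C_hat(q, qd) qd + g_hat(q)
    """
    e, ed = q_des - q, qd_des - qd
    M_hat = model.inertia(q)        # learned inertia matrix, (n, n)
    C_hat = model.coriolis(q, qd)   # learned Coriolis/centrifugal matrix, (n, n)
    g_hat = model.gravity(q)        # learned gravity torques, (n,)
    return M_hat @ (qdd_des + Kd @ ed + Kp @ e) + C_hat @ qd + g_hat
```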

Improving reward-sparse situations in RL by adding backward learning

X. Qi, D. Chen, Z. Li and X. Tan, Back-Stepping Experience Replay With Application to Model-Free Reinforcement Learning for a Soft Snake Robot, IEEE Robotics and Automation Letters, vol. 9, no. 9, pp. 7517-7524, Sept. 2024 DOI: 10.1109/LRA.2024.3427550.

In this letter, we propose a novel technique, Back-stepping Experience Replay (BER), that is compatible with arbitrary off-policy reinforcement learning (RL) algorithms. BER aims to enhance learning efficiency in systems with approximate reversibility, reducing the need for complex reward shaping. The method constructs reversed trajectories using back-stepping transitions to reach random or fixed targets. Interpretable as a bi-directional approach, BER addresses inaccuracies in back-stepping transitions through a purification of the replay experience during learning. Given the intricate nature of soft robots and their complex interactions with environments, we present an application of BER in a model-free RL approach for the locomotion and navigation of a soft snake robot, which is capable of serpentine motion enabled by anisotropic friction between the body and ground. In addition, a dynamic simulator is developed to assess the effectiveness and efficiency of the BER algorithm, in which the robot demonstrates successful learning (reaching a 100% success rate) and adeptly reaches random targets, achieving an average speed 48% faster than that of the best baseline approach.
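
Conceptually, back-stepping experience replay augments the replay buffer with reversed transitions. The sketch below assumes an approximately reversible system where a `reverse_action` function (an assumption, not part of the paper's interface) gives the action that undoes a step, and a simple consistency check stands in for the purification of inaccurate back-stepping transitions:

```python
import numpy as np

def back_stepping_replay(trajectory, replay_buffer, reverse_action,
                         forward_model, reward_fn, tol=0.05):
    """Build reversed transitions (s', a_rev, s) from a forward trajectory of
    (s, a, s') tuples and add only those that pass a reversibility check."""
    for state, action, next_state in reversed(trajectory):
        a_rev = reverse_action(state, action, next_state)  # approximate inverse action
        predicted = forward_model(next_state, a_rev)       # where the inverse step lands
        if np.mean((predicted - state) ** 2) < tol:        # keep only consistent ones
            replay_buffer.add(state=next_state,
                              action=a_rev,
                              next_state=state,
                              reward=reward_fn(state))     # reward relabelled for the
                                                           # reversed target
```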

Avoiding the sim-to-real RL transfer problem through learning the parameters of a physical system

Viktor Wiberg, Erik Wallin, Arvid Fälldin, Tobias Semberg, Morgan Rossander, Eddie Wadbro, Martin Servin, Sim-to-real transfer of active suspension control using deep reinforcement learning, Robotics and Autonomous Systems, Volume 179, 2024 DOI: 10.1016/j.robot.2024.104731.

We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using various techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty to encourage smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform nearly at the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment. This shows that the actuator model together with system identification yields a sufficiently accurate model of the actuators. We observe that policies trained without the additional action penalty exhibit fast switching or bang–bang control. These present smooth motions and high performance in simulation but transfer poorly to reality. We find that policies make marginal use of the local height map for perception, showing no indications of predictive planning. However, the strong transfer capabilities entail that further development concerning perception and performance can be largely confined to simulation.
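
Two of the sim-to-real measures mentioned, action delays and a penalty on erratic control, are straightforward to express as an environment wrapper; the delay length, penalty weight, and the gym-style interface (including the `action_dim` attribute) are assumptions for illustration:

```python
from collections import deque
import numpy as np

class DelayAndSmoothnessWrapper:
    """Applies actions with a fixed delay (mimicking slow hydraulic actuation)
    and subtracts a penalty proportional to how much the applied action changed."""

    def __init__(self, env, delay_steps=2, smooth_weight=0.1):
        self.env = env
        # Pre-fill so the first `delay_steps` applied actions are zeros.
        self.queue = deque([np.zeros(env.action_dim)] * delay_steps)
        self.smooth_weight = smooth_weight
        self.prev_action = np.zeros(env.action_dim)

    def step(self, action):
        self.queue.append(np.asarray(action))
        delayed = self.queue.popleft()              # action requested delay_steps ago
        obs, reward, done, info = self.env.step(delayed)
        penalty = self.smooth_weight * np.sum((delayed - self.prev_action) ** 2)
        self.prev_action = delayed
        return obs, reward - penalty, done, info
```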

Reducing discovered skills in DRL to the essential ones, modelling skills with SMDP Q-learning

Shuai Qing, Fei Zhu, Refine to the essence: Less-redundant skill learning via diversity clustering, Engineering Applications of Artificial Intelligence, Volume 133, Part A, 2024 DOI: 10.1016/j.engappai.2024.107981.

In reinforcement learning, a skill is a potentially conditional policy that solves tasks in a hierarchically controlled manner. Progress on skill discovery helps agents learn a set of diverse and useful skills without external supervision to tackle complex tasks with sparse rewards. Although most studies have aimed to maximize the diversity of the skills discovered, the distinguishability between skills diminishes as the number of skills increases, leading to a subset of similar and redundant skills. To tackle this problem, a method called Refine to the Essence of Skills (RE-Skill) is proposed, which aims at learning skills with less redundancy. RE-Skill integrates the concepts of cluster analysis and policy distillation, clustering similar skills together based on their unique features, learning the most optimal performance within each cluster, and filtering out similar skills that involve excessive and intricate actions, thereby reducing redundancy among skills. By refining clusters of similar skills into less-redundant independent skills, RE-Skill outperforms other skill discovery algorithms and shows how these less-redundant skills effectively address downstream tasks, indicating that RE-Skill extends its efficacy to engineering applications in robot control and obstacle training tasks within complex environments.
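
The clustering step at the core of this idea can be pictured roughly as below; representing each discovered skill by a single feature vector and keeping the best-performing representative per cluster are simplifying assumptions, and the subsequent policy distillation is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_skills(skill_embeddings, skill_returns, n_clusters=8):
    """Cluster skills by their feature embeddings and keep, for each cluster,
    only the skill with the best return, discarding redundant near-duplicates.

    skill_embeddings: (n_skills, dim) array, feature vector per discovered skill
    skill_returns:    (n_skills,) array, performance of each skill
    returns: indices of the retained, less-redundant skills
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(skill_embeddings)
    keep = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size:
            keep.append(members[np.argmax(skill_returns[members])])
    return keep
```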