
A novel way of addressing the maximization bias in RL

Martin Waltz, Ostap Okhrin, Addressing maximization bias in reinforcement learning with two-sample testing, Artificial Intelligence, Volume 336, 2024, DOI: 10.1016/j.artint.2024.104204.

Value-based reinforcement-learning algorithms have shown strong results in games, robotics, and other real-world applications. Overestimation bias is a known threat to those algorithms and can sometimes lead to dramatic performance decreases or even complete algorithmic failure. We frame the bias problem statistically and consider it an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the T-Estimator (TE), based on two-sample testing for the mean, which flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. We also introduce a generalization, termed the K-Estimator (KE), that obeys the same bias and variance bounds as the TE and relies on a nearly arbitrary kernel function. We introduce modifications of Q-Learning and the Bootstrapped Deep Q-Network (BDQN) using the TE and the KE, and prove convergence in the tabular setting. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias control and performance potential of the TE and KE.
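
Not the paper's exact TE, but a minimal sketch of the two ideas above as I read them: the maximization bias that appears when taking the maximum of sample means, and a simplified test-based estimator that discards arms whose means are significantly below the best arm before averaging. The arm count, sample sizes, and significance level are arbitrary choices for illustration.

```python
# Sketch: the maximization bias that appears when estimating max_i E[X_i] from
# samples, plus a simplified "test-based" estimator in the spirit of the TE.
# This is an illustrative reading of the idea, not the paper's exact estimator.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three arms with identical true mean 0: the true max expected value is 0,
# but the max of the sample means is biased upward.
true_means = np.zeros(3)
n = 20

def max_of_means(samples):
    return max(s.mean() for s in samples)

def test_based_estimate(samples, alpha=0.05):
    """Keep only arms whose mean is NOT significantly below the best arm
    (one-sided Welch t-test at level alpha), then average the retained means.
    Larger alpha -> fewer arms kept -> behaves like the plain max (overestimates);
    smaller alpha -> more arms kept -> tends to underestimate."""
    means = np.array([s.mean() for s in samples])
    best = int(np.argmax(means))
    keep = [best]
    for i, s in enumerate(samples):
        if i == best:
            continue
        res = stats.ttest_ind(samples[best], s, equal_var=False)
        t, p_two = res.statistic, res.pvalue
        p_one = p_two / 2 if t > 0 else 1 - p_two / 2   # H1: best arm's mean is larger
        if p_one > alpha:          # cannot reject equality -> keep arm i
            keep.append(i)
    return means[keep].mean()

naive, tested = [], []
for _ in range(2000):
    samples = [rng.normal(mu, 1.0, n) for mu in true_means]
    naive.append(max_of_means(samples))
    tested.append(test_based_estimate(samples))

print("true max expected value: 0.0")
print(f"max of sample means    : {np.mean(naive):+.3f}")
print(f"test-based estimate    : {np.mean(tested):+.3f}")
```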

Safety in RL through “predictive safety filters”

Aksel Vaaler, Svein Jostein Husa, Daniel Menges, Thomas Nakken Larsen, Adil Rasheed, Modular control architecture for safe marine navigation: Reinforcement learning with predictive safety filters, Artificial Intelligence, Volume 336, 2024, DOI: 10.1016/j.artint.2024.104201.

Many autonomous systems are safety-critical, making it essential to have a closed-loop control system that satisfies constraints arising from underlying physical limitations and safety aspects in a robust manner. However, this is often challenging to achieve for real-world systems. For example, autonomous ships at sea have nonlinear and uncertain dynamics and are subject to numerous time-varying environmental disturbances such as waves, currents, and wind. There is increasing interest in using machine learning-based approaches to adapt these systems to more complex scenarios, but there are few standard frameworks that guarantee the safety and stability of such systems. Recently, predictive safety filters (PSF) have emerged as a promising method to ensure constraint satisfaction in learning-based control, bypassing the need for explicit constraint handling in the learning algorithms themselves. The safety filter approach leads to a modular separation of the problem, allowing the use of arbitrary control policies in a task-agnostic way. The filter takes in a potentially unsafe control action from the main controller and solves an optimization problem to compute a minimal perturbation of the proposed action that adheres to both physical and safety constraints. In this work, we combine reinforcement learning (RL) with predictive safety filtering in the context of marine navigation and control. The RL agent is trained on path-following and safety adherence across a wide range of randomly generated environments, while the predictive safety filter continuously monitors the agent's proposed control actions and modifies them if necessary. The combined PSF/RL scheme is implemented on a simulated model of Cybership II, a miniature replica of a typical supply ship. Safety performance and learning rate are evaluated and compared with those of a standard, non-PSF, RL agent. It is demonstrated that the predictive safety filter is able to keep the vessel safe, without hindering the learning rate or performance of the RL agent.
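
A minimal sketch of the safety-filter mechanism described in the abstract: solve a small optimization that minimally perturbs the proposed action so that a one-step prediction stays inside a safe set. The toy 1-D dynamics, limits, and horizon are invented placeholders, not the Cybership II model or the paper's actual PSF formulation.

```python
# Sketch of a predictive safety filter: given a (possibly unsafe) action from
# the learned policy, solve for the minimally perturbed action that keeps a
# one-step-ahead prediction inside a safe set. Toy 1-D "vessel" dynamics and
# constraints are placeholders, not the paper's Cybership II model.
import numpy as np
from scipy.optimize import minimize

DT = 0.5
U_MAX = 1.0          # actuator limit (physical constraint)
POS_LIMIT = 5.0      # keep predicted position inside [-POS_LIMIT, POS_LIMIT]

def predict(state, u):
    """Toy one-step model: state = (position, velocity)."""
    pos, vel = state
    vel_next = vel + DT * u
    pos_next = pos + DT * vel_next
    return np.array([pos_next, vel_next])

def safety_filter(state, u_proposed):
    """Return the admissible action closest to u_proposed."""
    def objective(u):
        return (u[0] - u_proposed) ** 2          # minimal perturbation

    def safety_margin(u):
        pos_next, _ = predict(state, u[0])
        return POS_LIMIT - abs(pos_next)         # >= 0 means safe

    res = minimize(
        objective,
        x0=[np.clip(u_proposed, -U_MAX, U_MAX)],
        bounds=[(-U_MAX, U_MAX)],
        constraints=[{"type": "ineq", "fun": safety_margin}],
    )
    return float(res.x[0])

state = np.array([4.6, 0.8])       # close to the boundary, drifting toward it
u_rl = 1.0                         # aggressive action from the RL policy
u_safe = safety_filter(state, u_rl)
print(f"proposed action {u_rl:+.2f} -> filtered action {u_safe:+.2f}")
```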

See also: https://doi.org/10.1016/j.artint.2024.104195

It seems that vectors can help along the path toward symbols for ANNs

Steven T. Piantadosi, Dyana C.Y. Muller, Joshua S. Rule, Karthikeya Kaushik, Mark Gorenstein, Elena R. Leib, Emily Sanford, Why concepts are (probably) vectors, Trends in Cognitive Sciences, Volume 28, Issue 9, 2024, Pages 844-856, DOI: 10.1016/j.tics.2024.06.011.

For decades, cognitive scientists have debated what kind of representation might characterize human concepts. Whatever the format of the representation, it must allow for the computation of varied properties, including similarities, features, categories, definitions, and relations. It must also support the development of theories, ad hoc categories, and knowledge of procedures. Here, we discuss why vector-based representations provide a compelling account that can meet all these needs while being plausibly encoded into neural architectures. This view has become especially promising with recent advances in both large language models and vector symbolic architectures. These innovations show how vectors can handle many properties traditionally thought to be out of reach for neural models, including compositionality, definitions, structures, and symbolic computational processes.
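
One concrete flavour of the "vector symbolic architectures" mentioned in the abstract is holographic reduced representations, where circular convolution binds role and filler vectors and addition bundles them. The sketch below is my own illustration, not from the paper; it shows how a structured proposition can live in a single high-dimensional vector and still be queried.

```python
# Sketch of a vector symbolic architecture (holographic reduced
# representations): role/filler binding via circular convolution, bundling via
# addition, and approximate unbinding via circular correlation. Illustrates how
# high-dimensional vectors can encode structured, symbol-like content.
import numpy as np

D = 2048
rng = np.random.default_rng(0)

def random_vec():
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):            # circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):          # circular correlation (approximate inverse of bind)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vocabulary of atomic concepts (roles and fillers)
vocab = {name: random_vec() for name in
         ["AGENT", "OBJECT", "dog", "ball", "cat"]}

# "dog chases ball" as one vector: a bundle of role-filler bindings
sentence = bind(vocab["AGENT"], vocab["dog"]) + bind(vocab["OBJECT"], vocab["ball"])

# Query: who is the agent? Unbind the role and compare against the vocabulary.
query = unbind(sentence, vocab["AGENT"])
scores = {name: cosine(query, vec) for name, vec in vocab.items()}
print(max(scores, key=scores.get), scores)   # 'dog' should score highest
```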

Using physical models to guide Deep RL in robotics

X. Li, W. Shang and S. Cong, Offline Reinforcement Learning of Robotic Control Using Deep Kinematics and Dynamics, IEEE/ASME Transactions on Mechatronics, vol. 29, no. 4, pp. 2428-2439, Aug. 2024, DOI: 10.1109/TMECH.2023.3336316.

With the rapid development of deep learning, model-free reinforcement learning algorithms have achieved remarkable results in many fields. However, their high sample complexity and the potential for causing damage to environments and robots pose severe challenges for their application in real-world environments. Model-based reinforcement learning algorithms are often used to reduce the sample complexity. One limitation of these algorithms is the inevitable modeling errors. While the black-box model can fit complex state transition models, it ignores the existing knowledge of physics and robotics, especially studies of kinematic and dynamic models of the robotic manipulator. Compared with the black-box model, the physics-inspired deep models do not require specific knowledge of each system to obtain interpretable kinematic and dynamic models. Because they share the same form as traditional models, these learned models can simulate the motion and be combined with classical controllers in model-based reinforcement learning, leading to higher-precision tracking results. In this work, we utilize physics-inspired deep models to learn the kinematics and dynamics of a robotic manipulator. We propose a model-based offline reinforcement learning algorithm for controller parameter learning, combined with the traditional computed-torque controller. Experiments on trajectory tracking control of the Baxter manipulator, both in joint and operational space, are conducted in simulation and real environments. Experimental results demonstrate that our algorithm can significantly improve tracking accuracy and exhibits strong generalization and robustness.
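
For context, a minimal sketch of the classical computed-torque law that the authors combine with their learned models, on a toy 1-DoF arm. In the paper the inertia, damping, and gravity terms come from physics-inspired deep models and the feedback gains are tuned by offline RL; here everything is hand-picked and purely illustrative.

```python
# Sketch of computed-torque control, the classical controller the paper pairs
# with learned (physics-inspired) dynamics. Toy 1-DoF arm with known inertia,
# damping and gravity; the model constants and gains below are made up.
import numpy as np

M, B, G = 1.2, 0.3, 4.0           # inertia, damping, gravity coefficient
KP, KD = 60.0, 12.0               # feedback gains (what offline RL would tune)
DT = 0.002

def dynamics(q, qd, tau):
    """qdd = (tau - B*qd - G*sin(q)) / M"""
    return (tau - B * qd - G * np.sin(q)) / M

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """tau = M*(qdd_des + KD*e_dot + KP*e) + B*qd + G*sin(q)"""
    e, e_dot = q_des - q, qd_des - qd
    return M * (qdd_des + KD * e_dot + KP * e) + B * qd + G * np.sin(q)

# Track a sinusoidal joint trajectory with simple Euler integration
t = np.arange(0.0, 3.0, DT)
q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)

q, qd = 0.5, 0.0                  # start away from the trajectory
errors = []
for k in range(len(t)):
    tau = computed_torque(q, qd, q_des[k], qd_des[k], qdd_des[k])
    qdd = dynamics(q, qd, tau)
    qd += DT * qdd
    q += DT * qd
    errors.append(abs(q_des[k] - q))

print(f"final tracking error: {errors[-1]:.4f} rad")
```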

Cognitive evidence of the need for abstraction (“modularity”) in achieving AI

Schilling, M., Hammer, B., Ohl, F.W. et al. Modularity in Nervous Systems—a Key to Efficient Adaptivity for Deep Reinforcement Learning, Cogn Comput 16, 2358–2373 (2024), DOI: 10.1007/s12559-022-10080-w.

Modularity as observed in biological systems has proven valuable for guiding classical motor theories towards good answers about action selection and execution. New challenges arise when we turn to learning: Trying to scale current computational models, such as deep reinforcement learning (DRL), to action spaces, input dimensions, and time horizons seen in biological systems still faces severe obstacles unless vast amounts of training data are available. This leads to the question: does biological modularity also hold an important key to better answers on how to obtain efficient adaptivity in deep reinforcement learning? We review biological experimental work on modularity in biological motor control and link this with current examples of (deep) RL approaches. Analyzing outcomes of simulation studies, we show that these approaches benefit from forms of modularization as found in biological systems. We identify three different strands of modularity exhibited in biological control systems. Two of them—modularity in state (i) and in action (ii) spaces—appear as a consequence of local interconnectivity (as in reflexes) and are often modulated by higher levels in a control hierarchy. A third strand arises from chunking of action elements along a (iii) temporal dimension. Usually interacting in an overarching spatio-temporal hierarchy of the overall system, the three strands offer major “factors” decomposing the entire modularity structure. We conclude that modularity, with the above strands, can provide an effective prior for DRL approaches to speed up learning considerably and make learned controllers more robust and adaptive.
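
A toy sketch of what the first two strands (modularity in state and in action spaces) might look like in a DRL policy network: separate encoders per sensory stream and separate heads per actuator group, coordinated by a higher level. The architecture and sizes are my own illustration, not a model from the paper.

```python
# Toy sketch of modularity in state and action spaces for a DRL policy:
# separate encoders for distinct sensory modules, separate heads for distinct
# actuator groups, composed by a small coordination layer. Purely illustrative
# of the decomposition discussed above; all layer sizes are arbitrary.
import torch
import torch.nn as nn

class ModularPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # (i) modularity in state space: one encoder per sensory stream
        self.proprio_enc = nn.Sequential(nn.Linear(12, 32), nn.ReLU())
        self.vision_enc  = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        # higher level coordinating/modulating the modules
        self.coordinator = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        # (ii) modularity in action space: one head per actuator group
        self.leg_head = nn.Linear(64, 8)    # e.g. leg joint commands
        self.arm_head = nn.Linear(64, 6)    # e.g. arm joint commands

    def forward(self, proprio, vision):
        z = torch.cat([self.proprio_enc(proprio), self.vision_enc(vision)], dim=-1)
        h = self.coordinator(z)
        return self.leg_head(h), self.arm_head(h)

policy = ModularPolicy()
leg_cmd, arm_cmd = policy(torch.randn(1, 12), torch.randn(1, 64))
print(leg_cmd.shape, arm_cmd.shape)   # torch.Size([1, 8]) torch.Size([1, 6])
```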

Reducing dimensionality of brain-body state dynamics

Daniel S. Kluger, Micah G. Allen, Joachim Gross, Brain–body states embody complex temporal dynamics, Trends in Cognitive Sciences, Volume 28, Issue 8, 2024, Pages 695-698, DOI: 10.1016/j.tics.2024.05.003.

We propose a computational framework for high-dimensional brain–body states as transient embodiments of nested internal and external dynamics governed by interoception. Unifying recent theoretical work, we suggest ways to reduce arbitrary state complexity to an observable number of features in order to accurately predict and intervene in pathological trajectories.
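
As a rough illustration of the kind of reduction the authors argue for, the sketch below projects a synthetic high-dimensional "brain–body" time series, driven by a few slow latent processes, onto a handful of principal components; the paper's framework is of course more specific than plain PCA.

```python
# Sketch: reducing a high-dimensional, time-varying "brain-body state" to a few
# observable features with PCA. Synthetic data: many channels driven by a small
# number of slow latent processes plus noise. Only illustrates the general idea
# of state reduction, not the paper's proposed framework.
import numpy as np

rng = np.random.default_rng(0)
T, channels, latents = 2000, 120, 3

t = np.linspace(0, 60, T)
Z = np.column_stack([np.sin(0.3 * t), np.cos(0.11 * t), np.sin(0.05 * t)])  # latent dynamics
mixing = rng.normal(size=(latents, channels))
X = Z @ mixing + 0.5 * rng.normal(size=(T, channels))    # observed channels

# PCA via SVD of the centered data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by first 3 components:", explained[:3].sum().round(3))

features = Xc @ Vt[:3].T    # low-dimensional trajectory to track or intervene on
print("reduced state shape:", features.shape)
```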

Improving RL in reward-sparse situations by adding backward learning

X. Qi, D. Chen, Z. Li and X. Tan, Back-Stepping Experience Replay With Application to Model-Free Reinforcement Learning for a Soft Snake Robot, IEEE Robotics and Automation Letters, vol. 9, no. 9, pp. 7517-7524, Sept. 2024, DOI: 10.1109/LRA.2024.3427550.

In this letter, we propose a novel technique, Back-stepping Experience Replay (BER), that is compatible with arbitrary off-policy reinforcement learning (RL) algorithms. BER aims to enhance learning efficiency in systems with approximate reversibility, reducing the need for complex reward shaping. The method constructs reversed trajectories using back-stepping transitions to reach random or fixed targets. Interpretable as a bi-directional approach, BER addresses inaccuracies in back-stepping transitions through a purification of the replay experience during learning. Given the intricate nature of soft robots and their complex interactions with environments, we present an application of BER in a model-free RL approach for the locomotion and navigation of a soft snake robot, which is capable of serpentine motion enabled by anisotropic friction between the body and ground. In addition, a dynamic simulator is developed to assess the effectiveness and efficiency of the BER algorithm, in which the robot demonstrates successful learning (reaching a 100% success rate) and adeptly reaches random targets, achieving an average speed 48% faster than that of the best baseline approach.
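
A much-simplified sketch of the back-stepping idea as I read it: in an approximately reversible system, every recorded trajectory is also added to the replay buffer time-reversed, with the start state re-labelled as the target. The known action-reversal map below is an assumption of mine, and the paper's purification of inaccurate back-stepped transitions is omitted.

```python
# Hedged sketch of back-stepping experience replay: for an (approximately)
# reversible system, augment the replay buffer with time-reversed transitions,
# so trajectories that *leave* a state also teach the agent how to *reach* it.
# Simplified: assumes a known action-reversal map and omits the purification
# of inaccurate back-stepped transitions described in the abstract.
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", "state action reward next_state goal")

class BackSteppingReplay:
    def __init__(self, reverse_action, capacity=100_000):
        self.buffer = deque(maxlen=capacity)
        self.reverse_action = reverse_action   # e.g. negate velocity commands

    def add_trajectory(self, trajectory, goal):
        """trajectory: list of (state, action, next_state) tuples."""
        # Forward transitions, rewarded on reaching the goal.
        for s, a, s_next in trajectory:
            self.buffer.append(Transition(s, a, float(s_next == goal), s_next, goal))
        # Back-stepped transitions: walk the trajectory in reverse, with the
        # start state re-labelled as the target to reach.
        start = trajectory[0][0]
        for s, a, s_next in reversed(trajectory):
            self.buffer.append(
                Transition(s_next, self.reverse_action(a), float(s == start), s, start))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Tiny 1-D example with integer states and +/-1 actions.
replay = BackSteppingReplay(reverse_action=lambda a: -a)
traj = [(0, +1, 1), (1, +1, 2), (2, +1, 3)]
replay.add_trajectory(traj, goal=3)
print(len(replay.buffer), "transitions stored")   # 6: 3 forward + 3 back-stepped
```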

Avoiding the sim-to-real RL transfer problem by learning the parameters of a physical model

Viktor Wiberg, Erik Wallin, Arvid Fälldin, Tobias Semberg, Morgan Rossander, Eddie Wadbro, Martin Servin, Sim-to-real transfer of active suspension control using deep reinforcement learning, Robotics and Autonomous Systems, Volume 179, 2024, DOI: 10.1016/j.robot.2024.104731.

We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using various techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty to encourage smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform nearly at the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment. This shows that the actuator model together with system identification yields a sufficiently accurate model of the actuators. We observe that policies trained without the additional action penalty exhibit fast switching or bang–bang control. Although these achieve high performance in simulation, they transfer poorly to reality. We find that policies make marginal use of the local height map for perception, showing no indications of predictive planning. However, the strong transfer capabilities entail that further development concerning perception and performance can be largely confined to simulation.
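
One of the mitigation techniques named above, the reward penalty on erratic actions, is easy to sketch as an environment wrapper that subtracts a term proportional to the change between consecutive actions. The wrapper interface, dummy environment, and penalty weight below are placeholders, not the authors' implementation.

```python
# Sketch of a sim-to-real mitigation named above: penalize erratic control by
# subtracting a term proportional to the action change between consecutive
# steps (discourages bang-bang behaviour). Works with any env exposing
# reset()/step(); the dummy env and the penalty weight are placeholders.
import numpy as np

class SmoothActionPenalty:
    def __init__(self, env, weight=0.1):
        self.env, self.weight = env, weight
        self._prev_action = None

    def reset(self):
        self._prev_action = None
        return self.env.reset()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        action = np.asarray(action, dtype=float)
        if self._prev_action is not None:
            reward -= self.weight * float(np.sum(np.abs(action - self._prev_action)))
        self._prev_action = action
        return obs, reward, done

class DummyEnv:                       # stand-in for the real simulator
    def reset(self):
        return np.zeros(2)
    def step(self, action):
        return np.zeros(2), 1.0, False

env = SmoothActionPenalty(DummyEnv(), weight=0.1)
env.reset()
_, r1, _ = env.step(np.array([+1.0]))     # no previous action: no penalty
_, r2, _ = env.step(np.array([-1.0]))     # large switch: penalized
print(r1, r2)                              # 1.0, then 1.0 - 0.1*2 = 0.8
```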

Predicting changes in the environment with time-series models for better robot navigation

Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen, Long-term navigation for autonomous robots based on spatio-temporal map prediction, Robotics and Autonomous Systems, Volume 179, 2024, DOI: 10.1016/j.robot.2024.104724.

The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. A time-series model is introduced to learn the changing patterns of different environmental structures or objects on multiple time scales from historical maps and to forecast future maps for long-term navigation. Then, an improved global path planning algorithm is performed based on the time-variant predicted cost maps. During navigation, the current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.
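
A minimal sketch of the map-fusion step in the spirit of the "modified Bayesian filter" mentioned above: a log-odds update that pulls observed cells toward the current scan while unobserved cells keep the predicted occupancy. The grids and sensor confidence are toy values, and the time-series prediction model itself is not shown.

```python
# Sketch: fusing a predicted occupancy map with the current observation using a
# log-odds Bayesian update. The predicted map, the partial observation and the
# sensor confidence are toy values, not the paper's actual filter.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def fuse(predicted, observed, obs_confidence=0.8):
    """Cells observed this step pull the map toward the observation with weight
    obs_confidence; unobserved cells (NaN) keep the predicted occupancy."""
    fused_l = logodds(predicted)
    mask = ~np.isnan(observed)                       # NaN = not observed now
    obs_p = np.where(observed[mask] > 0.5, obs_confidence, 1 - obs_confidence)
    fused_l[mask] += logodds(obs_p)                  # Bayesian log-odds update
    return 1.0 / (1.0 + np.exp(-fused_l))

predicted = np.array([[0.7, 0.2],                    # map forecast for time t
                      [0.5, 0.9]])
observed  = np.array([[0.0, np.nan],                 # partial current scan
                      [1.0, np.nan]])
print(np.round(fuse(predicted, observed), 2))
```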

RL to learn the coordination of different goals in autonomous driving

J. Liu, J. Yin, Z. Jiang, Q. Liang and H. Li, Attention-Based Distributional Reinforcement Learning for Safe and Efficient Autonomous Driving, IEEE Robotics and Automation Letters, vol. 9, no. 9, pp. 7477-7484, Sept. 2024, DOI: 10.1109/LRA.2024.3427551.

Autonomous driving vehicles play a critical role in intelligent transportation systems and have garnered considerable attention. Currently, the popular approach in autonomous driving systems is to design separate optimal objectives for each independent module. Therefore, a major concern arises from the fact that these diverse optimal objectives may have an impact on the final driving policy. However, reinforcement learning provides a promising solution to tackle the challenge through joint training and its exploration ability. This letter aims to develop a safe and efficient reinforcement learning approach with advanced features for autonomous navigation in urban traffic scenarios. Firstly, we develop a novel distributional reinforcement learning method that integrates an implicit distribution model into an actor-critic framework. Subsequently, we introduce a spatial attention module to capture interaction features between the ego vehicle and other traffic vehicles, and design a temporal attention module to extract the long-term sequential feature. Finally, we utilize bird’s-eye-view as a context-aware representation of traffic scenarios, fused with the above spatio-temporal features. To validate our approach, we conduct experiments on the NoCrash and CoRL benchmarks, especially on our closed-loop openDD scenarios. The experimental results demonstrate the impressive performance of our approach in terms of convergence and stability compared to the baselines.
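
For reference, a hedged sketch of a distributional critic in the QR-DQN/IQN family: the network outputs return quantiles and is trained with the quantile Huber loss. The paper's implicit distribution model and its spatial/temporal attention modules are not reproduced here; the sizes and the target samples are placeholders.

```python
# Hedged sketch of a distributional critic: instead of a single Q-value, the
# network outputs N quantiles of the return and is trained with the quantile
# Huber loss (QR-DQN / IQN style). The paper's implicit distribution model and
# its attention modules are not reproduced; all sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_QUANTILES = 32

class QuantileCritic(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, N_QUANTILES))       # one output per return quantile

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """pred_quantiles: (B, N); target_samples: (B, M) samples of the target return."""
    taus = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES
    # Pairwise TD errors between every target sample and every predicted quantile.
    td = target_samples.unsqueeze(2) - pred_quantiles.unsqueeze(1)   # (B, M, N)
    huber = F.huber_loss(pred_quantiles.unsqueeze(1).expand_as(td),
                         target_samples.unsqueeze(2).expand_as(td),
                         reduction="none", delta=kappa)
    weight = torch.abs(taus.view(1, 1, -1) - (td.detach() < 0).float())
    return (weight * huber).mean()

critic = QuantileCritic()
obs, act = torch.randn(8, 64), torch.randn(8, 2)
target = torch.randn(8, 16)                    # e.g. sampled Bellman targets
loss = quantile_huber_loss(critic(obs, act), target)
loss.backward()
print(float(loss))
```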