Tag Archives: Model-based Reinforcement Learning

Short letter with evidence of the use of models in mammalian decision making, relating it to reinforcement learning

Ivo Jacobs, Tomas Persson, Peter Gärdenfors, Model-based animal cognition slips through the sequence bottleneck, Trends in Cognitive Sciences, Volume 29, Issue 10, 2025, Pages 872-873, DOI: 10.1016/j.tics.2025.06.009.

In a recent article in TiCS, Lind and Jon-And argued that the sequence memory of animals constitutes a cognitive bottleneck, the ‘sequence bottleneck’, and that mental simulations require faithful representation of sequential information. They therefore concluded that animals cannot perform mental simulations, and that behavioral and neurobiological studies suggesting otherwise are best interpreted as results of associative learning. Through examples of predictive maps, cognitive control, and active sleep, we illustrate the overwhelming evidence that mammals and birds make model-based simulations, which suggests the sequence bottleneck to be more limited in scope than proposed by Lind and Jon-And […]

There is a response to this paper.
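To make the distinction the letter relies on concrete, here is a minimal sketch contrasting "associative" (model-free) learning, which caches values from repeated experience, with model-based learning, which evaluates actions by simulating outcomes with an internal model. The 3-state chain and all names are illustrative, not from the paper.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    """Toy world: action 1 advances along the chain; reward at the end."""
    s_next = min(s + a, n_states - 1)
    return s_next, float(s_next == n_states - 1)

# Model-free (associative): tabular Q-learning, one cached update per real transition.
Q = np.zeros((n_states, n_actions))
def q_update(s, a, r, s_next, alpha=0.1):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Model-based: "mental simulation" rolls the model forward before acting
# (here the true step function stands in for a learned model).
def simulated_value(s, a, depth=3):
    if depth == 0:
        return 0.0
    s_next, r = step(s, a)
    return r + gamma * max(simulated_value(s_next, b, depth - 1)
                           for b in range(n_actions))

for _ in range(200):  # the associative learner needs many real trials
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s_next, r = step(s, a)
    q_update(s, a, r, s_next)

print(Q[0])                                                # cached values
print([simulated_value(0, a) for a in range(n_actions)])   # simulated values
```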

Model-based offline RL that addresses the problem of using model-generated out-of-distribution data more safely

X.-Y. Liu et al., DOMAIN: Mildly Conservative Model-Based Offline Reinforcement Learning, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 10, pp. 7142-7155, Oct. 2025, DOI: 10.1109/TSMC.2025.3578666.

Model-based reinforcement learning (RL), which learns an environment model from the offline dataset and uses it to generate out-of-distribution data, has become an effective approach to the problem of distribution shift in offline RL. Because of the gap between the learned and actual environment, conservatism must be incorporated into the algorithm to balance accurate offline data against imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences among model data, which results in excessive conservatism. To address these issues, this article proposes a mildly conservative model-based offline RL algorithm (DOMAIN) that does not estimate model uncertainty, and designs an adaptive sampling distribution of model samples, which can adaptively adjust the penalty on model data. The article theoretically demonstrates that the Q value learned by DOMAIN outside the data-covered region is a lower bound of the true Q value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it carries a safe policy improvement guarantee. Extensive experiments show that DOMAIN outperforms prior RL algorithms, with average performance improved by 1.8% on the D4RL benchmark.

Related: 10.1109/TSMC.2025.3583392
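A minimal sketch of the generic "mildly conservative" idea behind DOMAIN-style algorithms: Bellman targets for model-generated transitions are penalized with a per-sample weight, rather than one uniform uncertainty-based penalty. The penalty form and the weights below are illustrative assumptions, not the paper's actual adaptive sampling distribution.

```python
def bellman_target(r, q_next, gamma=0.99):
    return r + gamma * q_next

def penalized_target(r, q_next, penalty_weight, gamma=0.99):
    """penalty_weight is 0 for real offline data, larger for model samples
    that the adaptive sampler trusts less."""
    return bellman_target(r, q_next, gamma) - penalty_weight

# A real transition versus a model-generated one with the same (r, q_next):
real = penalized_target(1.0, 5.0, penalty_weight=0.0)    # offline data: 5.95
model = penalized_target(1.0, 5.0, penalty_weight=0.3)   # model data: 5.65
print(real, model)  # model targets are pushed toward a lower bound of true Q
```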

Hybridizing model-free and model-based methods in continuous RL, with a nice review of current research and benchmarks in robotics

Pinosky A, Abraham I, Broad A, Argall B, Murphey TD. Hybrid control for combining model-based and model-free reinforcement learning. The International Journal of Robotics Research. 2023;42(6):337-355. DOI: 10.1177/02783649221083331.

We develop an approach to improve the learning capabilities of robotic systems by combining learned predictive models with experience-based state-action policy mappings. Predictive models provide an understanding of the task and the dynamics, while experience-based (model-free) policy mappings encode favorable actions that override planned actions. We refer to our approach of systematically combining model-based and model-free learning methods as hybrid learning. Our approach efficiently learns motor skills and improves the performance of predictive models and experience-based policies. Moreover, our approach enables policies (both model-based and model-free) to be updated using any off-policy reinforcement learning method. We derive a deterministic method of hybrid learning by optimally switching between learning modalities. We adapt our method to a stochastic variation that relaxes some of the key assumptions in the original derivation. Our deterministic and stochastic variations are tested on a variety of robot control benchmark tasks in simulation as well as a hardware manipulation task. We extend our approach for use with imitation learning methods, where experience is provided through demonstrations, and we test the expanded capability with a real-world pick-and-place task. The results show that our method is capable of improving the performance and sample efficiency of learning motor skills in a variety of experimental domains.
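A sketch of the hybrid idea: plan with a learned model, but let an experience-based (model-free) policy override the planned action when it looks better under the current evaluation. The switching rule here (comparing one-step predicted values) is an illustrative stand-in for the paper's optimal mode-switching derivation, and the dynamics, value, and policy functions are all assumed toy stand-ins.

```python
import numpy as np

def model_predict(s, a):
    return s + 0.1 * a                      # assumed linear dynamics model

def value(s):
    return -float(np.sum(s**2))             # assumed value: closeness to origin

def planned_action(s):
    # Model-based planning: pick the candidate with the best predicted outcome.
    candidates = [np.array([-1.0]), np.array([0.0]), np.array([1.0])]
    return max(candidates, key=lambda a: value(model_predict(s, a)))

def policy_action(s):
    return np.clip(-s, -1.0, 1.0)           # stand-in model-free policy

def hybrid_action(s):
    a_plan, a_pol = planned_action(s), policy_action(s)
    # Override the plan when the policy's predicted outcome is better.
    if value(model_predict(s, a_pol)) > value(model_predict(s, a_plan)):
        return a_pol
    return a_plan

print(hybrid_action(np.array([0.5])))
```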

Bayesian estimation of the model in model-based RL for robots

Kei Senda, Toru Hishinuma, Yurika Tani, Approximate Bayesian reinforcement learning based on estimation of plant, Autonomous Robots 44(5), 2020, DOI: 10.1007/s10514-020-09901-4.

This study proposes an approximate parametric model-based Bayesian reinforcement learning approach for robots, based on online Bayesian estimation and online planning for an estimated model. The proposed approach is designed to learn a robotic task with a few real-world samples and to be robust against model uncertainty, within feasible computational resources. The proposed approach employs two-stage modeling, which is composed of (1) a parametric differential equation model with a few parameters based on prior knowledge such as equations of motion, and (2) a parametric model that interpolates a finite number of transition probability models for online estimation and planning. The proposed approach modifies the online Bayesian estimation to be robust against approximation errors of the parametric model to a real plant. The policy planned for the interpolating model is proven to have a form of theoretical robustness. Numerical simulation and hardware experiments of a planar peg-in-hole task demonstrate the effectiveness of the proposed approach.
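A sketch of the two-stage idea: maintain a posterior over a few candidate plant parameters (for instance, a coefficient in an equation of motion) and plan against the posterior-weighted mixture of their transition models. The Gaussian observation model, the toy plant, and the candidate values are assumptions for illustration.

```python
import numpy as np

candidates = np.array([0.1, 0.3, 0.5])       # hypothetical plant parameters
posterior = np.ones(3) / 3                   # uniform prior over candidates

def predict(s, a, theta):
    return s + a - theta * s                 # assumed parametric plant model

def bayes_update(posterior, s, a, s_next_observed, noise=0.05):
    # Likelihood of the observed transition under each candidate model.
    lik = np.exp(-(s_next_observed - predict(s, a, candidates))**2
                 / (2 * noise**2))
    post = posterior * lik
    return post / post.sum()

# One observed real-world transition (true parameter 0.3 here, plus noise).
posterior = bayes_update(posterior, s=1.0, a=0.2, s_next_observed=0.91)
print(posterior)        # mass concentrates near the 0.3 candidate

# Planning then uses the posterior-weighted ("interpolated") prediction:
def mixture_predict(s, a):
    return np.dot(posterior, predict(s, a, candidates))

print(mixture_predict(1.0, 0.2))
```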

Intrinsically motivated reinforcement learning for learning a model of the world

Todd Hester, Peter Stone, Intrinsically motivated model learning for developing curious robots, Artificial Intelligence, Volume 247, June 2017, Pages 170-186, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.05.002.

Reinforcement Learning (RL) agents are typically deployed to learn a specific, concrete task based on a pre-defined reward function. However, in some cases an agent may be able to gain experience in the domain prior to being given a task. In such cases, intrinsic motivation can be used to enable the agent to learn a useful model of the environment that is likely to help it learn its eventual tasks more efficiently. This paradigm fits robots particularly well, as they need to learn about their own dynamics and affordances which can be applied to many different tasks. This article presents the texplore with Variance-And-Novelty-Intrinsic-Rewards algorithm (texplore-vanir), an intrinsically motivated model-based RL algorithm. The algorithm learns models of the transition dynamics of a domain using random forests. It calculates two different intrinsic motivations from this model: one to explore where the model is uncertain, and one to acquire novel experiences that the model has not yet been trained on. This article presents experiments demonstrating that the combination of these two intrinsic rewards enables the algorithm to learn an accurate model of a domain with no external rewards and that the learned model can be used afterward to perform tasks in the domain. While learning the model, the agent explores the domain in a developing and curious way, progressively learning more complex skills. In addition, the experiments show that combining the agent’s intrinsic rewards with external task rewards enables the agent to learn faster than using external rewards alone. We also present results demonstrating the applicability of this approach to learning on robots.
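A sketch of the two intrinsic rewards described in the abstract: one for model uncertainty (disagreement among the random-forest trees predicting the next state) and one for novelty (distance from anything yet experienced). The reward coefficients and the toy 1-D dynamics are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
states = rng.uniform(0, 1, size=(50, 1))          # experienced state inputs
next_states = (states[:, 0] + 0.1) % 1.0          # toy transition outcomes

# Transition model learned with a random forest, as in the paper.
forest = RandomForestRegressor(n_estimators=10, random_state=0)
forest.fit(states, next_states)

def intrinsic_reward(s, v=1.0, n=1.0):
    x = np.array([[s]])
    per_tree = np.array([t.predict(x)[0] for t in forest.estimators_])
    variance_bonus = per_tree.var()                       # model disagreement
    novelty_bonus = np.min(np.abs(states[:, 0] - s))      # distance to data
    return v * variance_bonus + n * novelty_bonus

print(intrinsic_reward(0.5))   # inside the experienced region: small bonus
print(intrinsic_reward(2.0))   # far outside it: novelty dominates
```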

Model-based reinforcement learning with a reduced number of basis functions to approximate the value function, a study of its convergence guarantees, and a nice state of the art on the use of (model-based) reinforcement learning for automatic control

Rushikesh Kamalapurkar, Joel A. Rosenfeld, Warren E. Dixon, Efficient model-based reinforcement learning for approximate online optimal control, Automatica, Volume 74, 2016, Pages 247-258, ISSN 0005-1098, DOI: 10.1016/j.automatica.2016.08.004.

An infinite horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using a state following (StaF) kernel method to approximate the value function. Unlike traditional methods that aim to approximate a function over a large compact set, the StaF kernel method aims to approximate a function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than may be required for global approximation methods.
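A sketch of the StaF (state-following) idea: instead of fixing kernel centers over the whole operating region, place a few centers in a small ball around the current state so the local value-function approximation travels with it. The Gaussian kernel, the center offsets, and the 2-D state are illustrative assumptions.

```python
import numpy as np

def kernel(x, c, width=0.5):
    return np.exp(-np.linalg.norm(x - c)**2 / (2 * width**2))

def staf_centers(x, radius=0.2):
    # A handful of centers inside a small neighborhood of the current state.
    offsets = np.array([[0.0, 0.0], [radius, 0.0], [-radius, 0.0],
                        [0.0, radius], [0.0, -radius]])
    return x + offsets

def value_estimate(x, weights):
    centers = staf_centers(x)
    features = np.array([kernel(x, c) for c in centers])
    return weights @ features

x = np.array([1.0, -0.5])
weights = np.zeros(5)   # would be adapted online by the actor-critic update law
print(value_estimate(x, weights))
# Five basis functions suffice wherever the state travels, versus a grid whose
# size grows with the compact set for global approximation methods.
```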

Survey of model-based reinforcement learning (and of reinforcement learning in general) as applied to improving learning time in robotics; a lot of references, but not many clear explanations

Athanasios S. Polydoros, Lazaros Nalpantidis, Survey of Model-Based Reinforcement Learning: Applications on Robotics, Journal of Intelligent & Robotic Systems, May 2017, Volume 86, Issue 2, pp 153–173, DOI: 10.1007/s10846-017-0468-y.

Reinforcement learning is an appealing approach for allowing robots to learn new tasks. Relevant literature reveals a plethora of methods, but at the same time makes clear the lack of implementations for dealing with real-life challenges. Current expectations raise the demand for adaptable robots. We argue that, by employing model-based reinforcement learning, the now-limited adaptability characteristics of robotic systems can be expanded. Also, model-based reinforcement learning exhibits advantages that make it more applicable to real-life use cases compared to model-free methods. Thus, in this survey, model-based methods that have been applied in robotics are covered. We categorize them based on the derivation of an optimal policy, the definition of the returns function, the type of the transition model, and the learned task. Finally, we discuss the applicability of model-based reinforcement learning approaches to new applications, taking into consideration the state of the art in both algorithms and hardware.