Improving the generalization of robotic RL by drawing inspiration from the human motor control system

P. Zhang, Z. Hua and J. Ding, A Central Motor System Inspired Pretraining Reinforcement Learning for Robotic Control, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 9, pp. 6285-6298, Sept. 2025, 10.1109/TSMC.2025.3577698.

Robots typically encounter diverse tasks, which poses a significant challenge for motion control. Pretraining reinforcement learning (PRL) enables robots to adapt quickly to various tasks by exploiting reusable skills. Existing PRL methods often rely on datasets and human expert knowledge, struggle to discover diverse and dynamic skills, and exhibit limited generalization and adaptability to different types of robots and downstream tasks. This article proposes a novel PRL algorithm based on central motor system mechanisms, which can discover diverse and dynamic skills without relying on data or expert knowledge, effectively enabling robots to tackle different types of downstream tasks. Inspired by the cerebellum’s role in balance control and skill storage within the central motor system, an intrinsic fused reward is introduced to explore dynamic skills and eliminate dependence on data and expert knowledge during pretraining. Drawing from the basal ganglia’s function in motor programming, a discrete skill encoding method is designed to increase the diversity of discovered skills, improving the performance of complex robots in challenging environments. Furthermore, incorporating the basal ganglia’s role in motor regulation, a skill activity function is proposed to generate skills at varying dynamic levels, thereby improving the adaptability of robots in multiple downstream tasks. The effectiveness of the proposed algorithm is demonstrated through simulation experiments on four robots of different morphologies across multiple downstream tasks.
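The paper's intrinsic fused reward is not specified in the abstract, but the general idea of a data-free diversity reward for discrete skill discovery can be sketched in the DIAYN style: a skill-conditioned policy earns intrinsic reward when a discriminator can identify the active skill from the visited states. Everything here (the discriminator probabilities, the uniform prior) is an illustrative assumption, not the paper's exact formulation, which additionally fuses a cerebellum-inspired dynamics/balance term.

```python
import numpy as np

def diversity_reward(disc_probs, skill, n_skills):
    """Intrinsic reward for discrete skill discovery (DIAYN-style sketch).

    disc_probs: discriminator's predicted distribution q(z | state)
    skill:      index of the one-hot skill the policy was conditioned on
    """
    log_q = np.log(disc_probs[skill] + 1e-8)   # how identifiable the skill is
    log_p = np.log(1.0 / n_skills)             # uniform skill prior p(z)
    return log_q - log_p

# A skill the discriminator recognizes from the state earns positive reward;
# a skill it cannot distinguish is penalized, pushing skills apart.
probs = np.array([0.7, 0.2, 0.1])
r = diversity_reward(probs, skill=0, n_skills=3)
```

The reward needs no task data or expert demonstrations: it is computed entirely from the agent's own rollouts and the discriminator, which is the property the paper's pretraining stage relies on.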

Stacking multiple MDPs in an abstraction hierarchy to make RL more sample-efficient

Roberto Cipollone, Marco Favorito, Flavio Maiorana, Giuseppe De Giacomo, Luca Iocchi, Fabio Patrizi, Exploiting robot abstractions in episodic RL via reward shaping and heuristics, Robotics and Autonomous Systems, Volume 193, 2025, 10.1016/j.robot.2025.105116.

One major limitation to the applicability of Reinforcement Learning (RL) to many domains of practical relevance, in particular in robotic applications, is the large number of samples required to learn an optimal policy. To address this problem and improve learning efficiency, we consider a linear hierarchy of abstraction layers of the Markov Decision Process (MDP) underlying the target domain. Each layer is an MDP representing a coarser model of the one immediately below in the hierarchy. In this work, we propose novel techniques to automatically define Reward Shaping and Reward Heuristic functions that are based on the solution obtained at a higher level of abstraction and provide rewards to the finer (possibly the concrete) MDP at the lower level, thus inducing an exploration heuristic that can effectively guide the learning process in the more complex domain. In contrast with other works in Hierarchical RL, our technique imposes fewer requirements on the design of the abstract models and is tolerant to modeling errors, thus making the proposed approach practical. We formally analyze the relationship between the abstract models and the exploration heuristic induced in the lower-level domain, we prove that the method guarantees optimal convergence, and finally demonstrate its effectiveness experimentally in several complex robotic domains.
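The mechanism described above can be sketched with classic potential-based reward shaping: solve the coarse MDP first, then use its value function as a potential over concrete states. Names here (`abstract_V`, `abstraction`, `shaped_reward`) are illustrative, not the paper's API; the paper's contribution is in how the abstract models and heuristics are defined and analyzed, while the shaping form below is the standard one that provably preserves the optimal policy.

```python
def shaped_reward(r, s, s_next, gamma, abstract_V, abstraction):
    """Augment the concrete reward with F = gamma*phi(s') - phi(s), where
    phi maps a concrete state to the value of its abstract state.
    Potential-based shaping of this form preserves the optimal policy."""
    phi_s = abstract_V[abstraction(s)]
    phi_next = abstract_V[abstraction(s_next)]
    return r + gamma * phi_next - phi_s

# Toy example: two abstract regions; region 1 is closer to the goal, so the
# higher-level solution assigns it a larger value.
abstract_V = {0: 0.0, 1: 10.0}
abstraction = lambda s: 0 if s < 5 else 1

# Crossing from region 0 into region 1 yields a shaping bonus that guides
# exploration in the concrete MDP toward the abstractly-promising region.
bonus = shaped_reward(0.0, s=4, s_next=5, gamma=0.9,
                      abstract_V=abstract_V, abstraction=abstraction)
```

Because the shaping term telescopes along trajectories, errors in the abstract model bias exploration but cannot change which policy is optimal, which is consistent with the tolerance to modeling errors the authors emphasize.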

How biology uses primary rewards, derived from basic physiological signals, and more immediate proxy rewards (which predict primary rewards) as shaping rewards

Lilian A. Weber, Debbie M. Yee, Dana M. Small, and Frederike H. Petzschner, The interoceptive origin of reinforcement learning, Trends in Cognitive Sciences, 2025, 10.1016/j.tics.2025.05.008.

Rewards play a crucial role in sculpting all motivated behavior. Traditionally, research on reinforcement learning has centered on how rewards guide learning and decision-making. Here, we examine the origins of rewards themselves. Specifically, we discuss that the critical signal sustaining reinforcement for food is generated internally and subliminally during the process of digestion. As such, a shift in our understanding of primary rewards from an immediate sensory gratification to a state-dependent evaluation of an action’s impact on vital physiological processes is called for. We integrate this perspective into a revised reinforcement learning framework that recognizes the subliminal nature of biological rewards and their dependency on internal states and goals.
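The state-dependent view of primary reward can be illustrated with a homeostatic-RL-style sketch: reward is not a fixed bonus attached to a stimulus, but the reduction in distance between internal physiological variables and a setpoint. The setpoint and quadratic drive function below are illustrative assumptions, not the authors' model.

```python
def drive(internal_state, setpoint):
    """Deviation of internal (interoceptive) variables from their setpoint."""
    return sum((x - x0) ** 2 for x, x0 in zip(internal_state, setpoint))

def interoceptive_reward(state_before, state_after, setpoint):
    """Reward = drive reduction: the same food is rewarding when it moves the
    body toward the setpoint, and aversive once the body is already sated."""
    return drive(state_before, setpoint) - drive(state_after, setpoint)

setpoint = (1.0,)  # e.g. a target level of some metabolic variable
# Eating the same amount shifts the internal state by +0.4 in both cases:
r_hungry = interoceptive_reward((0.2,), (0.6,), setpoint)  # deficit shrinks
r_sated = interoceptive_reward((1.0,), (1.4,), setpoint)   # overshoots setpoint
```

The same action yields opposite-signed rewards depending on internal state, which is exactly the property that a fixed, stimulus-bound notion of primary reward cannot capture.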

Including “fear” and “curiosity” in RL applied to robot navigation

D. Hu, L. Mo, J. Wu and C. Huang, “Feariosity”-Guided Reinforcement Learning for Safe and Efficient Autonomous End-to-End Navigation, IEEE Robotics and Automation Letters, vol. 10, no. 8, pp. 7723-7730, Aug. 2025, 10.1109/LRA.2025.3577523.

End-to-end navigation strategies using reinforcement learning (RL) can improve the adaptability and autonomy of autonomous ground vehicles (AGVs) in complex environments. However, RL still faces challenges in data efficiency and safety. Neuroscientific and psychological research shows that during exploration, the brain balances fear and curiosity, a process critical for survival and adaptation in dangerous environments. Inspired by this insight, we propose the “Feariosity” model, which integrates fear and curiosity models to simulate the complex psychological dynamics organisms experience during exploration. Based on this model, we developed an innovative policy constraint method that evaluates potential hazards and applies necessary safety constraints while encouraging exploration of unknown areas. Additionally, we designed a new experience replay mechanism that quantifies the threat and unknown level of data, optimizing their usage probability. Extensive experiments in both simulation and real-world scenarios demonstrate that the proposed method significantly improves data efficiency and asymptotic performance during training. Furthermore, it achieves higher success rates, driving efficiency, and robustness in deployment. This also highlights the key role of mimicking biological neural and psychological mechanisms in improving the safety and efficiency of RL.
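The replay mechanism described in the abstract (scoring transitions by threat and by unfamiliarity, then biasing their sampling probability) can be sketched as a simple prioritized-replay weighting. The scoring functions and the mixing coefficient `alpha` are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def replay_weights(threat, novelty, alpha=0.5):
    """Sampling probabilities for an experience buffer that favors transitions
    that are either dangerous (fear: learn what to avoid) or unfamiliar
    (curiosity: learn what is unknown). alpha balances the two drives."""
    score = alpha * np.asarray(threat) + (1 - alpha) * np.asarray(novelty)
    return score / score.sum()

# Three transitions: one near-collision, one in unexplored territory, one
# routine. The routine transition is replayed least often.
w = replay_weights(threat=[0.9, 0.1, 0.0], novelty=[0.0, 0.8, 0.1])
```

In a full agent these weights would feed a prioritized sampler, so that scarce dangerous and novel experiences are revisited more often than abundant routine ones, which is one plausible source of the data-efficiency gains reported.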

It seems that the human predictive brain works by predicting at the abstract level, not at the sensory level

Kaitlyn M. Gabhart, Yihan (Sophy) Xiong, André M. Bastos, Predictive coding: a more cognitive process than we thought?, Trends in Cognitive Sciences, Volume 29, Issue 7, 2025, Pages 627-640, 10.1016/j.tics.2025.01.012.

In predictive coding (PC), higher-order brain areas generate predictions that are sent to lower-order sensory areas. Top-down predictions are compared with bottom-up sensory data, and mismatches evoke prediction errors. In PC, the prediction errors are encoded in layer 2/3 pyramidal neurons of sensory cortex that feed forward. The PC model has been tested with multiple recording modalities using the global–local oddball paradigm. Consistent with PC, neuroimaging studies reported prediction error responses in sensory and higher-order areas. However, recent studies of neuronal spiking suggest that genuine prediction errors emerge in prefrontal cortex (PFC). This implies that predictive processing is a more cognitive than sensory-based mechanism – an observation that challenges PC and better aligns with a framework we call predictive routing (PR).

Evidence of the dimensionality reduction and expansion performed by the brain

Casper Kerrén, Daniel Reznik, Christian F. Doeller, Benjamin J. Griffiths, Exploring the role of dimensionality transformation in episodic memory, Trends in Cognitive Sciences, Volume 29, Issue 7, 2025, Pages 614-626, 10.1016/j.tics.2025.01.007.

Episodic memory must accomplish two adversarial goals: encoding and storing a multitude of experiences without exceeding the finite neuronal structure of the brain, and recalling memories in vivid detail. Dimensionality reduction and expansion (‘dimensionality transformation’) enable the brain to meet these demands. Reduction compresses sensory input into simplified, storable codes, while expansion reconstructs vivid details. Although these processes are essential to memory, their neural mechanisms for episodic memory remain unclear. Drawing on recent insights from cognitive psychology, systems neuroscience, and neuroanatomy, we propose two accounts of how dimensionality transformation occurs in the brain: structurally (via corticohippocampal pathways) and functionally (through neural oscillations). By examining cross-species evidence, we highlight neural mechanisms that may support episodic memory and identify crucial questions for future research.

Hierarchical optimization based on learning from mistakes

L. Zhang, B. Garg, P. Sridhara, R. Hosseini and P. Xie, Learning From Mistakes: A Multilevel Optimization Framework, IEEE Transactions on Artificial Intelligence, vol. 6, no. 6, pp. 1651-1663, June 2025, 10.1109/TAI.2025.3534151.

Bi-level optimization methods in machine learning are popular and effective in subdomains such as neural architecture search and data reweighting. However, most of these methods do not account for variations in learning difficulty, which limits their performance in real-world applications. To address this problem, we propose a framework that imitates the learning process of humans. In human learning, learners usually focus more on the topics where mistakes have been made in the past to deepen their understanding and master the knowledge. Inspired by this effective human learning technique, we propose a multilevel optimization framework, learning from mistakes (LFM), for machine learning. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner relearns based on the mistakes made before; and 3) the learner validates its learning. We develop an efficient algorithm to solve the optimization problem. We further apply our method to differentiable neural architecture search and data reweighting. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and other related datasets demonstrate the effectiveness of our approach. The code of LFM is available at: https://github.com/importZL/LFM.
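The three LFM stages can be sketched on a deliberately tiny learner. In the real framework the stages are nested levels of a multilevel optimization solved jointly; here, purely for illustration, each stage is a single closed-form weighted least-squares fit, and the "mistake" weights are simply the stage-1 per-example errors.

```python
import numpy as np

def fit(x, y, w):
    """Weighted least-squares slope through the origin (the 'learner')."""
    return np.sum(w * x * y) / np.sum(w * x * x)

x = np.array([1.0, 2.0, 3.0, 10.0])
y = np.array([2.0, 4.0, 6.0, 30.0])   # last point deviates from the y = 2x trend

# Stage 1: the learner learns with uniform example weights.
theta1 = fit(x, y, np.ones_like(x))

# Stage 2: the learner relearns, focusing on past mistakes
# (weights proportional to the stage-1 per-example squared errors).
errors = (y - theta1 * x) ** 2
w2 = errors / errors.sum() + 1e-3     # small floor so no example is dropped
theta2 = fit(x, y, w2)

# Stage 3: the learner validates on held-out data (here, the first points).
val_loss = np.mean((y[:3] - theta2 * x[:3]) ** 2)
```

After relearning, the example the stage-1 model got most wrong is fit more closely, at some cost elsewhere; LFM's validation stage is what keeps this mistake-chasing from overfitting, by tuning the upper-level variables against held-out performance.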

The brain is organized to minimize energy consumption while maximizing computation

Sharna D. Jamadar, Anna Behler, Hamish Deery, Michael Breakspear, The metabolic costs of cognition, Trends in Cognitive Sciences, Volume 29, Issue 6, 2025, Pages 541-555, 10.1016/j.tics.2024.11.010.

Cognition and behavior are emergent properties of brain systems that seek to maximize complex and adaptive behaviors while minimizing energy utilization. Different species reconcile this trade-off in different ways, but in humans the outcome is biased towards complex behaviors and hence relatively high energy use. However, even in energy-intensive brains, numerous parsimonious processes operate to optimize energy use. We review how this balance manifests in both homeostatic processes and task-associated cognition. We also consider the perturbations and disruptions of metabolism in neurocognitive diseases.

A possible explanation of the origin of the concept of number and some arithmetical operations based on language concepts

Stanislas Dehaene, Mathias Sablé-Meyer, Lorenzo Ciccione, Origins of numbers: a shared language-of-thought for arithmetic and geometry?, Trends in Cognitive Sciences, Volume 29, Issue 6, 2025, Pages 526-540, 10.1016/j.tics.2025.03.001.

Concepts of exact number are often thought to originate from counting and the successor function, or from a refinement of the approximate number system (ANS). We argue here for a third origin: a shared language-of-thought (LoT) for geometry and arithmetic that involves primitives of repetition, concatenation, and recursive embedding. Applied to sets, those primitives engender concepts of exact integers through recursive applications of additions and multiplications. Links between geometry and arithmetic also explain the emergence of higher-level notions (squares, primes, etc.). Under our hypothesis, understanding a number means having one or several mental expressions for it, and their minimal description length (MDL) determines how easily they can be mentally manipulated. Several historical, developmental, linguistic, and brain imaging phenomena provide preliminary support for our proposal.
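The MDL idea above is concrete enough to compute in a toy setting: define the cost of an integer as the length of its shortest expression over the primitive "1", addition, and multiplication, and find costs by iterating to a fixpoint. The unit costs (one symbol per "1", one per operator) are illustrative assumptions; the authors' language-of-thought also includes repetition and recursive embedding, which this sketch omits.

```python
def mdl(n, max_n=100):
    """Shortest expression length for integer n using 1, +, and *.
    E.g. 6 = (1+1)*(1+1+1): five 1-symbols and four operators, cost 9."""
    cost = {1: 1}                  # the primitive "1" costs one symbol
    changed = True
    while changed:                 # relax until no cheaper expression appears
        changed = False
        for a in list(cost):
            for b in list(cost):
                for v in (a + b, a * b):
                    if v <= max_n:
                        c = cost[a] + cost[b] + 1   # +1 for the operator
                        if c < cost.get(v, float("inf")):
                            cost[v] = c
                            changed = True
    return cost[n]
```

Under this metric, numbers with multiplicative structure get short expressions while their neighbors may not, giving a crude analogue of the prediction that description length, not magnitude alone, governs how easily a number is mentally manipulated.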

Detecting and characterizing novelty in the model of the world within MCTS planning

Bryan Loyall, Avi Pfeffer, James Niehaus, Michael Harradon, Paola Rizzo, Alex Gee, Joe Campolongo, Tyler Mayer, John Steigerwald, Coltrane: A domain-independent system for characterizing and planning in novel situations, Artificial Intelligence, Volume 345, 2025, 10.1016/j.artint.2025.104336.

AI systems operating in open-world environments must be able to adapt to impactful changes in the world, immediately when they occur, and be able to do this across the many types of changes that can occur. We are seeking to create methods to extend traditional AI systems so that they can (1) immediately recognize changes in how the world works that are impactful to task accomplishment; (2) rapidly characterize the nature of the change using the limited observations that are available when the change is first detected; (3) adapt to the change as well as feasible to accomplish the system’s tasks given the available observations; and (4) continue to improve the characterization and adaptation as additional observations are available. In this paper, we describe Coltrane, a domain-independent system for characterizing and planning in novel situations that uses only natural domain descriptions to generate its novelty-handling behavior, without any domain-specific anticipation of the novelty. Coltrane’s characterization method is based on probabilistic program synthesis of perturbations to programs expressed in a traditional programming language describing domain transition models. Its planning method is based on incorporating novel domain models in an MCTS search algorithm and on automatically adapting the heuristics used. Both a formal external evaluation and our own demonstrations show that Coltrane is capable of accurately characterizing interesting forms of novelty and of adapting its behavior to restore its performance to pre-novelty levels and even beyond.
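The detect-then-characterize loop described above can be reduced to a minimal sketch: flag novelty when observed transitions become unlikely under the current model, then search a space of candidate model perturbations for the one that best explains recent observations. The deterministic toy domain, the accuracy threshold, and the enumerated perturbation list are all assumptions; Coltrane instead synthesizes probabilistic perturbations to full domain-description programs and replans with MCTS.

```python
def transition_accuracy(model, transitions):
    """Fraction of observed (s, a, s') triples the model predicts correctly."""
    return sum(1 for s, a, s2 in transitions if model(s, a) == s2) / len(transitions)

def characterize(base_model, perturbations, transitions, threshold=0.9):
    """Keep the base model while it explains observations; otherwise adopt
    the candidate perturbation that best matches what was actually seen."""
    if transition_accuracy(base_model, transitions) >= threshold:
        return base_model
    return max(perturbations, key=lambda m: transition_accuracy(m, transitions))

# Toy domain: moving right used to add 1; after the novelty it adds 2.
base = lambda s, a: s + 1
candidates = [lambda s, a: s + 2, lambda s, a: s - 1]
observed = [(0, "right", 2), (2, "right", 4), (4, "right", 6)]
best = characterize(base, candidates, observed)
```

The adopted model would then be handed to the planner, so that search immediately reflects the post-novelty dynamics instead of the stale pre-novelty ones, and it can keep being refined as further observations arrive.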