Category Archives: Cognitive Sciences

On the abstraction of actions

Bita Banihashemi, Giuseppe De Giacomo, Yves Lespérance, Abstracting situation calculus action theories, Artificial Intelligence, Volume 348, 2025, 10.1016/j.artint.2025.104407.

We develop a general framework for agent abstraction based on the situation calculus and the ConGolog agent programming language. We assume that we have a high-level specification and a low-level specification of the agent, both represented as basic action theories. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level fluent can be translated into a low-level formula. We define a notion of sound abstraction between such action theories in terms of the existence of a suitable bisimulation between their respective models. Sound abstractions have many useful properties that ensure that we can reason about the agent’s actions (e.g., executability, projection, and planning) at the abstract level, and refine and concretely execute them at the low level. We also characterize the notion of complete abstraction where all actions (including exogenous ones) that the high level thinks can happen can in fact occur at the low level. To facilitate verifying that one has a sound/complete abstraction relative to a mapping, we provide a set of necessary and sufficient conditions. Finally, we identify a set of basic action theory constraints that ensure that for any low-level action sequence, there is a unique high-level action sequence that it refines. This allows us to track/monitor what the low-level agent is doing and describe it in abstract terms (i.e., provide high-level explanations, for instance, to a client or manager).
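To make the sound-abstraction idea concrete, here is a minimal Python sketch (our toy, not the authors' situation calculus formalism): the low level is a labelled transition system, a refinement mapping sends each high-level action to a low-level action sequence, and soundness is checked by requiring every high-level transition to be matched by executing its refinement below. All state and action names are invented.

```python
# Toy illustration of a sound abstraction via a refinement mapping.
LOW = {  # low-level transitions: state -> {action: next_state}
    "at_dock":  {"lift": "holding"},
    "holding":  {"move": "at_shelf"},
    "at_shelf": {"drop": "done"},
}

HIGH = {  # high-level transitions over abstract states
    "ready": {"deliver": "finished"},
}

# refinement mapping: high-level action -> low-level program (action sequence)
REFINE = {"deliver": ["lift", "move", "drop"]}

# mapping from low-level to abstract states (stands in for the bisimulation relation)
ABSTRACT = {"at_dock": "ready", "done": "finished"}

def run_program(state, program):
    """Execute a low-level action sequence; return the final state, or None if stuck."""
    for a in program:
        if a not in LOW.get(state, {}):
            return None
        state = LOW[state][a]
    return state

def is_sound(high, refine, abstract):
    """Every abstract transition must be realized by its refinement at the low level."""
    for lo_state, hi_state in abstract.items():
        for hi_action, hi_next in high.get(hi_state, {}).items():
            lo_next = run_program(lo_state, refine[hi_action])
            if lo_next is None or abstract.get(lo_next) != hi_next:
                return False
    return True

print(is_sound(HIGH, REFINE, ABSTRACT))  # True on this toy model
```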

Learning representations in RL based on symmetries

Alexander Dean, Eduardo Alonso, Esther Mondragón, Algebras of actions in an agent’s representations of the world, Artificial Intelligence, Volume 348, 2025.

Learning efficient representations allows robust processing of data, data that can then be generalised across different tasks and domains, and it is thus paramount in various areas of Artificial Intelligence, including computer vision, natural language processing and reinforcement learning, among others. Within the context of reinforcement learning, we propose in this paper a mathematical framework to learn representations by extracting the algebra of the transformations of worlds from the perspective of an agent. As a starting point, we use our framework to reproduce representations from the symmetry-based disentangled representation learning (SBDRL) formalism proposed by [1] and prove that, although useful, they are restricted to transformations that respond to the properties of algebraic groups. We then generalise two important results of SBDRL (the equivariance condition and the disentangling definition) from only working with group-based symmetry representations to working with representations capturing the transformation properties of worlds for any algebra, using examples common in reinforcement learning and generated by an algorithm that computes their corresponding Cayley tables. Finally, we combine our generalised equivariance condition and our generalised disentangling definition to show that disentangled sub-algebras can each have their own individual equivariance conditions, which can be treated independently, using category theory. In so doing, our framework offers a rich formal tool to represent different types of symmetry transformations in reinforcement learning, extending the scope of previous proposals and providing Artificial Intelligence developers with a sound foundation to implement efficient applications.
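As a toy illustration of the algebraic viewpoint (not the paper's algorithm), the following sketch tabulates the Cayley table of world transformations in a small cyclic world, where composing translations yields the group Z4, and checks the group-based equivariance condition for a one-hot representation; the paper's contribution is precisely to generalise beyond such group cases. The world and the representation are invented for illustration.

```python
import numpy as np

N = 4  # a cyclic world with 4 positions; actions are translations

def compose(a, b):
    """Composition of two translations (shift amounts)."""
    return (a + b) % N

def act(a, s):
    """Apply translation a to world position s."""
    return (s + a) % N

actions = list(range(N))
table = [[compose(a, b) for b in actions] for a in actions]
print("Cayley table of the world's transformations (here the group Z4):")
for row in table:
    print(" ".join(map(str, row)))

# Group-based equivariance: representing then transforming equals
# transforming then representing, for the one-hot map h(s).
def one_hot(s):
    v = np.zeros(N)
    v[s] = 1.0
    return v

s, a = 2, 3
print(np.allclose(one_hot(act(a, s)), np.roll(one_hot(s), a)))  # True: equivariant
```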

Short letter with evidence of the use of models in mammalian decision making, relating it to reinforcement learning

Ivo Jacobs, Tomas Persson, Peter Gärdenfors, Model-based animal cognition slips through the sequence bottleneck, Trends in Cognitive Sciences, Volume 29, Issue 10, 2025, Pages 872-873, 10.1016/j.tics.2025.06.009.

In a recent article in TiCS, Lind and Jon-And argued that the sequence memory of animals constitutes a cognitive bottleneck, the ‘sequence bottleneck’, and that mental simulations require faithful representation of sequential information. They therefore concluded that animals cannot perform mental simulations, and that behavioral and neurobiological studies suggesting otherwise are best interpreted as results of associative learning. Through examples of predictive maps, cognitive control, and active sleep, we illustrate the overwhelming evidence that mammals and birds make model-based simulations, which suggests that the sequence bottleneck is more limited in scope than proposed by Lind and Jon-And […]

There is a response to this paper.

Model-based offline RL that addresses the problem of safely using the out-of-distribution data that learned models produce

X.-Y. Liu et al., DOMAIN: Mildly Conservative Model-Based Offline Reinforcement Learning, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 10, pp. 7142-7155, Oct. 2025, 10.1109/TSMC.2025.3578666.

Model-based reinforcement learning (RL), which learns an environment model from the offline dataset and generates more out-of-distribution model data, has become an effective approach to the problem of distribution shift in offline RL. Due to the gap between the learned and actual environment, conservatism should be incorporated into the algorithm to balance accurate offline data and imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences between model data, which leads to excessive conservatism. To address these issues, this article proposes a mildly conservative model-based offline RL algorithm (DOMAIN) that does not estimate model uncertainty, and designs an adaptive sampling distribution for model samples, which can adaptively adjust the penalty on model data. We theoretically demonstrate that the Q value learned by DOMAIN outside the data-covered region is a lower bound of the true Q value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it comes with a safe policy improvement guarantee. The results of extensive experiments show that DOMAIN outperforms prior RL algorithms, improving average performance by 1.8% on the D4RL benchmark.
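The following is a hedged sketch of the general mechanism the abstract describes, penalizing model-generated samples in the Bellman target with an adaptive per-sample weight, rather than DOMAIN's actual algorithm; the tabular setting, constants, and weights are all invented.

```python
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
Q = np.zeros((n_states, n_actions))

def td_update(s, a, r, s2, is_model_data, weight, lr=0.1, penalty=1.0):
    """One tabular Q-learning step; model data is penalized in proportion to
    an adaptive weight (e.g., how far the sample lies from the dataset)."""
    target = r - (penalty * weight if is_model_data else 0.0) + gamma * Q[s2].max()
    Q[s, a] += lr * (target - Q[s, a])

# offline samples: (s, a, r, s', from_model, adaptive_weight)
batch = [
    (0, 1, 1.0, 1, False, 0.0),   # real offline data: no penalty
    (1, 0, 1.0, 2, True, 0.2),    # model rollout near the data: small penalty
    (2, 1, 1.0, 3, True, 0.9),    # model rollout far from the data: large penalty
]
for sample in batch * 200:
    td_update(*sample)
print(Q.round(2))  # Q values on penalized model data stay conservative
```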

Related: 10.1109/TSMC.2025.3583392

Accelerating image recognition through NNs that do not vary their weights

Yanli Yang, A brain-inspired projection contrastive learning network for instantaneous learning, Engineering Applications of Artificial Intelligence, Volume 158, 2025, 10.1016/j.engappai.2025.111524.

The biological brain can learn quickly and efficiently, while the learning of artificial neural networks is astonishingly time- and energy-consuming. Biosensory information is quickly projected to the memory areas to be identified or labeled through biological neural networks. Inspired by the fast learning of biological brains, a projection contrastive learning model is designed for the instantaneous learning of samples. This model is composed of an information projection module for rapid information representation and a contrastive learning module for neural manifold disentanglement. An algorithm instance of projection contrastive learning is designed to process machinery vibration signals and is tested on several public datasets. The test on a mixed dataset containing 1426 training samples and 14,260 testing samples shows that the running time of our algorithm is approximately 37 s and that the average processing time is approximately 2.31 ms per sample, which is comparable to the processing speed of the human visual system. A prominent feature of this algorithm is that, in addition to its fast running speed, it can track the decision-making process to provide an explanation of its outputs.
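The abstract suggests a pipeline with a fixed projection stage followed by a similarity-based readout; the sketch below illustrates that general recipe (a fixed random nonlinear projection with no weight training, plus nearest-prototype classification), not the paper's actual network. Data, dimensions, and the readout rule are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_proj, n_classes = 64, 256, 3

W = rng.normal(size=(d_in, d_proj)) / np.sqrt(d_in)  # fixed weights, never updated

def project(x):
    return np.maximum(x @ W, 0.0)  # one nonlinear random projection

# "Instantaneous learning": each class is memorized as the mean of its projected
# training samples; no gradient descent is involved.
train_x = rng.normal(size=(30, d_in)) + np.repeat(np.eye(3, d_in) * 5, 10, axis=0)
train_y = np.repeat(np.arange(n_classes), 10)
prototypes = np.stack([project(train_x[train_y == c]).mean(0) for c in range(n_classes)])

def classify(x):
    z = project(x)
    # cosine similarity to each class prototype; the most similar class wins
    sims = (prototypes @ z) / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(z))
    return int(sims.argmax())

test = rng.normal(size=(d_in,)) + np.eye(3, d_in)[1] * 5  # a noisy class-1 sample
print(classify(test))  # expected: 1 on this toy data
```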

A review of cognitive costs of decision making

Christin Schulze, Ada Aka, Daniel M. Bartels, Stefan F. Bucher, Jake R. Embrey, Todd M. Gureckis, Gerald Häubl, Mark K. Ho, Ian Krajbich, Alexander K. Moore, Gabriele Oettingen, Joan D.K. Ongchoco, Ryan Oprea, Nicholas Reinholtz, Ben R. Newell, A timeline of cognitive costs in decision-making, Trends in Cognitive Sciences, Volume 29, Issue 9, 2025, Pages 827-839, 10.1016/j.tics.2025.04.004.

Recent research from economics, psychology, cognitive science, computer science, and marketing is increasingly interested in the idea that people face cognitive costs when making decisions. Reviewing and synthesizing this research, we develop a framework of cognitive costs that organizes concepts along a temporal dimension and maps out when costs occur in the decision-making process and how they impact decisions. Our unifying framework broadens the scope of research on cognitive costs to a wider timeline of cognitive processing. We identify implications and recommendations emerging from our framework for intervening on behavior to tackle some of the most pressing issues of our day, from improving health and saving decisions to mitigating the consequences of climate change.

Social learning is compatible with reward-based decision making

David Schultner, Lucas Molleman, Björn Lindström, Reward is enough for social learning, Trends in Cognitive Sciences, Volume 29, Issue 9, 2025, Pages 787-789, 10.1016/j.tics.2025.06.012.

Adaptive behaviour relies on selective social learning, yet the mechanisms underlying this capacity remain debated. A new account demonstrates that key strategies can emerge through reward-based learning of social features, explaining the widely observed flexibility of social learning and illuminating the cognitive basis of cultural evolution.
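A speculative toy in the spirit of the account (not the authors' model): cast selective social learning as ordinary reward learning over candidate social strategies, so the informative strategy comes to dominate purely because it earns more reward. The strategies, payoff probabilities, and learning rule are invented.

```python
import random

random.seed(0)
value = {"copy_majority": 0.0, "copy_successful": 0.0}
alpha, epsilon = 0.1, 0.1

def payoff(strategy):
    # In this toy world the successful demonstrator tracks the best option
    # (pays off 80% of the time); the majority is at chance (50%).
    p = 0.8 if strategy == "copy_successful" else 0.5
    return 1.0 if random.random() < p else 0.0

for t in range(2000):
    if random.random() < epsilon:
        s = random.choice(list(value))          # explore strategies
    else:
        s = max(value, key=value.get)           # exploit the learned values
    value[s] += alpha * (payoff(s) - value[s])  # plain reward-prediction update

print({k: round(v, 2) for k, v in value.items()})
# "copy_successful" ends up valued higher, so it is selected more often:
# a selective social learning strategy emerges from reward learning alone.
```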

Improving the generalization of robotic RL by taking inspiration from the human motor control system

P. Zhang, Z. Hua and J. Ding, A Central Motor System Inspired Pretraining Reinforcement Learning for Robotic Control, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 9, pp. 6285-6298, Sept. 2025, 10.1109/TSMC.2025.3577698.

Robots typically encounter diverse tasks, which poses a significant challenge for motion control. Pretraining reinforcement learning (PRL) enables robots to adapt quickly to various tasks by exploiting reusable skills. Existing PRL methods often rely on datasets and human expert knowledge, struggle to discover diverse and dynamic skills, and exhibit limited generalization and adaptability to different types of robots and downstream tasks. This article proposes a novel PRL algorithm based on central motor system mechanisms, which can discover diverse and dynamic skills without relying on data and expert knowledge, effectively enabling robots to tackle different types of downstream tasks. Inspired by the cerebellum’s role in balance control and skill storage within the central motor system, an intrinsic fused reward is introduced to explore dynamic skills and eliminate dependence on data and expert knowledge during pretraining. Drawing from the basal ganglia’s function in motor programming, a discrete skill encoding method is designed to increase the diversity of discovered skills, improving the performance of complex robots in challenging environments. Furthermore, incorporating the basal ganglia’s role in motor regulation, a skill activity function is proposed to generate skills at varying dynamic levels, thereby improving the adaptability of robots in multiple downstream tasks. The effectiveness of the proposed algorithm has been demonstrated through simulation experiments on four different morphological robots across multiple downstream tasks.
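The abstract describes its discrete skill encoding and intrinsic reward only at a high level; the sketch below shows a generic diversity-driven skill-discovery loop (in the spirit of methods such as DIAYN, not the paper's cerebellum- and basal-ganglia-inspired algorithm), where an intrinsic reward pushes each discrete skill toward states that reveal its identity. The toy world, discriminator, and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_skills = 10, 3
counts = np.ones((n_states, n_skills))  # discriminator as state-skill visit counts

def step(state, skill):
    """Toy dynamics: each skill drifts toward its own region of the state space."""
    target = skill * (n_states - 1) // (n_skills - 1)
    move = np.sign(target - state) if target != state else rng.integers(-1, 2)
    return int(np.clip(state + move, 0, n_states - 1))

def intrinsic_reward(state, skill):
    """log q(skill | state) - log p(skill): high when the state betrays the skill."""
    q = counts[state, skill] / counts[state].sum()
    return np.log(q) - np.log(1.0 / n_skills)

for episode in range(200):
    skill = rng.integers(n_skills)
    state = n_states // 2
    for t in range(10):
        state = step(state, skill)
        counts[state, skill] += 1.0         # train the discriminator on visited states
        r = intrinsic_reward(state, skill)  # would drive the policy update in full RL

# Skills are now distinguishable from the states they visit:
print(np.round(counts / counts.sum(axis=1, keepdims=True), 2))
```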

How biology derives primary rewards from basic physiological signals, and uses more immediate proxy rewards (which predict primary rewards) as shaping rewards

Lilian A. Weber, Debbie M. Yee, Dana M. Small, and Frederike H. Petzschner, The interoceptive origin of reinforcement learning, Trends in Cognitive Sciences, 2025, 10.1016/j.tics.2025.05.008.

Rewards play a crucial role in sculpting all motivated behavior. Traditionally, research on reinforcement learning has centered on how rewards guide learning and decision-making. Here, we examine the origins of rewards themselves. Specifically, we discuss that the critical signal sustaining reinforcement for food is generated internally and subliminally during the process of digestion. As such, a shift in our understanding of primary rewards from an immediate sensory gratification to a state-dependent evaluation of an action’s impact on vital physiological processes is called for. We integrate this perspective into a revised reinforcement learning framework that recognizes the subliminal nature of biological rewards and their dependency on internal states and goals.
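A minimal worked example of the state-dependent notion of primary reward (in the spirit of homeostatic reinforcement learning, not the authors' own formalism): reward is the reduction in distance of internal physiological variables from their setpoints, so the same meal is rewarding when hungry and not when sated. Setpoints and numbers are invented.

```python
import numpy as np

setpoint = np.array([80.0, 37.0])  # e.g., glucose level, body temperature

def drive(internal_state):
    """Deviation of the internal (interoceptive) state from the setpoint."""
    return np.linalg.norm(internal_state - setpoint)

def primary_reward(before, after):
    """Same food, different reward: eating is rewarding only insofar as it
    moves the internal state toward the setpoint."""
    return drive(before) - drive(after)

hungry = np.array([60.0, 37.0])
sated = np.array([80.0, 37.0])
meal = np.array([15.0, 0.0])  # effect of digesting the same meal

print(primary_reward(hungry, hungry + meal))  # positive: reduces the deficit
print(primary_reward(sated, sated + meal))    # negative: overshoots the setpoint
```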

Including “fear” and “curiosity” in RL applied to robot navigation

D. Hu, L. Mo, J. Wu and C. Huang, “Feariosity”-Guided Reinforcement Learning for Safe and Efficient Autonomous End-to-End Navigation, IEEE Robotics and Automation Letters, vol. 10, no. 8, pp. 7723-7730, Aug. 2025, 10.1109/LRA.2025.3577523.

End-to-end navigation strategies using reinforcement learning (RL) can improve the adaptability and autonomy of autonomous ground vehicles (AGVs) in complex environments. However, RL still faces challenges in data efficiency and safety. Neuroscientific and psychological research shows that during exploration the brain balances fear and curiosity, a critical process for survival and adaptation in dangerous environments. Inspired by this insight, we propose the “Feariosity” model, which integrates fear and curiosity models to simulate the complex psychological dynamics organisms experience during exploration. Based on this model, we developed an innovative policy constraint method that evaluates potential hazards and applies necessary safety constraints while encouraging exploration of unknown areas. Additionally, we designed a new experience replay mechanism that quantifies the threat and novelty level of data, optimizing their usage probability. Extensive experiments in both simulation and real-world scenarios demonstrate that the proposed method significantly improves data efficiency and asymptotic performance during training. Furthermore, it achieves higher success rates, driving efficiency, and robustness in deployment. This also highlights the key role of mimicking biological neural and psychological mechanisms in improving the safety and efficiency of RL.
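A loose sketch of one ingredient the abstract mentions, an experience replay whose sampling probabilities grow with how threatening ("fear") and how novel ("curiosity") each transition is; the scoring and mixing rule here are invented, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeariosityReplay:
    def __init__(self, beta_fear=0.5, beta_curiosity=0.5):
        self.buffer, self.threat, self.novelty = [], [], []
        self.beta_fear, self.beta_curiosity = beta_fear, beta_curiosity

    def add(self, transition, threat, novelty):
        self.buffer.append(transition)
        self.threat.append(threat)    # e.g., proximity to obstacles
        self.novelty.append(novelty)  # e.g., prediction error of a world model

    def sample(self, k):
        # priority mixes danger and unfamiliarity, so both kinds of experience
        # are replayed more often than routine transitions
        p = (self.beta_fear * np.asarray(self.threat)
             + self.beta_curiosity * np.asarray(self.novelty)) + 1e-6
        p /= p.sum()
        idx = rng.choice(len(self.buffer), size=k, p=p)
        return [self.buffer[i] for i in idx]

replay = FeariosityReplay()
replay.add(("cruise",), threat=0.1, novelty=0.1)     # routine driving
replay.add(("near_miss",), threat=0.9, novelty=0.3)  # dangerous
replay.add(("new_area",), threat=0.2, novelty=0.9)   # unfamiliar
print(replay.sample(5))  # near-misses and new areas dominate the sample
```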