Tag Archives: Reward Generation

Using LLMs in multi-task learning to generate rewards

Z. Lin, Y. Chen and Z. Liu, AutoSkill: Hierarchical Open-Ended Skill Acquisition for Long-Horizon Manipulation Tasks via Language-Modulated Rewards, IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 5, pp. 1141-1152, Oct. 2025, 10.1109/TCDS.2025.3551298.

A desirable property of generalist robots is the ability to both bootstrap diverse skills and solve new long-horizon tasks in open-ended environments without human intervention. Recent advances have shown that large language models (LLMs) encapsulate vast semantic knowledge about the world, enabling long-horizon robot planning. However, they are typically restricted to reasoning over high-level instructions and lack world grounding, which makes it difficult for them to bootstrap and acquire new skills in unstructured environments in a coordinated manner. To this end, we propose AutoSkill, a hierarchical system that empowers a physical robot to learn to cope with new long-horizon tasks automatically by growing an open-ended skill library without hand-crafted rewards. AutoSkill consists of two key components: 1) in-context skill-chain generation and new-skill bootstrapping guided by LLMs, which inform the robot of discrete and interpretable skill instructions for skill retrieval and augmentation within the skill library; and 2) a zero-shot language-modulated reward scheme that, in conjunction with a meta prompter, facilitates online acquisition of new skills via expert-free supervision aligned with the proposed skill directives. Extensive experiments conducted in both simulated and real environments demonstrate AutoSkill’s superiority over other LLM-based planners as well as hierarchical methods in expediting online learning for novel manipulation tasks.
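The core idea of a zero-shot language-modulated reward can be illustrated with a minimal sketch: score the agent's current state by its similarity to the LLM-proposed skill directive. The toy bag-of-words embedding below is an assumption for self-containedness; the paper would use a learned encoder, and the function names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding. A real system would use a pretrained
    # language or vision-language encoder (hypothetical stand-in here).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def language_modulated_reward(skill_instruction, state_caption):
    # Zero-shot reward: agreement between the LLM-proposed skill
    # directive and a textual description of the current state,
    # requiring no hand-crafted reward or expert demonstrations.
    return cosine(embed(skill_instruction), embed(state_caption))

# A state matching the directive scores higher than an unrelated one.
r_match = language_modulated_reward("pick up the red block",
                                    "gripper holding red block")
r_miss = language_modulated_reward("pick up the red block",
                                   "arm at rest near drawer")
```

The appeal of this scheme, as the abstract notes, is that the same scoring function works for any new skill instruction the LLM emits, so the skill library can grow without per-skill reward engineering.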

Improving the generalization of robotic RL by drawing inspiration from the human motor control system

P. Zhang, Z. Hua and J. Ding, A Central Motor System Inspired Pretraining Reinforcement Learning for Robotic Control, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 9, pp. 6285-6298, Sept. 2025, 10.1109/TSMC.2025.3577698.

Robots typically encounter diverse tasks, which poses a significant challenge for motion control. Pretraining reinforcement learning (PRL) enables robots to adapt quickly to various tasks by exploiting reusable skills. Existing PRL methods often rely on datasets and human expert knowledge, struggle to discover diverse and dynamic skills, and exhibit limited generalization and adaptability to different types of robots and downstream tasks. This article proposes a novel PRL algorithm based on mechanisms of the central motor system, which can discover diverse and dynamic skills without relying on data or expert knowledge, effectively enabling robots to tackle different types of downstream tasks. Inspired by the cerebellum’s role in balance control and skill storage within the central motor system, an intrinsic fused reward is introduced to explore dynamic skills and eliminate dependence on data and expert knowledge during pretraining. Drawing on the basal ganglia’s function in motor programming, a discrete skill-encoding method is designed to increase the diversity of discovered skills, improving the performance of complex robots in challenging environments. Furthermore, incorporating the basal ganglia’s role in motor regulation, a skill activity function is proposed to generate skills at varying dynamic levels, thereby improving the adaptability of robots across multiple downstream tasks. The effectiveness of the proposed algorithm is demonstrated through simulation experiments on four robots of different morphologies across multiple downstream tasks.
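The "intrinsic fused reward" idea can be sketched as the fusion of a skill-discriminability term (a DIAYN-style log q(z|s)) with a dynamics term that favors large state change, so discovered skills are both distinguishable and dynamic. The discriminator, weights, and function names below are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def intrinsic_fused_reward(state, next_state, skill_logits, skill_id,
                           w_disc=1.0, w_dyn=0.1):
    # Skill-discriminability term: log-probability that a (hypothetical)
    # discriminator assigns the active skill given the state, in the
    # spirit of DIAYN-style skill discovery.
    log_q = skill_logits[skill_id] - np.log(np.exp(skill_logits).sum())
    # Dynamics term: magnitude of state change, rewarding dynamic rather
    # than static skills (no dataset or expert supervision needed).
    dyn = np.linalg.norm(next_state - state)
    # Fused intrinsic reward; the weights are illustrative.
    return w_disc * log_q + w_dyn * dyn

s = np.zeros(3)
s_moved = np.ones(3)
logits = np.array([2.0, 0.0, 0.0])  # discriminator favors skill 0
r_dynamic = intrinsic_fused_reward(s, s_moved, logits, skill_id=0)
r_static = intrinsic_fused_reward(s, s, logits, skill_id=0)
```

Under this sketch, a skill that moves the robot earns more intrinsic reward than one that stands still, and a skill the discriminator can identify earns more than one it cannot, matching the paper's stated goals of diversity and dynamism.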

How biology derives primary rewards from basic physiological signals and uses more immediate proxy rewards (which predict primary rewards) as shaping signals

Lilian A. Weber, Debbie M. Yee, Dana M. Small, and Frederike H. Petzschner, The interoceptive origin of reinforcement learning, Trends in Cognitive Sciences, 2025, 10.1016/j.tics.2025.05.008.

Rewards play a crucial role in sculpting all motivated behavior. Traditionally, research on reinforcement learning has centered on how rewards guide learning and decision-making. Here, we examine the origins of rewards themselves. Specifically, we discuss how the critical signal sustaining reinforcement for food is generated internally and subliminally during the process of digestion. As such, a shift in our understanding of primary rewards from an immediate sensory gratification to a state-dependent evaluation of an action’s impact on vital physiological processes is called for. We integrate this perspective into a revised reinforcement learning framework that recognizes the subliminal nature of biological rewards and their dependency on internal states and goals.