Intrinsically motivated reinforcement learning for learning a model of the world

Todd Hester, Peter Stone, Intrinsically motivated model learning for developing curious robots, Artificial Intelligence, Volume 247, June 2017, Pages 170-186, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.05.002.

Reinforcement Learning (RL) agents are typically deployed to learn a specific, concrete task based on a pre-defined reward function. However, in some cases an agent may be able to gain experience in the domain prior to being given a task. In such cases, intrinsic motivation can be used to enable the agent to learn a useful model of the environment that is likely to help it learn its eventual tasks more efficiently. This paradigm fits robots particularly well, as they need to learn about their own dynamics and affordances, which can be applied to many different tasks. This article presents the TEXPLORE with Variance-And-Novelty-Intrinsic-Rewards algorithm (TEXPLORE-VANIR), an intrinsically motivated model-based RL algorithm. The algorithm learns models of the transition dynamics of a domain using random forests. It calculates two different intrinsic motivations from this model: one to explore where the model is uncertain, and one to acquire novel experiences that the model has not yet been trained on. This article presents experiments demonstrating that the combination of these two intrinsic rewards enables the algorithm to learn an accurate model of a domain with no external rewards and that the learned model can be used afterward to perform tasks in the domain. While learning the model, the agent explores the domain in a developing and curious way, progressively learning more complex skills. In addition, the experiments show that combining the agent’s intrinsic rewards with external task rewards enables the agent to learn faster than using external rewards alone. We also present results demonstrating the applicability of this approach to learning on robots.
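To make the two intrinsic rewards concrete, here is a minimal Python sketch, assuming the random-forest transition model exposes per-tree next-state predictions and that novelty is measured as L1 distance to the nearest state-action the model has already been trained on; the function names and the weighting coefficients v_coef and n_coef are illustrative assumptions, not taken from the paper's code:

    import numpy as np

    def variance_reward(tree_predictions):
        # Uncertainty reward: disagreement (variance) among the random
        # forest's trees when predicting the next state.
        return float(np.mean(np.var(tree_predictions, axis=0)))

    def novelty_reward(state_action, visited, rho=1.0):
        # Novelty reward: L1 distance from this state-action vector to
        # the nearest one the model has already been trained on.
        if len(visited) == 0:
            return rho
        return rho * min(np.sum(np.abs(state_action - v)) for v in visited)

    def intrinsic_reward(tree_predictions, state_action, visited,
                         v_coef=1.0, n_coef=1.0):
        # Combined intrinsic reward used in place of (or added to) the
        # external reward; v_coef and n_coef trade off the two terms.
        return (v_coef * variance_reward(tree_predictions)
                + n_coef * novelty_reward(state_action, visited))

An agent maximising this signal is pushed both toward state-actions where the trees disagree (the model is uncertain) and toward experiences unlike anything in its training set, which is what produces the progressively more complex, curiosity-driven exploration described above.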

Novelty detection as a way of enhancing a robot's learning capabilities, plus a brief but interesting survey of motivational theories and how they differ from attention

Y. Gatsoulis, T.M. McGinnity, Intrinsically motivated learning systems based on biologically-inspired novelty detection, Robotics and Autonomous Systems, Volume 68, June 2015, Pages 12-20, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.02.006.

Intrinsic motivations play an important role in human learning, particularly in the early stages of childhood development, and ideas from this research field have influenced robotic learning and adaptability. In this paper we investigate one specific type of intrinsic motivation, that of novelty detection, and we discuss the reasons that make it a powerful facility for continuous learning. We formulate and present an original biologically inspired novelty detection architecture and implement it on a robotic system engaged in a perceptual classification task. The results of the real-world robot experiments we conducted show how this architecture conforms to behavioural observations and demonstrate its effectiveness in focusing the system’s attention on areas with high potential for effective learning.
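The biological inspiration here is habituation: repeated stimuli evoke progressively weaker responses, so the response strength itself can serve as a novelty signal for directing attention. A minimal sketch along those lines, using Stanley's habituation model (a common choice in this literature, not the paper's exact architecture; the class and parameter names are hypothetical):

    class HabituatingNoveltyDetector:
        # Tracks a synaptic efficacy y per stimulus, habituating under
        # Stanley's model: tau * dy/dt = alpha * (y0 - y) - S(t),
        # discretised here with a unit time step.
        def __init__(self, tau=5.0, alpha=1.05, y0=1.0):
            self.tau, self.alpha, self.y0 = tau, alpha, y0
            self.efficacy = {}

        def observe(self, stimulus_key, strength=1.0):
            y = self.efficacy.get(stimulus_key, self.y0)
            y += (self.alpha * (self.y0 - y) - strength) / self.tau
            y = min(max(y, 0.0), self.y0)
            self.efficacy[stimulus_key] = y
            return y  # high -> novel; near zero -> familiar

    detector = HabituatingNoveltyDetector()
    for _ in range(10):
        print(detector.observe("red ball"))   # response decays with repetition
    print(detector.observe("blue cube"))      # unseen stimulus evokes the strongest response

Fed into a perceptual classification loop, such a signal lets the robot spend its learning effort on inputs it has not yet mastered, which matches the attention-focusing behaviour the experiments report.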