Tag Archives: Skill Learning

Learning basic motion skills by modeling them as parameterized modules (learned by demonstration and babbling), plus a nice state-of-the-art review of the development of motion skills

René Felix Reinhart, Autonomous exploration of motor skills by skill babbling, Auton Robot (2017) 41:1521–1537, DOI: 10.1007/s10514-016-9613-x.

Autonomous exploration of motor skills is a key capability of learning robotic systems. Learning motor skills can be formulated as an inverse modeling problem, which aims at finding an inverse model that maps desired outcomes in some task space, e.g., via points of a motion, to appropriate actions, e.g., motion control policy parameters. In this paper, autonomous exploration of motor skills is achieved by incrementally learning inverse models starting from an initial demonstration. The algorithm is referred to as skill babbling, features sample-efficient learning, and scales to high-dimensional action spaces. Skill babbling extends ideas of goal-directed exploration, which organizes exploration in the space of goals. The proposed approach provides a modular framework for autonomous skill exploration by separating the learning of the inverse model from the exploration mechanism and a model of achievable targets, i.e., the workspace. The effectiveness of skill babbling is demonstrated for a range of motor tasks comprising the autonomous bootstrapping of inverse kinematics and parameterized motion primitives.
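The modular structure described in the abstract — an inverse model seeded by a demonstration, an exploration mechanism that perturbs its suggestions, and a workspace model of outcomes already shown to be achievable — can be sketched as a short loop. The toy forward model, the linear least-squares inverse model, and the Gaussian workspace model below are assumptions chosen to keep the example self-contained; this is a minimal illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "robot": an unknown forward model mapping a 2-D action (policy
# parameters) to a 2-D outcome in task space, with a little noise.
def execute(action):
    return np.tanh(action) + 0.05 * rng.normal(size=2)

# Inverse model: a linear map (with bias) from desired outcome to action,
# refit on all (outcome, action) pairs collected so far.
class LinearInverseModel:
    def __init__(self):
        self.W = None

    def fit(self, outcomes, actions):
        X = np.hstack([outcomes, np.ones((len(outcomes), 1))])
        self.W, *_ = np.linalg.lstsq(X, actions, rcond=None)

    def predict(self, outcome):
        return np.append(outcome, 1.0) @ self.W

# Seed with a single demonstration.
demo_action = np.array([0.5, -0.2])
outcomes, actions = [execute(demo_action)], [demo_action]

inv = LinearInverseModel()
inv.fit(np.array(outcomes), np.array(actions))

for episode in range(200):
    # Workspace model: a Gaussian around outcomes reached so far; sample a
    # target near outcomes known to be achievable (goal-directed exploration).
    mu = np.mean(outcomes, axis=0)
    sigma = np.std(outcomes, axis=0) + 0.1      # widened to keep exploring
    target = mu + sigma * rng.normal(size=2)

    # "Babbling": perturb the inverse model's suggested action and execute it.
    action = inv.predict(target) + 0.1 * rng.normal(size=2)
    outcome = execute(action)

    # Incrementally extend the dataset and refit the inverse model.
    outcomes.append(outcome)
    actions.append(action)
    inv.fit(np.array(outcomes), np.array(actions))

# Query the learned inverse model for an action that reaches a new goal.
goal = np.array([0.3, 0.4])
print("goal:", goal, "reached:", execute(inv.predict(goal)))
```

Swapping the linear model for a richer regressor, or the Gaussian workspace model for the incremental workspace estimate the paper proposes, does not change the structure of the loop, which is exactly the modularity the abstract emphasizes.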

Developmental approach for a robot manipulator that learns in several bootstrapped stages, strongly inspired by infant development

Ugur, E.; Nagai, Y.; Sahin, E.; Oztop, E., Staged Development of Robot Skills: Behavior Formation, Affordance Learning and Imitation with Motionese, IEEE Transactions on Autonomous Mental Development, vol. 7, no. 2, pp. 119-139, June 2015, DOI: 10.1109/TAMD.2015.2426192.

Inspired by infant development, we propose a three staged developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability, and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects, and learns the caused effects; effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly more complex cognitive capabilities. The proposed framework includes a number of common features with infant sensorimotor development. Furthermore, the findings obtained from the self-exploration and motionese guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.
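The bootstrapping idea — each stage building only on structures produced by the previous one — can be made concrete with a deliberately small sketch. Nothing below comes from the paper: the one-dimensional push world, the quantile-based "primitive discovery", and the per-primitive linear effect predictors are invented stand-ins that keep the example short and runnable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy world: pushing an object of a given mass with some force moves it by
# (force - friction) / mass when the force exceeds friction. Illustrative only.
def push(force, mass, friction=0.3):
    return max(0.0, force - friction) / mass

# Stage 1: discover behavior primitives by exploring the movement parameter
# space (here a single push-force parameter) and keeping a few representative
# values -- a crude stand-in for clustering the explored parameters.
forces = rng.uniform(0.0, 2.0, size=50)
primitives = np.quantile(forces, [0.1, 0.5, 0.9])   # weak / medium / strong push

# Stage 2: affordance learning -- exercise each primitive on different objects,
# record the caused effect, and fit a simple per-primitive effect predictor
# (displacement as a linear function of object mass).
object_masses = rng.uniform(0.5, 2.0, size=20)
predictors = {}
for p in primitives:
    effects = np.array([push(p, m) for m in object_masses])
    A = np.vstack([object_masses, np.ones_like(object_masses)]).T
    predictors[p], *_ = np.linalg.lstsq(A, effects, rcond=None)  # [slope, bias]

def predict_effect(primitive, mass):
    slope, bias = predictors[primitive]
    return slope * mass + bias

# Stage 3: imitation -- a "tutor" demonstrates a displacement on a known object;
# the robot selects the primitive whose predicted effect best reproduces it.
tutor_mass, tutor_displacement = 1.2, 0.9
best = min(primitives,
           key=lambda p: abs(predict_effect(p, tutor_mass) - tutor_displacement))
print("selected push force:", round(float(best), 2),
      "reproduced displacement:", round(push(float(best), tutor_mass), 2))
```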

Example of the application of Bayesian network learning and inference to robotics, with a brief but useful related-work section on learning by imitation

Dan Song; Ek, C.H.; Huebner, K.; Kragic, D., Task-Based Robot Grasp Planning Using Probabilistic Inference, IEEE Transactions on Robotics, vol. 31, no. 3, pp. 546-561, June 2015, DOI: 10.1109/TRO.2015.2409912.

Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
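The use of a generative model over discrete, task-relevant variables, queried in both directions (task prediction from observed features, and task-oriented grasp selection), can be illustrated with a toy discrete Bayesian network and inference by enumeration. The variables, states, and conditional probability tables below are invented for illustration and are far simpler than the GMM-discretized networks the paper learns from its grasp database.

```python
from itertools import product

# Tiny discrete Bayesian network over task-relevant variables.
# Structure (assumed for the example): Task -> ObjectSize, (Task, ObjectSize) -> GraspType
tasks = ["pour", "handover"]
sizes = ["small", "large"]
grasps = ["top", "side"]

P_task = {"pour": 0.5, "handover": 0.5}
P_size_given_task = {                      # P(size | task)
    "pour":     {"small": 0.3, "large": 0.7},
    "handover": {"small": 0.6, "large": 0.4},
}
P_grasp_given_task_size = {                # P(grasp | task, size)
    ("pour", "small"):     {"top": 0.2, "side": 0.8},
    ("pour", "large"):     {"top": 0.1, "side": 0.9},
    ("handover", "small"): {"top": 0.7, "side": 0.3},
    ("handover", "large"): {"top": 0.5, "side": 0.5},
}

def joint(task, size, grasp):
    # Factorized joint distribution defined by the network structure above.
    return (P_task[task]
            * P_size_given_task[task][size]
            * P_grasp_given_task_size[(task, size)][grasp])

def posterior(query_var, evidence):
    """Inference by enumeration: P(query_var | evidence)."""
    dist = {}
    for t, s, g in product(tasks, sizes, grasps):
        assignment = {"task": t, "size": s, "grasp": g}
        if all(assignment[k] == v for k, v in evidence.items()):
            key = assignment[query_var]
            dist[key] = dist.get(key, 0.0) + joint(t, s, g)
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

# Task prediction from observed sensory features (object size and grasp used):
print(posterior("task", {"size": "small", "grasp": "top"}))
# Task-oriented grasp selection (distribution over grasps given the task and object):
print(posterior("grasp", {"task": "pour", "size": "large"}))
```

Enumeration is exponential in the number of variables, which is why the paper relies on learned discrete Bayesian networks with standard inference machinery rather than brute force, but the two directions of querying are the same.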

Neurological evidence for the hierarchical organization of motor skill learning

Jörn Diedrichsen, Katja Kornysheva, Motor skill learning between selection and execution, Trends in Cognitive Sciences, Volume 19, Issue 4, April 2015, Pages 227-233, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.02.003.

Learning motor skills evolves from the effortful selection of single movement elements to their combined fast and accurate production. We review recent trends in the study of skill learning which suggest a hierarchical organization of the representations that underlie such expert performance, with premotor areas encoding short sequential movement elements (chunks) or particular component features (timing/spatial organization). This hierarchical representation allows the system to utilize elements of well-learned skills in a flexible manner. One neural correlate of skill development is the emergence of specialized neural circuits that can produce the required elements in a stable and invariant fashion. We discuss the challenges in detecting these changes with fMRI.