A nice survey on active learning, particularly for robotics

Annalisa T. Taylor, Thomas A. Berrueta, Todd D. Murphey, Active learning in robotics: A review of control principles, Mechatronics, Volume 77, 2021. DOI: 10.1016/j.mechatronics.2021.102576.

Active learning is a decision-making process. In both abstract and physical settings, active learning demands
both analysis and action. This is a review of active learning in robotics, focusing on methods amenable to
the demands of embodied learning systems. Robots must be able to learn efficiently and flexibly through
continuous online deployment. This poses a distinct set of control-oriented challenges: one must choose
suitable measures as objectives, synthesize real-time control, and produce analyses that guarantee performance
and safety with limited knowledge of the environment or robot itself. In this work, we survey the fundamental
components of robotic active learning systems. We discuss classes of learning tasks that robots typically
encounter, measures with which they gauge the information content of observations, and algorithms for
generating action plans. Moreover, we provide a variety of examples, from environmental mapping to
nonparametric shape estimation, that highlight the qualitative differences between learning tasks, information
measures, and control techniques. We conclude with a discussion of control-oriented open challenges, including
safety-constrained learning and distributed learning.

NOTES:

  • RL can be considered one of the areas within computational learning theory, which usually ignores the physical embodiment of the learning agent. However, RL is only really active learning when it explores through decision-making, not when it explores randomly, with no particular aim of enhancing the learning itself through its actions.
  • Caveats of RL (particularly deep RL): large data requirements, lack of generalizability between tasks, and the inability to learn incrementally or to guarantee safety.
  • Bayesian filters can be seen as learning systems: they learn parameters of objects (pose) or environments (maps) with the aid of models. They become active learners when the robot's actions are chosen to improve that parameter learning (see the filter sketch after these notes).
  • Gaussian processes can be effective for learning those models when no parametric form or first-principles knowledge is available, for instance when the robot has to learn the model by observing only a small, local part of the environment (see the GP sketch below).
  • Entropy/information, Fisher information, and ergodicity are the main ways of measuring information gain in active learning.
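
As a toy illustration of the Bayesian-filter point above (not code from the paper), here is a minimal discrete Bayes filter that estimates a robot's 1-D position from noisy door-sensor readings; the world map, sensor model, and action sequence are made-up assumptions. As written the filter is passive, since the actions are fixed in advance; it would become an active learner if each action were instead chosen to shrink the posterior entropy.

```python
# Minimal sketch, not from the paper: a discrete Bayes filter estimating a robot's
# 1-D position on a grid from noisy door-sensor readings. World, motion and sensor
# models are illustrative assumptions.
import numpy as np

N_CELLS = 10
doors = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # hypothetical map: 1 = door at cell
belief = np.full(N_CELLS, 1.0 / N_CELLS)           # uniform prior over positions

def predict(belief, u, p_correct=0.8):
    """Motion update: shift the belief by u cells, allowing under/overshoot."""
    b = np.zeros_like(belief)
    for shift, p in [(u, p_correct), (u - 1, (1 - p_correct) / 2), (u + 1, (1 - p_correct) / 2)]:
        b += p * np.roll(belief, shift)
    return b

def update(belief, z, p_hit=0.9, p_false=0.2):
    """Measurement update: z = 1 if the door sensor fired."""
    likelihood = np.where(doors == 1,
                          p_hit if z else 1 - p_hit,
                          p_false if z else 1 - p_false)
    posterior = likelihood * belief
    return posterior / posterior.sum()

def entropy(b):
    """Posterior entropy: the quantity an active learner would try to reduce."""
    return -np.sum(b * np.log(b + 1e-12))

# Passive use: a fixed action sequence, measurements simulated from a true pose.
true_pos = 3
for u in [1, 1, 1]:
    true_pos = (true_pos + u) % N_CELLS
    belief = predict(belief, u)
    belief = update(belief, z=doors[true_pos])
    print(f"moved {u}, posterior entropy = {entropy(belief):.3f}")
```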

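And a minimal sketch of the Gaussian-process and entropy points together (again illustrative assumptions, not the paper's code, using scikit-learn's GP regressor): the robot has sampled only a small, local portion of an unknown 1-D field, the GP fills in the rest without any parametric model, and the next measurement location is chosen where the predictive entropy is largest, which for a Gaussian posterior is simply the point of largest predictive variance.

```python
# Minimal sketch, not from the paper: nonparametric model learning with a Gaussian
# process plus an entropy/variance acquisition rule for the next measurement.
# The "terrain" field, kernel and noise level are hypothetical choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
terrain = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)   # unknown field to learn

# Sparse, local observations: the robot has only visited x in [0, 0.3].
X_obs = rng.uniform(0.0, 0.3, size=(5, 1))
y_obs = terrain(X_obs).ravel() + 0.05 * rng.standard_normal(5)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.05 ** 2)
gp.fit(X_obs, y_obs)

# Candidate measurement locations over the whole workspace.
X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)

# For a Gaussian posterior the entropy is monotone in the variance, so the
# most informative next sample is the maximum-variance candidate.
x_next = X_cand[np.argmax(std)]
print(f"next measurement location: x = {x_next[0]:.2f}")
```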