Author Archives: Juan-Antonio Fernández-Madrigal

Survey on methods for learning from demonstration in robotics

M. Tavassoli, S. Katyara, M. Pozzi, N. Deshpande, D. G. Caldwell and D. Prattichizzo, Learning Skills From Demonstrations: A Trend From Motion Primitives to Experience Abstraction, IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 1, pp. 57-74, Feb. 2024 DOI: 10.1109/TCDS.2023.3296166.

The uses of robots are changing from static environments in factories to encompass novel concepts such as human–robot collaboration in unstructured settings. Preprogramming all the functionalities for robots becomes impractical, and hence, robots need to learn how to react to new events autonomously, just like humans. However, humans, unlike machines, are naturally skilled in responding to unexpected circumstances based on either experiences or observations. Hence, embedding such anthropoid behaviors into robots entails the development of neuro-cognitive models that emulate motor skills under a robot learning paradigm. Effective encoding of these skills is bound to the proper choice of tools and techniques. This survey paper studies different motion and behavior learning methods ranging from movement primitives (MPs) to experience abstraction (EA), applied to different robotic tasks. These methods are scrutinized and then experimentally benchmarked by reconstructing a standard pick-n-place task. Apart from providing a standard guideline for the selection of strategies and algorithms, this article aims to draw a perspective on their possible extensions and improvements.
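
To make the movement-primitive end of that spectrum concrete, here is a minimal sketch of a one-dimensional discrete dynamic movement primitive (DMP) fitted to a demonstration, one of the classic MP formulations the survey covers. It follows the common textbook formulation with assumed gain values and Gaussian basis functions; it is not code from the paper or its pick-and-place benchmark.

```python
# Minimal 1-D discrete Dynamic Movement Primitive (DMP) fitted to a
# demonstration. Standard textbook formulation with assumed gains; this is
# NOT code from the surveyed paper or its benchmark.
import numpy as np

def learn_dmp_forcing(demo, dt, alpha=25.0, beta=6.25, n_basis=20):
    """Fit radial-basis weights so a DMP reproduces a 1-D demonstration."""
    y = np.asarray(demo, dtype=float)
    yd = np.gradient(y, dt)                       # demonstrated velocity
    ydd = np.gradient(yd, dt)                     # demonstrated acceleration
    y0, g = y[0], y[-1]                           # start and goal
    tau = len(y) * dt                             # movement duration
    x = np.exp(-2.0 * np.linspace(0.0, 1.0, len(y)))   # canonical phase, 1 -> ~0
    # Forcing term required to reproduce the demonstration exactly:
    # tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f  =>  solve for f.
    f_target = tau**2 * ydd - alpha * (beta * (g - y) - tau * yd)
    centers = np.exp(-2.0 * np.linspace(0.0, 1.0, n_basis))
    widths = n_basis / centers
    psi = np.exp(-widths * (x[:, None] - centers[None, :])**2)   # Gaussian bases
    s = x * (g - y0)                              # phase-scaled amplitude
    # Locally weighted regression: one weight per basis function.
    w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s**2)[:, None]).sum(0) + 1e-10)
    return w

w = learn_dmp_forcing(np.sin(np.linspace(0.0, np.pi / 2, 200)), dt=0.005)
print("learned basis weights (first 5):", np.round(w[:5], 3))
```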

On the complexities of RL when it confronts the real (natural) world

Toby Wise, Kara Emery, Angela Radulescu, Naturalistic reinforcement learning, Trends in Cognitive Sciences, Volume 28, Issue 2, 2024, Pages 144-158 DOI: 10.1016/j.tics.2023.08.016.

Humans possess a remarkable ability to make decisions within real-world environments that are expansive, complex, and multidimensional. Human cognitive computational neuroscience has sought to exploit reinforcement learning (RL) as a framework within which to explain human decision-making, often focusing on constrained, artificial experimental tasks. In this article, we review recent efforts that use naturalistic approaches to determine how humans make decisions in complex environments that better approximate the real world, providing a clearer picture of how humans navigate the challenges posed by real-world decisions. These studies purposely embed elements of naturalistic complexity within experimental paradigms, rather than focusing on simplification, generating insights into the processes that likely underpin humans' ability to navigate complex, multidimensional real-world environments so successfully.
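
For contrast with the naturalistic settings the review advocates, the sketch below implements the kind of constrained laboratory paradigm it critiques: tabular, epsilon-greedy value learning on a two-armed bandit. The task, reward probabilities, and hyperparameters are our own toy choices, purely illustrative of the "simplified experimental task" baseline.

```python
# Tabular value learning on a two-armed bandit: the constrained, artificial
# kind of task the review contrasts with naturalistic decision-making.
# All numbers here are invented for illustration.
import random

q = [0.0, 0.0]                 # value estimate per arm
p_reward = [0.3, 0.7]          # true (hidden) reward probabilities
alpha, epsilon = 0.1, 0.1      # learning rate, exploration rate

for _ in range(2000):
    # epsilon-greedy choice between the two arms
    a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: q[i])
    r = 1.0 if random.random() < p_reward[a] else 0.0
    q[a] += alpha * (r - q[a])   # incremental value update

print("learned arm values:", [round(v, 2) for v in q])  # roughly 0.3 and 0.7
```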

On the need of interacting with the real world to acquire meaning

Giovanni Pezzulo, Thomas Parr, Paul Cisek, Andy Clark, Karl Friston, Generating meaning: active inference and the scope and limits of passive AI, Trends in Cognitive Sciences, Volume 28, Issue 2, 2024, Pages 97-112, DOI: 10.1016/j.tics.2023.10.002.

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.
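
As a loose illustration of that action-perception loop, the toy agent below predicts the sensory consequences of its own actions with a simple generative model, acts to bring predicted observations toward a preferred state, and corrects the model from prediction errors. This is our own minimal caricature, not the formal active-inference machinery discussed in the paper.

```python
# Toy action-perception loop: predict the sensory consequences of actions,
# act toward a preferred observation, update the model from prediction error.
# A caricature of the idea, not the paper's formal scheme.
import random

model = {a: 0.0 for a in (-1.0, 0.0, 1.0)}   # predicted sensory change per action
preferred = 1.0                              # prior preference over observations
state, lr = 0.0, 0.2

def world_step(s, a):                        # hidden environment dynamics
    return s + 0.8 * a + random.gauss(0.0, 0.05)

for _ in range(50):
    # Act: pick the action whose predicted outcome best matches the preference.
    a = min(model, key=lambda act: abs((state + model[act]) - preferred))
    new_state = world_step(state, a)
    # Perceive: update the model of action consequences from prediction error.
    model[a] += lr * ((new_state - state) - model[a])
    state = new_state

print({a: round(v, 2) for a, v in model.items()}, "final state:", round(state, 2))
```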

On the relations between symbolic and subsymbolic systems in AI

Giuseppe Marra, Sebastijan Dumančić, Robin Manhaeve, Luc De Raedt, From statistical relational to neurosymbolic artificial intelligence: A survey, Artificial Intelligence, Volume 328, 2024 DOI: 10.1016/j.artint.2023.104062.

This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model or proof-based; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.
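
As a tiny taste of dimension (3), logical semantics and its relaxations for learning, the snippet below evaluates one weighted rule under a soft (product t-norm) semantics, with atom scores standing in for neural-network outputs. The smokers-and-friends rule is the classic StarAI example; the scores are invented.

```python
# Soft (fuzzy) evaluation of a single logical rule whose atoms are scored by
# a "neural" component -- one point in the NeSy/StarAI design space the
# survey maps. Predicates and numbers are our own toy example.
def soft_and(*vals):                # product t-norm for conjunction
    out = 1.0
    for v in vals:
        out *= v
    return out

def soft_implies(body, head):       # Reichenbach implication: 1 - b + b*h
    return 1.0 - body + body * head

# Pretend these scores come from a neural network's sigmoid outputs.
p_smokes = {"anna": 0.9, "bob": 0.2}
p_friends = {("anna", "bob"): 0.8}

# Rule: friends(X, Y) AND smokes(X) -> smokes(Y), evaluated softly.
body = soft_and(p_friends[("anna", "bob")], p_smokes["anna"])
print("rule satisfaction:", round(soft_implies(body, p_smokes["bob"]), 3))
```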

Estimating speed from inertial data by dealing with noise and outliers

W. Xu, X. Peng and L. Kneip, Tight Fusion of Events and Inertial Measurements for Direct Velocity Estimation, IEEE Transactions on Robotics, vol. 40, pp. 240-256, 2024 DOI: 10.1109/TRO.2023.3333108.

Traditional visual-inertial state estimation targets absolute camera poses and spatial landmark locations while first-order kinematics are typically resolved as an implicitly estimated substate. However, this poses a risk in velocity-based control scenarios, as the quality of the estimation of kinematics depends on the stability of absolute camera and landmark coordinates estimation. To address this issue, we propose a novel solution to tight visual–inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera. More specifically, we leverage trifocal tensor geometry to establish an incidence relation that directly depends on events and camera velocity, and demonstrate how velocity estimates in highly dynamic situations can be obtained over short-time intervals. Noise and outliers are dealt with using a nested two-layer random sample consensus (RANSAC) scheme. In addition, smooth velocity signals are obtained from a tight fusion with preintegrated inertial signals using a sliding window optimizer. Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation in highly dynamic scenarios independently of absolute coordinates. Furthermore, in extreme cases, it achieves more stable and more accurate estimation of kinematics than traditional, point-position-based visual-inertial odometry.
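
The paper's robustness layer is a nested two-layer RANSAC over event-based trifocal incidence relations; the sketch below only illustrates the single-layer consensus idea on synthetic linear-velocity data contaminated with gross outliers.

```python
# Generic single-layer RANSAC for robust velocity fitting on synthetic data.
# The paper uses a nested two-layer scheme over trifocal incidence relations;
# only the robust-consensus idea is shown here.
import random
import numpy as np

rng = np.random.default_rng(0)
true_v = np.array([1.0, -0.5])
t = rng.uniform(0.0, 1.0, 100)                       # observation timestamps
obs = true_v * t[:, None] + rng.normal(0.0, 0.01, (100, 2))
obs[:20] = rng.uniform(-2.0, 2.0, (20, 2))           # 20% gross outliers

best_v, best_inliers = None, 0
for _ in range(200):
    i = random.randrange(100)                        # minimal sample: one point
    v = obs[i] / max(t[i], 1e-6)                     # candidate velocity
    residuals = np.linalg.norm(obs - v * t[:, None], axis=1)
    inliers = int((residuals < 0.05).sum())
    if inliers > best_inliers:                       # keep the best consensus
        best_v, best_inliers = v, inliers

print("estimated velocity:", np.round(best_v, 2), "inliers:", best_inliers)
```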

Particle grid maps

G. Chen, W. Dong, P. Peng, J. Alonso-Mora and X. Zhu, Continuous Occupancy Mapping in Dynamic Environments Using Particles, IEEE Transactions on Robotics, vol. 40, pp. 64-84, 2024 DOI: 10.1109/TRO.2023.3323841.

Particle-based dynamic occupancy maps were proposed in recent years to model the obstacles in dynamic environments. Current particle-based maps describe the occupancy status in discrete grid form and suffer from the grid size problem, wherein a large grid size is unfavorable for motion planning while a small grid size lowers efficiency and causes gaps and inconsistencies. To tackle this problem, this article generalizes the particle-based map into continuous space and builds an efficient 3-D egocentric local map. A dual-structure subspace division paradigm, composed of a voxel subspace division and a novel pyramid-like subspace division, is proposed to propagate particles and update the map efficiently with the consideration of occlusions. The occupancy status at an arbitrary point in the map space can then be estimated with the weights of the particles. To reduce the noise in modeling static and dynamic obstacles simultaneously, an initial velocity estimation approach and a mixture model are utilized. Experimental results show that our map can effectively and efficiently model both dynamic obstacles and static obstacles. Compared to the state-of-the-art grid-form particle-based map, our map enables continuous occupancy estimation and substantially improves the mapping performance at different resolutions.
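
The key departure from grid-form maps is that occupancy can be queried at an arbitrary continuous point from the particle weights. A minimal kernel-based version of that query is sketched below; the paper's dual-structure subspace division, occlusion handling, and velocity estimation are not reproduced.

```python
# Minimal sketch of the continuous query: occupancy at an arbitrary point
# estimated as a kernel-weighted sum over particles. Particle positions and
# weights are random stand-ins for a real map update.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 2.0, (500, 3))        # particle positions (x, y, z)
weights = rng.uniform(0.0, 1.0, 500)               # per-particle occupancy mass

def occupancy(point, positions, weights, bandwidth=0.2):
    """Kernel-weighted occupancy estimate at a continuous query point."""
    d2 = ((positions - point) ** 2).sum(axis=1)    # squared distances
    k = np.exp(-0.5 * d2 / bandwidth**2)           # Gaussian kernel
    return float((weights * k).sum())              # unnormalized occupancy

print("occupancy near (1, 1, 1):",
      round(occupancy(np.array([1.0, 1.0, 1.0]), positions, weights), 2))
```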

Offline RL in robotics

L. Yao, B. Zhao, X. Xu, Z. Wang, P. K. Wong and Y. Hu, Efficient Incremental Offline Reinforcement Learning With Sparse Broad Critic Approximation, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 156-169, Jan. 2024 DOI: 10.1109/TSMC.2023.3305498.

Offline reinforcement learning (ORL) has been getting increasing attention in robot learning, benefiting from its ability to avoid hazardous exploration and learn policies directly from precollected samples. Approximate policy iteration (API) is one of the most commonly investigated ORL approaches in robotics, due to its linear representation of policies, which makes it fairly transparent in both theoretical and engineering analysis. One open problem of API is how to design efficient and effective basis functions. The broad learning system (BLS) has been extensively studied in supervised and unsupervised learning in various applications. However, few investigations have been conducted on ORL. In this article, a novel incremental ORL approach with sparse broad critic approximation (BORL) is proposed with the advantages of BLS, which approximates the critic function in a linear manner with randomly projected sparse and compact features and dynamically expands its broad structure. The BORL is the first extension of API with BLS in the field of robotics and ORL. The approximation ability and convergence performance of BORL are also analyzed. Comprehensive simulation studies are then conducted on two benchmarks, and the results demonstrate that the proposed BORL can obtain comparable or better performance than conventional API methods without laborious hyperparameter fine-tuning work. To further demonstrate the effectiveness of BORL in practical robotic applications, a variable force tracking problem in robotic ultrasound scanning (RUSS) is investigated, and a learning-based adaptive impedance control (LAIC) algorithm is proposed based on BORL. The experimental results demonstrate the advantages of LAIC compared with conventional force tracking methods.
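
The linear-critic ingredient that BORL builds on can be illustrated with a least-squares temporal-difference fit over randomly projected features on a precollected batch. The sketch below is our own simplification; it omits the sparse broad structure, incremental node expansion, and policy improvement of the actual method.

```python
# Our own minimal illustration: a critic linear in randomly projected
# features, fit from a precollected (offline) batch with least-squares TD.
# The paper's sparse broad structure and incremental expansion are omitted.
import numpy as np

rng = np.random.default_rng(2)
n, d, k, gamma = 500, 4, 32, 0.95
W = rng.normal(size=(d, k))                        # fixed random projection

def phi(s):                                        # random nonlinear features
    return np.tanh(s @ W)

# Fake offline batch of transitions (s, r, s') from some behavior policy.
S = rng.normal(size=(n, d))
S2 = S + 0.1 * rng.normal(size=(n, d))
R = S[:, 0] + 0.01 * rng.normal(size=n)            # toy reward: first state dim

P, P2 = phi(S), phi(S2)
A = P.T @ (P - gamma * P2) + 1e-6 * np.eye(k)      # regularized LSTD system
b = P.T @ R
theta = np.linalg.solve(A, b)                      # linear critic weights

print("value of a sample state:", round((phi(S[:1]) @ theta).item(), 3))
```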

See also: X. Wang, D. Hou, L. Huang and Y. Cheng, “Offline–Online Actor–Critic,” IEEE Transactions on Artificial Intelligence, vol. 5, no. 1, pp. 61-69, Jan. 2024 DOI: 10.1109/TAI.2022.3225251.

Hierarchical Deep-RL for continuous and large state spaces

A. P. Pope et al., Hierarchical Reinforcement Learning for Air Combat at DARPA’s AlphaDogfight Trials, IEEE Transactions on Artificial Intelligence, vol. 4, no. 6, pp. 1371-1385, Dec. 2023 DOI: 10.1109/TAI.2022.3222143.

Autonomous control in high-dimensional, continuous state spaces is a persistent and important challenge in the fields of robotics and artificial intelligence. Because of high risk and complexity, the adoption of AI for autonomous combat systems has been a long-standing difficulty. In order to address these issues, DARPA’s AlphaDogfight Trials (ADT) program sought to vet the feasibility of and increase trust in AI for autonomously piloting an F-16 in simulated air-to-air combat. Our submission to ADT solves the high-dimensional, continuous control problem using a novel hierarchical deep reinforcement learning approach consisting of a high-level policy selector and a set of separately trained low-level policies specialized for excelling in specific regions of the state space. Both levels of the hierarchy are trained using off-policy, maximum entropy methods with expert knowledge integrated through reward shaping. Our approach outperformed human expert pilots and achieved a second-place rank in the ADT championship event.
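
Structurally, the approach is a high-level selector dispatching among separately trained low-level specialists. The stand-in policies below are trivial hand-written functions rather than the maximum-entropy RL agents of the paper; only the dispatch pattern is illustrated.

```python
# Dispatch pattern of the hierarchy: a high-level selector picks among
# low-level policies specialized for regions of the state space. The
# policies and state features here are invented stand-ins.
def chase_policy(state):            # specialist: close the distance
    return 0.2 * state["bearing_to_opponent"]

def evade_policy(state):            # specialist: break away
    return -0.5 * state["bearing_to_opponent"]

def selector(state):
    """High-level policy: dispatch on a coarse feature of the state."""
    return chase_policy if state["opponent_behind"] == 0 else evade_policy

state = {"bearing_to_opponent": 0.4, "opponent_behind": 1}
low_level = selector(state)
print("selected:", low_level.__name__, "command:", low_level(state))
```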

Visibility graphs for robot path planning is still in use!

Junlin Ou, Seong Hyeon Hong, Ge Song, Yi Wang, Hybrid path planning based on adaptive visibility graph initialization and edge computing for mobile robots, Engineering Applications of Artificial Intelligence, Volume 126, Part D, 2023 DOI: 10.1016/j.engappai.2023.107110.

This paper presents a new initialization method that combines adaptive visibility graphs and the A* algorithm to improve the exploration, accuracy, and computing efficiency of hybrid path planning for mobile robots. First, segments/links in the full visibility graphs are removed randomly in an iterative and adaptive manner, yielding adaptive visibility graphs. Then the A* algorithm is applied to find the shortest paths in these adaptive visibility graphs. Next, high-quality paths featuring low fitness values are chosen to initialize the subsequent heuristic optimization in hybrid path planning. Specifically, in the present study, the genetic algorithm (GA) is implemented on a CPU/GPU edge computing device (Jetson AGX Xavier) to exploit its massively parallel processing threads, and a strategy for judicious CPU/GPU resource utilization is also developed. Numerical experiments are conducted to determine proper hyperparameters and configure the GA with balanced performance. Various optimal paths with differential consideration of practical factors for robot path planning are obtained by the proposed method. Compared to the other benchmark methods, ours significantly improves the diversity of initial paths and exploration, optimization accuracy, and computing speed (within 5 s, with most runs taking less than 2 s). Furthermore, real-time experiments are carried out to demonstrate the effectiveness and application of the proposed algorithm on mobile robots.
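
The A*-over-visibility-graph step at the heart of the initializer looks roughly like the sketch below. Building the (adaptive) visibility graph itself, i.e., the segment-obstacle visibility tests and the paper's randomized edge removal, is omitted; the four-node graph is a toy.

```python
# A* over a (precomputed) visibility graph with a straight-line heuristic.
# The toy graph stands in for a real visibility graph; constructing one
# requires obstacle-intersection tests not shown here.
import heapq, math

nodes = {"S": (0, 0), "A": (2, 1), "B": (1, 3), "G": (4, 3)}
edges = {"S": ["A", "B"], "A": ["S", "G"], "B": ["S", "G"], "G": ["A", "B"]}

def dist(u, v):                                  # Euclidean edge cost
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    open_set = [(dist(start, goal), 0.0, start, [start])]   # (f, g, node, path)
    seen = {}
    while open_set:
        f, g, u, path = heapq.heappop(open_set)
        if u == goal:
            return path, g
        if seen.get(u, math.inf) <= g:           # already expanded cheaper
            continue
        seen[u] = g
        for v in edges[u]:
            g2 = g + dist(u, v)
            heapq.heappush(open_set, (g2 + dist(v, goal), g2, v, path + [v]))
    return None, math.inf

path, cost = astar("S", "G")
print("shortest visibility-graph path:", path, "length:", round(cost, 2))
```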

Review of NNs for solving manipulator inverse kinematics

Daniel Cagigas-Muñiz, Artificial Neural Networks for inverse kinematics problem in articulated robots, Engineering Applications of Artificial Intelligence, Volume 126, Part D, 2023 DOI: 10.1016/j.engappai.2023.107175.

The inverse kinematics problem in articulated robots consists of obtaining the joint rotation angles from the position and orientation of the robot's end-effector tool. Unlike direct kinematics, there are no systematic methods for solving inverse kinematics, and the problem is particularly complicated for certain morphologies of articulated robots. Machine learning techniques and, more specifically, artificial neural networks (ANNs) have been proposed in the scientific literature to solve this problem. However, ANNs still show some performance limitations. In this study, different techniques involving ANNs are proposed and analyzed. The results show that the proposed original bootstrap sampling and hybrid methods can substantially improve the performance of approaches that use only one ANN. Although these improvements do not completely solve the inverse kinematics problem in articulated robots, they do lay the foundations for the design and development of future, more effective and efficient controllers. The source code and documentation of this research are therefore publicly available to practitioners interested in adapting and improving these methods for any industrial or articulated robot.
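
A minimal version of the learning-based IK idea, regressing joint angles from end-effector positions with a small neural network trained on forward-kinematics samples of a two-link planar arm, is sketched below. The arm, network size, and training settings are our own assumptions and fall far short of the paper's bootstrap sampling and hybrid methods.

```python
# Toy learning-based inverse kinematics for a 2-link planar arm: sample
# forward kinematics, then regress joint angles from positions with a small
# one-hidden-layer network in plain NumPy. All settings are assumptions.
import numpy as np

rng = np.random.default_rng(3)
L1, L2 = 1.0, 1.0                                  # link lengths (assumed)

def fk(q):  # forward kinematics: joint angles (n, 2) -> end-effector (n, 2)
    return np.stack([L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1]),
                     L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])], axis=1)

# Restrict the elbow to (0.2, pi - 0.2) so the inverse map is single-valued.
Q = np.stack([rng.uniform(-np.pi / 2, np.pi / 2, 5000),
              rng.uniform(0.2, np.pi - 0.2, 5000)], axis=1)
X = fk(Q)

# One hidden layer, full-batch gradient descent on mean squared error.
W1, b1 = rng.normal(0.0, 0.5, (2, 64)), np.zeros(64)
W2, b2 = rng.normal(0.0, 0.5, (64, 2)), np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - Q                          # prediction error
    dH = (err @ W2.T) * (1.0 - H**2)               # backprop through tanh
    W2 -= lr * H.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

test = fk(np.array([[0.3, 1.0]]))                  # query a known pose
pred = np.tanh(test @ W1 + b1) @ W2 + b2
print("true angles: [0.3, 1.0]  predicted:", np.round(pred[0], 2))
```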