Author Archives: Juan-Antonio Fernández-Madrigal

Survey on POMDPs for robotics

M. Lauri, D. Hsu and J. Pajarinen, Partially Observable Markov Decision Processes in Robotics: A Survey, IEEE Transactions on Robotics, vol. 39, no. 1, pp. 21-40, Feb. 2023 DOI: 10.1109/TRO.2022.3200138.

Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision and control tasks under uncertainty. Over the last decade, it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multirobot systems, manipulation, and human–robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms at one end and application to diverse robot decision tasks at the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey identifies some of the key task characteristics to consider when deciding when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, the survey provides new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research.
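At its core, acting under a POMDP means maintaining a belief over hidden states and updating it after every action and observation. Below is a minimal sketch of that standard discrete Bayes belief update; the tiny two-state "door open/closed" model is an illustrative assumption, not an example taken from the survey.

```python
import numpy as np

# Hypothetical two-state POMDP: is a door open (0) or closed (1)?
# T[a][s, s'] : transition probabilities, Z[a][s', o] : observation probabilities.
T = {"listen": np.eye(2)}                       # listening does not change the state
Z = {"listen": np.array([[0.85, 0.15],          # door open   -> hear "open" 85% of the time
                         [0.15, 0.85]])}        # door closed -> hear "closed" 85% of the time

def belief_update(belief, action, observation):
    """Standard discrete Bayes filter used in POMDPs: predict, then correct."""
    predicted = belief @ T[action]              # sum_s b(s) T(s, s')
    corrected = predicted * Z[action][:, observation]
    return corrected / corrected.sum()          # normalize

b = np.array([0.5, 0.5])                        # start fully uncertain
for obs in [1, 1, 0]:                           # hear "closed", "closed", "open"
    b = belief_update(b, "listen", obs)
    print(b)
```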

Review of RL applied to robotic manipulation

Íñigo Elguea-Aguinaco, Antonio Serrano-Muñoz, Dimitrios Chrysostomou, Ibai Inziarte-Hidalgo, Simon Bøgh, Nestor Arana-Arexolaleiba, A review on reinforcement learning for contact-rich robotic manipulation tasks, Robotics and Computer-Integrated Manufacturing, Volume 81, 2023 DOI: 10.1016/j.rcim.2022.102517.

Research on and application of reinforcement learning in robotics for contact-rich manipulation tasks have exploded in recent years. Its ability to cope with unstructured environments and accomplish hard-to-engineer behaviors has led reinforcement learning agents to be increasingly applied in real-life scenarios. However, there is still a long way to go before reinforcement learning becomes a core element of industrial applications. This paper examines the landscape of reinforcement learning and reviews advances in its application to contact-rich tasks from 2017 to the present. The analysis investigates the main research on the tasks most commonly selected for testing reinforcement learning algorithms, in both rigid and deformable object manipulation. Additionally, it explores the trends around reinforcement learning for serial manipulators, as well as the various technological challenges that this machine learning control technique currently presents. Lastly, based on the state of the art and the commonalities among the studies, a framework relating the main concepts of reinforcement learning in contact-rich manipulation tasks is proposed. The final goal of this review is to support the robotics community in the future development of systems commanded by reinforcement learning, to discuss the main challenges of this technology, and to suggest future research directions in the domain.

Mapping unseen rooms by deducing them from known environment structure

Matteo Luperto, Federico Amadelli, Moreno Di Berardino, Francesco Amigoni, Mapping beyond what you can see: Predicting the layout of rooms behind closed doors, Robotics and Autonomous Systems, Volume 159, 2023 DOI: 10.1016/j.robot.2022.104282.

The availability of maps of indoor environments is often fundamental for autonomous mobile robots to operate efficiently in industrial, office, and domestic applications. When robots build such maps, some areas of interest can be inaccessible, for instance, due to closed doors. As a consequence, these areas are not represented in the maps, possibly limiting robot localization and navigation. In this paper, we provide a method that completes 2D grid maps by adding the predicted layout of the rooms behind closed doors. The main idea of our approach is to exploit the underlying geometrical structure of indoor environments to estimate the shape of unobserved rooms. Results show that our method completes maps accurately even when large portions of the environment cannot be accessed by the robot during map building. We experimentally validate the quality of the completed maps by using them to perform path planning tasks.
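To make the idea concrete, here is a deliberately naive sketch of map completion: given a closed door detected on a wall of a 2D occupancy grid, it extrudes a rectangular free-space hypothesis into the unknown area. The paper's predictor is far more sophisticated; the grid encoding, door cell, and room dimensions below are all illustrative assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN, PREDICTED = 0, 1, 2, 3

def predict_room_behind_door(grid, door_cell, inward, depth=8, half_width=4):
    """Naive illustration of map completion: extrude a rectangular free-space
    hypothesis behind a closed door, aligned with the wall direction.
    (The paper's method is structured/learned; this only mimics the idea that
    indoor geometry lets us guess the unseen layout.)"""
    completed = grid.copy()
    r, c = door_cell
    dr, dc = inward                              # unit step pointing into the unseen room
    for step in range(1, depth + 1):
        for offset in range(-half_width, half_width + 1):
            rr = r + dr * step + dc * offset     # perpendicular offset spans the room width
            cc = c + dc * step + dr * offset
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                if completed[rr, cc] == UNKNOWN:
                    completed[rr, cc] = PREDICTED
    return completed

grid = np.full((20, 20), UNKNOWN)
grid[8:12, 0:10] = FREE                          # observed corridor
grid[8:12, 10] = OCCUPIED                        # wall with a closed door at (10, 10)
print(np.count_nonzero(predict_room_behind_door(grid, (10, 10), (0, 1)) == PREDICTED))
```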

Using stochastic bits instead of binary logic

H. Li and Y. Chen, Hybrid Logic Computing of Binary and Stochastic, IEEE Embedded Systems Letters, vol. 14, no. 4, pp. 171-174, Dec. 2022 DOI: 10.1109/LES.2022.3170457.

Binary logic is applied internally in almost all digital signal processing and computer systems, because binary logic is directly implemented in CMOS circuits. Stochastic logic instead relies on a particular representation of data, in which a value is encoded as the probability of the logic level being ON. Stochastic logic computing is thus a type of logic computation based on stochastic bit streams instead of binary numbers. This letter proposes a hybrid computing system of binary logic and stochastic logic, called hybrid logic. The study discusses how to generate hybrid logic circuits and demonstrates their properties.
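The appeal of the stochastic representation is that arithmetic collapses into single gates: multiplying two values becomes a bitwise AND of their streams, and a multiplexer computes a scaled sum. The sketch below simulates this standard stochastic-computing encoding; it is not the letter's hybrid circuit design, and the stream length is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                   # stream length controls precision

def to_stream(p, n=N):
    """Encode a value p in [0, 1] as a Bernoulli bit stream with P(bit=1) = p."""
    return rng.random(n) < p

def from_stream(bits):
    """Decode: the represented value is simply the fraction of ON bits."""
    return bits.mean()

a, b = to_stream(0.8), to_stream(0.3)
product = a & b                                # a single AND gate multiplies the two values
select = to_stream(0.5)
scaled_sum = np.where(select, a, b)            # a MUX computes (a + b) / 2

print(from_stream(product))                    # ~0.24
print(from_stream(scaled_sum))                 # ~0.55
```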

Including safety learning in RL for improving the sim-to-lab gap

Kai-Chieh Hsu, Allen Z. Ren, Duy P. Nguyen, Anirudha Majumdar, Jaime F. Fisac, Sim-to-Lab-to-Real: Safe reinforcement learning with shielding and generalization guarantees, Artificial Intelligence, Volume 314, 2023 DOI: 10.1016/j.artint.2022.103811.

Safety is a critical component of autonomous systems and remains a challenge for learning-based policies to be utilized in the real world. In particular, policies learned using reinforcement learning often fail to generalize to novel environments due to unsafe behavior. In this paper, we propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed safety-aware policy distribution. To improve safety, we apply a dual policy setup where a performance policy is trained using the cumulative task reward and a backup (safety) policy is trained by solving the Safety Bellman Equation based on Hamilton-Jacobi (HJ) reachability analysis. In Sim-to-Lab transfer, we apply a supervisory control scheme to shield unsafe actions during exploration; in Lab-to-Real transfer, we leverage the Probably Approximately Correct (PAC)-Bayes framework to provide lower bounds on the expected performance and safety of policies in unseen environments. Additionally, inheriting from the HJ reachability analysis, the bound accounts for the expectation over the worst-case safety in each environment. We empirically study the proposed framework for ego-vision navigation in two types of indoor environments with varying degrees of photorealism. We also demonstrate strong generalization performance through hardware experiments in real indoor spaces with a quadrupedal robot. See https://sites.google.com/princeton.edu/sim-to-lab-to-real for supplementary material.
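The shielding step of the dual-policy setup can be summarized in a few lines: the performance policy proposes an action, and whenever a learned safety value (in the paper, obtained from the Safety Bellman Equation and HJ reachability) predicts that safety can no longer be maintained, the backup policy takes over. The sketch below only illustrates that supervisory logic; the three callables and the 1-D toy dynamics are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of the shielding idea (supervisory control): a task policy
# proposes actions, and a safety critic overrides them with a backup policy
# whenever the predicted safety value drops below a threshold.

def shielded_policy(obs, task_policy, backup_policy, safety_value, threshold=0.0):
    action = task_policy(obs)
    # Safety-critic convention assumed here: value > threshold means "can still stay safe".
    if safety_value(obs, action) <= threshold:
        action = backup_policy(obs)             # shield: fall back to the safety maneuver
    return action

# Toy 1-D example: stay inside the interval [-1, 1].
task_policy   = lambda x: 0.3                      # always drifts to the right
backup_policy = lambda x: -0.5 if x > 0 else 0.5   # steer back toward the center
safety_value  = lambda x, a: 1.0 - abs(x + a)      # negative once the next state leaves [-1, 1]

x = 0.0
for _ in range(10):
    x += shielded_policy(x, task_policy, backup_policy, safety_value)
    print(round(x, 2))
```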

Image resizing for achieving real-time performance in embedded AI

Hu, Y., Liu, S., Abdelzaher, T. et al. Real-time task scheduling with image resizing for criticality-based machine perception, Real-Time Systems, 58, 430–455 (2022) DOI: 10.1007/s11241-022-09387-6.

This paper extends a previous conference publication that proposed a real-time task scheduling framework for criticality-based machine perception, leveraging image resizing as the tool to control the accuracy and execution time trade-off. Criticality-based machine perception reduces the computing demand of on-board AI-based machine inference pipelines (that run on embedded hardware) in applications such as autonomous drones and cars. By segmenting inputs, such as individual video frames, into smaller parts and allowing the downstream AI-based perception module to process some segments ahead of (or at a higher quality than) others, limited machine resources are spent more judiciously on more important parts of the input (e.g., on foreground objects in lieu of backgrounds). In recent work, we explored the use of image resizing as a way to offer a middle ground between full-resolution processing and dropping, thus allowing more flexibility in handling less important parts of the input. In this journal extension, we make the following contributions: (i) We relax a limiting assumption of our prior work; namely, the need for a “perfect sensor” to identify which parts of the image are more critical. Instead, we investigate the use of real LiDAR measurements for quick-and-dirty image segmentation ahead of AI-based processing. (ii) We explore another dimension of freedom in the scheduler: namely, merging several nearby objects into a consolidated segment for downstream processing. We formulate the scheduling problem as an optimal resize-merge problem and design a solution for it. Experiments on an AI-powered embedded platform with a real-world driving dataset demonstrate the practicality and effectiveness of our proposed framework.
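The core scheduling intuition, trading resolution for time so the whole frame fits the deadline while critical segments keep their pixels, can be sketched with a simple greedy planner. This is only an illustration under assumed costs and scale levels, not the paper's optimal resize-merge formulation.

```python
# Illustrative greedy sketch of criticality-driven resizing: shrink the least
# critical segments first until the estimated processing time of all segments
# fits the frame deadline.

def plan_resolutions(segments, budget_ms, cost_per_kpixel_ms=0.5,
                     scales=(1.0, 0.75, 0.5, 0.25)):
    """segments: list of dicts with 'pixels' (in thousands) and 'criticality'."""
    plan = {i: 1.0 for i in range(len(segments))}

    def total_cost():
        # Processing time assumed proportional to the resized pixel count.
        return sum(segments[i]["pixels"] * plan[i] ** 2 * cost_per_kpixel_ms
                   for i in plan)

    # Visit segments from least to most critical, downscaling step by step.
    order = sorted(range(len(segments)), key=lambda i: segments[i]["criticality"])
    for scale in scales[1:]:
        for i in order:
            if total_cost() <= budget_ms:
                return plan
            plan[i] = scale
    return plan

segments = [{"pixels": 300, "criticality": 0.9},   # e.g. nearby pedestrian
            {"pixels": 500, "criticality": 0.4},   # parked cars
            {"pixels": 800, "criticality": 0.1}]   # background
print(plan_resolutions(segments, budget_ms=250))   # most critical segment stays sharpest
```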

Review of emotions in AI

G. Assunção, B. Patrão, M. Castelo-Branco and P. Menezes, An Overview of Emotion in Artificial Intelligence, IEEE Transactions on Artificial Intelligence, vol. 3, no. 6, pp. 867-886, Dec. 2022 DOI: 10.1109/TAI.2022.3159614.

The field of artificial intelligence (AI) has gained immense traction over the past decade, producing increasingly successful applications as research strives to understand and exploit neural processing specifics. Nonetheless, emotion, despite its demonstrated significance to reinforcement, social integration, and general development, remains a largely stigmatized and consequently disregarded topic by most engineers and computer scientists. In this article, we endorse emotion's value for the advancement of artificial cognitive processing, as well as explore real-world use cases of emotion-augmented AI. A schematization is provided on the psychological-neurophysiologic basics of emotion in order to bridge the interdisciplinary gap preventing emulation and integration in AI methodology, as well as exploitation by current systems. In addition, we overview three major subdomains of AI that greatly benefit from emotion, and produce a systematic survey of meaningful yet recent contributions to each area. To conclude, we address crucial challenges and promising research paths for the future of emotion in AI, with the hope that more researchers will develop an interest in the topic and find it easier to develop their own contributions.

A brief summary of the state of the art in time series clustering

Hailin Li, Zechen Liu, Xiaoji Wan, Time series clustering based on complex network with synchronous matching states, Expert Systems with Applications, Volume 211, 2023 DOI: 10.1016/j.eswa.2022.118543.

Due to the extensive presence of time series in various fields, more and more research on time series data mining, especially time series clustering, has been done in recent years. Clustering technology can extract valuable information and potential patterns from time series data. This paper proposes a time series Clustering method based on Synchronous matching of Complex networks (CSC). This method uses the density peak clustering algorithm to identify the state of each time point and obtains the state sequence according to the timeline of the original time series. State sequences are a new way to represent time series. By comparing two state sequences synchronously, step by step, the length of the matching states is calculated and used to measure similarity, which forms a new method to calculate the similarity of time series. Based on the obtained time series similarity, the relationship network of the time series is constructed. Community discovery is then applied to cluster the relationship network and thereby achieve the complete time series clustering. The detailed process of the CSC method and simulation experiments are given. Experimental results on different datasets show that the CSC method is superior to other traditional time series clustering methods.
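The pipeline described above, discretizing each time point into a state, comparing state sequences synchronously, building a similarity network, then clustering it, can be sketched end to end with simplified stand-ins: quantile binning in place of density peak clustering, plain synchronous state agreement in place of the stepped matching, and thresholded connected components in place of community detection. Everything below is illustrative, not the CSC implementation.

```python
import numpy as np

def to_state_sequence(series, n_states=3):
    """Map each time point to a discrete state (here: value quantile)."""
    edges = np.quantile(series, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.digitize(series, edges)

def synchronous_similarity(states_a, states_b):
    """Fraction of time points whose states match synchronously."""
    return np.mean(states_a == states_b)

def cluster(series_list, threshold=0.6):
    states = [to_state_sequence(s) for s in series_list]
    n = len(states)
    adj = np.array([[synchronous_similarity(states[i], states[j]) >= threshold
                     for j in range(n)] for i in range(n)])
    labels, current = [-1] * n, 0               # connected components on the network
    for i in range(n):
        if labels[i] == -1:
            stack = [i]
            while stack:
                k = stack.pop()
                if labels[k] == -1:
                    labels[k] = current
                    stack.extend(j for j in range(n) if adj[k, j])
            current += 1
    return labels

t = np.linspace(0, 4 * np.pi, 200)
series = [np.sin(t), np.sin(t) + 0.1, np.cos(t), np.cos(t) - 0.1]
print(cluster(series))                          # e.g. [0, 0, 1, 1]
```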

More robust KF through the use of skewed distributions

M. Bai, Y. Huang, B. Chen and Y. Zhang, A Novel Robust Kalman Filtering Framework Based on Normal-Skew Mixture Distribution, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 11, pp. 6789-6805, Nov. 2022 DOI: 10.1109/TSMC.2021.3098299.

In this article, a novel normal-skew mixture (NSM) distribution is presented to model normal and/or heavy-tailed and/or skewed nonstationary distributed noises. The NSM distribution can be formulated as a hierarchical Gaussian representation by leveraging a Bernoulli distributed random variable. Based on this, a novel robust Kalman filtering framework can be developed utilizing the variational Bayesian method, where the one-step prediction and measurement-likelihood densities are modeled as NSM distributions. For implementation, several exemplary robust Kalman filters (KFs) are derived based on some specific cases of the NSM distribution. The relationships between some existing robust KFs and the presented framework are also revealed. The superiority of the proposed robust Kalman filtering framework is validated by a target tracking simulation example.
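For readers who want to see where those modified densities plug in, here is the baseline linear Kalman filter recursion that the robust variants generalize. The toy constant-velocity tracking setup is an assumption for illustration; the NSM and variational-Bayes machinery of the paper is not reproduced here.

```python
import numpy as np

# Baseline linear Kalman filter (prediction + update). The paper replaces the
# fixed Gaussian assumptions on the one-step prediction and measurement
# likelihood with normal-skew mixture models fitted by variational Bayes.

def kf_step(x, P, z, F, H, Q, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-velocity target tracking toy example.
dt = 1.0
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.8, 4.2]:
    x, P = kf_step(x, P, np.array([z]), F, H, Q, R)
print(x)                                         # estimated position and velocity
```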

Using CNNs trained with image data to predict time series data

Aniello De Santo, Antonino Ferraro, Antonio Galli, Vincenzo Moscato, Giancarlo Sperlì, Evaluating time series encoding techniques for Predictive Maintenance, Expert Systems with Applications, Volume 210, 2022 DOI: 10.1016/j.eswa.2022.118435.

Predictive Maintenance has become an important component of modern industrial scenarios, as a way to minimize downtimes and fault rates for different equipment. In this sense, while machine learning and deep learning approaches are promising due to their accurate predictive abilities, their data-heavy requirements make them significantly limited in real-world applications. Since one of the main issues to overcome is the lack of consistent training data, recent work has explored the possibility of adapting well-known deep-learning models for image recognition by exploiting techniques to encode time series as images. In this paper, we propose a framework for evaluating some of the best-known time series encoding techniques, together with Convolutional Neural Network-based image classifiers, applied to predictive maintenance tasks. We conduct an extensive empirical evaluation of these approaches for the failure prediction task on two real-world datasets (PAKDD2020 Alibaba AI OPS Competition and NASA bearings), also comparing their performance with respect to state-of-the-art approaches. We further discuss advantages and limitations of the exploited models when coupled with proper data augmentation techniques.
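As a concrete example of what "encoding a time series as an image" looks like, below is the Gramian Angular Summation Field, one of the best-known encodings of this kind, shown purely as an illustration of the idea rather than as the paper's specific pipeline. The resulting 2-D array can be fed to any off-the-shelf CNN classifier.

```python
import numpy as np

def gasf(series):
    """Encode a 1-D series as a 2-D Gramian Angular Summation Field image."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1      # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                   # polar angle of each sample
    return np.cos(phi[:, None] + phi[None, :])           # GASF[i, j] = cos(phi_i + phi_j)

t = np.linspace(0, 2 * np.pi, 64)
image = gasf(np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size))
print(image.shape)                                       # (64, 64), ready for a CNN
```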