Category Archives: Robotics

A survey on visual SLAM in robotics

Iman Abaspur Kazerouni, Luke Fitzgerald, Gerard Dooly, Daniel Toal, A survey of state-of-the-art on visual SLAM, Expert Systems with Applications, Volume 205, 2022 DOI: 10.1016/j.eswa.2022.117734.

This paper is an overview of Visual Simultaneous Localization and Mapping (V-SLAM). We discuss the basic definitions in the SLAM and vision-system fields and provide a review of the state-of-the-art methods used for mobile robot's vision and SLAM. The paper covers topics from basic SLAM methods, vision sensors, and machine-vision algorithms for feature extraction and matching, to Deep Learning (DL) methods and datasets for Visual Odometry (VO) and Loop Closure (LC) in V-SLAM applications. Several feature extraction and matching algorithms are simulated to give a clearer picture of feature-based techniques.

See also:

Jun Cheng, Liyan Zhang, Qihong Chen, Xinrong Hu, Jingcao Cai, “A review of visual SLAM methods for autonomous driving vehicles,” Engineering Applications of Artificial Intelligence, Volume 114, 2022, 104992, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2022.104992.

Tianyao Zhang, Xiaoguang Hu, Jin Xiao, Guofeng Zhang, “A survey of visual navigation: From geometry to embodied AI,” Engineering Applications of Artificial Intelligence, Volume 114, 2022, 105036, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2022.105036.
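
To make the feature-based side of this survey concrete, here is a minimal sketch of ORB feature extraction and descriptor matching with OpenCV, the kind of pipeline the paper simulates; the image file names and the ratio-test threshold are placeholder choices, not taken from the paper.

```python
# Minimal sketch of feature-based extraction and matching using OpenCV's ORB.
# The image paths are placeholders; any pair of overlapping grayscale frames will do.
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)          # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints and descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with Lowe's ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences for VO / loop-closure verification")
```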

Kalman filter with time delays

H. Zhu, J. Mi, Y. Li, K.-V. Yuen and H. Leung, VB-Kalman Based Localization for Connected Vehicles With Delayed and Lost Measurements: Theory and Experiments, IEEE/ASME Transactions on Mechatronics, vol. 27, no. 3, pp. 1370-1378, June 2022 DOI: 10.1109/TMECH.2021.3095096.

Traditionally, connected vehicles (CVs) share their satellite-based sensor data with surrounding vehicles through vehicle-to-vehicle (V2V) communication. However, the satellite signal may sometimes be lost due to environmental factors, and time delays and packet dropouts may occur randomly in V2V communication. To ensure the reliability and accuracy of localization for CVs, a novel variational Bayesian (VB)-Kalman method is developed for unknown and time-varying probabilities of delayed and lost measurements. In this VB-Kalman localization method, two random variables are introduced to indicate whether a measurement is delayed and whether it is available, respectively. A hierarchical model is then formulated, and its parameters and state are simultaneously estimated by the VB technique. Experimental results validate the proposed method for the localization of CVs in practice.

NOTE: See also S. Guo, Y. Liu, Y. Zheng and T. Ersal, “A Delay Compensation Framework for Connected Testbeds,” in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 7, pp. 4163-4176, July 2022, doi: 10.1109/TSMC.2021.3091974.
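
For readers who want a concrete starting point, the sketch below is a plain (non-variational) constant-velocity Kalman filter that skips the update on lost measurements and re-filters from a short buffer when a delayed measurement arrives. It is only a baseline for comparison; the paper's VB-Kalman instead estimates the unknown delay and loss probabilities online, and all noise values here are assumed.

```python
# Baseline constant-velocity Kalman filter tolerating lost and delayed measurements.
# Not the paper's VB-Kalman: delay/loss handling here is a simple re-filtering heuristic.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise (assumed)
R = np.array([[0.5]])                    # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

def run_filter(measurements, x0, P0):
    """measurements[k]: None (lost) or (z, d) where z was sampled d steps earlier."""
    post = [(x0, P0)]                        # post[t] = posterior at time t
    for k, m in enumerate(measurements, start=1):
        x, P = predict(*post[-1])            # prior at time k (used if nothing arrives)
        if m is not None:
            z, d = m
            t = max(k - d, 1)                # time the sample actually refers to
            xb, Pb = predict(*post[t - 1])   # re-filter from the buffered posterior
            xb, Pb = update(xb, Pb, np.atleast_1d(z))
            for _ in range(k - t):           # roll the corrected state up to the present
                xb, Pb = predict(xb, Pb)     # (intermediate updates are discarded for simplicity)
            x, P = xb, Pb
        post.append((x, P))
    return post[-1]

# Example: run_filter([(1.1, 0), None, (2.9, 1)], np.zeros(2), np.eye(2))
```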

A tutorial on the integration of ROS with some interesting software, such as Jupyter

T. Fischer, W. Vollprecht, S. Traversaro, S. Yen, C. Herrero and M. Milford, A RoboStack Tutorial: Using the Robot Operating System Alongside the Conda and Jupyter Data Science Ecosystems, IEEE Robotics & Automation Magazine, vol. 29, no. 2, pp. 65-74, June 2022 DOI: 10.1109/MRA.2021.3128367.

The Robot Operating System (ROS) has become the de facto standard middleware in the robotics community [1]. ROS bundles everything, from low-level drivers to tools that transform among coordinate systems, to state-of-the-art perception and control algorithms. One of ROS's key merits is the rich ecosystem of standardized tools to build and distribute ROS-based software.
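
As a taste of what running ROS from a data-science environment looks like, here is a minimal rospy publisher that could be executed from a Jupyter cell once a RoboStack/conda environment providing ROS 1 is active and a roscore is running; the topic name, message rate, and loop length are arbitrary illustrative choices.

```python
# Minimal ROS 1 publisher, runnable from a Jupyter cell in a RoboStack/conda environment.
import rospy
from std_msgs.msg import String

rospy.init_node("jupyter_talker", anonymous=True)
pub = rospy.Publisher("/chatter", String, queue_size=10)
rate = rospy.Rate(2)  # 2 Hz

for i in range(10):
    if rospy.is_shutdown():
        break
    pub.publish(String(data=f"hello from Jupyter #{i}"))
    rate.sleep()
```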

Reconstructing indoor map layouts from geometrical data

Matteo Luperto, Francesco Amigoni, Reconstruction and prediction of the layout of indoor environments from two-dimensional metric maps, Engineering Applications of Artificial Intelligence, Volume 113, 2022 DOI: 10.1016/j.engappai.2022.104910.

Metric maps, like occupancy grids, are one of the most common ways to represent indoor environments in autonomous mobile robotics. Although they are effective for navigation and localization, metric maps contain little knowledge about the structure of the buildings they represent. In this paper, we propose a method that identifies the structure of indoor environments from 2D metric maps by retrieving their layout, namely an abstract geometrical representation that models walls as line segments and rooms as polygons. The method works by finding regularities within a building, abstracting from the possibly noisy information of the metric map, and uses such knowledge to reconstruct the layout of the observed part and to predict a possible layout of the partially observed portion of the building. Thus, differently from other state-of-the-art methods, our method can be applied both to fully observed environments and, most significantly, to partially observed ones. Experimental results show that our approach performs effectively and robustly on different types of input metric maps and that the predicted layout becomes more accurate as the input metric map becomes more complete. The layout returned by our method can be exploited in several tasks, such as semantic mapping, place categorization, path planning, human–robot communication, and task allocation.
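
The sketch below is a rough proxy for the very first stage of such a pipeline: extracting candidate wall segments from an occupancy grid with a probabilistic Hough transform. It is not the authors' method, which further clusters wall directions and assembles room polygons; the file name and thresholds are placeholders.

```python
# Extract candidate wall segments from a 2D occupancy grid (dark cells = obstacles).
# Only an illustrative first step toward a line-segment/polygon layout, not the paper's pipeline.
import cv2
import numpy as np

grid = cv2.imread("occupancy_grid.png", cv2.IMREAD_GRAYSCALE)
occupied = np.uint8(grid < 100) * 255        # binarize: occupied cells become white

segments = cv2.HoughLinesP(occupied, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=20, maxLineGap=5)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(f"wall candidate: ({x1},{y1}) -> ({x2},{y2})")
```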

Continuous POMDPs through belief state sparsification, applied to active SLAM

Elimelech K, Indelman V. Simplified decision making in the belief space using belief sparsification. The International Journal of Robotics Research. 2022;41(5):470-496 DOI: 10.1177/02783649221076381.

In this work, we introduce a new and efficient solution approach for the problem of decision making under uncertainty, which can be formulated as decision making in a belief space, over a possibly high-dimensional state space. Typically, to solve a decision problem, one should identify the optimal action from a set of candidates, according to some objective. We claim that one can often generate and solve an analogous yet simplified decision problem, which can be solved more efficiently. A wise simplification method can lead to the same action selection, or one for which the maximal loss in optimality can be guaranteed. Furthermore, such simplification is separated from the state inference and does not compromise its accuracy, as the selected action would finally be applied on the original state. First, we present the concept for general decision problems and provide a theoretical framework for a coherent formulation of the approach. We then practically apply these ideas to decision problems in the belief space, which can be simplified by considering a sparse approximation of their initial belief. The scalable belief sparsification algorithm we provide is able to yield solutions which are guaranteed to be consistent with the original problem. We demonstrate the benefits of the approach in the solution of a realistic active-SLAM problem and manage to significantly reduce computation time, with no loss in the quality of solution. This work is both fundamental and practical and holds numerous possible extensions.
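
A toy illustration of the core idea, selecting an action on a sparsified belief while applying it to the full belief, is sketched below; the trace-of-covariance objective and the keep-the-k-most-uncertain-dimensions rule are simplifications of ours and do not reproduce the paper's guaranteed-consistent algorithm.

```python
# Rank candidate actions on a sparsified Gaussian belief; execute the winner on the full belief.
import numpy as np

def sparsify(P, k):
    """Keep the k state dimensions with the largest marginal variance."""
    idx = np.argsort(np.diag(P))[-k:]
    return P[np.ix_(idx, idx)], idx

def expected_cost(P, H, R):
    """Posterior uncertainty (trace) after a linear-Gaussian observation with model H, R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return np.trace((np.eye(P.shape[0]) - K @ H) @ P)

def select_action(P_full, actions, k=3):
    """actions: dict name -> (H, R), the observation model induced by that action."""
    P_s, idx = sparsify(P_full, k)
    best = min(actions, key=lambda a: expected_cost(P_s, actions[a][0][:, idx], actions[a][1]))
    return best   # the chosen action is then applied to the original, full belief
```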

Hybridizing model-free and model-based in continuous RL, and a nice review of current research and benchmarks in robotics

Pinosky A, Abraham I, Broad A, Argall B, Murphey TD. Hybrid control for combining model-based and model-free reinforcement learning. The International Journal of Robotics Research. 2023;42(6):337-355 DOI: 10.1177/02783649221083331.

We develop an approach to improve the learning capabilities of robotic systems by combining learned predictive models with experience-based state-action policy mappings. Predictive models provide an understanding of the task and the dynamics, while experience-based (model-free) policy mappings encode favorable actions that override planned actions. We refer to our approach of systematically combining model-based and model-free learning methods as hybrid learning. Our approach efficiently learns motor skills and improves the performance of predictive models and experience-based policies. Moreover, our approach enables policies (both model-based and model-free) to be updated using any off-policy reinforcement learning method. We derive a deterministic method of hybrid learning by optimally switching between learning modalities. We adapt our method to a stochastic variation that relaxes some of the key assumptions in the original derivation. Our deterministic and stochastic variations are tested on a variety of robot control benchmark tasks in simulation as well as a hardware manipulation task. We extend our approach for use with imitation learning methods, where experience is provided through demonstrations, and we test the expanded capability with a real-world pick-and-place task. The results show that our method is capable of improving the performance and sample efficiency of learning motor skills in a variety of experimental domains.
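
The snippet below is only a schematic of the hybrid idea: let the experience-based (model-free) action override the model-based plan when a learned value estimate prefers it. The greedy comparison shown is a stand-in for the paper's optimal switching rule, and planner, policy, and q_value are placeholder callables.

```python
# Schematic hybrid step: compare the planner's action with the model-free policy's action
# under a learned value estimate and execute whichever scores higher.
def hybrid_step(state, planner, policy, q_value):
    a_plan = planner(state)          # action from the learned predictive model
    a_free = policy(state)           # action from the experience-based policy
    # the model-free action overrides the plan when it looks more valuable
    return a_free if q_value(state, a_free) > q_value(state, a_plan) else a_plan
```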

How plans influence sensors

McFassel G, Shell DA. Reactivity and statefulness: Action-based sensors, plans, and necessary state. The International Journal of Robotics Research. 2023;42(6):385-411 DOI: 10.1177/02783649221078874.

Typically to a roboticist, a plan is the outcome of other work, a synthesized object that realizes ends defined by some problem; plans qua plans are seldom treated as first-class objects of study. Plans designate functionality: a plan can be viewed as defining a robot's behavior throughout its execution. This informs and reveals many other aspects of the robot's design, including: necessary sensors and action choices, history, state, task structure, and how to define progress. Interrogating sets of plans helps in comprehending the ways in which differing executions influence the interrelationships between these various aspects. Revisiting Erdmann's theory of action-based sensors, a classical approach for characterizing fundamental information requirements, we show how plans (in their role of designating behavior) influence sensing requirements. Using an algorithm for enumerating plans, we examine how some plans for which no action-based sensor exists can be transformed into sets of sensors through the identification and handling of features that preclude the existence of action-based sensors. We are not aware of those obstructing features having been previously identified. Action-based sensors may be treated as standalone reactive plans; we relate them to the set of all possible plans through a lattice structure. This lattice reveals a boundary between plans with action-based sensors and those without. Some plans, specifically those that are not reactive plans and require some notion of internal state, can never have associated action-based sensors. Even so, action-based sensors can serve as a framework to explore and interpret how such plans make use of state.
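
The toy snippet below contrasts the two notions discussed in the abstract: a reactive plan (the form an action-based sensor can realize) maps the current observation directly to an action, whereas a stateful plan also consults internal memory; the observation and action labels are invented for illustration.

```python
# Reactive: the action depends only on what is currently sensed.
reactive_plan = {"wall_ahead": "turn_left", "clear": "go_forward"}

# Stateful: the same observation can demand different actions depending on remembered
# history, so no single observation->action map (action-based sensor) can realize it.
class StatefulPlan:
    def __init__(self):
        self.visited_goal_once = False   # internal state
    def act(self, obs):
        if obs == "at_goal":
            if not self.visited_goal_once:
                self.visited_goal_once = True
                return "pick_up"
            return "drop_off"
        return reactive_plan.get(obs, "go_forward")
```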

POMDPs in robotics: a more efficient forward propagation module for networks such as QMDP-Net, which addresses Partially Observable Markov Decision Processes (POMDPs) whose transition, observation, and reward functions are initially unknown

Collins N, Kurniawati H. Locally connected interrelated network: A forward propagation primitive, The International Journal of Robotics Research. 2023;42(6):371-384 DOI: 10.1177/02783649221093092.

End-to-end learning for planning is a promising approach for finding good robot strategies in situations where the state transition, observation, and reward functions are initially unknown. Many neural network architectures for this approach have shown positive results. Across these networks, seemingly small components have been used repeatedly in different architectures, which means improving the efficiency of these components has great potential to improve the overall performance of the network. This paper aims to improve one such component: the forward propagation module. In particular, we propose the Locally Connected Interrelated Network (LCI-Net) – a novel type of locally connected layer with unshared but interrelated weights – to improve the efficiency of learning stochastic transition models for planning and of propagating information via the learned transition models. LCI-Net is a small differentiable neural network module that can be plugged into various existing architectures. For evaluation purposes, we apply LCI-Net to VIN and QMDP-Net. VIN is an end-to-end neural network for solving Markov Decision Processes (MDPs) whose transition and reward functions are initially unknown, while QMDP-Net is its counterpart for the Partially Observable Markov Decision Process (POMDP) whose transition, observation, and reward functions are initially unknown. Simulation tests on benchmark problems involving 2D and 3D navigation and grasping indicate promising results: changing only the forward propagation module to LCI-Net improves VIN's and QMDP-Net's generalisation capability by more than 3× and 10×, respectively.
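
To show the building block that LCI-Net refines, here is a plain locally connected 2D layer in NumPy, i.e., a convolution-like operation in which every output location has its own (unshared) kernel; the interrelated weight coupling that is the paper's actual contribution is not reproduced here, and the sizes are arbitrary.

```python
# A plain locally connected 2D layer: like a convolution, but each output cell has its own kernel.
import numpy as np

def locally_connected_2d(x, W):
    """x: (H, W) input; W: (H-k+1, W-k+1, k, k), one k-by-k kernel per output cell."""
    out_h, out_w, k, _ = W.shape
    y = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y[i, j] = np.sum(W[i, j] * x[i:i + k, j:j + k])
    return y

# Example: a 6x6 "belief map" propagated with 3x3 location-specific kernels.
x = np.random.rand(6, 6)
W = np.random.rand(4, 4, 3, 3)
print(locally_connected_2d(x, W).shape)   # (4, 4)
```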

Modifications of Q-learning for better learning of robot navigation

Ee Soong Low, Pauline Ong, Cheng Yee Low, Rosli Omar, Modified Q-learning with distance metric and virtual target on path planning of mobile robot, Expert Systems with Applications, Volume 199, 2022, DOI: 10.1016/j.eswa.2022.117191.

Path planning is an essential element in mobile robot navigation. One of the popular path planners is Q-learning – a type of reinforcement learning that learns with little or no prior knowledge of the environment. Despite the successful implementation of Q-learning reported in numerous studies, its slow convergence, associated with the curse of dimensionality, may limit its performance in practice. To solve this problem, an Improved Q-learning (IQL) with three modifications is introduced in this study. First, a distance metric is added to Q-learning to guide the agent towards the target. Second, the Q function of Q-learning is modified to overcome dead-ends more effectively. Lastly, the virtual target concept is introduced in Q-learning to bypass dead-ends. Experimental results across twenty types of navigation maps show that the proposed strategies accelerate the learning speed of IQL in comparison with standard Q-learning. Moreover, a performance comparison with seven well-known path planners indicates its efficiency in terms of path smoothness, time taken, shortest distance, and total distance used.
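
A minimal grid-world Q-learning update with a distance-based shaping term, loosely in the spirit of the first modification, is sketched below; the exact IQL update and the virtual-target mechanism are not reproduced, and the grid size, rewards, and hyper-parameters are arbitrary.

```python
# Grid-world Q-learning with a simple distance-based reward shaping term.
import numpy as np

GRID, GOAL = 10, (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma = 0.1, 0.95

def dist(p, target=GOAL):
    return np.hypot(p[0] - target[0], p[1] - target[1])

def step(s, a_idx):
    dr, dc = ACTIONS[a_idx]
    s2 = (min(max(s[0] + dr, 0), GRID - 1), min(max(s[1] + dc, 0), GRID - 1))
    r = 100.0 if s2 == GOAL else -1.0
    r += dist(s) - dist(s2)            # shaping: reward progress toward the target
    # (IQL additionally switches to a "virtual target" to escape dead-ends)
    return s2, r

def q_update(s, a_idx, r, s2):
    Q[s][a_idx] += alpha * (r + gamma * Q[s2].max() - Q[s][a_idx])
```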

A nice summary of SLAM in robotics with LiDAR and cameras

Chghaf, M., Rodriguez, S. & Ouardi, A.E. Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: a Survey. J Intell Robot Syst 105, 2 (2022) DOI: 10.1007/s10846-022-01582-8.

Simultaneous Localization and Mapping (SLAM) has been widely studied in recent years for autonomous vehicles. SLAM achieves its purpose by constructing a map of the unknown environment while keeping track of the vehicle's location. A major challenge, which is paramount during the design of SLAM systems, lies in the efficient use of onboard sensors to perceive the environment. The most widely applied algorithms are camera-based SLAM and LiDAR-based SLAM. Recent research focuses on the fusion of camera-based and LiDAR-based frameworks, which shows promising results. In this paper, we present a study of commonly used sensors and the fundamental theories behind SLAM algorithms. The study then presents the hardware architectures used to process these algorithms and, where available, the performance obtained. Secondly, we highlight state-of-the-art methodologies in each modality and in the multi-modal framework. A brief comparison, followed by future challenges, is then outlined. Additionally, we provide insights into possible fusion approaches that can increase the robustness and accuracy of modern SLAM algorithms, hence allowing the hardware-software co-design of embedded systems that takes into account algorithmic complexity, embedded architectures, and real-time constraints.