Category Archives: Mobile Robot Mapping

Predicting changes in the environment through time series for better robot navigation

Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen, Long-term navigation for autonomous robots based on spatio-temporal map prediction, Robotics and Autonomous Systems, Volume 179, 2024 DOI: 10.1016/j.robot.2024.104724.

The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. A time-series model is introduced to learn the changing patterns of different environmental structures or objects on multiple time scales from historical maps, and to forecast future maps for long-term navigation. Then, an improved global path planning algorithm is performed on the time-variant predicted cost maps. During navigation, the current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.
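The abstract does not give the exact form of the modified Bayesian filter, so the fusion step can only be sketched. Below is a minimal log-odds version in which the predicted map acts as a down-weighted prior, so prediction errors cannot override fresh sensor evidence; the function names and the weighting scheme are assumptions for illustration, not the paper's filter:

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def sigmoid(l):
    return 1.0 / (1.0 + math.exp(-l))

def fuse_cell(p_pred, p_obs, w_pred=0.7):
    """Fuse one grid cell's predicted occupancy with the current
    observation. The predicted prior is down-weighted (w_pred < 1)
    so that fresh sensor evidence dominates when the two disagree."""
    return sigmoid(w_pred * logodds(p_pred) + logodds(p_obs))
```

With agreeing inputs the fused estimate is sharpened; with conflicting inputs the observation wins, which is the point of discounting the prediction.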

Particle grid maps

G. Chen, W. Dong, P. Peng, J. Alonso-Mora and X. Zhu, Continuous Occupancy Mapping in Dynamic Environments Using Particles, IEEE Transactions on Robotics, vol. 40, pp. 64-84, 2024 DOI: 10.1109/TRO.2023.3323841.

Particle-based dynamic occupancy maps were proposed in recent years to model the obstacles in dynamic environments. Current particle-based maps describe the occupancy status in discrete grid form and suffer from the grid size problem, wherein a large grid size is unfavorable for motion planning while a small grid size lowers efficiency and causes gaps and inconsistencies. To tackle this problem, this article generalizes the particle-based map into continuous space and builds an efficient 3-D egocentric local map. A dual-structure subspace division paradigm, composed of a voxel subspace division and a novel pyramid-like subspace division, is proposed to propagate particles and update the map efficiently with the consideration of occlusions. The occupancy status at an arbitrary point in the map space can then be estimated with the weights of the particles. To reduce the noise in modeling static and dynamic obstacles simultaneously, an initial velocity estimation approach and a mixture model are utilized. Experimental results show that our map can effectively and efficiently model both dynamic obstacles and static obstacles. Compared to the state-of-the-art grid-form particle-based map, our map enables continuous occupancy estimation and substantially improves the mapping performance at different resolutions.
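The paper's estimator relies on its dual-structure subspace division for efficiency, but the underlying readout idea can be shown in a few lines: occupancy at an arbitrary 3-D point is a kernel-weighted sum of particle weights, so no grid resolution is fixed in advance. The Gaussian kernel and radius below are illustrative assumptions:

```python
import math

def occupancy_at(query, particles, radius=0.3):
    """Continuous occupancy estimate at an arbitrary 3-D query point:
    a Gaussian-kernel-weighted sum of nearby particle weights, where
    each particle is a tuple (x, y, z, weight)."""
    qx, qy, qz = query
    total = 0.0
    for x, y, z, w in particles:
        d2 = (x - qx) ** 2 + (y - qy) ** 2 + (z - qz) ** 2
        total += w * math.exp(-d2 / (2.0 * radius ** 2))
    return min(1.0, total)
```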

Review of High Definition (HD) maps

Zhibin Bao, Sabir Hossain, Haoxiang Lang, Xianke Lin, A review of high-definition map creation methods for autonomous driving, Engineering Applications of Artificial Intelligence, Volume 122, 2023 DOI: 10.1016/j.engappai.2023.106125.

Autonomous driving has been among the most popular and challenging topics in the past few years. Among all modules in autonomous driving, High-definition (HD) map has drawn lots of attention in recent years due to its high precision and informative level in localization. Since localization is a significant module for automated vehicles to navigate an unknown environment, it has immediately become one of the most critical components of autonomous driving. Big organizations like HERE, NVIDIA, and TomTom have created HD maps for different scenes and purposes for autonomous driving. However, such HD maps are not open-source and are only available for internal research or automotive companies. Even though researchers have proposed various methods to create HD maps using different types of sensor data, there are few papers that review and summarize those methods. New researchers do not have a clear insight into the current state of HD map creation methods to work on their HD map research. Due to the reason above, reviewing, classifying, comparing, and summarizing the state-of-the-art techniques for HD map creation is necessary. This paper reviews recent HD map creation methods that leverage both 2D and 3D map generation. This review introduces the concept of HD maps and their usefulness in autonomous driving and gives a detailed overview of HD map creation methods. We will also discuss the limitations of the current HD map creation methods to motivate future research. Additionally, a chronological overview is created with the most recent HD map creation methods in this paper.

Mapping unseen rooms by deducing them from known environment structure

Matteo Luperto, Federico Amadelli, Moreno Di Berardino, Francesco Amigoni, Mapping beyond what you can see: Predicting the layout of rooms behind closed doors, Robotics and Autonomous Systems, Volume 159, 2023 DOI: 10.1016/j.robot.2022.104282.

The availability of maps of indoor environments is often fundamental for autonomous mobile robots to efficiently operate in industrial, office, and domestic applications. When robots build such maps, some areas of interest could be inaccessible, for instance, due to closed doors. As a consequence, these areas are not represented in the maps, possibly causing limitations in robot localization and navigation. In this paper, we provide a method that completes 2D grid maps by adding the predicted layout of the rooms behind closed doors. The main idea of our approach is to exploit the underlying geometrical structure of indoor environments to estimate the shape of unobserved rooms. Results show that our method completes maps accurately even when large portions of the environment cannot be accessed by the robot during map building. We experimentally validate the quality of the completed maps by using them to perform path planning tasks.
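The paper's estimator is far more sophisticated, but the core idea of borrowing a building's regularities to hypothesize an unseen room can be caricatured as placing a rectangle behind the closed door with extents taken from rooms already observed. Everything below (the +y-facing door convention, the median heuristic, the names) is an illustrative assumption:

```python
from statistics import median

def predict_hidden_room(door_x, door_y, door_width, seen_widths, seen_depths):
    """Hypothesize the axis-aligned rectangle (x0, y0, x1, y1) of an
    unseen room behind a closed door lying on a wall that faces +y,
    borrowing the median width and depth of rooms observed so far."""
    width = max(door_width, median(seen_widths))
    depth = median(seen_depths)
    x0 = door_x - width / 2.0
    return (x0, door_y, x0 + width, door_y + depth)
```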

Using results from belief-based planning for Bayesian inference in robotics

Farhi, E.I., Indelman, V., Bayesian incremental inference update by re-using calculations from belief space planning: a new paradigm, Autonomous Robots 46, 783–816 (2022). DOI: 10.1007/s10514-022-10045-w.

Inference and decision making under uncertainty are key processes in every autonomous system and numerous robotic problems. In recent years, the similarities between inference and decision making have triggered much work, from developing unified computational frameworks to pondering the duality between the two. In spite of these efforts, inference and control, as well as inference and belief space planning (BSP), are still treated as two separate processes. In this paper we propose a paradigm shift, a novel approach which deviates from conventional Bayesian inference and utilizes the similarities between inference and BSP. We make the key observation that inference can be efficiently updated using predictions made during the decision making stage, even in light of inconsistent data association between the two. We developed a two-staged process that implements our novel approach and updates inference using calculations from the precursory planning phase. Using autonomous navigation in an unknown environment along with iSAM2 efficient methodologies as a test case, we benchmarked our novel approach against standard Bayesian inference, both with synthetic and real-world data (KITTI dataset). Results indicate that our approach not only improves running time by at least a factor of two while providing the same estimation accuracy, but also alleviates the computational burden of state dimensionality and loop closures.

Reconstructing indoor map layouts from geometrical data

Matteo Luperto, Francesco Amigoni, Reconstruction and prediction of the layout of indoor environments from two-dimensional metric maps, Engineering Applications of Artificial Intelligence, Volume 113, 2022 DOI: 10.1016/j.engappai.2022.104910.

Metric maps, like occupancy grids, are one of the most common ways to represent indoor environments in autonomous mobile robotics. Although they are effective for navigation and localization, metric maps contain little knowledge about the structure of the buildings they represent. In this paper, we propose a method that identifies the structure of indoor environments from 2D metric maps by retrieving their layout, namely an abstract geometrical representation that models walls as line segments and rooms as polygons. The method works by finding regularities within a building, abstracting from the possibly noisy information of the metric map, and uses such knowledge to reconstruct the layout of the observed part and to predict a possible layout of the partially observed portion of the building. Thus, unlike other state-of-the-art methods, our method can be applied both to fully observed environments and, most significantly, to partially observed ones. Experimental results show that our approach performs effectively and robustly on different types of input metric maps and that the predicted layout becomes more accurate as the input metric map becomes more complete. The layout returned by our method can be exploited in several tasks, such as semantic mapping, place categorization, path planning, human–robot communication, and task allocation.
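One regularity such layout methods can exploit is that most walls in a building share only a few orientations. A minimal sketch of that regularity-finding step (not the paper's algorithm) histograms wall-segment angles weighted by length and reads off the dominant direction:

```python
import math

def dominant_wall_direction(segments, n_bins=180):
    """Return the dominant wall orientation in whole degrees (mod 180)
    from a list of segments (x1, y1, x2, y2), accumulating each
    segment's length into an orientation histogram."""
    bins = [0.0] * n_bins
    for x1, y1, x2, y2 in segments:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        bins[int(angle) % n_bins] += math.hypot(x2 - x1, y2 - y1)
    return bins.index(max(bins))
```

Segments aligned with the building's main axis dominate the histogram even in the presence of a few oblique, noisy segments.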

Leveraging embodiment: finding an optimal viewpoint in the robot environment for improving scene description

Tan, Sinan, Guo, Di, Liu, Huaping, Zhang, Xinyu, Sun, Fuchun, Embodied scene description, Autonomous Robots 46(1). DOI: 10.1007/s10514-021-10014-9.

Embodiment is an important characteristic for all intelligent agents, yet existing scene description tasks mainly focus on analyzing images passively, separating the semantic understanding of the scenario from the interaction between the agent and the environment. In this work, we propose Embodied Scene Description, which exploits the embodiment ability of the agent to find an optimal viewpoint in its environment for scene description tasks. A learning framework with the paradigms of imitation learning and reinforcement learning is established to teach the intelligent agent to generate corresponding sensorimotor activities. The proposed framework is tested on both the AI2Thor dataset and a real-world robotic platform for different scene description tasks, demonstrating the effectiveness and scalability of the developed method. Also, a mobile application is developed, which can assist visually impaired people in better understanding their surroundings.

A grammar for symbolic robot maps that allows for mapping unknown spaces

B. Talbot, F. Dayoub, P. Corke and G. Wyeth, Robot Navigation in Unseen Spaces Using an Abstract Map, IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 4, pp. 791-805, Dec. 2021 DOI: 10.1109/TCDS.2020.2993855.

Human navigation in built environments depends on symbolic spatial information which has unrealized potential to enhance robot navigation capabilities. Information sources, such as labels, signs, maps, planners, spoken directions, and navigational gestures communicate a wealth of spatial information to the navigators of built environments; a wealth of information that robots typically ignore. We present a robot navigation system that uses the same symbolic spatial information employed by humans to purposefully navigate in unseen built environments with a level of performance comparable to humans. The navigation system uses a novel data structure called the abstract map to imagine malleable spatial models for unseen spaces from spatial symbols. Sensorimotor perceptions from a robot are then employed to provide purposeful navigation to symbolic goal locations in the unseen environment. We show how a dynamic system can be used to create malleable spatial models for the abstract map, and provide an open-source implementation to encourage future work in the area of symbolic navigation. The symbolic navigation performance of humans and a robot is evaluated in a real-world built environment. This article concludes with a qualitative analysis of human navigation strategies, providing further insights into how the symbolic navigation capabilities of robots in unseen built environments can be improved in the future.

Grid maps extended with confidence information

Ali-akbar Agha-mohammadi, Eric Heiden, Karol Hausman, Confidence-rich grid mapping. The International Journal of Robotics Research, 38(12–13), 1352–1374. DOI: 10.1177/0278364919839762.

Representing the environment is a fundamental task in enabling robots to act autonomously in unknown environments. In this work, we present confidence-rich mapping (CRM), a new algorithm for spatial grid-based mapping of the 3D environment. CRM augments the occupancy level at each voxel by its confidence value. By explicitly storing and evolving confidence values using the CRM filter, CRM extends traditional grid mapping in three ways: first, it partially maintains the probabilistic dependence among voxels; second, it relaxes the need for hand-engineering an inverse sensor model and proposes the concept of sensor cause model that can be derived in a principled manner from the forward sensor model; third, and most importantly, it provides consistent confidence values over the occupancy estimation that can be reliably used in collision risk evaluation and motion planning. CRM runs online and enables mapping environments where voxels might be partially occupied. We demonstrate the performance of the method on various datasets and environments in simulation and on physical systems. We show in real-world experiments that, in addition to achieving maps that are more accurate than traditional methods, the proposed filtering scheme demonstrates a much higher level of consistency between its error and the reported confidence, hence, enabling a more reliable collision risk evaluation for motion planning.
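The CRM filter itself is derived from the forward sensor model, but the key distinction it draws between occupancy and confidence can be illustrated with a Beta-distribution voxel. This is a deliberate simplification, not the paper's filter: the mean is the occupancy estimate, and the variance shrinks as evidence accumulates, playing the role of a growing confidence:

```python
def beta_voxel_update(alpha, beta, hit):
    """Update one voxel's Beta(alpha, beta) occupancy belief with a
    hit/miss observation. Returns the new pseudo-counts, the mean
    occupancy estimate, and the variance, which shrinks as more
    measurements back the estimate."""
    if hit:
        alpha += 1.0
    else:
        beta += 1.0
    n = alpha + beta
    occupancy = alpha / n
    variance = alpha * beta / (n * n * (n + 1.0))
    return alpha, beta, occupancy, variance
```

Two voxels can share the same mean occupancy while differing sharply in variance, which is exactly the extra signal a planner can use for collision risk evaluation.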

Interesting account of robots that must do mapping and other modern tasks with only sparse, low-cost sensing

Ma, F., Carlone, L., Ayaz, U., & Karaman, S., Sparse depth sensing for resource-constrained robots. The International Journal of Robotics Research, 38(8), 935. DOI: 10.1177/0278364919850296.

We consider the case in which a robot has to navigate in an unknown environment, but does not have enough on-board power or payload to carry a traditional depth sensor (e.g., a 3D lidar) and thus can only acquire a few (point-wise) depth measurements. We address the following question: is it possible to reconstruct the geometry of an unknown environment using sparse and incomplete depth measurements? Reconstruction from incomplete data is not possible in general, but when the robot operates in man-made environments, the depth exhibits some regularity (e.g., many planar surfaces with only a few edges); we leverage this regularity to infer depth from a small number of measurements. Our first contribution is a formulation of the depth reconstruction problem that bridges robot perception with the compressive sensing literature in signal processing. The second contribution includes a set of formal results that ascertain the exactness and stability of the depth reconstruction in 2D and 3D problems, and completely characterize the geometry of the profiles that we can reconstruct. Our third contribution is a set of practical algorithms for depth reconstruction: our formulation directly translates into algorithms for depth estimation based on convex programming. In real-world problems, these convex programs are very large and general-purpose solvers are relatively slow. For this reason, we discuss ad-hoc solvers that enable fast depth reconstruction in real problems. The last contribution is an extensive experimental evaluation in 2D and 3D problems, including Monte Carlo runs on simulated instances and testing on multiple real datasets. Empirical results confirm that the proposed approach ensures accurate depth reconstruction, outperforms interpolation-based strategies, and performs well even when the assumption of a structured environment is violated.
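The paper's reconstruction is a convex program; as a pure-Python stand-in, the 1-D intuition behind why regularity makes sparse reconstruction possible can be shown with linear interpolation: a piecewise-linear depth profile is fully determined by samples taken at its corners. The function below is an illustration under that assumption, not the paper's solver:

```python
def reconstruct_depth(samples, n):
    """Reconstruct a length-n 1-D depth profile from sparse (index,
    depth) measurements by linear interpolation between consecutive
    samples. For a piecewise-linear profile the result is exact
    whenever the sampled indices include every corner of the profile."""
    samples = sorted(samples)
    out = []
    j = 0
    for i in range(n):
        # advance to the last sample at or before index i
        while j + 1 < len(samples) and samples[j + 1][0] <= i:
            j += 1
        i0, d0 = samples[j]
        if j + 1 < len(samples) and i > i0:
            i1, d1 = samples[j + 1]
            out.append(d0 + (i - i0) * (d1 - d0) / (i1 - i0))
        else:
            out.append(float(d0))  # clamp outside the sampled span
    return out
```

Three measurements suffice to recover a ten-point ramp-then-flat profile exactly, because the samples sit on its two endpoints and its single corner.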