Category Archives: Robotics

UWB for SLAM

H. A. G. C. Premachandra, R. Liu, C. Yuen and U.-X. Tan, UWB Radar SLAM: An Anchorless Approach in Vision Denied Indoor Environments, IEEE Robotics and Automation Letters, vol. 8, no. 9, pp. 5299-5306, Sept. 2023 DOI: 10.1109/LRA.2023.3293354.

LiDAR and cameras are frequently used as sensors for simultaneous localization and mapping (SLAM). However, these sensors are prone to failure under low visibility (e.g. smoke) or in places with reflective surfaces (e.g. mirrors). Electromagnetic waves, on the other hand, penetrate better as the wavelength increases and are therefore not affected by low visibility. Hence, this letter presents ultra-wideband (UWB) radar as an alternative to the existing sensors. UWB is generally used in anchor-tag SLAM systems: one or more anchors are installed in the environment and tags are attached to the robots. Although this method performs well under low visibility, modifying the existing infrastructure is not always feasible. UWB has also been used in peer-to-peer ranging collaborative SLAM systems; however, this requires more than a single robot and does not include mapping of such environments. Therefore, the approach presented in this letter depends solely on UWB transceivers mounted on board, with extended Kalman filter (EKF) SLAM used to solve the SLAM problem at the back-end. Experiments demonstrate that the proposed UWB radar SLAM is able to map natural point landmarks in an indoor environment while improving robot localization.
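
The back-end mentioned in the abstract is a standard EKF-SLAM over point landmarks. As a rough illustration (not the authors' code), the sketch below shows a single EKF correction step for one range-bearing measurement of an already-initialised landmark, assuming a 2D state vector [x, y, theta, l1x, l1y, ...]; all names and the measurement model are illustrative.

```python
import numpy as np

def ekf_slam_update(mu, Sigma, z, R, landmark_idx):
    """One EKF-SLAM correction step for a single range-bearing
    measurement z = [r, phi] of landmark number landmark_idx.
    mu    : state mean [x, y, theta, l1x, l1y, ...]
    Sigma : full state covariance
    R     : 2x2 measurement noise covariance
    """
    x, y, theta = mu[0], mu[1], mu[2]
    j = 3 + 2 * landmark_idx
    lx, ly = mu[j], mu[j + 1]

    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])

    # Jacobian of the measurement w.r.t. the robot pose and the observed landmark
    H = np.zeros((2, mu.size))
    H[:, 0:3] = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                          [dy / q,           -dx / q,         -1.0]])
    H[:, j:j + 2] = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                              [-dy / q,          dx / q]])

    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing

    mu_new = mu + K @ innovation
    Sigma_new = (np.eye(mu.size) - K @ H) @ Sigma
    return mu_new, Sigma_new
```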

Using “empowerment” to better select actions in RL when there are only sparse rewards

Dai, S., Xu, W., Hofmann, A. et al. An empowerment-based solution to robotic manipulation tasks with sparse rewards, Auton Robot 47, 617–633 (2023) DOI: 10.1007/s10514-023-10087-8.

In order to provide adaptive and user-friendly solutions to robotic manipulation, it is important that the agent can learn to accomplish tasks even when provided with only very sparse instruction signals. To address the issues reinforcement learning algorithms face when task rewards are sparse, this paper proposes an intrinsic motivation approach that can easily be integrated into any standard reinforcement learning algorithm and allows robotic manipulators to learn useful manipulation skills with only sparse extrinsic rewards. By integrating and balancing empowerment and curiosity, this approach shows superior performance compared to other state-of-the-art intrinsic exploration approaches in extensive empirical testing. When combined with other strategies for tackling the exploration challenge, e.g. curriculum learning, the approach further improves exploration efficiency and task success rate. Qualitative analysis also shows that, when combined with diversity-driven intrinsic motivations, this approach can help manipulators learn a set of diverse skills which could potentially be applied to other, more complicated manipulation tasks and accelerate their learning process.
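
A minimal sketch of the general idea of augmenting a sparse extrinsic reward with empowerment- and curiosity-style intrinsic bonuses. The estimators and the fixed weights are assumptions for illustration only; the paper describes its own way of computing and balancing the two terms.

```python
import numpy as np

def curiosity_bonus(forward_model, state, action, next_state):
    """Prediction error of a learned forward model, a common curiosity signal.
    forward_model is a hypothetical callable returning a predicted next state."""
    pred = forward_model(state, action)
    return float(np.linalg.norm(pred - next_state))

def shaped_reward(r_extrinsic, empowerment_est, curiosity_est,
                  beta_emp=0.1, beta_cur=0.1):
    """Sparse task reward plus intrinsic bonuses. empowerment_est would come
    from a learned estimate of the mutual information between the agent's
    actions and the resulting states; the weights here are purely illustrative."""
    return r_extrinsic + beta_emp * empowerment_est + beta_cur * curiosity_est
```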

Using deep RL (TRPO) to select the best points of interest in the environment for path planning

Jie Fan, Xudong Zhang, Yuan Zou, Hierarchical path planner for unknown space exploration using reinforcement learning-based intelligent frontier selection, Expert Systems with Applications, Volume 230, 2023 DOI: 10.1016/j.eswa.2023.120630.

Path planning in unknown environments is extremely useful for specific tasks such as the exploration of planets in outer space, search and rescue in disaster areas, home sweeping services, etc. However, existing frontier-based path planners suffer from insufficient exploration, while reinforcement learning (RL)-based ones are confronted with problems in efficient training and effective searching. To overcome these problems, this paper proposes a novel hierarchical path planner for unknown space exploration using RL-based intelligent frontier selection. Firstly, by decomposing the path planner into a three-layered architecture (perception layer, planning layer, and control layer) and using edge detection to find potential frontiers to track, the path search space is shrunk from the whole map to a handful of points of interest, which significantly saves computational resources in both training and execution. Secondly, an advanced RL algorithm, trust region policy optimization (TRPO), is used as a judge to select the best frontier for the robot to track, which ensures the optimality of the path planner with a shorter path length. The proposed method is validated in simulation and compared with both classic and state-of-the-art methods. Results show that the training process can be greatly accelerated compared with the traditional deep Q-network (DQN). Moreover, the proposed method achieves a 4.2%–14.3% improvement in exploration region rate and the highest exploration completeness.
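
As a rough illustration of the frontier-selection idea (not the authors' implementation), the sketch below finds frontier cells on an occupancy grid and lets a learned value function, e.g. one trained with TRPO, act as the "judge" that scores them; find_frontiers, select_frontier, and value_fn are hypothetical names.

```python
import numpy as np

def find_frontiers(occupancy_grid):
    """Find frontier cells on an occupancy grid: free cells (0) adjacent to
    unknown cells (-1). A simple stand-in for the edge-detection step in the
    perception layer; the paper's exact detector may differ."""
    frontiers = []
    h, w = occupancy_grid.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if occupancy_grid[i, j] != 0:      # only free cells can be frontiers
                continue
            neighbours = occupancy_grid[i - 1:i + 2, j - 1:j + 2]
            if (neighbours == -1).any():       # borders unknown space
                frontiers.append((i, j))
    return frontiers

def select_frontier(frontiers, robot_pose, value_fn):
    """Let a learned value function score candidate frontiers and pick the best."""
    scores = [value_fn(robot_pose, f) for f in frontiers]
    return frontiers[int(np.argmax(scores))]
```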

Monte Carlo Tree Search (MCTS) with hybrid discrete-continuous beliefs, applied to robotics

M. Barenboim, M. Shienman and V. Indelman, Monte Carlo Planning in Hybrid Belief POMDPs, IEEE Robotics and Automation Letters, vol. 8, no. 8, pp. 4410-4417, Aug. 2023 DOI: 10.1109/LRA.2023.3282773.

Real-world problems often require reasoning about hybrid beliefs, over both discrete and continuous random variables. Yet, such a setting has hardly been investigated in the context of planning. Moreover, existing online partially observable Markov decision processes (POMDPs) solvers do not support hybrid beliefs directly. In particular, these solvers do not address the added computational burden due to an increasing number of hypotheses with the planning horizon, which can grow exponentially. As part of this work, we present a novel algorithm, Hybrid Belief Monte Carlo Planning (HB-MCP) that utilizes the Monte Carlo Tree Search (MCTS) algorithm to solve a POMDP while maintaining a hybrid belief. We illustrate how the upper confidence bound (UCB) exploration bonus can be leveraged to guide the growth of hypotheses trees alongside the belief trees. We then evaluate our approach in highly aliased simulated environments where unresolved data association leads to multi-modal belief hypotheses.
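
The UCB exploration bonus mentioned in the abstract is the standard selection rule used inside MCTS. Below is a minimal sketch of UCB1-based child selection; the hybrid-belief and hypothesis-tree machinery specific to HB-MCP is not modelled here, and the names are illustrative.

```python
import math

def ucb_score(value_sum, visits, parent_visits, c=1.414):
    """UCB1 score used to pick which child (action) to expand next.
    Balances the empirical mean value (exploitation) against a bonus for
    rarely visited children (exploration)."""
    if visits == 0:
        return float("inf")  # always try unvisited actions first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (value_sum, visits) tuples for one parent node;
    returns the index of the child with the highest UCB score."""
    parent_visits = sum(v for _, v in children)
    scores = [ucb_score(val, vis, parent_visits) for val, vis in children]
    return scores.index(max(scores))
```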

A survey of guided RL for improving its application to robotics

J. Eßer, N. Bach, C. Jestel, O. Urbann and S. Kerner, Guided Reinforcement Learning: A Review and Evaluation for Efficient and Effective Real-World Robotics [Survey], IEEE Robotics & Automation Magazine, vol. 30, no. 2, pp. 67-85, June 2023 DOI: 10.1109/MRA.2022.3207664.

Recent successes aside, reinforcement learning (RL) still faces significant challenges in its application to the real-world robotics domain. Guiding the learning process with additional knowledge offers a potential solution, thus leveraging the strengths of data- and knowledge-driven approaches. However, this field of research encompasses several disciplines and hence would benefit from a structured overview.

In this article, we propose a concept of guided RL that provides a systematic approach toward accelerating the training process and improving performance for real-world robotics settings. We introduce a taxonomy that structures guided RL approaches and shows how different sources of knowledge can be integrated into the learning pipeline in a practical way. Based on this, we describe available approaches in this field and quantitatively evaluate their specific impact in terms of efficiency, effectiveness, and sim-to-real transfer within the robotics domain.

Comprehensive survey of the history and state of the art of active SLAM

J. A. Placed et al., A Survey on Active Simultaneous Localization and Mapping: State of the Art and New Frontiers, IEEE Transactions on Robotics, vol. 39, no. 3, pp. 1686-1705 DOI: 10.1109/TRO.2023.3248510.

Active simultaneous localization and mapping (SLAM) is the problem of planning and controlling the motion of a robot to build the most accurate and complete model of the surrounding environment. Since the first foundational work in active perception appeared, more than three decades ago, this field has received increasing attention across different scientific communities. This has brought about many different approaches and formulations, and makes a review of the current trends necessary and extremely valuable for both new and experienced researchers. In this article, we survey the state of the art in active SLAM and take an in-depth look at the open challenges that still require attention to meet the needs of modern applications. After providing a historical perspective, we present a unified problem formulation and review the well-established modular solution scheme, which decouples the problem into three stages that identify, select, and execute potential navigation actions. We then analyze alternative approaches, including belief-space planning and deep reinforcement learning techniques, and review related work on multirobot coordination. This article concludes with a discussion of new research directions, addressing reproducible research, active spatial perception, and practical applications, among other topics.

Review of High Definition (HD) maps

Zhibin Bao, Sabir Hossain, Haoxiang Lang, Xianke Lin, A review of high-definition map creation methods for autonomous driving, Engineering Applications of Artificial Intelligence, Volume 122, 2023 DOI: 10.1016/j.engappai.2023.106125.

Autonomous driving has been among the most popular and challenging topics of the past few years. Among all the modules involved, high-definition (HD) maps have drawn a lot of attention in recent years due to their precision and the information they contribute to localization. Since localization is essential for an automated vehicle navigating an unknown environment, HD maps have become one of the most critical components of autonomous driving. Organizations such as HERE, NVIDIA, and TomTom have created HD maps for different scenes and purposes, but these maps are not open source and are only available for internal research or to automotive companies. Even though researchers have proposed various methods to create HD maps from different types of sensor data, few papers review and summarize those methods, so new researchers lack a clear view of the current state of HD map creation. For these reasons, reviewing, classifying, comparing, and summarizing the state-of-the-art techniques for HD map creation is necessary. This paper reviews recent HD map creation methods that leverage both 2D and 3D map generation. The review introduces the concept of HD maps and their usefulness in autonomous driving, gives a detailed overview of HD map creation methods, and discusses the limitations of current methods to motivate future research. Additionally, a chronological overview of the most recent HD map creation methods is provided.

Improving safety in deep RL for autonomous driving

Eduardo Candela, Olivier Doustaly, Leandro Parada, Felix Feng, Yiannis Demiris, Panagiotis Angeloudis, Risk-aware controller for autonomous vehicles using model-based collision prediction and reinforcement learning, Artificial Intelligence, Volume 320, 2023 DOI: 10.1016/j.artint.2023.103923.

Autonomous Vehicles (AVs) have the potential to save millions of lives and increase the efficiency of transportation services. However, the successful deployment of AVs requires tackling multiple challenges related to modeling and certifying safety. State-of-the-art decision-making methods usually rely on end-to-end learning or imitation learning approaches, which still pose significant safety risks. Hence the necessity of risk-aware AVs that can better predict and handle dangerous situations. Furthermore, current approaches tend to lack explainability due to their reliance on end-to-end Deep Learning, where significant causal relationships are not guaranteed to be learned from data. This paper introduces a novel risk-aware framework for training AV agents using a bespoke collision prediction model and Reinforcement Learning (RL). The collision prediction model is based on Gaussian Processes and vehicle dynamics, and is used to generate the RL state vector. Using an explicit risk model increases the post-hoc explainability of the AV agent, which is vital for reaching and certifying the high safety levels required for AVs and other safety-sensitive applications. Experimental results obtained with a simulator and state-of-the-art RL algorithms show that the risk-aware RL framework decreases average collision rates by 15%, makes AVs more robust to sudden harsh braking situations, and achieves better performance in both safety and speed when compared to a standard rule-based method (the Intelligent Driver Model). Moreover, the proposed collision prediction model outperforms other models in the literature.

See also: https://doi.org/10.1016/j.artint.2023.103922
And also: https://doi.org/10.1177/02783649231169492
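
As a rough sketch of the idea in the entry above, i.e. feeding an explicit, model-based risk estimate into the RL state vector (not the paper's implementation), assuming a hypothetical Gaussian-process collision model with a predict method:

```python
import numpy as np

def build_risk_aware_state(ego_features, gp_collision_model, candidate_trajectory):
    """Augment the usual observation with an explicit collision-risk estimate.
    gp_collision_model is a stand-in for a Gaussian-process model of collision
    probability over a short horizon; its interface here is assumed."""
    risk_mean, risk_std = gp_collision_model.predict(candidate_trajectory)
    # Exposing both the predicted risk and its uncertainty keeps the agent's
    # input explainable: post-hoc analysis can point at the risk channel.
    return np.concatenate([ego_features, [risk_mean, risk_std]])
```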

Embedding physics knowledge into deep learning to improve its reliability

Lutter M, Peters J., Combining physics and deep learning to learn continuous-time dynamics models, The International Journal of Robotics Research. 2023;42(3):83-107 DOI: 10.1177/02783649231169492.

Deep learning has been widely used within learning algorithms for robotics. One disadvantage of deep networks is that they are black-box representations. Therefore, the learned approximations ignore the existing knowledge of physics or robotics. Especially for learning dynamics models, these black-box models are not desirable as the underlying principles are well understood and standard deep networks can learn dynamics that violate them. To learn dynamics models with deep networks that guarantee physically plausible dynamics, we introduce physics-inspired deep networks that combine first principles from physics with deep learning. We incorporate Lagrangian mechanics within the model learning such that all approximated models adhere to the laws of physics and conserve energy. Deep Lagrangian Networks (DeLaN) parametrize the system energy using two networks. The parameters are obtained by minimizing the squared residual of the Euler–Lagrange differential equation. Therefore, the resulting model does not require specific knowledge of the individual system, is interpretable, and can be used as a forward, inverse, and energy model. Previously, these properties were only obtained with system identification techniques that require knowledge of the kinematic structure. We apply DeLaN to learn dynamics models and use these models to control simulated and physical rigid-body systems. The results show that the proposed approach obtains dynamics models that can be applied to physical systems for real-time control. Compared to standard deep networks, the physics-inspired models learn better models and capture the underlying structure of the dynamics.
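
A minimal sketch of the Euler–Lagrange residual loss that DeLaN minimises. Here the mass matrix H(q), Coriolis term c(q, q̇), and gravity term g(q) are assumed to be returned by a precomputed callable; in a full implementation they are derived from the learned energy networks by automatic differentiation, and all names and interfaces below are illustrative.

```python
import numpy as np

def euler_lagrange_torque(H, c, g, qdd):
    """Inverse-dynamics torque predicted by a Lagrangian model:
        tau_hat = H(q) @ qdd + c(q, qd) + g(q)."""
    return H @ qdd + c + g

def delan_loss(batch, predict_Hcg):
    """Mean squared residual of the Euler-Lagrange equation over a batch of
    (q, qd, qdd, tau) samples. predict_Hcg is a hypothetical callable that
    evaluates the learned energy model at (q, qd)."""
    loss = 0.0
    for q, qd, qdd, tau in batch:
        H, c, g = predict_Hcg(q, qd)
        residual = tau - euler_lagrange_torque(H, c, g, qdd)
        loss += float(residual @ residual)
    return loss / len(batch)
```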

Using proprioceptive (internal) perceptions in robots, with RL

Agnese Augello, Salvatore Gaglio, Ignazio Infantino, Umberto Maniscalco, Giovanni Pilato, Filippo Vella, Roboception and adaptation in a cognitive robot, Robotics and Autonomous Systems, Volume 164, 2023 DOI: 10.1016/j.robot.2023.104400.

In robotics, perception is usually oriented toward understanding what is happening in the external world, while few works pay attention to what is occurring in the robot's body. In this work, we propose an artificial somatosensory system, embedded in a cognitive architecture, that enables a robot to perceive the sensations from its embodiment while executing a task. We call these perceptions roboceptions, and they let the robot act according to its own physical needs in addition to the task demands. Physical information is processed by the robot to behave in a balanced way, determining the most appropriate trade-off between the achievement of the task and its well-being. The experiments show the integration of information from the somatosensory system and the choices that lead to the accomplishment of the task.
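
A toy sketch of the task-versus-well-being trade-off described above; the weighting and the cost/value functions are purely illustrative stand-ins, not the paper's cognitive architecture.

```python
def choose_action(actions, task_value, wellbeing_cost, alpha=0.5):
    """Pick the action that best balances task progress against the robot's
    internal (somatosensory) cost, e.g. motor temperature or battery strain.
    task_value and wellbeing_cost are hypothetical callables; alpha weights
    how much the robot's own needs matter relative to the task."""
    def score(a):
        return (1 - alpha) * task_value(a) - alpha * wellbeing_cost(a)
    return max(actions, key=score)
```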