Category Archives: Robot Motion Planning

Mapless (egocentric) navigation with hierarchical RL, including a good survey of current RL approaches to the task

Yan Gao, Feiqiang Lin, Boliang Cai, Jing Wu, Changyun Wei, Raphael Grech, Ze Ji, Mapless navigation via Hierarchical Reinforcement Learning with memory-decaying novelty, Robotics and Autonomous Systems, Volume 182, 2024, DOI: 10.1016/j.robot.2024.104815.

Hierarchical Reinforcement Learning (HRL) has shown superior performance for mapless navigation tasks. However, it remains limited in unstructured environments that might contain structures like long corridors and dead corners, which can lead to local minima. This is because most HRL-based mapless navigation methods employ a simplified reward setting and exploration strategy. In this work, we propose a novel reward function for training the high-level (HL) policy, which contains two components: an extrinsic reward and an intrinsic reward. The extrinsic reward encourages the robot to move towards the target location, while the intrinsic reward is computed from novelty, episodic memory, and memory decay, enabling the agent to explore spontaneously. We also design a novel neural network structure that incorporates an LSTM network to augment the agent with memory and reasoning capabilities. We test our method in unknown environments and in specific scenarios prone to the local minimum problem to evaluate navigation performance and the ability to resolve local minima. The results show that our method significantly increases the success rate compared to advanced RL-based methods, achieving a maximum improvement of nearly 28%. Our method effectively mitigates the local minimum issue, especially in cases where the baselines fail completely. Additionally, numerous ablation studies consistently confirm the effectiveness of our proposed reward function and neural network structure.
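For intuition, the reward shaping described above can be sketched in a few lines: an intrinsic bonus computed from a decaying episodic visit memory, added to an extrinsic goal-progress term. The grid discretization, decay factor, and weights below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class MemoryDecayingNovelty:
    """Toy intrinsic-reward module: novelty from a decaying episodic visit memory.
    Discretization, decay, and scaling are illustrative, not the authors' values."""

    def __init__(self, cell_size=0.5, decay=0.99, bonus_scale=1.0):
        self.cell_size = cell_size      # spatial discretization of the episodic memory
        self.decay = decay              # per-step decay so old visits fade from memory
        self.bonus_scale = bonus_scale
        self.visits = {}                # cell -> decayed visit count

    def intrinsic_reward(self, position):
        cell = tuple(np.floor(np.asarray(position) / self.cell_size).astype(int))
        for k in self.visits:           # decay all stored counts
            self.visits[k] *= self.decay
        count = self.visits.get(cell, 0.0)
        self.visits[cell] = count + 1.0
        return self.bonus_scale / np.sqrt(count + 1.0)  # rarely visited -> larger bonus


def high_level_reward(pos, prev_pos, goal, novelty, w_int=0.1):
    """Extrinsic goal-progress term plus weighted intrinsic novelty bonus."""
    pos, prev_pos, goal = map(np.asarray, (pos, prev_pos, goal))
    extrinsic = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    return extrinsic + w_int * novelty.intrinsic_reward(pos)
```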

Robot exploration through decision-making + Gaussian processes

Stephens, A., Budd, M., Staniaszek, M. et al. Planning under uncertainty for safe robot exploration using Gaussian process prediction, Auton Robot 48, 18 (2024) DOI: 10.1007/s10514-024-10172-6.

The exploration of new environments is a crucial challenge for mobile robots. This task becomes even more complex with the added requirement of ensuring safety. Here, safety refers to the robot staying in regions where the values of certain environmental conditions (such as terrain steepness or radiation levels) are within a predefined threshold. We consider two types of safe exploration problems. First, the robot has a map of its workspace, but the values of the environmental features relevant to safety are unknown beforehand and must be explored. Second, both the map and the environmental features are unknown, and the robot must build a map whilst remaining safe. Our proposed framework uses a Gaussian process to predict the value of the environmental features in unvisited regions. We then build a Markov decision process that integrates the Gaussian process predictions with the transition probabilities of the environmental model. The Markov decision process is then incorporated into an exploration algorithm that decides which new region of the environment to explore based on information value, predicted safety, and distance from the current position of the robot. We empirically evaluate the effectiveness of our framework through simulations and its application on a physical robot in an underground environment.
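A minimal sketch of the underlying idea, assuming scikit-learn's GaussianProcessRegressor as the predictor: fit a GP on measurements taken at visited locations, then score candidate regions by information value (GP uncertainty), predicted safety against a threshold, and distance to the robot. The kernel, safety margin, and scoring weights are assumptions for illustration; the paper additionally folds these predictions into a Markov decision process, which is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP on measurements of the safety-relevant feature at visited locations.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-3))
visited_xy = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
measured = np.array([0.1, 0.3, 0.4])      # e.g. radiation level or terrain steepness
gp.fit(visited_xy, measured)

def score_candidate(xy, robot_xy, threshold=1.0, w_info=1.0, w_dist=0.2):
    """Score a candidate region by information value, predicted safety, and distance."""
    mean, std = gp.predict(np.atleast_2d(xy), return_std=True)
    if mean[0] + 2.0 * std[0] >= threshold:   # conservative: mean plus uncertainty margin
        return -np.inf                        # predicted unsafe, never selected
    info = std[0]                             # high GP uncertainty ~ high information value
    dist = np.linalg.norm(np.asarray(xy) - np.asarray(robot_xy))
    return w_info * info - w_dist * dist

candidates = [(3.0, 1.0), (0.5, 2.0)]
best = max(candidates, key=lambda c: score_candidate(c, robot_xy=(2.0, 1.0)))
```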

Predicting changes in the environment through time series for better robot navigation

Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen, Long-term navigation for autonomous robots based on spatio-temporal map prediction, Robotics and Autonomous Systems, Volume 179, 2024 DOI: 10.1016/j.robot.2024.104724.

The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. A time-series model is introduced to learn, on multiple time scales, the changing patterns of different environmental structures or objects from historical maps and to forecast future maps for long-term navigation. Then, an improved global path planning algorithm is performed on the time-variant predicted cost maps. During navigation, the current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.
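The fusion step between the predicted map and current observations can be pictured as a log-odds update. The sketch below is a minimal stand-in for the paper's modified Bayesian filter; the `obs_mask` and `trust_obs` weighting are illustrative assumptions.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fuse_predicted_and_observed(p_pred, p_obs, obs_mask, trust_obs=0.8):
    """Fuse a predicted occupancy map with the current observation in log-odds space.
    p_pred and p_obs are occupancy probabilities strictly inside (0, 1); obs_mask marks
    cells covered by the current sensor reading; trust_obs down-weights the observation
    relative to the prediction (an illustrative stand-in for the paper's filter)."""
    l = logit(p_pred)
    l = np.where(obs_mask, l + trust_obs * logit(p_obs), l)
    return 1.0 / (1.0 + np.exp(-l))   # back to probabilities; store for future prediction
```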

A review of state-of-the-art path planning methods applied to autonomous driving

Mohamed Reda, Ahmed Onsy, Amira Y. Haikal, Ali Ghanbari, Path planning algorithms in the autonomous driving system: A comprehensive review, Robotics and Autonomous Systems, Volume 174, 2024 DOI: 10.1016/j.robot.2024.104630.

This comprehensive review focuses on the Autonomous Driving System (ADS), which aims to reduce the human errors that cause about 95% of car accidents. The ADS consists of six stages: sensors, perception, localization, assessment, path planning, and control. We explain the main state-of-the-art techniques used in each stage, analyzing 275 papers, with 162 specifically on path planning due to its complexity, NP-hard optimization nature, and pivotal role in ADS. This paper categorizes path planning techniques into three primary groups: traditional (graph-based, sampling-based, gradient-based, optimization-based, interpolation curve algorithms), machine and deep learning, and meta-heuristic optimization, detailing their advantages and drawbacks. Findings show that meta-heuristic optimization methods, representing 23% of our study, are preferred for being general problem solvers capable of handling complex problems. In addition, they have faster convergence and reduced risk of local minima. Machine and deep learning techniques, accounting for 25%, are favored for their learning capabilities and fast responses to known scenarios. The trend towards hybrid algorithms (27%) combines various methods, merging their individual benefits and offsetting their respective drawbacks. Moreover, adaptive parameter tuning is crucial for enhancing efficiency and applicability and for balancing the search capability. This review sheds light on the future of path planning in autonomous driving systems, helping to tackle current challenges and unlock the full capabilities of autonomous vehicles.

Visibility graphs for robot path planning are still in use!

Junlin Ou, Seong Hyeon Hong, Ge Song, Yi Wang, Hybrid path planning based on adaptive visibility graph initialization and edge computing for mobile robots, Engineering Applications of Artificial Intelligence, Volume 126, Part D, 2023 DOI: 10.1016/j.engappai.2023.107110.

This paper presents a new initialization method that combines adaptive visibility graphs and the A* algorithm to improve the exploration, accuracy, and computing efficiency of hybrid path planning for mobile robots. First, segments/links in the full visibility graphs are removed randomly in an iterative and adaptive manner, yielding adaptive visibility graphs. Then the A* algorithm is applied to find the shortest paths in these adaptive visibility graphs. Next, high-quality paths featuring low fitness values are chosen to initialize the subsequent heuristic optimization in hybrid path planning. Specifically, in the present study, the genetic algorithm (GA) is implemented on a CPU/GPU edge computing device (Jetson AGX Xavier) to exploit its massively parallel processing threads, and a strategy for judicious CPU/GPU resource utilization is also developed. Numerical experiments are conducted to determine proper hyperparameters and configure the GA with balanced performance. Various optimal paths with differential consideration of practical factors for robot path planning are obtained by the proposed method. Compared to the other benchmark methods, ours significantly improves the diversity of initial paths and exploration, optimization accuracy, and computing speed (within 5 s, with most runs under 2 s). Furthermore, real-time experiments are carried out to demonstrate the effectiveness and application of the proposed algorithm on mobile robots.
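A rough sketch of the adaptive visibility graph initialization, assuming the graph is held as a weighted networkx Graph whose nodes carry 2D positions: edges are thinned at random and A* is run on each thinned copy, yielding diverse seed paths for the subsequent GA. The thinning fraction and data layout are assumptions, and the GPU-parallel GA itself is omitted.

```python
import random
import networkx as nx

def diverse_initial_paths(vis_graph, start, goal, n_paths=10, drop_frac=0.2, seed=0):
    """Seed paths for the GA from randomly thinned copies of a full visibility graph.
    vis_graph is a weighted nx.Graph whose nodes carry a 'pos' = (x, y) attribute;
    the thinning fraction and layout are illustrative assumptions."""
    rng = random.Random(seed)

    def euclid(u, v):
        (x1, y1), (x2, y2) = vis_graph.nodes[u]['pos'], vis_graph.nodes[v]['pos']
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    paths = []
    for _ in range(n_paths):
        g = vis_graph.copy()
        removable = [e for e in g.edges if start not in e and goal not in e]
        g.remove_edges_from(rng.sample(removable, int(drop_frac * len(removable))))
        try:
            paths.append(nx.astar_path(g, start, goal, heuristic=euclid, weight='weight'))
        except nx.NetworkXNoPath:
            continue   # this thinned copy disconnected start and goal; skip it
    return paths
```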

Pure pursuit with linear velocity regulation

Macenski, S., Singh, S., Martín, F. et al. Regulated pure pursuit for robot path tracking, Auton Robot 47, 685–694 (2023) DOI: 10.1007/s10514-023-10097-6.

The accelerated deployment of service robots has spawned a number of algorithm variations to better handle real-world conditions. Many local trajectory planning techniques have been deployed on practical robot systems successfully. While most formulations of the Dynamic Window Approach and Model Predictive Control can progress along paths and optimize for additional criteria, the use of pure path tracking algorithms is still commonplace. Decades later, Pure Pursuit and its variants continue to be one of the most commonly utilized classes of local trajectory planners. However, few Pure Pursuit variants have been proposed with a scheme for variable linear velocities; they either assume a constant velocity or fail to address the point at all. This paper presents a variant of Pure Pursuit designed with additional heuristics to regulate linear velocities, built atop the existing Adaptive variant. The Regulated Pure Pursuit algorithm makes incremental improvements on the state of the art by adjusting linear velocities with particular focus on safety in constrained and partially observable spaces commonly negotiated by deployed robots. We present experiments with the Regulated Pure Pursuit algorithm on industrial-grade service robots. We also provide a high-quality reference implementation that is freely available in the ROS 2 Nav2 framework at https://github.com/ros-planning/navigation2 for fast evaluation.
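The regulation idea can be sketched as scaling the commanded linear velocity by path curvature and obstacle proximity on top of classic pure pursuit. The snippet below follows that spirit only; the thresholds and scaling rules are illustrative assumptions rather than the Nav2 parameters or code.

```python
import math

def regulated_velocity(v_desired, lookahead_xy, min_radius=0.9,
                       obstacle_dist=None, prox_dist=0.6, min_scale=0.25):
    """Sketch of curvature- and proximity-regulated pure pursuit.
    lookahead_xy is the lookahead point in the robot frame; all constants
    are illustrative assumptions, not the Nav2 defaults."""
    x, y = lookahead_xy
    l2 = x * x + y * y
    curvature = 2.0 * y / l2 if l2 > 1e-9 else 0.0       # classic pure pursuit curvature
    radius = math.inf if abs(curvature) < 1e-9 else 1.0 / abs(curvature)

    curv_scale = min(1.0, radius / min_radius)           # slow down on tight turns
    prox_scale = 1.0
    if obstacle_dist is not None and obstacle_dist < prox_dist:
        prox_scale = max(min_scale, obstacle_dist / prox_dist)   # slow down near obstacles

    v = v_desired * min(curv_scale, prox_scale)
    return v, curvature * v                              # (linear, angular) command
```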

Using Deep RL (TRPO) to select the best interest points in the environment for path planning

Jie Fan, Xudong Zhang, Yuan Zou, Hierarchical path planner for unknown space exploration using reinforcement learning-based intelligent frontier selection, Expert Systems with Applications, Volume 230, 2023 DOI: 10.1016/j.eswa.2023.120630.

Path planning in unknown environments is extremely useful for some specific tasks, such as exploration of outer space planets, search and rescue in disaster areas, home sweeping services, etc. However, existing frontier-based path planners suffer from insufficient exploration, while reinforcement learning (RL)-based ones are confronted with problems in efficient training and effective searching. To overcome the above problems, this paper proposes a novel hierarchical path planner for unknown space exploration using RL-based intelligent frontier selection. Firstly, by decomposing the path planner into a three-layered architecture (comprising the perception layer, planning layer, and control layer) and using edge detection to find potential frontiers to track, the path search space is shrunk from the whole map to a handful of points of interest, which significantly saves computational resources in both training and execution. Secondly, one of the advanced RL algorithms, trust region policy optimization (TRPO), is used as a judge to select the best frontier for the robot to track, which ensures the optimality of the path planner with a shorter path length. The proposed method is validated through simulation and compared with both classic and state-of-the-art methods. Results show that the training process can be greatly accelerated compared with the traditional deep Q-network (DQN). Moreover, the proposed method achieves a 4.2%–14.3% improvement in exploration region rate and the highest exploration completeness.
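A minimal sketch of the frontier-selection layer, assuming a numpy occupancy grid with free/unknown labels: frontier cells are free cells bordering unknown space, and the learned (e.g. TRPO-trained) scorer is represented by an abstract `policy_score` callable with a nearest-frontier fallback. Everything named here is an illustrative stand-in, not the paper's code.

```python
import numpy as np

FREE, UNKNOWN = 0, -1   # occupancy-grid labels (an assumption about the map encoding)

def detect_frontiers(grid):
    """Frontier cells: free cells with at least one unknown 4-neighbour,
    a lightweight stand-in for the paper's edge-detection step."""
    free = grid == FREE
    unknown = grid == UNKNOWN
    neigh_unknown = np.zeros_like(unknown)
    neigh_unknown[1:, :] |= unknown[:-1, :]
    neigh_unknown[:-1, :] |= unknown[1:, :]
    neigh_unknown[:, 1:] |= unknown[:, :-1]
    neigh_unknown[:, :-1] |= unknown[:, 1:]
    return np.argwhere(free & neigh_unknown)   # (row, col) per frontier cell

def select_frontier(frontiers, robot_rc, policy_score=None):
    """policy_score stands in for the learned frontier critic;
    without it, fall back to the nearest frontier."""
    if policy_score is not None:
        return max(frontiers, key=policy_score)
    dists = np.linalg.norm(frontiers - np.asarray(robot_rc), axis=1)
    return frontiers[np.argmin(dists)]
```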

Real-time approach to POMDPs for robot navigation

P. Cai and D. Hsu, Closing the Planning–Learning Loop With Application to Autonomous Driving, IEEE Transactions on Robotics, vol. 39, no. 2, pp. 998-1011, April 2023 DOI: 10.1109/TRO.2022.3210767.

Real-time planning under uncertainty is critical for robots operating in complex dynamic environments. Consider, for example, an autonomous robot vehicle driving in dense, unregulated urban traffic of cars, motorcycles, buses, etc. The robot vehicle has to plan in both the short and the long term in order to interact with many traffic participants of uncertain intentions and drive effectively. Planning explicitly over a long time horizon, however, incurs prohibitive computational cost and is impractical under real-time constraints. To achieve real-time performance for large-scale planning, this work introduces a new algorithm, Learning from Tree Search for Driving (LeTS-Drive), which integrates planning and learning in a closed loop, and applies it to autonomous driving in crowded urban traffic in simulation. Specifically, LeTS-Drive learns a policy and its value function from data provided by an online planner, which searches a sparsely sampled belief tree; the online planner in turn uses the learned policy and value function as heuristics to scale up its run-time performance for real-time robot control. These two steps are repeated to form a closed loop so that the planner and the learner inform each other and improve in synchrony. The algorithm learns on its own in a self-supervised manner, without human effort on explicit data labeling. Experimental results demonstrate that LeTS-Drive outperforms either planning or learning alone, as well as an open-loop integration of planning and learning.
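Structurally, the closed loop reads like the sketch below: an online belief-tree planner, guided by the current policy and value networks, generates experience, and the learner distills that experience back into the networks that guide the next round of planning. `planner`, `learner`, and `env` are assumed interfaces for illustration, not the LeTS-Drive API.

```python
def closed_planning_learning_loop(planner, learner, env, iterations=10, episodes=20):
    """Structural sketch of a planning-learning closed loop.
    planner.search(obs, policy, value) -> (action, search_stats),
    learner.update(batch), learner.policy, learner.value,
    env.reset() -> obs, env.step(action) -> (obs, done)
    are all assumed interfaces for illustration."""
    for _ in range(iterations):
        batch = []
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                # the belief-tree search uses the learned policy/value as heuristics
                action, search_stats = planner.search(obs, learner.policy, learner.value)
                batch.append((obs, action, search_stats))
                obs, done = env.step(action)
        # self-supervised update: training targets come from the search statistics
        learner.update(batch)
    return learner
```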

An RRT-based method that addresses combined task and motion planning

Riccardo Caccavale, Alberto Finzi, A rapidly-exploring random trees approach to combined task and motion planning, Robotics and Autonomous Systems, Volume 157, 2022 DOI: 10.1016/j.robot.2022.104238.

Task and motion planning in robotics are typically addressed by separate yet intertwined methods. Task planners generate abstract high-level actions to be executed, while motion planners provide the associated discrete movements in the configuration space satisfying kinodynamic constraints. However, these two planning processes are strictly dependent, so the problem of combining task and motion planning within a uniform approach is highly relevant. In this work, we tackle this issue by proposing an RRT-based method that addresses combined task and motion planning. Our approach relies on a combined metric space where both symbolic (task) and sub-symbolic (motion) spaces are represented. The associated notion of distance is then exploited by an RRT-based planner to generate a plan that includes both symbolic actions and feasible movements in the configuration space. The proposed method is assessed in several case studies provided by a real-world hospital logistics scenario, where an omni-directional mobile robot is involved in navigation and transportation tasks.
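The combined metric space can be illustrated with a toy distance over pairs of (symbolic state, configuration), where the symbolic term counts differing facts and the motion term is Euclidean. The representation and weights below are assumptions; an RRT extend step would pick the tree node minimizing this distance to a sampled node.

```python
import numpy as np

def combined_distance(node_a, node_b, w_task=1.0, w_motion=1.0):
    """Toy distance on a combined task+motion space.
    A node is (symbolic_state: frozenset of facts, q: configuration vector);
    the representation and the weights are illustrative assumptions."""
    sym_a, q_a = node_a
    sym_b, q_b = node_b
    task_dist = len(sym_a.symmetric_difference(sym_b))   # number of differing symbolic facts
    motion_dist = np.linalg.norm(np.asarray(q_a) - np.asarray(q_b))
    return w_task * task_dist + w_motion * motion_dist

# Example: a node where the robot has not yet picked the item vs. one where it has.
a = (frozenset({"at(corridor)", "holding(nothing)"}), [1.0, 2.0, 0.0])
b = (frozenset({"at(ward)", "holding(supplies)"}), [4.0, 2.5, 1.57])
print(combined_distance(a, b))
```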

New sampling-based algorithms for optimal path planning with almost-sure asymptotic optimality guarantees

Strub MP, Gammell JD. Adaptively Informed Trees (AIT*) and Effort Informed Trees (EIT*): Asymmetric bidirectional sampling-based path planning, The International Journal of Robotics Research. 2022;41(4):390-417 DOI: 10.1177/02783649211069572.

Optimal path planning is the problem of finding a valid sequence of states between a start and goal that optimizes an objective. Informed path planning algorithms order their search with problem-specific knowledge expressed as heuristics and can be orders of magnitude more efficient than uninformed algorithms. Heuristics are most effective when they are both accurate and computationally inexpensive to evaluate, but these are often conflicting characteristics. This makes the selection of appropriate heuristics difficult for many problems. This paper presents two almost-surely asymptotically optimal sampling-based path planning algorithms to address this challenge, Adaptively Informed Trees (AIT*) and Effort Informed Trees (EIT*). These algorithms use an asymmetric bidirectional search in which both searches continuously inform each other. This allows AIT* and EIT* to improve planning performance by simultaneously calculating and exploiting increasingly accurate, problem-specific heuristics. The benefits of AIT* and EIT* relative to other sampling-based algorithms are demonstrated on 12 problems in abstract, robotic, and biomedical domains optimizing path length and obstacle clearance. The experiments show that AIT* and EIT* outperform other algorithms on problems optimizing obstacle clearance, where a priori cost heuristics are often ineffective, and still perform well on problems minimizing path length, where such heuristics are often effective.
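The asymmetric bidirectional idea can be caricatured on a plain graph: a cheap reverse search from the goal produces cost-to-go estimates that then serve as the heuristic for the forward search. In AIT* the reverse search is lazy and skips expensive collision checks, so its estimates are cheap but optimistic and get repaired as the forward search invalidates edges; the sketch below runs both searches on the same fully evaluated networkx graph, which is a deliberate simplification.

```python
import networkx as nx

def plan_with_reverse_heuristic(graph, start, goal):
    """Reverse search computes cost-to-go estimates over the sampled graph;
    the forward search then uses them as its heuristic (simplified AIT*-style idea)."""
    cost_to_go = nx.single_source_dijkstra_path_length(graph, goal, weight='weight')
    heuristic = lambda node, _target: cost_to_go.get(node, float('inf'))
    return nx.astar_path(graph, start, goal, heuristic=heuristic, weight='weight')
```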