Category Archives: Mobile Robot Localization

Analyzing the effects of load and terrain on wheel shape in order to reduce position estimation errors in a wheeled mobile robot

Smieszek, M., Dobrzanska, M. & Dobrzanski, P., The impact of load on the wheel rolling radius and slip in a small mobile platform, Auton Robot (2019) 43: 2095, DOI: 10.1007/s10514-019-09857-0.

Automated guided vehicles are used in a variety of applications. Their major purpose is to replace humans in onerous, monotonous and sometimes dangerous operations. Such vehicles are controlled and navigated by application-specific software. In the case of vehicles used in multiple environments and operating conditions, such as the vehicles which are the subject of this study, a reasonable approach is required when selecting the navigation system. The vehicle may travel around an enclosed hall and around an open yard. The pavement surface may be smooth or uneven. Vehicle wheels should be flexible and facilitate the isolation and absorption of vibrations in order to reduce the effect of surface unevenness on the load. Another important factor affecting the operating conditions is the change in vehicle load resulting from the distribution and weight of the goods carried. Considering all of the factors previously mentioned, the vehicle’s navigation and control system is required to meet two opposing criteria. One of them is low price and simplicity, the other is ensuring the required accuracy when following the preset route. In the course of this study, a methodology was developed and tested which aims to obtain a satisfactory compromise between those two conflicting criteria. A vehicle built at the Technical University of Rzeszow was used during the study. The results of the experimental research were analysed, and the analysis provided a foundation for the development of a methodology leading to a reduction in navigation errors. Movement simulations for the proposed vehicle system demonstrated the potential for a significant reduction in positioning errors.
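
To make the error mechanism concrete, here is a minimal Python sketch (not the authors’ model) of how a load-dependent rolling radius skews dead-reckoned position; the linear deflection constant, wheel dimensions and load values are invented for illustration.

```python
import numpy as np

def effective_radius(r_nominal, load_kg, k_deflection=1e-4):
    # Hypothetical linear deflection model (constant invented): the
    # rolling radius shrinks slightly as the load on the wheel grows.
    return r_nominal * (1.0 - k_deflection * load_kg)

def dead_reckon(pose, wl, wr, r, track, dt):
    # Standard differential-drive odometry from wheel speeds wl, wr
    # [rad/s], rolling radius r [m] and track width [m].
    x, y, th = pose
    v = r * (wl + wr) / 2.0
    w = r * (wr - wl) / track
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

# Integrate identical wheel speeds for 10 s with the nominal radius
# versus the load-corrected radius: the gap is pure odometry error.
pose_nom, pose_cor = np.zeros(3), np.zeros(3)
r_nom = 0.10
r_cor = effective_radius(r_nom, load_kg=150.0)
for _ in range(1000):
    pose_nom = dead_reckon(pose_nom, 10.0, 10.0, r_nom, 0.4, 0.01)
    pose_cor = dead_reckon(pose_cor, 10.0, 10.0, r_cor, 0.4, 0.01)
print("position error from ignoring load: %.3f m" % (pose_nom[0] - pose_cor[0]))
```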

An orientation sensor for robot navigation that uses the sky

Julien Dupeyroux, Stéphane Viollet, Julien R. Serres, An ant-inspired celestial compass applied to autonomous outdoor robot navigation, Robotics and Autonomous Systems, Volume 117, 2019, Pages 40-56, DOI: 10.1016/j.robot.2019.04.007.

Desert ants use the polarization of skylight and a combination of stride and ventral optic flow integration processes to track the nest and food positions when traveling, achieving outstanding performance. Navigation sensors such as global positioning systems and inertial measurement units still have disadvantages such as low resolution and drift. Taking our inspiration from ants, we developed a 2-pixel celestial compass which computes the heading angle of a mobile robot in the ultraviolet range. The output signals obtained with this optical compass were investigated under various weather and ultraviolet conditions and compared with those obtained with a magnetometer in the vicinity of our laboratory. After being embedded on board the robot, the sensor was first used to compensate for random yaw disturbances. We then used the compass to keep the Hexabot robot’s heading angle constant in a straight-forward walking task over flat terrain while its walking movements were imposing yaw disturbances. Experiments performed under various meteorological conditions showed the occurrence of steady-state heading angle errors ranging from 0.3° (with a clear sky) to 2.9° (under changeable sky conditions). The compass was also tested under canopies and showed a strong ability to determine the robot’s heading while most of the sky was hidden by the foliage. Lastly, a waterproof, mono-pixel version of the sensor was designed and successfully tested in a preliminary underwater benchmark test. These results suggest this new optical compass shows great precision and reliability in a wide range of outdoor conditions, which makes it highly suitable for autonomous outdoor robot navigation tasks. A celestial compass and a minimalistic optic flow sensor called M2APix (based on Michaelis–Menten Auto-adaptive Pixels) were therefore embedded on board our latest insectoid robot, called AntBot, to complete the previously mentioned ant-like homing navigation processes. First, the robot was displaced manually and made to return to its starting point on the basis of its absolute knowledge of the coordinates of this point. Lastly, AntBot was tested in fully autonomous navigation experiments, in which it explored its environment and then returned to base using the same sensory modes as those on which desert ants rely. AntBot produced robust, precise localization performance with a homing error as small as 0.7% of the entire trajectory.
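
As a rough illustration of the sensing principle (not the authors’ exact 2-pixel pipeline), the sketch below recovers an angle of polarization ψ from intensity readings behind a rotating linear polarizer, using the standard Malus-law model I(φ) = a + b cos 2(φ − ψ); all signal values are simulated.

```python
import numpy as np

def estimate_aop(phis, intensities):
    # Fit I(phi) = a + b*cos(2*phi) + c*sin(2*phi) by linear least
    # squares; the angle of polarization is atan2(c, b) / 2 (mod pi).
    A = np.column_stack([np.ones_like(phis),
                         np.cos(2 * phis), np.sin(2 * phis)])
    a, b, c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    return 0.5 * np.arctan2(c, b) % np.pi

# Simulated readings: true angle of polarization 40 deg, 5% sensor noise.
rng = np.random.default_rng(0)
phis = np.deg2rad(np.arange(0, 180, 15))
true_psi = np.deg2rad(40.0)
I = 1.0 + 0.6 * np.cos(2 * (phis - true_psi))
I += 0.05 * rng.standard_normal(I.shape)
print("estimated AoP: %.1f deg" % np.rad2deg(estimate_aop(phis, I)))
```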

Comparison of map-matching methods

Héber Sobreira, Carlos M. Costa, Ivo Sousa, Luis Rocha, José Lima, P. C. M. A. Farias, Paulo Costa, A. Paulo Moreira, Map-Matching Algorithms for Robot Self-Localization: A Comparison Between Perfect Match, Iterative Closest Point and Normal Distributions Transform, Journal of Intelligent & Robotic Systems, March 2019, Volume 93, Issue 3–4, pp. 533–546, DOI: 10.1007/s10846-017-0765-5.

The self-localization of mobile robots in the environment is one of the most fundamental problems in the robotics navigation field. It is a complex and challenging problem due to the high requirements of autonomous mobile vehicles, particularly with regard to the algorithms’ accuracy, robustness and computational efficiency. In this paper, we present a comparison of three of the most widely used map-matching algorithms applied in localization based on natural landmarks: our implementation of the Perfect Match (PM) and the Point Cloud Library (PCL) implementation of the Iterative Closest Point (ICP) and the Normal Distributions Transform (NDT). For the purpose of this comparison we have considered a set of representative metrics, such as pose estimation accuracy, computational efficiency, convergence speed, maximum admissible initialization error and robustness to the presence of outliers in the robots’ sensor data. The test results were retrieved using our ROS natural landmark public dataset, containing several tests with simulated and real sensor data. The performance and robustness of the Perfect Match is highlighted throughout this article and is of paramount importance for real-time embedded systems with limited computing power that require accurate pose estimation and fast reaction times for high-speed navigation. Moreover, we added to PCL a new algorithm for performing correspondence estimation using lookup tables that was inspired by the PM approach to solve this problem. This new method for computing the closest map point to a given sensor reading proved to be 40 to 60 times faster than the existing k-d tree approach in PCL and allowed the Iterative Closest Point algorithm to perform point cloud registration 5 to 9 times faster.
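
The lookup-table idea admits a compact sketch: precompute, once, the index of the nearest map point for every cell of a grid covering the map, so each runtime correspondence query becomes a single array access instead of a k-d tree search. The Python below is an illustrative reimplementation under assumed parameters (grid resolution, margin), not the PCL code contributed by the authors.

```python
import numpy as np
from scipy.spatial import cKDTree

class LookupTableMatcher:
    """Grid-based correspondence lookup in the spirit of the Perfect
    Match approach: the nearest map point per cell is precomputed once,
    so each runtime query is a single O(1) array access."""
    def __init__(self, map_pts, resolution=0.05, margin=1.0):
        self.res = resolution
        self.origin = map_pts.min(axis=0) - margin
        span = np.ptp(map_pts, axis=0) + 2 * margin
        shape = np.ceil(span / resolution).astype(int)
        gx, gy = np.meshgrid(*(np.arange(n) for n in shape), indexing="ij")
        centers = self.origin + (np.stack([gx, gy], -1) + 0.5) * resolution
        # One-off k-d tree query fills the table; never used at runtime.
        _, idx = cKDTree(map_pts).query(centers.reshape(-1, 2))
        self.table = idx.reshape(tuple(shape))
        self.map_pts = map_pts

    def closest(self, scan_pts):
        # Map each scan point to its grid cell, then read the answer.
        cells = ((scan_pts - self.origin) / self.res).astype(int)
        cells = np.clip(cells, 0, np.array(self.table.shape) - 1)
        return self.map_pts[self.table[cells[:, 0], cells[:, 1]]]

# Usage: match a noisy scan against a toy circular map.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
map_pts = np.column_stack([np.cos(theta), np.sin(theta)])
matcher = LookupTableMatcher(map_pts)
scan = map_pts[::4] + 0.02 * np.random.default_rng(1).standard_normal((50, 2))
print("mean correspondence distance:",
      np.linalg.norm(matcher.closest(scan) - scan, axis=1).mean())
```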

Selecting the best visual cues over the near future to reduce the computational cost of localization under limited computational resources

L. Carlone and S. Karaman, Attention and Anticipation in Fast Visual-Inertial Navigation, IEEE Transactions on Robotics, vol. 35, no. 1, pp. 1-20, Feb. 2019, DOI: 10.1109/TRO.2018.2872402.

We study a visual-inertial navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of VIN? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate VIN while appearance-based feature selection fails to track the robot’s motion during aggressive maneuvers.
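
A minimal sketch of the greedy, submodularity-backed selection step follows: each candidate visual cue is summarized by a predicted information matrix over the horizon, and cues are picked greedily by log-det gain. The rank-1 information matrices here are invented toy data, and the log-det metric is a generic surrogate rather than the paper’s exact VIN performance metric.

```python
import numpy as np

def greedy_select(infos, k, prior=None):
    """Greedily pick k features maximizing log det of the accumulated
    information matrix, a monotone submodular objective, so the greedy
    choice carries the usual (1 - 1/e)-style near-optimality guarantee."""
    n = infos[0].shape[0]
    acc = np.eye(n) * 1e-6 if prior is None else prior.copy()
    chosen = []
    for _ in range(k):
        gains = [np.linalg.slogdet(acc + I)[1] if i not in chosen else -np.inf
                 for i, I in enumerate(infos)]
        best = int(np.argmax(gains))
        chosen.append(best)
        acc = acc + infos[best]
    return chosen

# Toy example: 6 candidate visual cues, each contributing a rank-1
# information matrix h h^T predicted over the horizon (values invented).
rng = np.random.default_rng(2)
H = rng.standard_normal((6, 3))
infos = [np.outer(h, h) for h in H]
print("selected features:", greedy_select(infos, k=3))
```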

Filters with quaternions for localization

Rangaprasad Arun Srivatsan, Mengyun Xu, Nicolas Zevallos, and Howie Choset, Probabilistic pose estimation using a Bingham distribution-based linear filter, The International Journal of Robotics Research, DOI: 10.1177/0278364918778353.

Pose estimation is central to several robotics applications such as registration, hand–eye calibration, and simultaneous localization and mapping (SLAM). Online pose estimation methods typically use Gaussian distributions to describe the uncertainty in the pose parameters. Such a description can be inadequate when using parameters such as unit quaternions that are not unimodally distributed. A Bingham distribution can effectively model the uncertainty in unit quaternions, as it has antipodal symmetry, and is defined on a unit hypersphere. A combination of Gaussian and Bingham distributions is used to develop a truly linear filter that accurately estimates the distribution of the pose parameters. The linear filter, however, comes at the cost of state-dependent measurement uncertainty. Using results from stochastic theory, we show that the state-dependent measurement uncertainty can be evaluated exactly. To show the broad applicability of this approach, we derive linear measurement models for applications that use position, surface-normal, and pose measurements. Experiments assert that this approach is robust to initial estimation errors as well as sensor noise. Compared with state-of-the-art methods, our approach takes fewer iterations to converge onto the correct pose estimate. The efficacy of the formulation is illustrated with a number of examples on standard datasets as well as real-world experiments.
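
To see why the Bingham distribution suits unit quaternions, the toy sketch below evaluates its unnormalized log-density, log p(q) = q^T M diag(Z) M^T q + const, and checks the antipodal symmetry p(q) = p(-q). The parameters M and Z are illustrative, and this is only the density model, not the authors’ full filter.

```python
import numpy as np

def bingham_logpdf_unnorm(q, M, Z):
    # Unnormalized Bingham log-density on the unit 3-sphere. Antipodal
    # symmetry (p(q) = p(-q)) holds because q enters quadratically,
    # matching the fact that q and -q encode the same rotation.
    q = q / np.linalg.norm(q)
    return q @ M @ np.diag(Z) @ M.T @ q

# Toy parameters: mode at the identity quaternion (first column of M),
# concentrations Z (Z[0] = 0 by convention; more negative = tighter).
M = np.eye(4)                       # orthogonal; columns = principal dirs
Z = np.array([0.0, -50.0, -50.0, -50.0])
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_rot = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # small rotation
print(bingham_logpdf_unnorm(q_id, M, Z),
      bingham_logpdf_unnorm(-q_id, M, Z),   # equal: antipodal symmetry
      bingham_logpdf_unnorm(q_rot, M, Z))   # lower: off the mode
```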

Omnidirectional localization

Milad Ramezani, Kourosh Khoshelham, Clive Fraser, Pose estimation by Omnidirectional Visual-Inertial Odometry, Robotics and Autonomous Systems, Volume 105, 2018, Pages 26-37, DOI: 10.1016/j.robot.2018.03.007.

In this paper, a novel approach to ego-motion estimation based on visual and inertial sensors, named Omnidirectional Visual-Inertial Odometry (OVIO), is proposed. The proposed approach combines omnidirectional visual features with inertial measurements within the Multi-State Constraint Kalman Filter (MSCKF). In contrast with other visual-inertial odometry methods that use visual features captured by perspective cameras, the proposed approach utilizes spherical images obtained by an omnidirectional camera to obtain more accurate estimates of the position and orientation of the camera. Because the standard perspective model is unsuitable for omnidirectional cameras, a measurement model is defined on a plane tangent to the unit sphere rather than on the image plane. The key hypothesis of OVIO is that a wider field of view allows the incorporation of more visual features from the surrounding environment, thereby improving the accuracy and robustness of the ego-motion estimation. Moreover, by using an omnidirectional camera, a situation where there is not enough texture is less likely to arise. Experimental evaluation of OVIO using synthetic and real video sequences captured by a fish-eye camera in both indoor and outdoor environments shows the superior performance of the proposed OVIO as compared to the MSCKF using a perspective camera, in both positioning and attitude estimation.
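
A small sketch of the tangent-plane measurement idea, under assumptions of ours (the basis construction and toy landmark values are invented): a landmark is projected to a unit bearing vector on the sphere, and the residual is expressed in an orthonormal basis of the tangent plane at that bearing, which stays well-defined for arbitrarily wide fields of view.

```python
import numpy as np

def tangent_plane_measurement(p_cam):
    # Project a 3-D point in the camera frame onto the unit sphere,
    # then build an orthonormal basis of the tangent plane at the
    # resulting bearing vector (any basis of its orthogonal complement).
    b = p_cam / np.linalg.norm(p_cam)        # unit bearing vector
    a = (np.array([1.0, 0.0, 0.0]) if abs(b[0]) < 0.9
         else np.array([0.0, 1.0, 0.0]))     # any vector not parallel to b
    t1 = np.cross(b, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(b, t1)
    return b, np.stack([t1, t2])             # bearing + 2x3 tangent basis

def tangent_residual(p_pred, p_meas):
    # Residual of a predicted landmark against a measured bearing,
    # expressed in the 2-D tangent plane at the measurement (unlike an
    # image-plane residual, this needs no perspective camera model).
    b_meas, B = tangent_plane_measurement(p_meas)
    b_pred = p_pred / np.linalg.norm(p_pred)
    return B @ (b_pred - b_meas)

p_meas = np.array([0.2, -0.1, 1.0])          # measured landmark direction
p_pred = np.array([0.22, -0.09, 1.0])        # predicted landmark position
print("2-D tangent-plane residual:", tangent_residual(p_pred, p_meas))
```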

Mapping the WiFi signal for robot localization, both precisely and accurately, through a combined model of the signal

Renato Miyagusuku, Atsushi Yamashita, Hajime Asama, Precise and accurate wireless signal strength mappings using Gaussian processes and path loss models, Robotics and Autonomous Systems, Volume 103, 2018, Pages 134-150, DOI: 10.1016/j.robot.2018.02.011.

In this work, we present a new modeling approach that generates precise (low variance) and accurate (low mean error) wireless signal strength mappings. In robot localization, these mappings are used to compute the likelihood of locations conditioned to new sensor measurements. Therefore, both mean and variance predictions are required. Gaussian processes have been successfully used for learning highly accurate mappings. However, they generalize poorly at locations far from their training inputs, making those predictions have high variance (low precision). In this work, we address this issue by incorporating path loss models, which are parametric functions that, although lacking in accuracy, generalize well. Path loss models are used together with Gaussian processes to compute mean predictions and, most importantly, to bound the Gaussian processes’ predicted variances. Through extensive testing done with our open source framework, we demonstrate the ability of our approach to generate precise and accurate mappings, and the increased localization accuracy of Monte Carlo localization algorithms when using them. All our datasets and software have been made readily available online for the community.
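
One common way to realize this combination (a simplification of the paper’s variance-bounding construction) is to use the path loss model as the mean function of the Gaussian process, fitting the GP only to the residuals: far from training data the prediction then falls back to the path loss curve and the variance saturates at the prior. The Python below sketches this with a hand-rolled RBF-kernel GP on synthetic signal-strength data; all hyperparameters are invented.

```python
import numpy as np

def path_loss(d, p0=-30.0, n=2.5):
    # Log-distance path loss model: coarse but generalizes everywhere.
    return p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

def gp_predict(X, y, Xq, ell=2.0, sf=3.0, sn=1.0):
    # Zero-mean GP with RBF kernel on the path-loss residuals: far from
    # training data the mean reverts to 0 (i.e. to the path loss curve)
    # and the variance is bounded by the prior sf**2 + sn**2.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + sn**2 * np.eye(len(X))
    Ks, L = k(Xq, X), np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sf**2 + sn**2 - (v**2).sum(0)
    return mu, var

# Train on synthetic signal strength around an access point at the
# origin, then query both near and far from the training locations.
rng = np.random.default_rng(3)
X = rng.uniform(-5, 5, size=(40, 2))
d = np.linalg.norm(X, axis=1)
y_rss = path_loss(d) + 2.0 * rng.standard_normal(40)
resid = y_rss - path_loss(d)
Xq = np.array([[1.0, 1.0], [40.0, 40.0]])      # near vs. far query
mu_r, var = gp_predict(X, resid, Xq)
mu = path_loss(np.linalg.norm(Xq, axis=1)) + mu_r
print("predicted RSS:", mu, "variance:", var)
```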

First end-to-end implementation of (monocular) visual odometry with deep neural networks, including an estimate of the uncertainty of the result

Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni, End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks, The International Journal of Robotics Research, Vol. 37, Issue 4-5, pp. 513–542, DOI: 10.1177/0278364917734298.

This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. An end-to-end, sequence-to-sequence probabilistic visual odometry (ESP-VO) framework is proposed for the monocular VO based on deep recurrent convolutional neural networks. It is trained and deployed in an end-to-end manner, that is, directly inferring poses and uncertainties from a sequence of raw images (video) without adopting any modules from the conventional VO pipeline. It can not only automatically learn effective feature representation encapsulating geometric information through convolutional neural networks, but also implicitly model sequential dynamics and relation for VO using deep recurrent neural networks. Uncertainty is also derived along with the VO estimation without introducing much extra computation. Extensive experiments on several datasets representing driving, flying and walking scenarios show competitive performance of the proposed ESP-VO to the state-of-the-art methods, demonstrating a promising potential of the deep learning technique for VO and verifying that it can be a viable complement to current VO systems.
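
As a structural sketch of such a network (layer sizes and the uncertainty head are our inventions, not the published ESP-VO architecture), the PyTorch snippet below stacks consecutive frames for a small CNN, feeds the features to an LSTM, and regresses a 6-DoF pose increment together with a log-variance term as a simple uncertainty proxy.

```python
import torch
import torch.nn as nn

class RCNNVO(nn.Module):
    """Minimal recurrent-convolutional monocular VO sketch in the
    spirit of ESP-VO: a CNN encodes each pair of consecutive frames,
    an LSTM models the sequential dynamics, and the head outputs a
    6-DoF pose increment plus a per-dimension log-variance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.rnn = nn.LSTM(32 * 16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 12)    # 6 pose + 6 log-variance

    def forward(self, frames):
        # frames: (batch, time, 3, H, W); stack consecutive frames so
        # the CNN sees the motion information between them.
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
        b, t = pairs.shape[:2]
        feats = self.cnn(pairs.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        pose, log_var = self.head(out).chunk(2, dim=-1)
        return pose, log_var

video = torch.randn(1, 5, 3, 64, 64)         # 5-frame toy sequence
pose, log_var = RCNNVO()(video)
print(pose.shape, log_var.shape)             # (1, 4, 6) each
```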

Automatic hierarchization for the recognition of places in images

Chen Fan, Zetao Chen, Adam Jacobson, Xiaoping Hu, Michael Milford, Biologically-inspired visual place recognition with adaptive multiple scales, Robotics and Autonomous Systems, Volume 96, 2017, Pages 224-237, DOI: 10.1016/j.robot.2017.07.015.

In this paper we present a novel adaptive multi-scale system for performing visual place recognition. Unlike recent multi-scale place recognition systems that use manually pre-fixed scales, we present a system that adaptively selects the spatial scales. This approach differs from previous multi-scale methods, where place recognition is performed through a non-optimized distance metric in a fixed and pre-determined scale space. Instead, we learn an optimized distance metric which creates a new recognition space for clustering images with similar features while separating those with different features. Consequently, the method exploits the natural spatial scales present in the operating environment. With these adaptive scales, a hierarchical recognition mechanism with multiple parallel channels is then proposed. Each channel performs place recognition from a coarse match to a fine match. We present specific techniques for training each channel to recognize places at varying spatial scales and for combining the place recognition hypotheses from these parallel channels. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of using different numbers of combined recognition channels. The results demonstrate that the adaptive multi-scale approach outperforms the previous fixed multi-scale approach and is capable of producing better-than-state-of-the-art performance compared to existing robotic navigation algorithms. The system complexity is linear in the number of places in the reference static map and can realize online place recognition in mobile robotics on typical dataset sizes. We analyze the results and provide a theoretical analysis of the performance improvements. Finally, we discuss interesting insights gained with respect to future work in robotics and neuroscience in this area.
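
A toy sketch of one coarse-to-fine recognition channel (without the learned distance metric or the adaptive scale selection described above): the query descriptor is first matched against cluster centroids, then only against the members of the winning cluster. The clustered database below is synthetic.

```python
import numpy as np

def coarse_to_fine_match(query, db, labels, centroids):
    # One recognition channel: match the query against cluster
    # centroids (coarse scale), then only against the members of the
    # winning cluster (fine scale), instead of scanning the whole map.
    coarse = np.argmin(np.linalg.norm(centroids - query, axis=1))
    members = np.flatnonzero(labels == coarse)
    fine = members[np.argmin(np.linalg.norm(db[members] - query, axis=1))]
    return int(fine)

# Toy database: 100 place descriptors grouped into 4 clusters (standing
# in for spatial scales; here simulated rather than learned).
rng = np.random.default_rng(4)
centroids = rng.standard_normal((4, 8)) * 5
labels = rng.integers(0, 4, size=100)
db = centroids[labels] + 0.3 * rng.standard_normal((100, 8))
query = db[42] + 0.05 * rng.standard_normal(8)
print("matched place:", coarse_to_fine_match(query, db, labels, centroids))
```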

Using an EKF to estimate the state of a quadcopter on SE(3)

Goodarzi, F.A. & Lee, T., Global Formulation of an Extended Kalman Filter on SE(3) for Geometric Control of a Quadrotor UAV, J Intell Robot Syst (2017) 88: 395, DOI: 10.1007/s10846-017-0525-6.

An extended Kalman filter (EKF) is developed on the special Euclidean group SE(3) for geometric control of a quadrotor UAV. It is obtained by performing an intrinsic form of linearization on SE(3) to estimate the state of the quadrotor from noisy measurements. The proposed estimator considers all of the coupling effects between rotational and translational dynamics, and it is developed in a coordinate-free fashion. The desirable features of the proposed EKF are illustrated by numerical examples and experimental results for several scenarios. The proposed estimation scheme on SE(3) is unprecedented, and these results can be particularly useful for aggressive maneuvers in GPS-denied environments or in situations where parts of the onboard sensors fail.
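
For flavor, here is a minimal Python sketch of an intrinsic prediction step on SE(3): the mean is propagated on the group via the exponential map while a 6x6 covariance lives in the tangent space. The identity state-transition Jacobian is a simplification of the paper’s full linearization, and all numerical values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    # so(3) hat map: 3-vector -> skew-symmetric matrix.
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def se3_exp(xi):
    # Exponential map from a 6-vector twist (v, w) to a 4x4 SE(3)
    # matrix, computed generically with scipy's matrix exponential.
    T = np.zeros((4, 4))
    T[:3, :3] = hat(xi[3:])
    T[:3, 3] = xi[:3]
    return expm(T)

def ekf_predict(T, P, xi, Q, dt):
    # Intrinsic EKF prediction: the mean is propagated on the group by
    # right-multiplying the twist exponential; the covariance is
    # propagated in the tangent space (identity Jacobian kept for
    # brevity, a simplification of the full linearization).
    return T @ se3_exp(xi * dt), P + Q * dt

T = np.eye(4)                        # initial pose
P = 0.01 * np.eye(6)                 # initial tangent-space covariance
xi = np.array([1.0, 0, 0, 0, 0, 0.5])  # twist: 1 m/s forward, 0.5 rad/s yaw
T, P = ekf_predict(T, P, xi, Q=0.001 * np.eye(6), dt=0.1)
print(np.round(T, 3))
```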