Category Archives: Mobile Robot Localization

Filters with quaternions for localization

Rangaprasad Arun Srivatsan, Mengyun Xu, Nicolas Zevallos, and Howie Choset, Probabilistic pose estimation using a Bingham distribution-based linear filter, The International Journal of Robotics Research, DOI: 10.1177/0278364918778353.

Pose estimation is central to several robotics applications such as registration, hand–eye calibration, and simultaneous localization and mapping (SLAM). Online pose estimation methods typically use Gaussian distributions to describe the uncertainty in the pose parameters. Such a description can be inadequate when using parameters such as unit quaternions that are not unimodally distributed. A Bingham distribution can effectively model the uncertainty in unit quaternions, as it has antipodal symmetry, and is defined on a unit hypersphere. A combination of Gaussian and Bingham distributions is used to develop a truly linear filter that accurately estimates the distribution of the pose parameters. The linear filter, however, comes at the cost of state-dependent measurement uncertainty. Using results from stochastic theory, we show that the state-dependent measurement uncertainty can be evaluated exactly. To show the broad applicability of this approach, we derive linear measurement models for applications that use position, surface-normal, and pose measurements. Experiments show that this approach is robust to initial estimation errors as well as sensor noise. Compared with state-of-the-art methods, our approach takes fewer iterations to converge to the correct pose estimate. The efficacy of the formulation is illustrated with a number of examples on standard datasets as well as real-world experiments.
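To see why a Bingham density suits unit quaternions, consider the sketch below: an unnormalized Bingham log-density is a quadratic form in q, so it is automatically antipodally symmetric (q and -q, which encode the same rotation, get equal density). The parameter values M and Z here are illustrative, not from the paper.

```python
import numpy as np

def bingham_log_density_unnorm(q, M, Z):
    """Unnormalized Bingham log-density on the unit hypersphere S^3:
    log p(q) = q^T M diag(Z) M^T q (up to the normalizing constant).
    M: 4x4 orthogonal direction matrix; Z: concentrations, max entry 0."""
    q = q / np.linalg.norm(q)
    return q @ M @ np.diag(Z) @ M.T @ q

# Antipodal symmetry: q and -q encode the same rotation and get the same
# density -- a property a Gaussian over quaternion components lacks.
M = np.eye(4)                                # mode along [0, 0, 0, 1]
Z = np.array([-10.0, -10.0, -10.0, 0.0])     # illustrative concentrations
q = np.array([0.3, 0.1, 0.2, 0.9])
assert np.isclose(bingham_log_density_unnorm(q, M, Z),
                  bingham_log_density_unnorm(-q, M, Z))
```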

Omnidirectional localization

Milad Ramezani, Kourosh Khoshelham, Clive Fraser, Pose estimation by Omnidirectional Visual-Inertial Odometry, Robotics and Autonomous Systems, Volume 105, 2018, Pages 26-37, DOI: 10.1016/j.robot.2018.03.007.

In this paper, a novel approach to ego-motion estimation is proposed based on visual and inertial sensors, named Omnidirectional Visual-Inertial Odometry (OVIO). The proposed approach combines omnidirectional visual features with inertial measurements within the Multi-State Constraint Kalman Filter (MSCKF). In contrast with other visual inertial odometry methods that use visual features captured by perspective cameras, the proposed approach utilizes spherical images obtained by an omnidirectional camera to obtain more accurate estimates of the position and orientation of the camera. Because the standard perspective model is unsuitable for omnidirectional cameras, a measurement model on a plane tangent to the unit sphere rather than on the image plane is defined. The key hypothesis of OVIO is that a wider field of view allows the incorporation of more visual features from the surrounding environment, thereby improving the accuracy and robustness of the ego-motion estimation. Moreover, by using an omnidirectional camera, a situation where there is not enough texture is less likely to arise. Experimental evaluation of OVIO using synthetic and real video sequences captured by a fish-eye camera in both indoor and outdoor environments shows the superior performance of the proposed OVIO as compared to the MSCKF using a perspective camera in both positioning and attitude estimation.
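The idea of a measurement model on the tangent plane can be sketched as follows. This is a simplified construction under assumed conventions (arbitrary tangent basis, small-residual linearization), not the exact OVIO model:

```python
import numpy as np

def unit_bearing(p_cam):
    """Landmark bearing on the unit sphere (omnidirectional model)."""
    return p_cam / np.linalg.norm(p_cam)

def tangent_plane_residual(z_bearing, pred_bearing):
    """2-D residual on the plane tangent to the unit sphere at the
    predicted bearing, instead of a residual on a perspective image plane."""
    # Build an orthonormal basis of the tangent plane at pred_bearing.
    a = np.array([1.0, 0.0, 0.0])
    if abs(pred_bearing @ a) > 0.9:          # avoid a near-parallel axis
        a = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(pred_bearing, a)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(pred_bearing, b1)
    B = np.stack([b1, b2])                   # 2x3: projects onto the plane
    return B @ (z_bearing - pred_bearing)
```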

Mapping WiFi signal strength both precisely and accurately for robot localization through a richer model of the signal

Renato Miyagusuku, Atsushi Yamashita, Hajime Asama, Precise and accurate wireless signal strength mappings using Gaussian processes and path loss models, Robotics and Autonomous Systems, Volume 103, 2018, Pages 134-150, DOI: 10.1016/j.robot.2018.02.011.

In this work, we present a new modeling approach that generates precise (low variance) and accurate (low mean error) wireless signal strength mappings. In robot localization, these mappings are used to compute the likelihood of locations conditioned to new sensor measurements. Therefore, both mean and variance predictions are required. Gaussian processes have been successfully used for learning highly accurate mappings. However, they generalize poorly at locations far from their training inputs, making those predictions have high variance (low precision). In this work, we address this issue by incorporating path loss models, which are parametric functions that, although lacking in accuracy, generalize well. Path loss models are used together with Gaussian processes to compute mean predictions and, most importantly, to bound Gaussian processes’ predicted variances. Through extensive testing done with our open source framework, we demonstrate the ability of our approach to generate precise and accurate mappings, and the increased localization accuracy of Monte Carlo localization algorithms when using them; all our datasets and software are made readily available online for the community.
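A minimal sketch of the idea, assuming a log-distance path loss model and a squared-exponential GP over distance (the paper works over 2-D positions and also bounds the variance, which is omitted here):

```python
import numpy as np

def path_loss_dbm(d, p0=-30.0, n=2.5, d0=1.0):
    """Log-distance path loss: coarse, but generalizes to any distance."""
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1e-9) / d0)

def gp_mean_with_pathloss_prior(d_query, d_train, rss_train,
                                ell=5.0, sf=4.0, noise=1.0):
    """GP regression on the residuals of the path loss model, so the
    prediction falls back to the physical model far from training data."""
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(d_train, d_train) + noise**2 * np.eye(len(d_train))
    resid = rss_train - path_loss_dbm(d_train)
    return path_loss_dbm(d_query) + k(d_query, d_train) @ np.linalg.solve(K, resid)
```

Far from the training inputs the kernel terms vanish, so the prediction reverts to the path loss model instead of an uninformative GP prior.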

First end-to-end implementation of (monocular) visual odometry with deep neural networks, including uncertainty estimates in the output

Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni, End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks, The International Journal of Robotics Research, Vol. 37, Issue 4-5, pp. 513-542, DOI: 10.1177/0278364917734298.

This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. An end-to-end, sequence-to-sequence probabilistic visual odometry (ESP-VO) framework is proposed for the monocular VO based on deep recurrent convolutional neural networks. It is trained and deployed in an end-to-end manner, that is, directly inferring poses and uncertainties from a sequence of raw images (video) without adopting any modules from the conventional VO pipeline. It can not only automatically learn effective feature representation encapsulating geometric information through convolutional neural networks, but also implicitly model sequential dynamics and relation for VO using deep recurrent neural networks. Uncertainty is also derived along with the VO estimation without introducing much extra computation. Extensive experiments on several datasets representing driving, flying and walking scenarios show competitive performance of the proposed ESP-VO to the state-of-the-art methods, demonstrating a promising potential of the deep learning technique for VO and verifying that it can be a viable complement to current VO systems.
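Downstream of any learned VO model, the per-frame relative poses still have to be chained into a global trajectory. A minimal 2-D sketch of that composition step (ESP-VO itself regresses 6-DoF poses plus uncertainties with a recurrent convolutional network):

```python
import math

def compose_trajectory(rel_poses):
    """Integrate per-frame relative poses (dx, dy, dtheta), e.g. as
    predicted by a learned VO model, into a global 2-D trajectory."""
    x, y, th = 0.0, 0.0, 0.0
    traj = [(x, y, th)]
    for dx, dy, dth in rel_poses:
        # Rotate the body-frame increment into the world frame.
        x += dx * math.cos(th) - dy * math.sin(th)
        y += dx * math.sin(th) + dy * math.cos(th)
        th += dth
        traj.append((x, y, th))
    return traj
```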

Automatic hierarchization for place recognition in images

Chen Fan, Zetao Chen, Adam Jacobson, Xiaoping Hu, Michael Milford, Biologically-inspired visual place recognition with adaptive multiple scales, Robotics and Autonomous Systems, Volume 96, 2017, Pages 224-237, DOI: 10.1016/j.robot.2017.07.015.

In this paper we present a novel adaptive multi-scale system for performing visual place recognition. Unlike recent previous multi-scale place recognition systems that use manually pre-fixed scales, we present a system that adaptively selects the spatial scales. This approach differs from previous multi-scale methods, where place recognition is performed through a non-optimized distance metric in a fixed and pre-determined scale space. Instead, we learn an optimized distance metric which creates a new recognition space for clustering images with similar features while separating those with different features. Consequently, the method exploits the natural spatial scales present in the operating environment. With these adaptive scales, a hierarchical recognition mechanism with multiple parallel channels is then proposed. Each channel performs place recognition from a coarse match to a fine match. We present specific techniques for training each channel to recognize places at varying spatial scales and for combining the place recognition hypotheses from these parallel channels. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of using different numbers of combined recognition channels. The results demonstrate that the adaptive multi-scale approach outperforms the previous fixed multi-scale approach and is capable of better-than-state-of-the-art performance compared with existing robotic navigation algorithms. The system complexity is linear in the number of places in the reference static map, and online place recognition in mobile robotics can be achieved on typical dataset sizes. We analyze the results and provide theoretical analysis of the performance improvements. Finally, we discuss interesting insights gained with respect to future work in robotics and neuroscience in this area.
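A toy sketch of one coarse-to-fine channel, assuming a cheap pooled descriptor for the coarse pass (the paper instead learns an optimized distance metric per scale):

```python
import numpy as np

def coarse_to_fine_match(query, db, k_coarse=3):
    """One channel of coarse-to-fine matching: shortlist candidates with a
    cheap low-dimensional summary, then rank the shortlist with the full
    descriptor. Assumes descriptors whose length is divisible by 4."""
    coarse = lambda d: d.reshape(-1, 4).mean(axis=1)   # cheap 4x pooling
    qc = coarse(query)
    scores = np.array([np.linalg.norm(qc - coarse(d)) for d in db])
    shortlist = np.argsort(scores)[:k_coarse]          # coarse pass
    fine = [np.linalg.norm(query - db[i]) for i in shortlist]
    return shortlist[int(np.argmin(fine))]             # fine pass
```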

Using an EKF to estimate the state of a quadcopter in SE(3)

Goodarzi, F.A. & Lee, T., Global Formulation of an Extended Kalman Filter on SE(3) for Geometric Control of a Quadrotor UAV, J Intell Robot Syst (2017) 88: 395, DOI: 10.1007/s10846-017-0525-6.

An extended Kalman filter (EKF) is developed on the special Euclidean group SE(3) for geometric control of a quadrotor UAV. It is obtained by performing an intrinsic form of linearization on SE(3) to estimate the state of the quadrotor from noisy measurements. The proposed estimator considers all of the coupling effects between rotational and translational dynamics, and it is developed in a coordinate-free fashion. The desirable features of the proposed EKF are illustrated by numerical examples and experimental results for several scenarios. The proposed estimation scheme on SE(3) is unprecedented, and these results can be particularly useful for aggressive maneuvers in GPS-denied environments or in situations where parts of the onboard sensors fail.
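State estimation on SE(3) linearizes in the tangent space se(3) rather than on raw pose coordinates. A sketch of the exponential map that takes a twist back onto the group (standard Rodrigues-style formulas, not the paper's filter equations):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so hat(w) @ v == cross(w, v))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(xi):
    """Exponential map from an se(3) twist xi = (omega, v) to a 4x4
    homogeneous SE(3) matrix, via Rodrigues' formula."""
    w, v = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-9:                       # small-angle limit
        R, V = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(th) / th * W \
            + (1 - np.cos(th)) / th**2 * W @ W
        V = np.eye(3) + (1 - np.cos(th)) / th**2 * W \
            + (th - np.sin(th)) / th**3 * W @ W
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T
```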

Improving orientation estimation in a mobile robot for better odometry

M.T. Sabet, H.R. Mohammadi Daniali, A.R. Fathi, E. Alizadeh, Experimental analysis of a low-cost dead reckoning navigation system for a land vehicle using a robust AHRS, Robotics and Autonomous Systems, Volume 95, 2017, Pages 37-51, DOI: 10.1016/j.robot.2017.05.010.

In navigation and motion control of an autonomous vehicle, estimation of attitude and heading is an important issue, especially when localization sensors such as GPS are not available and the vehicle is navigated by dead reckoning (DR) strategies. In this paper, based on a new modeling framework, an extended Kalman filter (EKF) is utilized for estimation of attitude, heading, and gyroscope sensor bias using a low-cost MEMS inertial sensor. The algorithm is developed for accurate estimation of attitude and heading in the presence of external disturbances, including external body accelerations and magnetic disturbances. In this study, using the proposed attitude and heading reference system (AHRS) and an odometer sensor, a low-cost aided DR navigation system has been designed. The proposed algorithm is evaluated by experimental tests under different acceleration bounds and in the presence of external magnetic disturbances for a land vehicle. The results indicate that roll, pitch, and heading are estimated with mean errors of about 0.83%, 0.68%, and 1.13%, respectively. Moreover, they indicate that a relative navigation error of about 3% of the traveled distance can be achieved with the developed approach during GPS outages.
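The aided DR idea reduces to a simple update once the AHRS supplies heading: integrate odometer distance along the estimated heading. A minimal planar sketch (the paper's EKF and gyro-bias estimation are omitted):

```python
import math

def dead_reckon(x, y, steps):
    """Odometer-plus-AHRS dead reckoning in the plane: each step supplies
    a distance traveled (odometer) and an absolute heading (AHRS)."""
    for dist, heading in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y
```

Any heading bias integrates into position error proportional to distance traveled, which is why the paper's disturbance-robust AHRS matters for the ~3% relative error it reports.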

Identifying beacons for localization by using LEDs that transmit light patterns as IDs

G. Simon, G. Zachár and G. Vakulya, Lookup: Robust and Accurate Indoor Localization Using Visible Light Communication, IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 9, pp. 2337-2348, 2017, DOI: 10.1109/TIM.2017.2707878.

A novel indoor localization system is presented, where LED beacons are utilized to determine the position of the target sensor, which comprises a camera, an inclinometer, and a magnetometer. The beacons, which can be a part of the existing lighting infrastructure, transmit their identifiers for long distances using visible light communication techniques. The sensor is able to sense and detect the high-frequency (flicker-free) code by properly undersampling the transmitted signal. The localization is performed using novel geometric and consensus-based techniques, which tolerate measurement inaccuracies and sporadic outliers well. The performance of the system is analyzed using simulations and real measurements. According to large-scale tests in realistic environments, the accuracy of the proposed system is in the low decimeter range.
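As a toy version of beacon-based positioning, two identified beacons with known positions and measured absolute bearings already fix a 2-D position by intersecting rays (the paper's consensus-based geometry is considerably more elaborate and robust than this):

```python
import numpy as np

def position_from_bearings(b1, b2, th1, th2):
    """2-D position from absolute bearings th1, th2 (radians) measured
    toward two beacons at known positions b1, b2. Solves
    p + r1*u1 = b1 and p + r2*u2 = b2 for the ranges r1, r2."""
    u1 = np.array([np.cos(th1), np.sin(th1)])
    u2 = np.array([np.cos(th2), np.sin(th2)])
    A = np.column_stack([u1, -u2])
    r = np.linalg.solve(A, b1 - b2)     # fails if the rays are parallel
    return b1 - r[0] * u1
```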

A novel particle filter algorithm with an adaptive number of particles, and an interesting Table I on the pros and cons of different sensors

T. de J. Mateo Sanguino and F. Ponce Gómez, “Toward Simple Strategy for Optimal Tracking and Localization of Robots With Adaptive Particle Filtering,” in IEEE/ASME Transactions on Mechatronics, vol. 21, no. 6, pp. 2793-2804, Dec. 2016, DOI: 10.1109/TMECH.2016.2531629.

The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter, DAPF) is to provide a higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamic PF behavior relative to other approaches and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique based on the Kullback-Leibler distance.
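The dispersion-based idea can be sketched as follows: size the particle set from the spread of the current cloud. The thresholds and linear schedule here are illustrative, not the DAPF rule:

```python
import numpy as np

def adapt_particle_count(particles, n_min=100, n_max=5000, spread_ref=1.0):
    """Dispersion-based adaptation (in the spirit of DAPF): keep many
    particles while the cloud is spread out (high uncertainty), few once
    it has converged. Resamples the cloud to the adapted size."""
    spread = np.mean(np.std(particles, axis=0))   # cloud dispersion
    frac = min(spread / spread_ref, 1.0)          # illustrative schedule
    n = int(n_min + frac * (n_max - n_min))
    idx = np.random.choice(len(particles), size=n, replace=True)
    return particles[idx]
```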

Globally optimal ICP

J. Yang, H. Li, D. Campbell and Y. Jia, “Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2241-2254, Nov. 2016, DOI: 10.1109/TPAMI.2015.2513405.

The Iterative Closest Point (ICP) algorithm is one of the most widely used methods for point-set registration. However, being based on local iterative optimization, ICP is known to be susceptible to local minima. Its performance critically relies on the quality of the initialization, and only local optimality is guaranteed. This paper presents the first globally optimal algorithm, named Go-ICP, for Euclidean (rigid) registration of two 3D point-sets under the L2 error metric defined in ICP. The Go-ICP method is based on a branch-and-bound scheme that searches the entire 3D motion space SE(3). By exploiting the special structure of SE(3) geometry, we derive novel upper and lower bounds for the registration error function. Local ICP is integrated into the BnB scheme, which speeds up the new method while guaranteeing global optimality. We also discuss extensions, addressing the issue of outlier robustness. The evaluation demonstrates that the proposed method is able to produce reliable registration results regardless of the initialization. Go-ICP can be applied in scenarios where an optimal solution is desirable or where a good initialization is not always available.
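A single local ICP iteration, the building block that Go-ICP wraps inside branch-and-bound, can be sketched with brute-force closest points plus an SVD (Kabsch) alignment:

```python
import numpy as np

def icp_step(src, dst):
    """One local ICP iteration: closest-point matching followed by the
    optimal rigid (Kabsch/SVD) alignment for those correspondences."""
    # Nearest neighbour in dst for each src point (brute force, O(N*M)).
    d2 = ((src[:, None, :] - dst[None, :, :])**2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best rigid transform src -> matched under the L2 metric.
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t
```

Iterating this step converges only to a local minimum, which is exactly the limitation Go-ICP's branch-and-bound over SE(3) removes.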