Tag Archives: Recursive Bayesian Estimation

Re-using calculations from belief space planning for Bayesian inference in robotics

Farhi, E.I., Indelman, V., Bayesian incremental inference update by re-using calculations from belief space planning: a new paradigm, Auton Robot 46, 783–816 (2022). DOI: 10.1007/s10514-022-10045-w.

Inference and decision making under uncertainty are key processes in every autonomous system and numerous robotic problems. In recent years, the similarities between inference and decision making triggered much work, from developing unified computational frameworks to pondering the duality between the two. In spite of these efforts, inference and control, as well as inference and belief space planning (BSP), are still treated as two separate processes. In this paper we propose a paradigm shift, a novel approach which deviates from conventional Bayesian inference and utilizes the similarities between inference and BSP. We make the key observation that inference can be efficiently updated using predictions made during the decision making stage, even in light of inconsistent data association between the two. We develop a two-stage process that implements our novel approach and updates inference using calculations from the precursory planning phase. Using autonomous navigation in an unknown environment, along with the efficient methodologies of iSAM2, as a test case, we benchmarked our novel approach against standard Bayesian inference, both with synthetic and real-world data (KITTI dataset). Results indicate that our approach not only improves running time by at least a factor of two while providing the same estimation accuracy, but also alleviates the computational burden of state dimensionality and loop closures.

Real-time, Bayesian-enabled ICP for mobile robot localization and mapping

Maken, F.A., Ramos, F., Ott, L., Bayesian iterative closest point for mobile robot localization, The International Journal of Robotics Research, 41(9-10), 851-874 (2022). DOI: 10.1177/02783649221101417.

Accurate localization of a robot in a known environment is a fundamental capability for successfully performing path planning, manipulation, and grasping tasks. Particle filters, also known as Monte Carlo localization (MCL), are a commonly used method to determine the robot's pose within its environment. For ground robots, noisy wheel odometry readings are typically used as a motion model to predict the vehicle's location. Such a motion model requires tuning of various parameters based on terrain and robot type. However, such an ego-motion estimation is not always available for all platforms. Scan matching using the iterative closest point (ICP) algorithm is a popular alternative approach, providing ego-motion estimates for localization. Iterative closest point computes a point estimate of the transformation between two poses given point clouds captured at these locations. Being a point estimate method, ICP does not deal with the uncertainties in the scan alignment process, which may arise due to sensor noise, partial overlap, or the existence of multiple solutions. Another challenge for ICP is the high computational cost required to align two large point clouds, limiting its applicability to less dynamic problems. In this paper, we address these challenges by leveraging recent advances in probabilistic inference. Specifically, we first address the run-time issue and propose SGD-ICP, which employs stochastic gradient descent (SGD) to solve the optimization problem of ICP. Next, we leverage SGD-ICP to obtain a distribution over transformations and propose a Markov Chain Monte Carlo method using stochastic gradient Langevin dynamics (SGLD) updates. Our ICP variant, termed Bayesian-ICP, is a full Bayesian solution to the problem. To demonstrate the benefits of Bayesian-ICP for mobile robotic applications, we propose an adaptive motion model employing Bayesian-ICP to produce proposal distributions for Monte Carlo Localization. Experiments using both Kinect and 3D LiDAR data show that our proposed SGD-ICP method achieves the same solution quality as standard ICP while being significantly more efficient. We then demonstrate empirically that Bayesian-ICP can produce accurate distributions over pose transformations and is fast enough for online applications. Finally, using Bayesian-ICP as a motion model alleviates the need to tune the motion model parameters from odometry, resulting in better-calibrated localization uncertainty.
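To make the SGD-ICP / Bayesian-ICP idea concrete, here is a minimal 2D sketch: the rigid transform (tx, ty, phi) is updated with stochastic gradients of the point-to-point ICP cost over random minibatches of correspondences, and setting langevin=True turns the same update into an SGLD step whose iterates can be kept as approximate posterior samples over the transform. All names, the fixed step size, and the brute-force nearest-neighbour search are illustrative choices, not the authors' implementation.

```python
import numpy as np

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def drot(phi):
    # derivative of the 2D rotation matrix with respect to phi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[-s, -c], [c, -s]])

def sgd_icp(src, tgt, iters=500, lr=0.01, batch=32, langevin=False, rng=None):
    """Minimal 2D SGD-ICP: estimate (tx, ty, phi) aligning `src` to `tgt`.
    With langevin=True the update becomes an SGLD step, so the iterates
    can be collected as approximate posterior samples over the transform."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(3)                               # [tx, ty, phi]
    for _ in range(iters):
        idx = rng.choice(len(src), size=min(batch, len(src)), replace=False)
        p = src[idx]
        R, dR = rot(theta[2]), drot(theta[2])
        moved = p @ R.T + theta[:2]                   # R p_i + t
        # nearest neighbours in the target cloud (brute force for clarity)
        d2 = ((moved[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        q = tgt[d2.argmin(axis=1)]
        r = moved - q                                 # residuals of matched pairs
        g_t = 2.0 * r.mean(axis=0)                    # gradient w.r.t. translation
        g_phi = 2.0 * np.einsum('ij,ij->', r, p @ dR.T) / len(p)
        grad = np.array([g_t[0], g_t[1], g_phi])
        step = -lr * grad
        if langevin:                                  # SGLD: inject N(0, 2*lr) noise
            step += np.sqrt(2.0 * lr) * rng.standard_normal(3)
        theta += step
    return theta
```

In a Monte Carlo localization loop, samples produced this way could serve as the proposal distribution in place of a wheel-odometry motion model, which is the adaptive motion model role described in the abstract.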

The problem of the initial state in filtering and its effects on estimation

He Kong, Mao Shan, Daobilige Su, Yongliang Qiao, Abdullah Al-Azzawi, Salah Sukkarieh, Filtering for systems subject to unknown inputs without a priori initial information, Automatica, Volume 120, 2020. DOI: 10.1016/j.automatica.2020.109122.

The last few decades have witnessed much development in filtering of systems with Gaussian noises and arbitrary unknown inputs. Nonetheless, there are still some important design questions that warrant thorough discussions. In particular, the existing literature has shown that for unbiased and minimum variance estimation of the state and the unknown input, the initial guess of the state has to be unbiased. This clearly raises the question of whether and under what conditions one can design an unbiased and minimum variance filter, without making such a stringent assumption. The above-mentioned question will be investigated systematically in this paper, i.e., design of the filter is sought to be independent of a priori information about the initial conditions. In particular, for both cases with and without direct feedthrough, we establish necessary and sufficient conditions for unbiased and minimum variance estimation of the state/unknown input, independently of a priori initial conditions, respectively. When the former conditions do not hold, we carry out a thorough analysis of all possible scenarios. For each scenario, we present detailed discussions regarding whether and what can be achieved in terms of unbiased estimation, independently of a priori initial conditions. Extensions to the case with time-delays, conceptually like Kalman smoothing where future measurements are allowed in estimation, will also be presented, amongst others.
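For readers unfamiliar with the setting, the class of systems discussed is usually written with an unknown input d_k driving the dynamics and, in the direct-feedthrough case, also entering the measurement; the notation below is a common textbook convention rather than the paper's exact formulation:

```latex
x_{k+1} = A_k x_k + B_k u_k + G_k d_k + w_k, \qquad
y_k     = C_k x_k + H_k d_k + v_k,
```

with w_k and v_k zero-mean white Gaussian noises. H_k = 0 gives the case without direct feedthrough, and "independence of a priori initial conditions" means the unbiasedness and minimum-variance guarantees must hold without assuming an unbiased guess of x_0.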

Shunyi Zhao, Biao Huang, Trial-and-error or avoiding a guess? Initialization of the Kalman filter, Automatica, Volume 121, 2020. DOI: 10.1016/j.automatica.2020.109184.

As a recursive state estimation algorithm, the Kalman filter (KF) assumes the initial state distribution is known a priori, while in practice it is commonly treated as a design parameter. In this paper, we answer three questions concerning initialization: (1) At each time step, how does the KF respond to measurements, control signals, and, more importantly, initial states? (2) What price (in terms of accuracy) does one have to pay if inaccurate initial states are used? and (3) Can we find a better strategy than guessing to improve the performance of the KF in the initial estimation phase when the initial condition is unknown? To these ends, the classical recursive KF is first transformed into an equivalent but batch form, from which the responses of the KF to measurements, control signals, and the initial state can be clearly separated and observed. Based on this, we isolate the initial distribution by dividing the original state into two parts and reconstructing a new state-space model. An initialization algorithm is then proposed by employing Bayesian inference to estimate all the unknown variables simultaneously. By analyzing its performance, an improved version is further developed. Two simulation examples demonstrate that the proposed initialization approaches can be considered competitive alternatives to various existing initialization methods when the initial condition is unknown.
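A toy experiment in the spirit of question (2): run the same scalar Kalman filter from a correct and from a deliberately biased initial guess and compare the transient error. The model and the numbers are purely illustrative; the paper's batch reformulation and Bayesian initialization are not reproduced here.

```python
import numpy as np

def kalman_1d(y, x0, P0, a=1.0, q=0.01, c=1.0, r=0.25):
    """Scalar KF for x_{k+1} = a*x_k + w, y_k = c*x_k + v."""
    x, P, est = x0, P0, []
    for yk in y:
        x, P = a * x, a * P * a + q            # predict
        K = P * c / (c * P * c + r)            # Kalman gain
        x = x + K * (yk - c * x)               # update
        P = (1 - K * c) * P
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))   # slowly drifting true state
y = truth + rng.normal(0.0, 0.5, 200)          # noisy measurements
good = kalman_1d(y, x0=truth[0], P0=1.0)
bad = kalman_1d(y, x0=truth[0] + 10.0, P0=1.0) # biased initial guess
# mean absolute error over the first 20 steps shows the cost of a bad x0
print(np.abs(good - truth)[:20].mean(), np.abs(bad - truth)[:20].mean())
```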

Hybrid Monte Carlo + Interval-based localization

Weiss, R., Glösekötter, P., Prestes, E. et al., Hybridisation of Sequential Monte Carlo Simulation with Non-linear Bounded-error State Estimation Applied to Global Localisation of Mobile Robots, J Intell Robot Syst 99, 335–357 (2020). DOI: 10.1007/s10846-019-01118-7.

Accurate self-localisation is a fundamental ability of any mobile robot. In Monte Carlo localisation, a probability distribution over a space of possible hypotheses accommodates the inherent uncertainty in the position estimate, whereas bounded-error localisation provides a region that is guaranteed to contain the robot. However, this guarantee is accompanied by a constant probability over the confined region and therefore the information yield may not be sufficient for certain practical applications. Four hybrid localisation algorithms are proposed, combining probabilistic filtering with non-linear bounded-error state estimation based on interval analysis. A forward-backward contractor and the Set Inverter via Interval Analysis are hybridised with a bootstrap filter and an unscented particle filter, respectively. The four algorithms are applied to global localisation of an underwater robot, using simulated distance measurements to distinguishable landmarks. As opposed to previous hybrid methods found in the literature, the bounded-error state estimate is not maintained throughout the whole estimation process. Instead, it is only computed once in the beginning, when solving the wake-up robot problem, and after kidnapping of the robot, which drastically reduces the computational cost when compared to the existing algorithms. It is shown that the novel algorithms can solve the wake-up robot problem as well as the kidnapped robot problem more accurately than the two conventional probabilistic filters.
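A rough sketch of the initialization idea: intersect axis-aligned boxes implied by landmark distance measurements (a deliberately coarse stand-in for the forward-backward contractor or SIVIA) and seed the bootstrap particle filter only inside the resulting box. Function names and the box construction are simplifications for illustration, not the paper's algorithms.

```python
import numpy as np

def distance_box(landmark, d, eps):
    """Axis-aligned box guaranteed to contain every point whose distance
    to `landmark` lies in [d - eps, d + eps] (outer approximation)."""
    lx, ly = landmark
    r = d + eps
    return (lx - r, lx + r), (ly - r, ly + r)

def initial_box(landmarks, dists, eps):
    """Intersect the per-landmark boxes (a very coarse contractor)."""
    xlo, xhi, ylo, yhi = -np.inf, np.inf, -np.inf, np.inf
    for lm, d in zip(landmarks, dists):
        (bxl, bxh), (byl, byh) = distance_box(lm, d, eps)
        xlo, xhi = max(xlo, bxl), min(xhi, bxh)
        ylo, yhi = max(ylo, byl), min(yhi, byh)
    return (xlo, xhi), (ylo, yhi)              # empty if xlo > xhi etc.

def seed_particles(xbounds, ybounds, n=1000, rng=None):
    """Bootstrap-filter initialization confined to the bounded-error box."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(xbounds[0], xbounds[1], n)
    y = rng.uniform(ybounds[0], ybounds[1], n)
    theta = rng.uniform(-np.pi, np.pi, n)
    return np.stack([x, y, theta], axis=1)
```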

Improving sensory information, diagnosis and fault tolerance by using multiple sensors and sensor fusion, with a good related work section (2.3) on fault tolerance in data fusion

Kaci Bader, Benjamin Lussier, Walter Schön, A fault tolerant architecture for data fusion: A real application of Kalman filters for mobile robot localization, Robotics and Autonomous Systems, Volume 88, February 2017, Pages 11-23, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.11.015.

Multisensor perception has an important role in robotics and autonomous systems, providing inputs for critical functions including obstacle detection and localization. It is starting to appear in critical applications such as drones and ADASs (Advanced Driver Assistance Systems). However, this kind of complex system is difficult to validate comprehensively. In this paper we look at multisensor perception systems in relation to an alternative dependability method, namely fault tolerance. We propose an approach for tolerating faults in multisensor data fusion that is based on the more traditional method of duplication–comparison, and that offers detection and recovery services. We detail an example implementation using Kalman filter data fusion for mobile robot localization. We demonstrate its effectiveness in this case study using real data and fault injection.
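The duplication-comparison principle can be pictured as running two redundant fusion branches on diversified sensor subsets and flagging a fault when their estimates disagree beyond what their covariances allow. The snippet below is a hypothetical sketch of that comparison step; the .state/.cov interface and the chi-square threshold are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def duplication_comparison(branch_a, branch_b, chi2_threshold=9.21):
    """Compare two redundant fusion branches (e.g. Kalman filters fed by
    different sensor subsets). A Mahalanobis distance above the threshold
    flags a disagreement; recovery can then fall back to the branch whose
    own innovation statistics remain consistent. `branch_a`/`branch_b` are
    assumed to expose .state (mean vector) and .cov (covariance)."""
    diff = branch_a.state - branch_b.state
    S = branch_a.cov + branch_b.cov            # covariance of the difference
    d2 = diff @ np.linalg.solve(S, diff)       # squared Mahalanobis distance
    return d2 > chi2_threshold                 # True -> fault detected
```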

Using the Bingham probability distribution, which is defined on the d-dimensional sphere and is antipodally symmetric, to address the problem of angle periodicity in [0, 2pi] when estimating orientation with a recursive filter

Gilitschenski, I., Kurz, G., Julier, S.J., Hanebeck, U.D., Unscented Orientation Estimation Based on the Bingham Distribution, IEEE Transactions on Automatic Control, vol. 61, no. 1, pp. 172-177, Jan. 2016. DOI: 10.1109/TAC.2015.2423831.

In this work, we develop a recursive filter to estimate orientation in 3D, represented by quaternions, using directional distributions. Many closed-form orientation estimation algorithms are based on traditional nonlinear filtering techniques, such as the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). These approaches assume the uncertainties in the system state and measurements to be Gaussian-distributed. However, Gaussians cannot account for the periodic nature of the manifold of orientations and thus small angular errors have to be assumed and ad hoc fixes must be used. In this work, we develop computationally efficient recursive estimators that use the Bingham distribution. This distribution is defined on the hypersphere and is inherently more suitable for periodic problems. As a result, these algorithms are able to consistently estimate orientation even in the presence of large angular errors. Furthermore, handling of nontrivial system functions is performed using an entirely deterministic method which avoids any random sampling. A scheme reminiscent of the UKF is proposed for the nonlinear manifold of orientations. It is the first deterministic sampling scheme that truly reflects the nonlinear manifold of orientations.
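For reference, the Bingham density on the unit hypersphere S^{d-1}, in a common parameterization (not necessarily the paper's notation):

```latex
p(\mathbf{x}) \;=\; \frac{1}{F}\,\exp\!\left(\mathbf{x}^{\top} M Z M^{\top} \mathbf{x}\right),
\qquad \mathbf{x} \in S^{d-1},
```

where M is an orthogonal matrix, Z = diag(z_1, ..., z_{d-1}, 0) with z_i <= 0, and F is the normalization constant. Because the exponent is quadratic in x, p(x) = p(-x), which is exactly the antipodal symmetry needed for unit quaternions, where q and -q represent the same rotation.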

Comparison of EKF and UKF for robot localization, and a method for selecting a subset of the available sonar sensors

Luigi D’Alfonso, Walter Lucia, Pietro Muraca, Paolo Pugliese, Mobile robot localization via EKF and UKF: A comparison based on real data, Robotics and Autonomous Systems, Volume 74, Part A, December 2015, Pages 122-127, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.07.007.

In this work we compare the performance of two well known filters for nonlinear models, the Extended Kalman Filter and the Unscented Kalman Filter, in estimating the position and orientation of a mobile robot. The two filters fuse the measurements taken by ultrasonic sensors located onboard the robot. The experimental results on real data show a substantial equivalence of the two filters, although in principle the approximating properties of the UKF are much better. A switching sensor-activation policy is also devised, which allows an accurate estimate of the robot state to be obtained using only a fraction of the available sensors, with a significant saving of battery power.
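Both filters in a comparison like this share the same process and measurement models; only the way the nonlinearity is handled differs. Below is an illustrative unicycle motion model, a range measurement model, and one plausible heuristic for activating only a subset of sensors, keeping the k beams whose predicted range is most uncertain. None of this is taken from the paper; it only sketches the ingredients such a study manipulates.

```python
import numpy as np

def motion_model(x, u, dt):
    """Unicycle prediction: x = [px, py, theta], u = [v, omega]."""
    px, py, th = x
    v, w = u
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + w * dt])

def range_measurement(x, beacons):
    """Expected ranges from the robot position to a set of targets."""
    d = beacons - x[:2]
    return np.hypot(d[:, 0], d[:, 1])

def select_sensors(x, P, beacons, k=3):
    """Toy switching policy: keep the k targets whose predicted range is
    most sensitive to the uncertain position (largest predicted variance),
    leaving the other sensors off to save power. Purely illustrative."""
    d = beacons - x[:2]
    r = np.hypot(d[:, 0], d[:, 1])
    H = -d / r[:, None]                         # d(range)/d(px, py)
    info = np.einsum('ij,jk,ik->i', H, P[:2, :2], H)
    return np.argsort(info)[-k:]
```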

One of the first thorough studies of Monte Carlo Localization with line-segment maps

Biswajit Sarkar, Surojit Saha, Prabir K. Pal, A novel method for computation of importance weights in Monte Carlo localization on line segment-based maps, Robotics and Autonomous Systems, Volume 74, Part A, December 2015, Pages 51-65, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.07.001.

Monte Carlo localization is a powerful and popular approach in mobile robot localization. Line segment-based maps provide a compact and scalable representation of indoor environments for mobile robot navigation. But Monte Carlo localization has seldom been studied in the context of line segment-based maps. A key step of the approach, and one that can endow it with or rob it of the attributes of accuracy, robustness and efficiency, is the computation of the so-called importance weight associated with each particle. In this paper, we propose a new method for the computation of importance weights on maps represented with line segments, and extensively study its performance in pose tracking. We also compare our method with three other methods reported in the literature and present the results and insights thus gathered. The comparative study, conducted using both simulated and real data, on maps built from real data available in the public domain, clearly establishes that the proposed method is more accurate, robust and efficient than the other methods.
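The expensive core of weighting particles on a line-segment map is ray-casting each beam against the segments and scoring the measured range against the expected one. The sketch below implements a generic beam-model weight (Gaussian range likelihood); it is not the paper's proposed method, it only illustrates what computing an importance weight against a segment map involves.

```python
import numpy as np

def ray_segment_range(origin, angle, seg_a, seg_b, max_range):
    """Distance from `origin` along direction `angle` to the segment
    (seg_a, seg_b), or max_range if the ray misses it."""
    d = np.array([np.cos(angle), np.sin(angle)])
    e = seg_b - seg_a
    denom = d[0] * (-e[1]) - d[1] * (-e[0])     # determinant of [d, -e]
    if abs(denom) < 1e-12:
        return max_range                         # ray parallel to segment
    rhs = seg_a - origin
    t = (rhs[0] * (-e[1]) - rhs[1] * (-e[0])) / denom   # along the ray
    s = (d[0] * rhs[1] - d[1] * rhs[0]) / denom          # along the segment
    if t >= 0.0 and 0.0 <= s <= 1.0:
        return min(t, max_range)
    return max_range

def particle_weight(pose, scan, segments, sigma=0.1, max_range=10.0):
    """Importance weight: product of per-beam Gaussian range likelihoods.
    `scan` is a list of (relative beam angle, measured range) pairs and
    `segments` a list of (endpoint_a, endpoint_b) arrays."""
    x, y, theta = pose
    logw = 0.0
    for beam_angle, z in scan:
        expected = min(ray_segment_range(np.array([x, y]), theta + beam_angle,
                                         a, b, max_range)
                       for a, b in segments)
        logw += -0.5 * ((z - expected) / sigma) ** 2
    return np.exp(logw)
```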

A clarification and systematization of the UKF

Menegaz, H.M.T., Ishihara, J.Y., Borges, G.A., Vargas, A.N., A Systematization of the Unscented Kalman Filter Theory, IEEE Transactions on Automatic Control, vol. 60, no. 10, pp. 2583-2598, Oct. 2015. DOI: 10.1109/TAC.2015.2404511.

In this paper, we propose a systematization of the (discrete-time) Unscented Kalman Filter (UKF) theory. We gather all available UKF variants in the literature, present corrections to theoretical inconsistencies, and provide a tool for the construction of new UKFs in a consistent way. This systematization is done, mainly, by revisiting the concepts of Sigma-Representation, Unscented Transformation (UT), Scaled Unscented Transformation (SUT), UKF, and Square-Root Unscented Kalman Filter (SRUKF). Inconsistencies are related to 1) matching the order of the transformed covariance and cross-covariance matrices of both the UT and the SUT; 2) multiple UKF definitions; 3) issues with some reduced sets of sigma points described in the literature; 4) the conservativeness of the SUT; 5) the scaling effect of the SUT on both its transformed covariance and cross-covariance matrices; and 6) possibly ill-conditioned results in SRUKFs. With the proposed systematization, the symmetric sets of sigma points in the literature are formally justified, and we are able to provide new consistent variations of UKFs, such as the Scaled SRUKFs and UKFs composed of the minimum number of sigma points. Furthermore, our proposed SRUKF has improved computational properties when compared to state-of-the-art methods.
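As background for the systematization, the classical symmetric sigma-point set and unscented transform look as follows; kappa is the usual spread parameter and the code is a plain textbook version, not any of the corrected variants the paper derives.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Symmetric sigma-point set (2n+1 points, weights summing to one)
    propagating (mean, cov) through a nonlinear function f."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)       # matrix square root
    sigmas = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])           # transformed sigma points
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov
```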

Modelling ECGs as sums of Gaussians and estimating them with switching Kalman filters and the likelihood of each mode

Oster, J., Behar, J., Sayadi, O., Nemati, S., Johnson, A.E.W., Clifford, G.D., Semisupervised ECG Ventricular Beat Classification With Novelty Detection Based on Switching Kalman Filters, IEEE Transactions on Biomedical Engineering, vol. 62, no. 9, pp. 2125-2134, Sept. 2015. DOI: 10.1109/TBME.2015.2402236.

Automatic processing and accurate diagnosis of pathological electrocardiogram (ECG) signals remains a challenge. As long-term ECG recordings continue to increase in prevalence, driven partly by the ease of remote monitoring technology usage, the need to automate ECG analysis continues to grow. In previous studies, a model-based ECG filtering approach to ECG data from healthy subjects has been applied to facilitate accurate online filtering and analysis of physiological signals. We propose an extension of this approach, which models not only normal and ventricular heartbeats, but also morphologies not previously encountered. A switching Kalman filter approach is introduced to enable the automatic selection of the most likely mode (beat type), while simultaneously filtering the signal using appropriate prior knowledge. Novelty detection is also made possible by incorporating a third mode for the detection of unknown (not previously observed) morphologies, and denoted as X-factor. This new approach is compared to state-of-the-art techniques for the ventricular heartbeat classification in the MIT-BIH arrhythmia and Incart databases. F1 scores of 98.3% and 99.5% were found on each database, respectively, which are superior to other published algorithms’ results reported on the same databases. Only 3% of all the beats were discarded as X-factor, and the majority of these beats contained high levels of noise. The proposed technique demonstrates accurate beat classification in the presence of previously unseen (and unlearned) morphologies and noise, and provides an automated method for morphological analysis of arbitrary (unknown) ECG leads.
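The mode-selection logic at the heart of a switching Kalman filter can be sketched as a bank of per-morphology filters scored by their innovation likelihoods, with a third, deliberately broad mode playing the role of the X-factor novelty class. The callable filter interface below is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def switching_kf_step(filters, priors, y):
    """One step of a switching KF. Each entry of `filters` is a callable
    that runs predict/update for its own beat model on measurement `y`
    and returns the innovation `nu` and its covariance `S`. The mode
    posterior is the prior weighted by each filter's innovation
    likelihood; a mode with a very broad S acts as the X-factor class."""
    liks = []
    for kf in filters:
        nu, S = kf(y)
        liks.append(multivariate_normal.pdf(nu, mean=np.zeros_like(nu), cov=S))
    post = np.asarray(priors) * np.array(liks)
    post /= post.sum()
    return post.argmax(), post                  # most likely beat type, posterior
```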