Category Archives: Mathematics

Variation of the Newton-Raphson algorithm that copes with noise, with some illustrative applications such as robotics

D. Fu et al., Modified Newton Integration Algorithm With Noise Tolerance Applied to Robotics, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 4, pp. 2134-2144, DOI: 10.1109/TSMC.2021.3049386.

The Newton–Raphson iterative algorithm has been extensively employed in basic research and engineering. However, when noise components exist in a system, its performance is largely affected. To remedy the shortcomings that conventional computing methods encounter in a noisy workspace, a novel modified Newton integration (MNI) algorithm is proposed in this article. In addition, the steady-state error of the proposed MNI algorithm is smaller than that of the Newton–Raphson algorithm in both noise-free and noisy workspaces. To lay the foundations for the corresponding theoretical analyses, the proposed MNI algorithm is first converted into a homogeneous linear equation with a residual term. Then, the related theoretical analyses are carried out, which indicate that the MNI algorithm possesses noise-tolerance ability under various noisy environments. Finally, multiple computer simulations and physical experiments on robot control applications are performed to verify the feasibility and advantage of the proposed MNI algorithm.
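For reference, a minimal sketch of the classical Newton-Raphson iteration that the MNI algorithm modifies (the MNI update itself is not reproduced here). Each step divides a function evaluation by its derivative, which is why additive noise in the evaluations degrades the steady-state error:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
    """Classical Newton-Raphson: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

If `f` returned noisy values, the step size would never settle below the noise level, which is exactly the regime the paper targets.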

New algorithms for outlier detection with applications in robotics

P. Antonante, V. Tzoumas, H. Yang and L. Carlone, Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and Applications, IEEE Transactions on Robotics, vol. 38, no. 1, pp. 281-301, Feb. 2022, DOI: 10.1109/TRO.2021.3094984.

Nonlinear estimation in robotics and vision is typically plagued with outliers due to wrong data association or incorrect detections from signal processing and machine learning methods. This article introduces two unifying formulations for outlier-robust estimation, generalized maximum consensus (G-MC) and generalized truncated least squares (G-TLS), and investigates fundamental limits, practical algorithms, and applications. Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers, even with slower-than-polynomial-time algorithms (particularly, algorithms running in quasi-polynomial time). As a second contribution, we review and extend two general-purpose algorithms. The first, adaptive trimming (ADAPT), is combinatorial and is suitable for G-MC; the second, graduated nonconvexity (GNC), is based on homotopy methods and is suitable for G-TLS. We extend ADAPT and GNC to the case where the user does not have prior knowledge of the inlier-noise statistics (or the statistics may vary over time) and is unable to guess a reasonable threshold to separate inliers from outliers (as the one commonly used in RANdom SAmple Consensus (RANSAC)). We propose the first minimally tuned algorithms for outlier rejection, which dynamically decide how to separate inliers from outliers. Our third contribution is an evaluation of the proposed algorithms on robot perception problems: mesh registration, image-based object detection (shape alignment), and pose graph optimization. ADAPT and GNC execute in real time, are deterministic, outperform RANSAC, and are robust up to 80–90% outliers. Their minimally tuned versions also compare favorably with the state of the art, even though they do not rely on a noise bound for the inliers.
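To make the truncated least squares idea concrete, here is a toy illustration (my own sketch, not the paper's G-TLS algorithm) for a scalar location parameter: residuals beyond a threshold c contribute only a constant cost, so a gross outlier cannot drag the estimate the way it drags ordinary least squares:

```python
import numpy as np

def tls_scalar(y, c, grid):
    """Truncated least squares by grid search: each residual contributes
    min(r^2, c^2), so points further than c act like a fixed penalty."""
    costs = [np.sum(np.minimum((y - x) ** 2, c ** 2)) for x in grid]
    return grid[int(np.argmin(costs))]

y = np.array([1.0, 1.1, 0.9, 1.05, 50.0])   # four inliers, one gross outlier
grid = np.linspace(-10.0, 60.0, 7001)
x_tls = tls_scalar(y, c=1.0, grid=grid)     # stays near the inlier mean
x_ls = y.mean()                             # ordinary LS is pulled toward 50
```

The grid search stands in for the homotopy (GNC) machinery the paper uses to handle the nonconvexity in realistic, high-dimensional problems.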

A really nice comparison of different outlier detection methods

Hamzeh Alimohammadi, Shengnan Nancy Chen, Performance evaluation of outlier detection techniques in production timeseries: A systematic review and meta-analysis, Expert Systems with Applications, Volume 191, 2022, DOI: 10.1016/j.eswa.2021.116371.

Time-series data have been extensively collected and analyzed in many disciplines, such as stock market, medical diagnosis, meteorology, and oil and gas industry. Numerous data in these disciplines are sequence of observations measured as functions of time, which can be further used for different applications via analytical or data analytics techniques (e.g., to forecast future price, climate change, etc.). However, presence of outliers can cause significant uncertainties to interpretation results; hence, it is essential to remove the outliers accurately and efficiently before conducting any further analysis. A total of 17 techniques that belong to statistical, regression-based, and machine learning (ML) based categories for outlier detection in timeseries are applied to the oil and gas production data analysis. 15 of these methods are utilized for production data analysis for the first time. Two state-of-the-art and high-performance techniques are then selected for data cleaning which require minimum control and time complexity. Moreover, performances of these techniques are evaluated based on several metrics including the accuracy, precision, recall, F1 score, and Cohen's Kappa to rank the techniques. Results show that eight unsupervised algorithms outperform the rest of the methods based on the synthetic case study with known outliers. For example, accuracies of the eight shortlisted methods are in the range of 0.83–0.99 with a precision between 0.83 and 0.98, compared to 0.65–0.82 and 0.07–0.77 for the others. In addition, ML-based techniques perform better than statistical techniques.
Our experimental results on real field data further indicate that the k-nearest neighbor (KNN) and Fulford-Blasingame methods are superior to the other frameworks for outlier detection in production data, followed by four others including density-based spatial clustering of applications with noise (DBSCAN) and angle-based outlier detection (ABOD). Even though the techniques are examined with oil and gas production data, the same data cleaning workflow can be used to detect timeseries outliers in other disciplines.
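The evaluation metrics used in the review are standard binary-classification scores; a minimal sketch of how they are computed for an outlier labelling task (the toy labels below are illustrative, not the paper's data):

```python
def outlier_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 and Cohen's Kappa for a binary
    outlier labelling (1 = outlier, 0 = normal point)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # Cohen's Kappa: observed agreement corrected for chance agreement.
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return acc, prec, rec, f1, kappa

# Toy example: 2 true outliers, detector finds one and raises one false alarm.
acc, prec, rec, f1, kappa = outlier_metrics([1, 1, 0, 0, 0, 0],
                                            [1, 0, 0, 0, 0, 1])
```

Kappa is the metric that most penalizes detectors on imbalanced data, which is why it is useful alongside raw accuracy when outliers are rare.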

Steffensen Value Iteration as an alternative to Value Iteration for faster convergence

Y. Cheng, L. Chen, C. L. P. Chen and X. Wang, Off-Policy Deep Reinforcement Learning Based on Steffensen Value Iteration, IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 4, pp. 1023-1032, Dec. 2021, DOI: 10.1109/TCDS.2020.3034452.

As an important machine learning method, deep reinforcement learning (DRL) has been rapidly developed in recent years and has achieved breakthrough results in many fields, such as video games, natural language processing, and robot control. However, due to the inherent trial-and-error learning mechanism of reinforcement learning and the time-consuming training of the deep neural network itself, the convergence speed of DRL is very slow, which limits its real-world applications. In this article, aiming to improve the convergence speed of DRL, we propose a novel Steffensen value iteration (SVI) method by applying the Steffensen iteration to the value function iteration of off-policy DRL from the perspective of fixed-point iteration. The proposed SVI is theoretically proved to be convergent and to have a faster convergence speed than Bellman value iteration. SVI is versatile and can be easily combined with existing off-policy RL algorithms. We propose two speedy off-policy DRL algorithms by combining SVI with DDQN and TD3, respectively, namely SVI-DDQN and SVI-TD3. Experiments on several discrete-action and continuous-action tasks from the Atari 2600 and MuJoCo platforms demonstrate that the proposed SVI-based DRL algorithms achieve higher average reward in a shorter time than the comparison algorithms.
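Outside the DRL setting, the underlying idea is the classical Steffensen (Aitken delta-squared) acceleration of a fixed-point iteration; a minimal scalar sketch (not the SVI algorithm itself, which applies the same acceleration to the Bellman operator):

```python
import math

def fixed_point(g, x0, n):
    """Plain fixed-point iteration x_{k+1} = g(x_k); converges linearly."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def steffensen(g, x0, n):
    """Steffensen acceleration: uses g(x) and g(g(x)) to extrapolate via
    Aitken's delta-squared formula; converges quadratically near the fixed point."""
    x = x0
    for _ in range(n):
        g1, g2 = g(x), g(g(x))
        denom = g2 - 2.0 * g1 + x
        if denom == 0.0:
            return g1   # already (numerically) converged
        x = x - (g1 - x) ** 2 / denom
    return x

# Fixed point of g(x) = cos(x) is approximately 0.7390851332.
x_plain = fixed_point(math.cos, 1.0, 5)
x_steff = steffensen(math.cos, 1.0, 5)
```

After the same five iterations, the plain iteration is still off in the second decimal while the Steffensen iterate is accurate to machine precision, which is the speed-up the paper transfers to value iteration.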

A general model of abstraction of graphs

Christer Bäckström, Peter Jonsson, A framework for analysing state-abstraction methods, Artificial Intelligence, Volume 302, 2022, DOI: 10.1016/j.artint.2021.103608.

Abstraction has been used in combinatorial search and action planning from the very beginning of AI. Many different methods and formalisms for state abstraction have been proposed in the literature, but they have been designed from various points of view and with varying purposes. Hence, these methods have been notoriously difficult to analyse and compare in a structured way. In order to improve upon this situation, we present a coherent and flexible framework for modelling abstraction (and abstraction-like) methods based on graph transformations. The usefulness of the framework is demonstrated by applying it to problems in both search and planning. We model six different abstraction methods from the planning literature and analyse their intrinsic properties. We show how to capture many search abstraction concepts (such as avoiding backtracking between levels) and how to put them into a broader context. We also use the framework to identify and investigate connections between refinement and heuristics, two concepts that have usually been considered as unrelated in the literature. This provides new insights into various topics, e.g. Valtorta's theorem and spurious states. We finally extend the framework with composition of transformations to accommodate abstraction hierarchies and other multi-level concepts. We demonstrate the latter by modelling and analysing the merge-and-shrink abstraction method.

Cubature (fixed point representation of uncertainties, as in UKF) Kalman Filter

Juan-Carlos Santos-León, Ramón Orive, Daniel Acosta, Leopoldo Acosta, The Cubature Kalman Filter revisited, Automatica, Volume 127, 2021, DOI: 10.1016/j.automatica.2021.109541.

In this paper, the construction and effectiveness of the so-called Cubature Kalman Filter (CKF) are revisited, as well as its extensions for higher degrees of precision. In this sense, some stable (with respect to the dimension) cubature rules with a quasi-optimal number of nodes are built, and their numerical performance is checked in comparison with other known formulas. All these cubature rules are suitably placed in the mathematical framework of numerical integration in several variables. A method based on the discretization of higher order partial derivatives by certain divided differences is used to provide stable rules of degrees d=5 and d=7, though it can also be applied for higher dimensions. The application of these old and new formulas to the filter algorithm is tested by means of some examples.
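The degree-3 spherical-radial rule underlying the standard CKF uses 2n equally weighted points; a sketch of the point generation (standard CKF material, not the paper's higher-degree rules):

```python
import numpy as np

def cubature_points(mean, cov):
    """Degree-3 spherical-radial cubature rule: 2n equally weighted points
    mean +/- sqrt(n) * L e_i, where L L^T = cov (Cholesky factor)."""
    n = len(mean)
    L = np.linalg.cholesky(cov)
    pts = np.empty((2 * n, n))
    for i in range(n):
        col = np.sqrt(n) * L[:, i]
        pts[i] = mean + col
        pts[n + i] = mean - col
    return pts, np.full(2 * n, 1.0 / (2 * n))

# The rule is exact for polynomials up to degree 3, so the weighted sample
# mean and covariance of the points reproduce the input moments exactly.
mean = np.array([0.0, 0.0])
cov = np.eye(2)
pts, w = cubature_points(mean, cov)
```

In a CKF these points are pushed through the nonlinear dynamics or measurement function, and the predicted moments are recovered as weighted sums, avoiding the Jacobians of the EKF.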

Linear regression when not only Y is perturbed by noise, but the regression data themselves are assumed to be noisy

Sophie M. Fosson, Vito Cerone, Diego Regruto, Sparse linear regression from perturbed data, Automatica, Volume 122, 2020, DOI: 10.1016/j.automatica.2020.109284.

The problem of sparse linear regression is relevant in the context of linear system identification from large datasets. When data are collected from real-world experiments, measurements are always affected by perturbations or low-precision representations. However, the problem of sparse linear regression from fully-perturbed data is scarcely studied in the literature, due to its mathematical complexity. In this paper, we show that, by assuming bounded perturbations, this problem can be tackled by solving low-complexity ℓ2 and ℓ1 minimization problems. Both theoretical guarantees and numerical results are illustrated.
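The ℓ1 minimization step can be sketched with iterative soft-thresholding (ISTA), a standard solver for ℓ1-regularized least squares; this toy example is my own construction, not the paper's bounded-perturbation formulation, but it perturbs both the measurements and the regression matrix to mimic the fully-perturbed setting:

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]          # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)  # perturbed measurements
A_obs = A + 0.01 * rng.standard_normal((40, 100))  # perturbed regression matrix
x_hat = ista(A_obs, y, lam=0.1)
```

With small, bounded perturbations on both A and y, the ℓ1 solver still recovers the support of the sparse signal, which is the kind of robustness the paper establishes theoretically.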

Including uncertainty in the model of a KF to provide robust estimators

Shaolin Ji, Chuiliu Kong, Chuanfeng Sun, A robust Kalman–Bucy filtering problem, Automatica, Volume 122, 2020, DOI: 10.1016/j.automatica.2020.109252.

A generalized Kalman–Bucy model under model uncertainty and a corresponding robust problem are studied in this paper. We find that this robust problem is equivalent to an estimation problem under a sublinear operator. By Girsanov transformation and the minimax theorem, we prove that this problem can be reformulated as a classical Kalman–Bucy filtering problem under a new probability measure. The equation which governs the optimal estimator is obtained. Moreover, the optimal estimator can be decomposed into the classical optimal estimator and a term related to the model uncertainty parameter under some condition.

A measure of when and how much the UKF is better than the EKF

Sanat K. Biswas, Li Qiao, Andrew G. Dempster, A quantified approach of predicting suitability of using the Unscented Kalman Filter in a non-linear application, Automatica, Volume 122, 2020, DOI: 10.1016/j.automatica.2020.109241.

A mathematical framework to predict the performance improvement of the Unscented Kalman Filter (UKF) relative to the Extended Kalman Filter (EKF) using a quantitative measure of non-linearity is presented. It is also shown that the range of performance improvement the UKF can attain, for a given minimum probability, depends on the Non-linearity Indices of the corresponding system and measurement models. Three distinct non-linear estimation problems are examined to verify these relations: a launch vehicle trajectory estimation problem, a satellite orbit estimation problem, and a re-entry vehicle position estimation problem. Using these relations, a procedure is suggested to predict the estimation performance improvement offered by the UKF relative to the EKF for a given non-linear system and measurement model without designing, implementing and tuning the two Kalman filters.

The problem of the initial state in filtering and its effects on the estimation

He Kong, Mao Shan, Daobilige Su, Yongliang Qiao, Abdullah Al-Azzawi, Salah Sukkarieh, Filtering for systems subject to unknown inputs without a priori initial information, Automatica, Volume 120, 2020, DOI: 10.1016/j.automatica.2020.109122.

The last few decades have witnessed much development in filtering of systems with Gaussian noise and arbitrary unknown inputs. Nonetheless, there are still some important design questions that warrant thorough discussions. Especially, the existing literature has shown that for unbiased and minimum variance estimation of the state and the unknown input, the initial guess of the state has to be unbiased. This clearly raises the question of whether and under what conditions one can design an unbiased and minimum variance filter, without making such a stringent assumption. The above-mentioned question will be investigated systematically in this paper, i.e., design of the filter is sought to be independent of a priori information about the initial conditions. In particular, for both cases with and without direct feedthrough, we establish necessary and sufficient conditions for unbiased and minimum variance estimation of the state/unknown input, independently of a priori initial conditions, respectively. When the former conditions do not hold, we carry out a thorough analysis of all possible scenarios. For each scenario, we present detailed discussions regarding whether and what can be achieved in terms of unbiased estimation, independently of a priori initial conditions. Extensions to the case with time-delays, conceptually like Kalman smoothing where future measurements are allowed in estimation, will also be presented, amongst others.

Shunyi Zhao, Biao Huang, Trial-and-error or avoiding a guess? Initialization of the Kalman filter, Automatica, Volume 121, 2020, DOI: 10.1016/j.automatica.2020.109184.

As a recursive state estimation algorithm, the Kalman filter (KF) assumes the initial state distribution is known a priori, while in practice the initial distribution is commonly treated as a design parameter. In this paper, we answer three questions concerning initialization: (1) At each time step, how does the KF respond to measurements, control signals, and, more importantly, initial states? (2) What price (in terms of accuracy) does one pay if inaccurate initial states are used? (3) Can we find a better strategy than guessing to improve the performance of the KF in the initial estimation phase when the initial condition is unknown? To these ends, the classical recursive KF is first transformed into an equivalent batch form, from which the responses of the KF to measurements, control signals, and the initial state can be clearly separated and observed. Based on this, we isolate the initial distribution by dividing the original state into two parts and reconstructing a new state-space model. An initialization algorithm is then proposed by employing Bayesian inference to estimate all the unknown variables simultaneously. By analyzing its performance, an improved version is further developed. Two simulation examples demonstrate that the proposed initialization approaches are competitive alternatives to various existing initialization methods when the initial condition is unknown.
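The practical effect the paper analyzes can be seen in a toy scalar filter (my own sketch, not the paper's batch reformulation): an overconfident wrong initial guess (small P0) makes the filter ignore early measurements, while a diffuse prior (large P0) lets the first measurement dominate immediately:

```python
import numpy as np

def scalar_kf(zs, x0, p0, q=1e-4, r=0.25):
    """Scalar random-walk Kalman filter; returns the sequence of estimates."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                 # predict: random-walk process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # measurement update
        p *= (1.0 - k)
        out.append(x)
    return out

rng = np.random.default_rng(1)
true_state = 5.0
zs = true_state + 0.5 * rng.standard_normal(50)

# Same wrong initial guess x0 = 0, two very different initial covariances:
confident = scalar_kf(zs, x0=0.0, p0=1e-4)   # overconfident wrong prior
diffuse = scalar_kf(zs, x0=0.0, p0=1e4)      # near-uninformative prior
```

The diffuse run jumps to the first measurement and then averages, while the overconfident run is still far from the true state after all 50 measurements, which is exactly the initialization price the paper quantifies.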