Category Archives: Mathematics

Partially observed Boolean dynamical systems

M. Imani and U. M. Braga-Neto, “Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems,” in IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 359-371, Jan. 15, 2017. DOI: 10.1109/TSP.2016.2614798.

We present a framework for the simultaneous estimation of state and parameters of partially observed Boolean dynamical systems (POBDS). Simultaneous state and parameter estimation is achieved through the combined use of the Boolean Kalman filter and Boolean Kalman smoother, which provide the minimum mean-square error state estimators for the POBDS model, and maximum-likelihood (ML) parameter estimation; in the presence of continuous parameters, ML estimation is performed using the expectation-maximization algorithm. The performance of the proposed ML adaptive filter is demonstrated by numerical experiments with a POBDS model of gene regulatory networks observed through noisy next-generation sequencing (RNA-seq) time series data using the well-known p53-MDM2 negative-feedback loop gene regulatory model.
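
To make the recursion concrete, below is a minimal sketch (not the authors' code) of the exact Bayes filter that underlies the Boolean Kalman filter: the posterior over all 2^d Boolean state vectors is propagated explicitly, the prediction uses the transition matrix induced by the network function and Bernoulli state noise, and the bit-wise MMSE estimate is obtained by thresholding the posterior mean at 1/2. The toy network function, the independent bit-flip observation model (a stand-in for the paper's RNA-seq model), and all parameters are illustrative, and the ML/EM parameter-estimation layer is omitted.

import itertools
import numpy as np

d = 3                                   # number of Boolean state variables
states = np.array(list(itertools.product([0, 1], repeat=d)))  # all 2^d states
p_noise = 0.05                          # per-gene Bernoulli state-noise probability
p_obs_err = 0.1                         # probability a gene is observed flipped

def f(x):
    """Toy network function: each gene is activated by the previous one (cyclic)."""
    return np.roll(x, 1)

# Transition matrix M[i, j] = P(X_k = states[j] | X_{k-1} = states[i])
M = np.zeros((2 ** d, 2 ** d))
for i, x in enumerate(states):
    fx = f(x)
    for j, xn in enumerate(states):
        flips = np.sum(fx != xn)
        M[i, j] = (p_noise ** flips) * ((1 - p_noise) ** (d - flips))

def bkf_step(pi, y):
    """One predict/update step; pi is the posterior vector over all 2^d states."""
    pred = M.T @ pi                                        # prediction
    flips = np.sum(states != y, axis=1)                    # observation mismatches
    lik = (p_obs_err ** flips) * ((1 - p_obs_err) ** (d - flips))
    post = pred * lik
    post /= post.sum()                                     # normalization
    x_mmse = (states.T @ post > 0.5).astype(int)           # bit-wise MMSE estimate
    return post, x_mmse

pi = np.full(2 ** d, 1 / 2 ** d)                           # uniform prior
post, x_hat = bkf_step(pi, y=np.array([1, 0, 1]))
print(x_hat)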

Performing filtering on graphs instead of individual signals

E. Isufi, A. Loukas, A. Simonetto and G. Leus, “Autoregressive Moving Average Graph Filtering,” in IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 274-288, Jan. 15, 2017. DOI: 10.1109/TSP.2016.2614793.

One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analog of classical filters but intended for signals defined on graphs. This paper brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which are able to approximate any desired graph frequency response, and give exact solutions for specific graph signal denoising and interpolation problems. Designing the ARMA coefficients independently of the underlying graph renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph topology are changing over time. We show that, in the case of a time-varying graph signal, our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domain. We also derive the graph filter behavior, as well as sufficient conditions for filter stability when the graph and signal are time varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, as predicted by theoretical derivations.
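
As a concrete illustration, here is a hedged NumPy sketch of a first-order ARMA graph filter recursion of the form y_{t+1} = psi*M*y_t + phi*x on a static graph, where M is taken to be a shifted Laplacian and the steady state realizes the rational frequency response phi/(1 - psi*mu) at each graph frequency mu (valid when |psi| times the spectral radius of M is below 1). The graph, signal, and coefficients are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                         # undirected adjacency matrix
L = np.diag(A.sum(1)) - A                           # combinatorial Laplacian
lam_max = np.linalg.eigvalsh(L).max()
M = 0.5 * lam_max * np.eye(n) - L                   # shifted Laplacian (shift operator)

psi, phi = 0.9 / (0.5 * lam_max), 1.0               # ensures |psi| * rho(M) < 1
x = rng.standard_normal(n)                          # static graph signal

y = np.zeros(n)
for _ in range(200):                                # distributed ARMA_1 recursion
    y = psi * (M @ y) + phi * x

# Check against the analytic response phi / (1 - psi * mu) in the spectral domain
mu, U = np.linalg.eigh(M)
y_spectral = U @ (phi / (1 - psi * mu) * (U.T @ x))
print(np.allclose(y, y_spectral, atol=1e-6))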

A variant of particle filters that uses feedback to model how particles move towards the real posterior

T. Yang, P. G. Mehta and S. P. Meyn, “Feedback Particle Filter,” in IEEE Transactions on Automatic Control, vol. 58, no. 10, pp. 2465-2480, 2013. DOI: 10.1109/TAC.2013.2258825.

The feedback particle filter introduced in this paper is a new approach to approximate nonlinear filtering, motivated by techniques from mean-field game theory. The filter is defined by an ensemble of controlled stochastic systems (the particles). Each particle evolves under feedback control based on its own state, and features of the empirical distribution of the ensemble. The feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior, and the common posterior of any particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: The two posteriors match exactly, provided they are initialized with identical priors. 2) The optimal filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via a solution of an Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and a neuroscience application involving coupled oscillators. In some cases it is found that the filter exhibits significantly lower variance when compared to the bootstrap particle filter.
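
The sketch below illustrates the idea for a scalar linear-Gaussian model, using an Euler discretization and the constant-gain approximation of the feedback gain, which in this setting reproduces the Kalman-gain structure mentioned in the abstract. The model, step size, and particle count are illustrative, and this is not the authors' implementation.

import numpy as np

# Model: dX = a X dt + sigma_b dB,   dZ = h X dt + sigma_w dW
rng = np.random.default_rng(1)
a, h = -0.5, 1.0
sigma_b, sigma_w = 0.3, 0.2
dt, T, N = 0.01, 5.0, 500                       # step, horizon, number of particles

x_true, particles = 1.0, rng.normal(1.0, 0.5, N)
for _ in range(int(T / dt)):
    # simulate the truth and the increment dZ of the observation process
    x_true += a * x_true * dt + sigma_b * np.sqrt(dt) * rng.standard_normal()
    dZ = h * x_true * dt + sigma_w * np.sqrt(dt) * rng.standard_normal()

    # constant-gain approximation: K = Cov(X, h(X)) / sigma_w^2
    h_part = h * particles
    h_bar = h_part.mean()
    K = np.mean((particles - particles.mean()) * (h_part - h_bar)) / sigma_w ** 2

    # each particle is steered by feedback on its own innovation error
    innovation = dZ - 0.5 * (h_part + h_bar) * dt
    particles += (a * particles * dt
                  + sigma_b * np.sqrt(dt) * rng.standard_normal(N)
                  + K * innovation)

print("estimate:", particles.mean(), "truth:", x_true)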

A gentle introduction to Box-Particle Filters

A. Gning, B. Ristic, L. Mihaylova and F. Abdallah, “An Introduction to Box Particle Filtering [Lecture Notes],” in IEEE Signal Processing Magazine, vol. 30, no. 4, pp. 166-171, July 2013. DOI: 10.1109/MSP.2013.225460.

Resulting from the synergy between the sequential Monte Carlo (SMC) method [1] and interval analysis [2], box particle filtering is an approach that has recently emerged [3] and is aimed at solving a general class of nonlinear filtering problems. This approach is particularly appealing in practical situations involving imprecise stochastic measurements that result in very broad posterior densities. It relies on the concept of a box particle that occupies a small and controllable rectangular region having a nonzero volume in the state space. Key advantages of the box particle filter (box-PF) against the standard particle filter (PF) are its reduced computational complexity and its suitability for distributed filtering. Indeed, in some applications where the sampling importance resampling (SIR) PF may require thousands of particles to achieve accurate and reliable performance, the box-PF can reach the same level of accuracy with just a few dozen box particles. Recent developments [4] also show that a box-PF can be interpreted as a Bayes’ filter approximation allowing the application of box-PF to challenging target tracking problems [5].
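
To give a feel for the mechanics, here is a heavily simplified one-dimensional, one-step sketch of the box-particle idea: each particle is an interval rather than a point, prediction propagates and inflates it, and the update contracts it against an interval measurement and re-weights it by the surviving fraction of its width. The real box-PF uses interval contractors and resampling that this toy omits; the dynamics, noise bound, and measurement are illustrative.

import numpy as np

def predict(box, f=lambda x: 0.9 * x + 1.0, noise=0.1):
    lo, hi = box
    return (f(lo) - noise, f(hi) + noise)      # valid because f is increasing

def contract(box, meas_box):
    lo = max(box[0], meas_box[0])
    hi = min(box[1], meas_box[1])
    return (lo, hi) if lo < hi else None       # empty intersection: the box dies

boxes = [(i, i + 1.0) for i in range(0, 10)]   # initial box particles
weights = np.ones(len(boxes)) / len(boxes)

meas = (4.3, 5.6)                              # imprecise (interval) measurement
new_boxes, new_weights = [], []
for b, w in zip(boxes, weights):
    pb = predict(b)
    cb = contract(pb, meas)
    if cb is not None:
        new_boxes.append(cb)
        new_weights.append(w * (cb[1] - cb[0]) / (pb[1] - pb[0]))

new_weights = np.array(new_weights) / np.sum(new_weights)
print(list(zip(new_boxes, new_weights.round(3))))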

Using multiple RANSACs for tracking

Peter C. Niedfeldt and Randal W. Beard, “Convergence and Complexity Analysis of Recursive-RANSAC: A New Multiple Target Tracking Algorithm,” in IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 456-461, Feb. 2016. DOI: 10.1109/TAC.2015.2437518.

The random sample consensus (RANSAC) algorithm was developed as a regression algorithm that robustly estimates the parameters of a single signal in clutter by forming many simple hypotheses and computing how many measurements support each hypothesis. In essence, RANSAC solves the data association problem for a single target in clutter by identifying the hypothesis with the most supporting measurements. The newly developed recursive-RANSAC (R-RANSAC) algorithm extends the traditional RANSAC algorithm to track multiple targets recursively by storing a set of hypotheses between time steps. In this technical note we show that R-RANSAC converges to the minimum mean-squared solution for well-spaced targets. We also show that the worst-case computational complexity of R-RANSAC is quadratic in the number of new measurements and stored models.
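
For context, the sketch below shows the single-hypothesis RANSAC step that R-RANSAC applies recursively: fit a line to a minimal sample, count the supporting measurements, and keep the best-supported hypothesis. It is a generic illustration, not the R-RANSAC algorithm itself; the data, inlier threshold, and iteration count are illustrative.

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)          # target measurements
y[::4] = rng.uniform(0, 25, 25)                      # 25% clutter

best_inliers, best_model = 0, None
for _ in range(100):
    i, j = rng.choice(100, size=2, replace=False)    # minimal sample: two points
    slope = (y[j] - y[i]) / (x[j] - x[i] + 1e-12)
    intercept = y[i] - slope * x[i]
    inliers = np.abs(y - (slope * x + intercept)) < 0.3
    if inliers.sum() > best_inliers:
        best_inliers, best_model = inliers.sum(), (slope, intercept)

print("best hypothesis:", best_model, "supported by", best_inliers, "measurements")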

Efficient computation of the determinant and inverse of Gaussian covariance matrices in the context of Gaussian processes

Sivaram Ambikasaran, Daniel Foreman-Mackey, Leslie Greengard, David W. Hogg, and Michael O’Neil, “Fast Direct Methods for Gaussian Processes,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 252-265, Feb. 2016. DOI: 10.1109/TPAMI.2015.2448083.

A number of problems in probability and statistics can be addressed using the multivariate normal (Gaussian) distribution. In the one-dimensional case, computing the probability for a given mean and variance simply requires the evaluation of the corresponding Gaussian density. In the n-dimensional setting, however, it requires the inversion of an n x n covariance matrix, C, as well as the evaluation of its determinant, det(C). In many cases, such as regression using Gaussian processes, the covariance matrix is of the form C = σ^2 I + K, where K is computed using a specified covariance kernel which depends on the data and additional parameters (hyperparameters). The matrix C is typically dense, causing standard direct methods for inversion and determinant evaluation to require O(n^3) work. This cost is prohibitive for large-scale modeling. Here, we show that for the most commonly used covariance functions, the matrix C can be hierarchically factored into a product of block low-rank updates of the identity matrix, yielding an O(n log^2 n) algorithm for inversion. More importantly, we show that this factorization enables the evaluation of the determinant det(C), permitting the direct calculation of probabilities in high dimensions under fairly broad assumptions on the kernel defining K. Our fast algorithm brings many problems in marginalization and the adaptation of hyperparameters within practical reach using a single CPU core. The combination of nearly optimal scaling in terms of problem size with high-performance computing resources will permit the modeling of previously intractable problems. We illustrate the performance of the scheme on standard covariance kernels.
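
For reference, the following sketch spells out the dense O(n^3) computation that the hierarchical factorization replaces: assembling C = σ^2 I + K for a squared-exponential kernel and obtaining both C^{-1} y and log det(C) from a Cholesky factor in order to evaluate the Gaussian log-likelihood. The kernel choice and hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(3)
n = 500
t = np.sort(rng.uniform(0, 10, n))                   # one-dimensional inputs
y = np.sin(t) + 0.1 * rng.standard_normal(n)         # observations

amp, ell, sigma = 1.0, 1.0, 0.1                      # hyperparameters
K = amp ** 2 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
C = sigma ** 2 * np.eye(n) + K                       # dense covariance matrix

L = np.linalg.cholesky(C)                            # the O(n^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # C^{-1} y
log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log det C
log_lik = -0.5 * (y @ alpha + log_det + n * np.log(2 * np.pi))
print(log_lik)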

Using the Bingham distribution, an antipodally symmetric probability distribution defined on the d-dimensional sphere, to address the problem of angle periodicity in [0, 2π) when estimating orientation with a recursive filter

I. Gilitschenski, G. Kurz, S. J. Julier and U. D. Hanebeck, “Unscented Orientation Estimation Based on the Bingham Distribution,” in IEEE Transactions on Automatic Control, vol. 61, no. 1, pp. 172-177, Jan. 2016. DOI: 10.1109/TAC.2015.2423831.

In this work, we develop a recursive filter to estimate orientation in 3D, represented by quaternions, using directional distributions. Many closed-form orientation estimation algorithms are based on traditional nonlinear filtering techniques, such as the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). These approaches assume the uncertainties in the system state and measurements to be Gaussian-distributed. However, Gaussians cannot account for the periodic nature of the manifold of orientations and thus small angular errors have to be assumed and ad hoc fixes must be used. In this work, we develop computationally efficient recursive estimators that use the Bingham distribution. This distribution is defined on the hypersphere and is inherently more suitable for periodic problems. As a result, these algorithms are able to consistently estimate orientation even in the presence of large angular errors. Furthermore, handling of nontrivial system functions is performed using an entirely deterministic method which avoids any random sampling. A scheme reminiscent of the UKF is proposed for the nonlinear manifold of orientations. It is the first deterministic sampling scheme that truly reflects the nonlinear manifold of orientations.
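
For reference, the Bingham density on the unit hypersphere S^{d-1} (d = 4 for unit quaternions) has the form

f(x) = (1/F) exp(x^T M Z M^T x),  ||x|| = 1,

where M is an orthogonal matrix, Z = diag(z_1, ..., z_d), and F is a normalization constant. Its antipodal symmetry, f(x) = f(-x), matches the fact that the unit quaternions q and -q represent the same rotation, which is what makes it suitable for orientation estimation.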

Robust Estimation of Unbalanced Mixture Models on Samples with Outliers

A. Galimzianova, F. Pernus, B. Likar and Z. Spiclin, “Robust Estimation of Unbalanced Mixture Models on Samples with Outliers,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 11, pp. 2273-2285, Nov. 2015. DOI: 10.1109/TPAMI.2015.2404835.

Mixture models are often used to compactly represent samples from heterogeneous sources. However, in the real world, the samples generally contain an unknown fraction of outliers and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be obtained by high density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator incorporating trimming of the outliers based on component-wise confidence level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, and the other containing structural magnetic resonance images of the brain with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method is capable of robustly estimating unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples, in which the outlier fraction cannot be estimated in advance.
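
As a rough illustration of trimming-based robust mixture estimation (closer in spirit to the FAST-TLE baseline than to the proposed component-wise confidence-level ordering), the sketch below runs EM for a one-dimensional two-component Gaussian mixture while excluding, at each iteration, the fraction of observations with the lowest mixture density. The synthetic data, trimming fraction, and initialization are illustrative.

import numpy as np

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0, 1, 900),        # unbalanced mixture ...
                       rng.normal(5, 1, 100),
                       rng.uniform(-20, 20, 50)])    # ... plus uniform outliers

def npdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

w, mu, s = np.array([0.5, 0.5]), np.array([-1.0, 6.0]), np.array([2.0, 2.0])
trim = 0.05
for _ in range(100):
    dens = w[None, :] * npdf(data[:, None], mu[None, :], s[None, :])
    keep = dens.sum(1) >= np.quantile(dens.sum(1), trim)   # trim lowest-density points
    r = dens[keep] / dens[keep].sum(1, keepdims=True)      # E-step on kept points
    nk = r.sum(0)
    w = nk / nk.sum()                                      # M-step
    mu = (r * data[keep, None]).sum(0) / nk
    s = np.sqrt((r * (data[keep, None] - mu) ** 2).sum(0) / nk)

print("weights:", w.round(3), "means:", mu.round(3), "stds:", s.round(3))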

A clarification and systematization of the UKF

H. M. T. Menegaz, J. Y. Ishihara, G. A. Borges and A. N. Vargas, “A Systematization of the Unscented Kalman Filter Theory,” in IEEE Transactions on Automatic Control, vol. 60, no. 10, pp. 2583-2598, Oct. 2015. DOI: 10.1109/TAC.2015.2404511.

In this paper, we propose a systematization of the (discrete-time) Unscented Kalman Filter (UKF) theory. We gather all available UKF variants in the literature, present corrections to theoretical inconsistencies, and provide a tool for the construction of new UKFs in a consistent way. This systematization is done mainly by revisiting the concepts of Sigma-Representation, Unscented Transformation (UT), Scaled Unscented Transformation (SUT), UKF, and Square-Root Unscented Kalman Filter (SRUKF). Inconsistencies are related to 1) matching the order of the transformed covariance and cross-covariance matrices of both the UT and the SUT; 2) multiple UKF definitions; 3) issues with some reduced sets of sigma points described in the literature; 4) the conservativeness of the SUT; 5) the scaling effect of the SUT on both its transformed covariance and cross-covariance matrices; and 6) possibly ill-conditioned results in SRUKFs. With the proposed systematization, the symmetric sets of sigma points in the literature are formally justified, and we are able to provide new consistent variations for UKFs, such as the Scaled SRUKFs and the UKFs composed of the minimum number of sigma points. Furthermore, our proposed SRUKF has improved computational properties when compared to state-of-the-art methods.
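
For readers unfamiliar with the building block being systematized, here is a minimal sketch of the plain unscented transformation with the symmetric set of 2n + 1 sigma points: draw the points from a matrix square root of the scaled covariance, push them through the nonlinearity, and recombine them into a transformed mean and covariance. The nonlinearity and input moments are illustrative.

import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])    # 2n + 1 sigma points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))            # symmetric weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])                  # propagate each point
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

mean = np.array([1.0, 0.5])
cov = np.array([[0.1, 0.02], [0.02, 0.2]])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])  # polar to Cartesian
print(unscented_transform(mean, cov, f, kappa=1.0))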

Building probabilistic models of physical processes from their deterministic models and some experimental data, with guarantees on the degree of agreement between the generated model and the real system

Konstantinos Karydis, Ioannis Poulakakis, Jianxin Sun, and Herbert G. Tanner, “Probabilistically valid stochastic extensions of deterministic models for systems with uncertainty,” The International Journal of Robotics Research, vol. 34, pp. 1278-1295, September 2015 (first published May 28, 2015). DOI: 10.1177/0278364915576336.

Models capable of capturing and reproducing the variability observed in experimental trials can be valuable for planning and control in the presence of uncertainty. This paper reports on a new data-driven methodology that extends deterministic models to a stochastic regime and offers probabilistic guarantees of model fidelity. From an acceptable deterministic model, a stochastic one is generated, capable of capturing and reproducing uncertain system–environment interactions at given levels of fidelity. The reported approach combines methodological elements from probabilistic model validation and randomized algorithms, to simultaneously quantify the fidelity of a model and tune the distribution of random parameters in the corresponding stochastic extension, in order to reproduce the variability observed experimentally in the physical process of interest. The approach can be applied to an array of physical processes, the models of which may come in different forms, including differential equations; we demonstrate this point by considering examples from the areas of miniature legged robots and aerial vehicles.
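
The randomized-algorithms ingredient mentioned above rests on sample-complexity bounds of the following flavor (the exact bound employed in the paper may differ): to estimate, to within ε and with confidence at least 1 - δ, the probability p that the stochastic extension reproduces an observed behavior, the additive Chernoff-Hoeffding bound

P(|p̂_N - p| ≥ ε) ≤ 2 exp(-2 N ε^2)

implies that N ≥ ln(2/δ) / (2 ε^2) independent Monte Carlo trials suffice.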