Tag Archives: Gaussian Processes

Mapping the wifi signal for robot localization both precisely and accurately, through a model of the signal that combines Gaussian processes with parametric path loss models

Renato Miyagusuku, Atsushi Yamashita, Hajime Asama, Precise and accurate wireless signal strength mappings using Gaussian processes and path loss models, Robotics and Autonomous Systems, Volume 103, 2018, Pages 134-150, DOI: 10.1016/j.robot.2018.02.011.

In this work, we present a new modeling approach that generates precise (low variance) and accurate (low mean error) wireless signal strength mappings. In robot localization, these mappings are used to compute the likelihood of locations conditioned on new sensor measurements; therefore, both mean and variance predictions are required. Gaussian processes have been successfully used for learning highly accurate mappings. However, they generalize poorly at locations far from their training inputs, causing those predictions to have high variance (low precision). In this work, we address this issue by incorporating path loss models, parametric functions that, although lacking in accuracy, generalize well. Path loss models are used together with Gaussian processes to compute mean predictions and, most importantly, to bound the Gaussian processes’ predicted variances. Through extensive testing with our open source framework, we demonstrate the ability of our approach to generate precise and accurate mappings, and the increased localization accuracy of Monte Carlo localization algorithms when using them; all our datasets and software have been made readily available online for the community.
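The abstract describes the combination only at a high level; below is a minimal sketch of how a path loss model and a GP over its residuals can be combined in this spirit. The synthetic data, the assumed known access-point position, the kernel choices, and the crude variance cap are all illustrative assumptions, not the authors' formulation (the paper derives its own variance bound).

```python
# A minimal sketch, not the authors' exact method: fit a parametric path
# loss model, train a GP on its residuals, and cap the predicted variance
# so queries far from the training data fall back on the parametric model.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic training data: 2-D robot positions and RSS readings (dBm) from
# one access point at a known position (a simplifying assumption here).
ap_pos = np.array([5.0, 5.0])
X = rng.uniform(0, 10, size=(60, 2))              # measurement locations
d = np.linalg.norm(X - ap_pos, axis=1) + 0.1      # distances to the AP

def path_loss(dist, p0, n):
    """Log-distance path loss model: RSS = p0 - 10*n*log10(dist)."""
    return p0 - 10.0 * n * np.log10(dist)

y = path_loss(d, -30.0, 2.0) + rng.normal(0, 2.0, size=d.shape)  # noisy RSS

# 1) Fit the parametric model: not very accurate, but it generalizes well.
(p0_hat, n_hat), _ = curve_fit(path_loss, d, y, p0=(-30.0, 2.0))

# 2) Fit a GP to the residuals: accurate near the training inputs.
resid = y - path_loss(d, p0_hat, n_hat)
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(2.0) + WhiteKernel(1.0)).fit(X, resid)

# 3) Predict: the path loss model supplies the mean trend, the GP corrects
#    it locally, and the variance is capped (a crude stand-in for the
#    paper's path-loss-based bound) so far-away queries stay precise.
Xq = rng.uniform(-5, 15, size=(5, 2))             # queries, some far away
dq = np.linalg.norm(Xq - ap_pos, axis=1) + 0.1
gp_mean, gp_std = gp.predict(Xq, return_std=True)
mean = path_loss(dq, p0_hat, n_hat) + gp_mean
var = np.minimum(gp_std**2, np.var(resid))
print(np.c_[mean, var])
```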

Robots that pre-compute a large set of possible behaviours in simulation, then learn the actual performance of those behaviours (propagating performance measures to similar behaviours through Gaussian process regression) and select the best one in each situation (through Bayesian optimization), thus coping with varying environments and damage to the robot

A. Cully et al., Robots that can adapt like animals, Nature, vol. 521, pp. 503–507, 2015, DOI: 10.1038/nature14422.

Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot think outside the box to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
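Since the abstract only sketches the mechanism (a pre-computed behaviour-performance map guiding trial-and-error learning), here is a minimal, hedged sketch of that adaptation loop using Gaussian process regression and an upper-confidence-bound rule. The behaviour map, kernel, damage model, and trial budget below are all stand-ins, not the paper's exact map-creation technique or acquisition function.

```python
# Compressed sketch of the adaptation loop: the pre-computed map of
# behaviours and simulated performances acts as the GP prior mean, each
# real trial on the damaged robot updates the posterior, and an upper
# confidence bound picks the next behaviour to try.
import numpy as np

rng = np.random.default_rng(1)

# Behaviour descriptors and their simulated performance (the "map").
behaviours = rng.uniform(0, 1, size=(200, 2))
sim_perf = 1.0 - np.linalg.norm(behaviours - 0.5, axis=1)

def k(a, b, ls=0.15):
    """Squared-exponential kernel between sets of behaviour descriptors."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def damaged_robot(i):
    """Stand-in for a real trial: damage penalizes part of behaviour space."""
    return sim_perf[i] - 0.4 * behaviours[i, 0] + rng.normal(0, 0.02)

tested_idx, tested_perf = [], []
for trial in range(10):                    # fixed budget stands in for the
    if tested_idx:                         # paper's stopping criterion
        Xt = behaviours[tested_idx]
        K = k(Xt, Xt) + 1e-4 * np.eye(len(tested_idx))
        Ks = k(behaviours, Xt)
        # GP posterior on the deviation from the simulated map (prior mean).
        delta = np.array(tested_perf) - sim_perf[tested_idx]
        mu = sim_perf + Ks @ np.linalg.solve(K, delta)
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    else:
        mu, var = sim_perf.copy(), np.ones(len(behaviours))
    i = int(np.argmax(mu + 0.3 * np.sqrt(np.maximum(var, 0.0))))  # UCB
    tested_idx.append(i)
    tested_perf.append(damaged_robot(i))

best = tested_idx[int(np.argmax(tested_perf))]
print("best behaviour found:", behaviours[best])
```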

Efficient computation of the determinant and inverse of Gaussian covariance matrices in the context of Gaussian processes

Sivaram Ambikasaran, Daniel Foreman-Mackey, Leslie Greengard, David W. Hogg, and Michael O’Neil, Fast Direct Methods for Gaussian Processes, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 252-265, Feb. 2016, DOI: 10.1109/TPAMI.2015.2448083.

A number of problems in probability and statistics can be addressed using the multivariate normal (Gaussian) distribution. In the one-dimensional case, computing the probability for a given mean and variance simply requires the evaluation of the corresponding Gaussian density. In the n-dimensional setting, however, it requires the inversion of an n × n covariance matrix, C, as well as the evaluation of its determinant, det(C). In many cases, such as regression using Gaussian processes, the covariance matrix is of the form C = σ²I + K, where K is computed using a specified covariance kernel which depends on the data and additional parameters (hyperparameters). The matrix C is typically dense, causing standard direct methods for inversion and determinant evaluation to require O(n³) work. This cost is prohibitive for large-scale modeling. Here, we show that for the most commonly used covariance functions, the matrix C can be hierarchically factored into a product of block low-rank updates of the identity matrix, yielding an O(n log² n) algorithm for inversion. More importantly, we show that this factorization enables the evaluation of the determinant det(C), permitting the direct calculation of probabilities in high dimensions under fairly broad assumptions on the kernel defining K. Our fast algorithm brings many problems in marginalization and the adaptation of hyperparameters within practical reach using a single CPU core. The combination of nearly optimal scaling in terms of problem size with high-performance computing resources will permit the modeling of previously intractable problems. We illustrate the performance of the scheme on standard covariance kernels.
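The hierarchical factorization itself is too involved for a snippet, but its single-level building block (a low-rank update of the identity) already shows why both the inverse and the determinant become cheap. A minimal sketch follows, with illustrative sizes; it uses the Woodbury identity and the matrix determinant lemma rather than the paper's full recursive scheme.

```python
# Single-level building block: for C = sigma^2 I + U U^T with rank p << n,
# the Woodbury identity gives C^{-1} b and the matrix determinant lemma
# gives det(C), both in O(n p^2) instead of O(n^3). The paper's scheme
# applies such low-rank updates recursively to blocks of the kernel matrix.
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma2 = 2000, 10, 0.5
U = rng.normal(size=(n, p)) / np.sqrt(n)
C = sigma2 * np.eye(n) + U @ U.T          # dense form, for checking only
b = rng.normal(size=n)

# Small p x p "capacitance" matrix: all heavy lifting happens at size p.
S = np.eye(p) + (U.T @ U) / sigma2

# Woodbury identity: C^{-1} b = b/sigma2 - U (S^{-1} (U^T b)) / sigma2^2
x = b / sigma2 - U @ np.linalg.solve(S, U.T @ b) / sigma2**2

# Matrix determinant lemma: log det(C) = n*log(sigma2) + log det(S)
logdet = n * np.log(sigma2) + np.linalg.slogdet(S)[1]

print(np.allclose(C @ x, b))                          # True
print(np.isclose(logdet, np.linalg.slogdet(C)[1]))    # True
```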

Brief but nice related work about structured prediction (MRFs, CRFs, etc.)

S. Bratieres, N. Quadrianto, and Z. Ghahramani, GPstruct: Bayesian Structured Prediction Using Gaussian Processes, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 7, pp. 1514-1520, July 2015, DOI: 10.1109/TPAMI.2014.2366151.

We introduce a conceptually novel structured prediction model, GPstruct, which is kernelized, non-parametric and Bayesian, by design. We motivate the model with respect to existing approaches, among others, conditional random fields (CRFs), maximum margin Markov networks (M³N), and structured support vector machines (SVMstruct), which embody only a subset of its properties. We present an inference procedure based on Markov Chain Monte Carlo. The framework can be instantiated for a wide range of structured objects such as linear chains, trees, grids, and other general graphs. As a proof of concept, the model is benchmarked on several natural language processing tasks and a video gesture segmentation task involving a linear chain structure. We show prediction accuracies for GPstruct which are comparable to or exceeding those of CRFs and SVMstruct.
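For readers new to structured prediction, a minimal sketch of the linear-chain case the paper benchmarks may help: latent unary and pairwise scores (which GPstruct endows with GP priors) define CRF-style potentials, and the sequence log-likelihood is computed with the forward algorithm in log space. The MCMC inference over the latent scores, which is the heart of the paper, is omitted here, and all names are illustrative.

```python
# Linear-chain structured likelihood under latent scores. In GPstruct the
# unary scores f(x_t, y) and pairwise scores g(y_prev, y) are GP-distributed
# latent variables; here they are simply sampled for illustration.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(3)
T, L = 8, 3                              # sequence length, number of labels

unary = rng.normal(size=(T, L))          # latent f(x_t, y)
pairwise = rng.normal(size=(L, L))       # latent g(y_prev, y)
y = rng.integers(0, L, size=T)           # an observed label sequence

def log_partition(unary, pairwise):
    """Forward algorithm in log space: log of the sum over all label paths."""
    a = unary[0].copy()
    for t in range(1, unary.shape[0]):
        a = unary[t] + logsumexp(a[:, None] + pairwise, axis=0)
    return logsumexp(a)

# CRF-style likelihood of the observed sequence given the latent scores.
score = unary[np.arange(T), y].sum() + pairwise[y[:-1], y[1:]].sum()
print("log p(y | f, g) =", score - log_partition(unary, pairwise))
```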