An excellent survey of metric SLAM (and of map representations and other issues related to SLAM) as of 2016

C. Cadena et al., “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,” in IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec. 2016. DOI: 10.1109/TRO.2016.2624754.

Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
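
For reference, the “de-facto standard formulation” the survey presents is maximum a posteriori (MAP) estimation over a factor graph. In common notation (ours, not necessarily the paper’s exact symbols), with trajectory-and-map variables X, measurements z_k, measurement models h_k, and information matrices Ω_k:

```latex
% MAP estimate of the robot trajectory and map from all measurements Z:
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k}
    \lVert h_k(\mathcal{X}_k) - z_k \rVert_{\Omega_k}^{2}
% (the second equality assumes Gaussian measurement noise)
```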

Subgraph matching (isomorphism) using GPUs for managing commonsense knowledge, and a short list of other graph problems that have benefited from massively parallel processing

Ha-Nguyen Tran, Erik Cambria, Amir Hussain, Towards GPU-Based Common-Sense Reasoning: Using Fast Subgraph Matching, Cognitive Computation, December 2016, Volume 8, Issue 6, pp 1074–1086, DOI: 10.1007/s12559-016-9418-4.

Common-sense reasoning is concerned with simulating the cognitive human ability to make presumptions about the type and essence of ordinary situations encountered every day. The most popular way to represent common-sense knowledge is in the form of a semantic graph. Such knowledge, however, is known to be rather extensive: the more concepts added to the graph, the harder and slower it becomes to apply standard graph mining techniques. In this work, we propose a new fast subgraph matching approach to overcome these issues. Subgraph matching is the task of finding all matches of a query graph in a large data graph, which is known to be an NP-complete problem. Many algorithms have been previously proposed to solve this problem using central processing units. Here, we present a new graphics processing unit-friendly method for common-sense subgraph matching, termed GpSense, which is designed for scalable massively parallel architectures, to enable next-generation Big Data sentiment analysis and natural language processing applications. We show that GpSense outperforms state-of-the-art algorithms and efficiently answers subgraph queries on large common-sense graphs.
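
GpSense itself targets GPUs, but the filter-and-verify skeleton that such methods parallelize can be sketched sequentially. Everything below (the graph encoding and helper names) is our own illustrative construction, not the paper’s code:

```python
# Minimal filter-and-verify subgraph matching (sequential sketch, not GpSense).
# Graphs are adjacency dicts {node: set(neighbors)}; labels map node -> label.

def candidates(q, g, q_labels, g_labels):
    """Filter step: a data node is a candidate for a query node if the labels
    match and its degree is at least the query node's degree."""
    return {
        u: {v for v in g
            if g_labels[v] == q_labels[u] and len(g[v]) >= len(q[u])}
        for u in q
    }

def match(q, g, cand, partial=None):
    """Verify step: recursively extend a partial query->data mapping, checking
    that every already-matched query edge exists in the data graph."""
    partial = partial or {}
    if len(partial) == len(q):
        yield dict(partial)
        return
    u = next(n for n in q if n not in partial)      # next unmatched query node
    for v in cand[u] - set(partial.values()):       # keep the mapping injective
        if all(partial[w] in g[v] for w in q[u] if w in partial):
            partial[u] = v
            yield from match(q, g, cand, partial)
            del partial[u]                          # backtrack

# Example: find the path a-b-c inside a labeled triangle 0-1-2.
q = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
labels_q = dict.fromkeys(q, 'x'); labels_g = dict.fromkeys(g, 'x')
# 6 matches: any ordered length-2 path in the triangle.
print(list(match(q, g, candidates(q, g, labels_q, labels_g))))
```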

Incorporating selective attention and cortical magnification to improve computer vision

Ala Aboudib, Vincent Gripon, Gilles Coppin, A Biologically Inspired Framework for Visual Information Processing and an Application on Modeling Bottom-Up Visual Attention, Cognitive Computation, December 2016, Volume 8, Issue 6, pp 1007–1026, DOI: 10.1007/s12559-016-9430-8.

An emerging trend in visual information processing is toward incorporating some interesting properties of the ventral stream in order to account for some limitations of machine learning algorithms. Selective attention and cortical magnification are two such important phenomena that have been the subject of a large body of research in recent years. In this paper, we focus on designing a new model for visual acquisition that takes these important properties into account. We propose a new framework for visual information acquisition and representation that emulates the architecture of the primate visual system by integrating features such as retinal sampling and cortical magnification, while avoiding spatial deformations and other side effects produced by models that previously tried to implement these two features. It also explicitly integrates the notion of visual angle, which is rarely taken into account by vision models. We argue that this framework can provide the infrastructure for implementing vision tasks such as object recognition and computational visual attention algorithms. To demonstrate the utility of the proposed vision framework, we propose an algorithm for bottom-up saliency prediction implemented using the proposed architecture. We evaluate the performance of the proposed model on the MIT saliency benchmark and show that it attains state-of-the-art performance, while providing some advantages over other models.
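
To make “retinal sampling with cortical magnification” concrete, here is a toy sampling lattice whose density falls off with eccentricity, so the fovea is sampled densely and the periphery coarsely. This is only our illustration of the general idea, not the paper’s model:

```python
# Toy retina-like sampling lattice (illustrative only).
# Cortical magnification is approximated by rings whose radius grows
# geometrically with eccentricity: dense at the fovea, sparse at the edge.
import numpy as np

def retinal_lattice(n_rings=20, points_per_ring=32, r0=1.0, growth=1.2):
    """Return (x, y) sample coordinates with eccentricity-dependent spacing."""
    pts = [(0.0, 0.0)]                                  # foveal center
    r = r0
    for _ in range(n_rings):
        theta = np.linspace(0, 2 * np.pi, points_per_ring, endpoint=False)
        pts.extend(zip(r * np.cos(theta), r * np.sin(theta)))
        r *= growth                                     # geometric radius growth
    return np.array(pts)

def sample_image(img, pts, cx, cy):
    """Nearest-neighbor read-out of the image at the lattice points,
    centered on the current fixation (cx, cy)."""
    xs = np.clip(np.round(cx + pts[:, 0]).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + pts[:, 1]).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

img = np.random.rand(480, 640)                          # stand-in image
pts = retinal_lattice()
features = sample_image(img, pts, cx=320, cy=240)
print(pts.shape, features.shape)                        # (641, 2) (641,)
```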

Partially observed Boolean dynamical systems

M. Imani and U. M. Braga-Neto, “Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems,” in IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 359-371, Jan. 15, 2017. DOI: 10.1109/TSP.2016.2614798.

We present a framework for the simultaneous estimation of state and parameters of partially observed Boolean dynamical systems (POBDS). Simultaneous state and parameter estimation is achieved through the combined use of the Boolean Kalman filter and Boolean Kalman smoother, which provide the minimum mean-square error state estimators for the POBDS model, and maximum-likelihood (ML) parameter estimation; in the presence of continuous parameters, ML estimation is performed using the expectation-maximization algorithm. The performance of the proposed ML adaptive filter is demonstrated by numerical experiments on the well-known p53-MDM2 negative-feedback-loop gene regulatory network, modeled as a POBDS observed through noisy next-generation sequencing (RNA-seq) time-series data.
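
A minimal exact Bayes filter over Boolean states shows the predict/update structure behind the Boolean Kalman filter. The model below (bit-flip transition noise, binary symmetric observation channel, a toy network function) is our simplified stand-in, not the paper’s code:

```python
# Exact Bayes filter over Boolean states (sketch of the Boolean Kalman filter
# idea). State x in {0,1}^d evolves as x_k = f(x_{k-1}) XOR noise (independent
# bit flips with prob p); each bit is observed through a noisy channel (error q).
import itertools
import numpy as np

d, p, q = 3, 0.05, 0.1
states = np.array(list(itertools.product([0, 1], repeat=d)))   # all 2^d states

def f(x):                       # toy network function: cyclic shift of the bits
    return np.roll(x, 1)

# Transition matrix M[i, j] = P(x_k = states[i] | x_{k-1} = states[j]).
M = np.zeros((2**d, 2**d))
for j, xj in enumerate(states):
    for i, xi in enumerate(states):
        flips = np.sum(xi != f(xj))
        M[i, j] = p**flips * (1 - p)**(d - flips)

def bkf_step(pi, y):
    """One predict+update step; returns posterior and the MMSE state estimate
    (component-wise thresholded posterior expectation of x)."""
    pi = M @ pi                                            # prediction
    errs = np.sum(states != y, axis=1)                     # per-state obs errors
    pi *= q**errs * (1 - q)**(d - errs)                    # likelihood update
    pi /= pi.sum()
    x_mmse = (states.T @ pi > 0.5).astype(int)
    return pi, x_mmse

pi = np.full(2**d, 1 / 2**d)                               # uniform prior
for y in [np.array([1, 0, 0]), np.array([0, 1, 0])]:       # fake observations
    pi, x_hat = bkf_step(pi, y)
    print(x_hat)
```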

Performing filtering on signals defined over graphs instead of on conventional time signals

E. Isufi, A. Loukas, A. Simonetto and G. Leus, “Autoregressive Moving Average Graph Filtering,” in IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 274-288, Jan. 15, 2017. DOI: 10.1109/TSP.2016.2614793.

Graph filters, direct analogs of classical filters but intended for signals defined on graphs, are one of the cornerstones of the field of signal processing on graphs. This paper brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which are able to approximate any desired graph frequency response, and give exact solutions for specific graph signal denoising and interpolation problems. The philosophy of designing the ARMA coefficients independently from the underlying graph renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph topology are changing over time. We show that in the case of a time-varying graph signal, our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domains. We also derive the graph filter behavior, as well as sufficient conditions for filter stability when the graph and signal are time varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, as predicted by theoretical derivations.
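
As a concrete illustration of the simplest member of this family, here is a first-order ARMA graph filter in the recursion form commonly used in this line of work (y_{t+1} = ψ·M·y_t + φ·x, with M a shifted Laplacian); the toy graph and the coefficients are our own choices:

```python
# First-order ARMA graph filter (ARMA_1) sketch. For |psi| * rho(M) < 1 the
# recursion converges to the rational graph frequency response phi/(1 - psi*mu)
# applied per eigenvalue mu of M.
import numpy as np

# Toy undirected graph: a 4-cycle. L = D - A is its Laplacian.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

lam_max = np.linalg.eigvalsh(L).max()
M = 0.5 * lam_max * np.eye(4) - L          # shifted Laplacian

psi, phi = 0.3, 1.0                        # chosen so |psi| * rho(M) < 1
assert abs(psi) * np.abs(np.linalg.eigvalsh(M)).max() < 1, "unstable recursion"

x = np.array([1.0, 0.0, 0.0, 0.0])         # graph signal to filter
y = np.zeros(4)
for _ in range(100):                       # distributed iterations: each node
    y = psi * (M @ y) + phi * x            # only needs its neighbors' values

# The steady state matches the rational response applied per graph frequency.
mu, U = np.linalg.eigh(M)
y_exact = U @ ((phi / (1 - psi * mu)) * (U.T @ x))
print(np.allclose(y, y_exact, atol=1e-8))  # True
```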

On the limitations of cognitive control from the human psychological perspective

Tarek Amer, Karen L. Campbell, Lynn Hasher, Cognitive Control As a Double-Edged Sword, Trends in Cognitive Sciences, Volume 20, Issue 12, 2016, Pages 905-915, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.10.002.

Cognitive control, the ability to limit attention to goal-relevant information, aids performance on a wide range of laboratory tasks. However, there are many day-to-day functions which require little to no control and others which even benefit from reduced control. We review behavioral and neuroimaging evidence demonstrating that reduced control can enhance the performance of both older and, under some circumstances, younger adults. Using healthy aging as a model, we demonstrate that decreased cognitive control benefits performance on tasks ranging from acquiring and using environmental information to generating creative solutions to problems. Cognitive control is thus a double-edged sword: aiding performance on some tasks when fully engaged, and on many others when less engaged.

A proposal that explains why the human brain seems Bayesian yet has difficulty with probabilities: because it uses sampling

Adam N. Sanborn, Nick Chater, Bayesian Brains without Probabilities, Trends in Cognitive Sciences, Volume 20, Issue 12, 2016, Pages 883-893, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.10.003.

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
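
The core claim, that finite sampling alone generates classic reasoning errors, is easy to simulate. The event probabilities and sample sizes below are our own illustrative numbers, not the paper’s:

```python
# An agent that estimates probabilities from a few mental samples can violate
# the conjunction rule P(A and B) <= P(A), even though each frequency estimate
# is individually unbiased.
import numpy as np

rng = np.random.default_rng(0)
p_A, p_AB = 0.4, 0.2            # true probabilities; A-and-B implies A
runs = 100_000

for n_samples in (5, 20, 1000):
    est_A = rng.binomial(n_samples, p_A, runs) / n_samples
    est_AB = rng.binomial(n_samples, p_AB, runs) / n_samples
    fallacy_rate = np.mean(est_AB > est_A)      # conjunction-rule violations
    print(f"{n_samples:5d} samples -> fallacy on {fallacy_rate:.1%} of queries")
# Violations are frequent with few samples and vanish as the sample count
# grows, mirroring the paper's infinite-sample limit.
```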

State of the art in symbolic planning, particularly cost-optimal planning, and a novel approach

Álvaro Torralba, Vidal Alcázar, Peter Kissmann, Stefan Edelkamp, Efficient symbolic search for cost-optimal planning, Artificial Intelligence, Volume 242, January 2017, Pages 52-79, ISSN 0004-3702, DOI: 10.1016/j.artint.2016.10.001.

In cost-optimal planning we aim to find a sequence of operators that achieves a set of goals with minimum cost. Symbolic search with Binary Decision Diagrams (BDDs) performs efficient state-space exploration in terms of time and memory. This is crucial in optimal settings, in which large parts of the state space must be explored in order to prove optimality. However, the development of accurate heuristics for explicit-state search in recent years has left symbolic search techniques in a secondary place. In this article we propose two orthogonal improvements for symbolic search planning. On the one hand, we analyze and compare different methods for image computation in order to efficiently perform the successor generation in symbolic search. Image computation is the main bottleneck of symbolic search algorithms, so an efficient computation is paramount for efficient symbolic search planning. On the other hand, we study how to use state-invariant constraints to prune states in symbolic search. This is essential in regression search but it is yet to be exploited in symbolic search planners. Experiments with symbolic bidirectional uniform-cost search and symbolic A* search with PDBs show remarkable performance improvements on most IPC benchmark domains. Overall, with the help of our improvements, symbolic bidirectional search outperforms explicit-state search with state-of-the-art heuristics such as LM-cut across many different domains.
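
Since image computation is named as the main bottleneck, a toy sketch of that operation may help. Real symbolic planners represent the state sets and transition relation below as BDDs, which is what makes set-at-a-time expansion pay off; plain Python sets stand in for BDDs here purely to show the structure:

```python
# Set-level "image" computation sketch. A state is a frozenset of true
# propositions; an operator is (preconditions, add set, delete set).

def image(state_set, operators):
    """Successors of an entire state set under all operators at once."""
    return {
        (s - dele) | add
        for s in state_set
        for pre, add, dele in operators
        if pre <= s                        # operator applicable in state s
    }

# Toy domain: move a package between rooms a and b.
ops = [
    (frozenset({"at_a"}), frozenset({"at_b"}), frozenset({"at_a"})),  # a -> b
    (frozenset({"at_b"}), frozenset({"at_a"}), frozenset({"at_b"})),  # b -> a
]
layer = {frozenset({"at_a"})}
goal = frozenset({"at_b"})

# Symbolic breadth-first search: expand whole layers (all costs equal 1, so
# this coincides with uniform-cost search on this toy domain).
cost, reached = 0, set(layer)
while not any(goal <= s for s in layer):
    layer = image(layer, ops) - reached
    reached |= layer
    cost += 1
print("plan cost:", cost)                  # 1
```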

Model-based reinforcement learning with a reduced number of basis functions to approximate the value function, a study of its convergence guarantees, and a nice state of the art on the use of (model-based) reinforcement learning for automatic control

Rushikesh Kamalapurkar, Joel A. Rosenfeld, Warren E. Dixon, Efficient model-based reinforcement learning for approximate online optimal control, Automatica, Volume 74, 2016, Pages 247-258, ISSN 0005-1098, DOI: 10.1016/j.automatica.2016.08.004.

An infinite horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using a state following (StaF) kernel method to approximate the value function. Unlike traditional methods that aim to approximate a function over a large compact set, the StaF kernel method aims to approximate a function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than may be required for global approximation methods.
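
Here is a sketch of the state-following idea under our own simplifying assumptions: Gaussian kernels, a made-up target function, and a batch least-squares fit standing in for the paper’s online weight adaptation from extrapolated Bellman errors. Only the structure (few kernels, centers that travel with the state) reflects the StaF method:

```python
# State-following (StaF) kernel sketch: the value function is approximated
# only on a small ball around the current state x, using a few kernels whose
# centers x + d_i move with x.
import numpy as np

rng = np.random.default_rng(1)
offsets = np.array([[0.3, 0.0], [-0.15, 0.26], [-0.15, -0.26]])  # 3 centers
sigma, ball = 0.5, 0.3

def features(y, x):
    """Gaussian kernels evaluated at y, centered at x + offsets."""
    return np.exp(-np.sum((y - (x + offsets)) ** 2, axis=1) / (2 * sigma**2))

def V_true(y):                       # stand-in for the unknown value function
    return y @ np.array([1.0, 2.0]) + 0.5 * (y @ y)

def local_weights(x, n_pts=25):
    """Least-squares fit of V on a ball around x (the paper instead adapts
    the weights online as the state evolves)."""
    ys = x + ball * (rng.random((n_pts, 2)) * 2 - 1)
    Phi = np.array([features(y, x) for y in ys])
    w, *_ = np.linalg.lstsq(Phi, np.array([V_true(y) for y in ys]), rcond=None)
    return w

for x in (np.array([0.0, 0.0]), np.array([2.0, 1.0])):   # as the state moves,
    w = local_weights(x)                                  # the weights follow
    err = abs(features(x, x) @ w - V_true(x))
    print(f"x={x}, local fit error at x: {err:.2e}")
```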

A novel particle filter algorithm with an adaptive number of particles, and an interesting Table I on the pros and cons of different sensors

T. de J. Mateo Sanguino and F. Ponce Gómez, “Toward Simple Strategy for Optimal Tracking and Localization of Robots With Adaptive Particle Filtering,” in IEEE/ASME Transactions on Mechatronics, vol. 21, no. 6, pp. 2793-2804, Dec. 2016. DOI: 10.1109/TMECH.2016.2531629.

The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide a higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamic behavior of the PF relative to other filters and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique based on the Kullback-Leibler distance.
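
A minimal sketch of the dispersion-driven idea (many particles while the estimate is uncertain, few once it has converged). The 1-D localization model, noise levels, and the particular resizing rule are our own stand-ins for the paper’s DAPF:

```python
# Dispersion-driven adaptive particle filter sketch.
import numpy as np

rng = np.random.default_rng(2)
N_MIN, N_MAX, WORLD = 50, 2000, 100.0

def adaptive_pf(observations, motion=1.0, obs_noise=2.0):
    parts = rng.uniform(0, WORLD, N_MAX)            # global uncertainty: max N
    for z in observations:
        parts = parts + motion + rng.normal(0, 0.5, parts.size)   # predict
        w = np.exp(-0.5 * ((z - parts) / obs_noise) ** 2)         # weight
        w /= w.sum()
        # Dispersion rule: the particle count shrinks as the spread shrinks.
        spread = np.std(parts)
        n = int(np.clip(N_MAX * spread / (WORLD / 4), N_MIN, N_MAX))
        parts = rng.choice(parts, size=n, p=w)      # resample to the new size
        yield parts.mean(), n

true_x = 20.0                                       # robot starts at x = 20
obs = [true_x + t + rng.normal(0, 2.0) for t in range(1, 16)]
for t, (est, n) in enumerate(adaptive_pf(obs), 1):
    if t % 5 == 0:
        print(f"t={t:2d}: estimate={est:6.2f}, particles={n}")
```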