Author Archives: Juan-Antonio Fernández-Madrigal

Symbol grounding through neural networks

Mohit Shridhar, Dixant Mittal, David Hsu, INGRESS: Interactive visual grounding of referring expressions, The International Journal of Robotics Research, January 2020, DOI: 10.1177/0278364919897133.

This article presents INGRESS, a robot system that follows human natural language instructions to pick and place everyday objects. The key question here is to ground referring expressions: understand expressions about objects and their relationships from image and natural language inputs. INGRESS allows unconstrained object categories and rich language expressions. Further, it asks questions to clarify ambiguous referring expressions interactively. To achieve these, we take the approach of grounding by generation and propose a two-stage neural-network model for grounding. The first stage uses a neural network to generate visual descriptions of objects, compares them with the input language expressions, and identifies a set of candidate objects. The second stage uses another neural network to examine all pairwise relations between the candidates and infers the most likely referred objects. The same neural networks are used for both grounding and question generation for disambiguation. Experiments show that INGRESS outperformed a state-of-the-art method on the RefCOCO dataset and in robot experiments with humans. The INGRESS source code is available at https://github.com/MohitShridhar/ingress.
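As a rough illustration of the two-stage grounding-by-generation idea, here is a hypothetical sketch in which caption_score and relation_score stand in for the paper's two neural networks; it is an assumption for exposition, not the released INGRESS code linked above.

```python
# Hypothetical sketch of a two-stage grounding pipeline in the spirit of INGRESS.
# caption_score() and relation_score() are placeholders for the paper's neural
# networks, not the actual implementation.

def ground_expression(objects, expression, caption_score, relation_score, keep_top=3):
    """Return the object most likely referred to by `expression`.

    caption_score(obj, expr)         -> float, stage-1 match of a generated description
    relation_score(obj, other, expr) -> float, stage-2 match of a pairwise relation
    """
    # Stage 1: keep the objects whose generated description best matches the expression.
    ranked = sorted(objects, key=lambda o: caption_score(o, expression), reverse=True)
    candidates = ranked[:keep_top]

    # Stage 2: examine all pairwise relations among candidates and pick the best subject.
    best_obj, best_score = candidates[0], float("-inf")
    for subj in candidates:
        for other in candidates:
            if other is subj:
                continue
            score = caption_score(subj, expression) + relation_score(subj, other, expression)
            if score > best_score:
                best_obj, best_score = subj, score
    return best_obj

# Toy usage with dummy scorers (strings as "objects"):
objs = ["red mug", "blue mug", "red bowl"]
cap = lambda o, e: sum(w in o for w in e.split())
rel = lambda a, b, e: 0.5 if ("left" in e) == (objs.index(a) < objs.index(b)) else 0.0
print(ground_expression(objs, "the red mug on the left", cap, rel))   # -> red mug
```

When the scores are too close to separate the candidates, the same generative models are what lets INGRESS phrase a clarifying question, as the abstract notes.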

Adapting perception to environmental changes explicitly

Sriram Siva, Hao Zhang, Robot perceptual adaptation to environment changes for long-term human teammate following, The International Journal of Robotics Research. January 2020, DOI: 10.1177/0278364919896625.

Perception is one of the several fundamental abilities required by robots, and it also poses significant challenges, especially in real-world field applications. Long-term autonomy introduces additional difficulties to robot perception, including short- and long-term changes of the robot operation environment (e.g., lighting changes). In this article, we propose an innovative human-inspired approach named robot perceptual adaptation (ROPA) that is able to calibrate perception according to the environment context, which enables perceptual adaptation in response to environmental variations. ROPA jointly performs feature learning, sensor fusion, and perception calibration under a unified regularized optimization framework. We also implement a new algorithm to solve the formulated optimization problem, which has a theoretical guarantee to converge to the optimal solution. In addition, we collect a large-scale dataset from physical robots in the field, called perceptual adaptation to environment changes (PEAC), with the aim to benchmark methods for robot adaptation to short-term and long-term, and fast and gradual lighting changes for human detection based upon different feature modalities extracted from color and depth sensors. Utilizing the PEAC dataset, we conduct extensive experiments in the application of human recognition and following in various scenarios to evaluate ROPA. Experimental results have validated that the ROPA approach obtains promising performance in terms of accuracy and efficiency, and effectively adapts robot perception to address short-term and long-term lighting changes in human detection and following applications.
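As a loose toy analogue of calibrating fused perception to a changing context (an assumption for illustration only, not the ROPA formulation or its provably convergent solver), one can re-estimate regularized fusion weights over two feature modalities whenever a new batch reflecting the current lighting is available:

```python
# Hedged illustration: ridge-regularized fusion of two feature modalities
# (e.g., color and depth) whose weights are re-fit when the lighting context
# changes. This is a toy analogue, NOT the ROPA optimization framework.
import numpy as np

def fit_fusion_weights(X_color, X_depth, y, lam=1.0):
    """Least-squares fusion weights with L2 regularization (closed-form ridge)."""
    X = np.hstack([X_color, X_depth])            # sensor fusion by concatenation
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict(X_color, X_depth, w):
    return np.hstack([X_color, X_depth]) @ w

# Usage: recalibrate whenever a batch reflecting the current lighting arrives.
rng = np.random.default_rng(0)
Xc, Xd = rng.normal(size=(50, 4)), rng.normal(size=(50, 3))
y = rng.normal(size=50)
w = fit_fusion_weights(Xc, Xd, y, lam=0.5)
print(predict(Xc[:2], Xd[:2], w))
```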

Quantizing a continuous POMDP into a finite MDP to preserve optimality

Naci Saldi, Serdar Yüksel, Tamás Linder, Asymptotic Optimality of Finite Model Approximations for Partially Observed Markov Decision Processes With Discounted Cost, IEEE Transactions on Automatic Control, Volume 65, Issue 1, January 2020, DOI: 10.1109/TAC.2019.2907172.

We consider finite model approximations of discrete-time partially observed Markov decision processes (POMDPs) under the discounted cost criterion. After converting the original partially observed stochastic control problem to a fully observed one on the belief space, the finite models are obtained through the uniform quantization of the state and action spaces of the belief space Markov decision process (MDP). Under mild assumptions on the components of the original model, it is established that the policies obtained from these finite models are nearly optimal for the belief space MDP, and so, for the original partially observed problem. The assumptions essentially require that the belief space MDP satisfies a mild weak continuity condition. We provide an example and introduce explicit approximation procedures for the quantization of the set of probability measures on the state space of POMDP (i.e., belief space).
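A simple way to picture the construction is to snap each belief (a probability vector over the hidden states) onto a uniform grid of resolution 1/n; the rounding rule below is an illustrative assumption, not the paper's exact quantization procedure or its optimality argument.

```python
# Hedged sketch: quantize a belief vector (a probability distribution over a
# finite hidden-state space) onto a uniform grid with coordinates k/n. This is
# one way to picture the finite belief-space MDP, not the paper's construction.
import numpy as np

def quantize_belief(b, n):
    """Map belief b (non-negative, sums to 1) to a nearby grid point with coordinates k/n."""
    scaled = np.asarray(b, dtype=float) * n
    low = np.floor(scaled).astype(int)
    # Hand the leftover mass to the coordinates with the largest remainders so
    # the quantized belief still sums to exactly 1.
    remainder = n - low.sum()
    order = np.argsort(scaled - low)[::-1]
    low[order[:remainder]] += 1
    return low / n

print(quantize_belief([0.27, 0.33, 0.40], n=10))   # -> [0.3 0.3 0.4]
```

Solving the finite MDP defined on these grid points (with suitably quantized actions) then yields a policy that, by the paper's result, becomes nearly optimal for the original POMDP as the grid is refined.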

A universal approximator for the value function in continuous-state VI

William B. Haskell, Rahul Jain, Hiteshi Sharma, Pengqian Yu, A Universal Empirical Dynamic Programming Algorithm for Continuous State MDPs, IEEE Transactions on Automatic Control, Volume 65, Issue 1, January 2020, DOI: 10.1109/TAC.2019.2907414.

We propose universal randomized function approximation-based empirical value learning (EVL) algorithms for Markov decision processes. The “empirical” nature comes from each iteration being done empirically from samples available from simulations of the next state. This makes the Bellman operator a random operator. A parametric and a nonparametric method for function approximation using a parametric function space and a reproducing kernel Hilbert space respectively are then combined with EVL. Both function spaces have the universal function approximation property. Basis functions are picked randomly. Convergence analysis is performed using a random operator framework with techniques from the theory of stochastic dominance. Finite time sample complexity bounds are derived for both universal approximate dynamic programming algorithms. Numerical experiments support the versatility and computational tractability of this approach.
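A hedged sketch of a single empirical value-iteration step with randomly drawn Fourier features as the universal approximator is given below; the toy environment, the feature construction, and the least-squares fit are illustrative assumptions rather than the authors' algorithmic choices or their sample-complexity setting.

```python
# Hedged sketch of empirical value learning: the Bellman backup is estimated
# from simulated next-state samples and projected onto randomly drawn Fourier
# features by least squares. Illustrative only, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)

def random_features(states, W, b):
    """Random Fourier features phi(s) = cos(W*s + b) for scalar states."""
    return np.cos(np.outer(states, W) + b)

def empirical_bellman_backup(states, actions, sample_next, reward, gamma, theta, W, b,
                             n_samples=20):
    """Fit new weights so that phi(s) @ theta approximates the empirical backup."""
    targets = np.empty(len(states))
    for i, s in enumerate(states):
        q_values = []
        for a in actions:
            ns = sample_next(s, a, n_samples)              # simulated next states
            v_next = random_features(ns, W, b) @ theta     # current value estimates
            q_values.append(reward(s, a) + gamma * v_next.mean())
        targets[i] = max(q_values)                         # random (empirical) Bellman operator
    Phi = random_features(states, W, b)
    return np.linalg.lstsq(Phi, targets, rcond=None)[0]

# Toy usage: keep a 1-D state near the origin.
states = np.linspace(-1.0, 1.0, 21)
actions = [-0.1, 0.0, 0.1]
W, b = rng.normal(size=50), rng.uniform(0, 2 * np.pi, size=50)
theta = np.zeros(50)
sample_next = lambda s, a, n: np.clip(s + a + 0.05 * rng.normal(size=n), -1, 1)
reward = lambda s, a: -s ** 2
for _ in range(30):
    theta = empirical_bellman_backup(states, actions, sample_next, reward, 0.9, theta, W, b)
```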

Do we prefer that our predictions fit observations (to validate our expectations) or that they surprise us (to acquire new knowledge)?

Clare Press, Peter Kok, Daniel Yon, The Perceptual Prediction Paradox, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, DOI: 10.1016/j.tics.2019.11.003.

From the noisy information bombarding our senses, our brains must construct percepts that are veridical – reflecting the true state of the world – and informative – conveying what we did not already know. Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.

Similarities between motor control and cognitive control

Harrison Ritz, Romy Frömer, Amitai Shenhav, Bridging Motor and Cognitive Control: It’s About Time!, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, DOI: 10.1016/j.tics.2019.11.005.

Is how we control our thoughts similar to how we control our movements? Egger et al. show that the neural dynamics underlying the control of internal states exhibit algorithmic properties similar to those of the dynamics that control movements. This experiment reveals a promising connection between how we control our brain and our body.

Interesting alternative to the classical “maximize expected utility” rule for decision making

Etienne Koechlin, Human Decision-Making beyond the Rational Decision Theory, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.001.

Two recent studies (Farashahi et al. and Rouault et al.) provide compelling evidence refuting the Subjective Expected Utility (SEU) hypothesis as a ground model describing human decision-making. Together, these studies pave the way towards a new model that subsumes the notion of decision-making and adaptive behavior into a single account.

Nice related work on change-point detection and a novel algorithm for off-line detection of abrupt changes in multivariate signals

Charles Truong, Laurent Oudre, Nicolas Vayatis, Greedy Kernel Change-Point Detection, IEEE Transactions on Signal Processing, Volume 67, Issue 24, December 15, 2019, DOI: 10.1109/TSP.2019.2953670.

We consider the problem of detecting abrupt changes in the underlying stochastic structure of multivariate signals. A novel non-parametric and model-free off-line change-point detection method based on a kernel mapping is presented. This approach is sequential and alternates between two steps: a greedy detection to estimate a new breakpoint and a projection to remove its contribution to the signal. The resulting algorithm is able to segment time series for which no accurate model is available: it is computationally more efficient than exact kernel change-point detection and more precise than window-based approximations. The proposed method also offers some theoretical consistency properties. For the special case of a linear kernel, an even faster implementation is provided. The proposed strategy is compared to standard parametric and non-parametric procedures on a real-world data set composed of 262 accelerometer recordings.
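For intuition in the linear-kernel special case, the greedy flavor of the approach can be pictured as repeatedly adding the breakpoint that most reduces the within-segment sum of squared deviations from the segment means; the sketch below captures only this simplified idea, not the paper's kernelized algorithm or its projection step.

```python
# Hedged sketch of greedy change-point detection for a linear kernel: at each
# iteration, add the breakpoint with the largest reduction of the within-segment
# sum of squared deviations. A simplified illustration, not the paper's method.
import numpy as np

def segment_cost(x, start, end):
    """Sum of squared deviations from the mean on x[start:end] (multivariate)."""
    seg = x[start:end]
    return ((seg - seg.mean(axis=0)) ** 2).sum()

def greedy_breakpoints(x, n_bkps):
    """Greedily pick n_bkps breakpoints on a signal x of shape (T, d)."""
    bkps = [0, len(x)]
    for _ in range(n_bkps):
        best_gain, best_t = -np.inf, None
        for left, right in zip(bkps[:-1], bkps[1:]):
            base = segment_cost(x, left, right)
            for t in range(left + 1, right):
                gain = base - segment_cost(x, left, t) - segment_cost(x, t, right)
                if gain > best_gain:
                    best_gain, best_t = gain, t
        bkps = sorted(bkps + [best_t])
    return bkps[1:-1]

# Toy usage: a 2-D signal with mean shifts at t = 100 and t = 200.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, (100, 2)),
                    rng.normal(3, 1, (100, 2)),
                    rng.normal(-2, 1, (100, 2))])
print(greedy_breakpoints(x, n_bkps=2))   # expected near [100, 200]
```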

On the importance of dynamics and diversity in (cognitive) symbol systems

Tadahiro Taniguchi, Emre Ugur, Matej Hoffmann, Lorenzo Jamone, Takayuki Nagai, Benjamin Rosman, Symbol Emergence in Cognitive Developmental Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, Volume 11, Issue 4, December 2019, DOI: 10.1109/TCDS.2018.2867772.

Humans use signs, e.g., sentences in a spoken language, for communication and thought. Hence, symbol systems like language are crucial for our communication with other agents and adaptation to our real-world environment. The symbol systems we use in our human society adaptively and dynamically change over time. In the context of artificial intelligence (AI) and cognitive systems, the symbol grounding problem has been regarded as one of the central problems related to symbols. However, the symbol grounding problem was originally posed to connect symbolic AI and sensorimotor information and did not consider many interdisciplinary phenomena in human communication and dynamic symbol systems in our society, which semiotics considered. In this paper, we focus on the symbol emergence problem, addressing not only cognitive dynamics but also the dynamics of symbol systems in society, rather than the symbol grounding problem. We first introduce the notion of a symbol in semiotics from the humanities, to move beyond the very narrow idea of symbols in symbolic AI. Furthermore, over the years, it has become increasingly clear that symbol emergence has to be regarded as a multifaceted problem. Therefore, second, we review the history of the symbol emergence problem in different fields, including both biological and artificial systems, showing their mutual relations. We summarize the discussion and provide an integrative viewpoint and comprehensive overview of symbol emergence in cognitive systems. Additionally, we describe the challenges facing the creation of cognitive systems that can be part of symbol emergence systems.

Interesting related work on internal models for action prediction and on the exploration/exploitation trade-off

Simón C. Smith, J. Michael Herrmann, Evaluation of Internal Models in Autonomous Learning, IEEE Transactions on Cognitive and Developmental Systems, Volume 11, Issue 4, December 2019, DOI: 10.1109/TCDS.2018.2865999.

Internal models (IMs) can represent relations between sensors and actuators in natural and artificial agents. In autonomous robots, the adaptation of IMs and the adaptation of the behavior are interdependent processes which have been studied under paradigms for self-organization of behavior such as homeokinesis. We compare the effect of various types of IMs on the generation of behavior in order to evaluate model quality across different behaviors. The considered IMs differ in the degree of flexibility and expressivity related to, respectively, learning speed and structural complexity of the model. We show that the different IMs generate different error characteristics, which in turn lead to variations of the self-generated behavior of the robot. Due to the tradeoff between error minimization and complexity of the explored environment, we compare the models in the sense of Pareto optimality. Among the linear and nonlinear models that we analyze, echo-state networks achieve particularly high performance, which we explain as a result of the combination of fast learning and complex internal dynamics. More generally, we provide evidence that Pareto optimization is preferable in autonomous learning, as it allows a specialized solution to be negotiated in any particular environment.
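The Pareto comparison itself is easy to picture: keep the internal models that are not dominated on the two criteria, prediction error (to minimize) and complexity of the explored behavior (to maximize). The model names and scores below are placeholders, not the paper's results.

```python
# Hedged sketch of a Pareto-front selection over internal models scored by
# (prediction error, explored complexity). Scores are made-up placeholders.
def pareto_front(models):
    """Keep models not dominated: lower error AND higher explored complexity win."""
    front = []
    for name, err, cplx in models:
        dominated = any(e <= err and c >= cplx and (e < err or c > cplx)
                        for _, e, c in models)
        if not dominated:
            front.append(name)
    return front

models = [("linear", 0.30, 2.5),       # simple, explores widely
          ("mlp", 0.22, 1.5),
          ("echo-state", 0.15, 2.0),   # low error at high explored complexity
          ("slow-rnn", 0.18, 1.0)]
print(pareto_front(models))            # -> ['linear', 'echo-state']
```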