Category Archives: Cognitive Sciences

Multi-agent reinforcement learning for handling high-dimensional action spaces

David L. Leottau, Javier Ruiz-del-Solar, Robert Babuška, Decentralized Reinforcement Learning of Robot Behaviors, Artificial Intelligence, Volume 256, 2018, Pages 130-159, DOI: 10.1016/j.artint.2017.12.001.

A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed with an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential drive robots. The experimental validation provides evidence that DRL implementations achieve better performance and faster learning times than their centralized counterparts, while using fewer computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performances for the four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA are more noticeable as the problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
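The core idea of the DRL-Independent scheme described above — one agent per action dimension, each running its own learner while all receive the same reward — can be illustrated with a toy sketch. The task below (two agents jointly steering a point to the origin of a small grid, one controlling the x step and the other the y step) is a hypothetical example, not one of the paper's four benchmarks, and all parameters are illustrative:

```python
import random
from collections import defaultdict

# Toy DRL-Independent sketch: agent 0 controls the x step, agent 1 the
# y step; both observe the joint state and receive the shared reward.

ACTIONS = [-1, 0, 1]
SIZE, EPISODES, ALPHA, GAMMA, EPS = 5, 2500, 0.2, 0.95, 0.1

def train(seed=0):
    rng = random.Random(seed)
    # One independent Q-table per agent, indexed by the joint state.
    Q = [defaultdict(float), defaultdict(float)]
    for _ in range(EPISODES):
        x, y = rng.randint(-SIZE, SIZE), rng.randint(-SIZE, SIZE)
        for _ in range(50):
            s = (x, y)
            acts = []
            for i in range(2):  # each agent picks its own action dimension
                if rng.random() < EPS:
                    acts.append(rng.choice(ACTIONS))
                else:
                    acts.append(max(ACTIONS, key=lambda a: Q[i][(s, a)]))
            x = max(-SIZE, min(SIZE, x + acts[0]))
            y = max(-SIZE, min(SIZE, y + acts[1]))
            done = (x, y) == (0, 0)
            r = 1.0 if done else -0.05  # one shared reward signal
            s2 = (x, y)
            for i in range(2):  # independent Q-learning update per agent
                best = max(Q[i][(s2, a)] for a in ACTIONS)
                target = r + (0.0 if done else GAMMA * best)
                Q[i][(s, acts[i])] += ALPHA * (target - Q[i][(s, acts[i])])
            if done:
                break
    return Q

def greedy_rollout(Q, x, y, max_steps=40):
    # Each agent greedily follows its own Q-table toward the shared goal.
    for t in range(max_steps):
        if (x, y) == (0, 0):
            return t
        s = (x, y)
        ax = max(ACTIONS, key=lambda a: Q[0][(s, a)])
        ay = max(ACTIONS, key=lambda a: Q[1][(s, a)])
        x = max(-SIZE, min(SIZE, x + ax))
        y = max(-SIZE, min(SIZE, y + ay))
    return max_steps
```

Note the resource argument the paper makes: each agent's table grows with 3 actions per state rather than the 9 joint actions a centralized learner would need, and that gap widens exponentially with the number of action dimensions.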

Survey of the modelling of agents (intentions, goals, etc.)

Stefano V. Albrecht, Peter Stone, Autonomous agents modelling other agents: A comprehensive survey and open problems, Artificial Intelligence, Volume 258, 2018, Pages 66-95, DOI: 10.1016/j.artint.2018.01.002.

Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.

Using interactive reinforcement learning in which the advisor is itself another reinforcement learning agent

Francisco Cruz, Sven Magg, Yukie Nagai & Stefan Wermter, Improving interactive reinforcement learning: What makes a good teacher?, Connection Science, DOI: 10.1080/09540091.2018.1443318.

Interactive reinforcement learning (IRL) has become an important apprenticeship approach to speed up convergence in classic reinforcement learning (RL) problems. In this regard, a variant of IRL is policy shaping, which uses a parent-like trainer to propose the next action to be performed and by doing so reduces the search space through advice. On some occasions, the trainer may be another artificial agent which was in turn trained using RL methods before becoming an advisor for other learner-agents. In this work, we analyse internal representations and characteristics of artificial agents to determine which agent may outperform others as a trainer-agent. A polymath agent used as an advisor, as compared to a specialist agent, leads to a larger reward, faster convergence of the reward signal, and more stable behaviour in terms of the state visit frequency of the learner-agents. Moreover, we analyse system interaction parameters in order to determine how influential they are in the apprenticeship process, finding that the consistency of feedback is much more relevant when dealing with different learner obedience parameters.
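The interaction parameters the abstract mentions — the learner's obedience and the advisor's consistency — can be sketched in a minimal policy-shaping loop. The corridor task, the parameter names, and all constants below are hypothetical illustrations, not the authors' experimental setup:

```python
import random

# Hypothetical corridor task: a Q-learning learner walks a corridor of
# length N toward a goal. With probability `obedience` it executes the
# advisor's proposed action; the advice is correct with probability
# `consistency` (a previously trained advisor's greedy action is +1).

N, ACTIONS = 8, (-1, 1)

def advise(consistency, rng):
    return 1 if rng.random() < consistency else -1

def train(obedience, consistency, episodes=300, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N + 1) for a in ACTIONS}
    total = 0
    for _ in range(episodes):
        pos, steps = 0, 0
        while pos < N and steps < 200:
            if rng.random() < obedience:       # follow the advisor
                a = advise(consistency, rng)
            elif rng.random() < 0.2:           # explore on its own
                a = rng.choice(ACTIONS)
            else:                              # exploit its own policy
                a = max(ACTIONS, key=lambda x: Q[(pos, x)])
            nxt = max(0, min(N, pos + a))
            r = 1.0 if nxt == N else -0.01
            best = 0.0 if nxt == N else max(Q[(nxt, x)] for x in ACTIONS)
            Q[(pos, a)] += 0.3 * (r + 0.9 * best - Q[(pos, a)])
            pos, steps = nxt, steps + 1
        total += steps
    return total / episodes  # mean episode length (lower = faster learning)
```

In this sketch, lowering the advisor's consistency hurts an obedient learner far more than lowering obedience itself, echoing the abstract's point that feedback consistency interacts strongly with learner obedience.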

Using memory of past input data to improve the convergence of neural networks when trained on small samples

Zhang, S., Huang, K., Zhang, R. et al., Learning from Few Samples with Memory Network, Cogn Comput (2018) 10: 15, DOI: 10.1007/s12559-017-9507-z.

Neural networks (NN) have achieved great successes in pattern recognition and machine learning. However, the success of a NN usually relies on the provision of a sufficiently large number of data samples as training data. When fed with a limited data set, a NN’s performance may be degraded significantly. In this paper, a novel NN structure, called a memory network, is proposed. It is inspired by the cognitive mechanism of human beings, which can learn effectively even from limited data. Taking advantage of the memory from previous samples, the new model achieves a remarkable improvement in performance when trained using limited data. The memory network is demonstrated here using the multi-layer perceptron (MLP) as a base model. However, it would be straightforward to extend the idea to other neural networks, e.g., convolutional neural networks (CNN). In this paper, the memory network structure is detailed, the training algorithm is presented, and a series of experiments are conducted to validate the proposed framework. Experimental results show that the proposed model outperforms traditional MLP-based models as well as other competitive algorithms on two real benchmark data sets.
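The paper's exact memory-network architecture is not reproduced here, but the underlying idea — reusing a memory of previously seen samples so that new inputs can be classified even when training data are scarce — can be shown with a deliberately simple prototype memory (a running mean per class, queried by nearest prototype). This is a generic sketch in the same spirit, not the authors' model:

```python
# Minimal prototype memory: stores a running-mean feature vector per
# class and classifies new inputs by nearest stored prototype, so even
# one or two samples per class yield a usable decision rule.

class PrototypeMemory:
    def __init__(self):
        self.mem = {}  # label -> (count, running-mean feature vector)

    def observe(self, x, label):
        n, mean = self.mem.get(label, (0, [0.0] * len(x)))
        n += 1
        # incremental mean update over all samples seen for this label
        mean = [m + (xi - m) / n for m, xi in zip(mean, x)]
        self.mem[label] = (n, mean)

    def predict(self, x):
        def dist(mean):
            return sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
        return min(self.mem, key=lambda lbl: dist(self.mem[lbl][1]))
```

For example, after observing just two samples per class, the memory already separates two nearby clusters — the kind of small-sample regime in which a plain MLP would struggle.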

A model of others’ emotions that predicts experimental results very well

Rebecca Saxe, Seeing Other Minds in 3D, Trends in Cognitive Sciences, Volume 22, Issue 3, 2018, Pages 193-195, DOI: 10.1016/j.tics.2018.01.003.

Tamir and Thornton [1] have identified three key dimensions that organize our understanding of other minds. These dimensions (glossed as valence, social impact, and rationality) can capture the similarities and differences between concepts of internal experiences (anger, loneliness, gratitude), and also between concepts of personalities (aggressive, introverted, agreeable). Most impressively, the three dimensions explain the patterns of hemodynamic activity in our brains as we consider these experiences [2] (Box 1). States such as anger and gratitude are invisible, but the patterns evoked in our brain as we think about them are as predictable by the model of Tamir and Thornton as the patterns evoked in our visual cortex when we look at chairs, bicycles, or pineapples are predictable by models of high-level vision [3]. Human social prediction follows the same dimensions: observers predict that transitions are more likely between states that are ‘nearby’ in this abstract 3D space [4]. Thus, we expect that a friend now feeling ‘anxious’ will be more likely to feel ‘sluggish’ than ‘energetic’ later.

A model of how previous sensorimotor experiences shape subsequent decision making

Evelina Dineva & Gregor Schöner, How infants’ reaches reveal principles of sensorimotor decision making, Connection Science, Volume 30, Issue 1, 2018, Pages 53-80, DOI: 10.1080/09540091.2017.1405382.

In Piaget’s classical A-not-B-task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision-making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision-making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional-probabilities analysis accounts quantitatively for how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among subsequent reaching decisions that is explained by the model, showing that by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
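The mechanism the abstract describes — a dynamic instability that turns small activation differences into a stable decision, which then deposits a memory trace biasing the next trial — can be sketched with a two-site reduction of a dynamic neural field (one site per reach target). This is a minimal illustration in the spirit of Dynamic Field Theory, not the authors' full field model, and every parameter is illustrative:

```python
import math

# Two-site neural dynamics: activations u["A"], u["B"] with
# self-excitation, mutual inhibition, and a slowly decaying memory
# trace that preshapes later decisions (perseveration).

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-4.0 * u))

def trial(cue, trace, dt=0.05, steps=400):
    u = {"A": -1.0, "B": -1.0}  # resting level below threshold
    for _ in range(steps):
        for site in u:
            other = "B" if site == "A" else "A"
            # cue input plus preshaping by the accumulated memory trace
            inp = (1.2 if site == cue else 0.0) + 0.8 * trace[site]
            du = (-u[site] - 1.0 + inp
                  + 1.5 * sigmoid(u[site])      # self-excitation
                  - 1.2 * sigmoid(u[other]))    # mutual inhibition
            u[site] += dt * du
    winner = "A" if u["A"] > u["B"] else "B"
    # the decision itself leaves a slowly decaying memory trace
    for site in trace:
        trace[site] = 0.9 * trace[site] + (0.5 if site == winner else 0.0)
    return winner
```

Running several A-cued trials builds up trace["A"] until a subsequent B cue can still produce a reach to A — a perseverative error generated by the reaching history itself, not by the current stimulation.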

A survey on interactive perception in robots: interacting with the environment to improve perception, and using internal models and prediction as well

J. Bohg et al., Interactive Perception: Leveraging Action in Perception and Perception in Action, IEEE Transactions on Robotics, vol. 33, no. 6, pp. 1273-1291, 2017, DOI: 10.1109/TRO.2017.2721939.

Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.

On how psychologists are coming to realize that the brain may, after all, create symbols (concepts), as work with deep neural networks suggests

Jeffrey S. Bowers, Parallel Distributed Processing Theory in the Age of Deep Networks, Trends in Cognitive Sciences, Volume 21, Issue 12, 2017, Pages 950-961, DOI: 10.1016/j.tics.2017.09.013.

Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory.

Towards taking into account the complexity of finding the best option in decision-making systems

Peter Bossaerts, Carsten Murawski, Computational Complexity and Human Decision-Making, Trends in Cognitive Sciences, Volume 21, Issue 12, 2017, Pages 917-929, DOI: 10.1016/j.tics.2017.09.005.

The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology.
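The abstract's central point — that "choosing the best option" has a computational cost the rationality principle ignores — is easy to make concrete with a textbook combinatorial choice problem. The 0/1 knapsack instance below is an illustrative example chosen here, not one from the paper; it shows how an exhaustive chooser must examine a number of candidate bundles that grows as 2^n in the number of items:

```python
from itertools import combinations

# Exhaustive 0/1 knapsack: find the highest-value bundle of items that
# fits within a capacity, counting how many candidate bundles are examined.

def best_bundle(values, weights, capacity):
    n = len(values)
    best, examined = 0, 0
    for r in range(n + 1):
        for idx in combinations(range(n), r):  # every subset of items
            examined += 1
            if sum(weights[i] for i in idx) <= capacity:
                best = max(best, sum(values[i] for i in idx))
    return best, examined

# With only 3 items the chooser already inspects 2**3 = 8 bundles;
# at 30 items it would be over a billion.
print(best_bundle([6, 10, 12], [1, 2, 3], 5))  # → (22, 8)
```

Doubling the number of items squares the search, which is the sense in which a theory of decision-making that presumes the best option is always found becomes biologically implausible without an account of the resources the search consumes.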

On numerical cognition and the non-existence of an innate concept of number, alongside the existence of an innate concept of quantity

Tom Verguts, Qi Chen, Numerical Cognition: Learning Binds Biology to Culture, Trends in Cognitive Sciences, Volume 21, Issue 12, 2017, Pages 913-914, DOI: 10.1016/j.tics.2017.09.004.

First, we address the issue of which quantity representations are innate. Second, we consider the role of the number list, whose characteristics are no doubt highly culturally dependent.