Category Archives: Cognitive Sciences

What is Cognitive Computational Neuroscience?

Thomas Naselaris, Danielle S. Bassett, Alyson K. Fletcher, Konrad Kording, Nikolaus Kriegeskorte, Hendrikje Nienborg, Russell A. Poldrack, Daphna Shohamy, Kendrick Kay, Cognitive Computational Neuroscience: A New Conference for an Emerging Discipline, Trends in Cognitive Sciences, Volume 22, Issue 5, 2018, Pages 365-367, DOI: 10.1016/j.tics.2018.02.008.

Understanding the computational principles that underlie complex behavior is a central goal in cognitive science, artificial intelligence, and neuroscience. In an attempt to unify these disconnected communities, we created a new conference called Cognitive Computational Neuroscience (CCN). The inaugural meeting revealed considerable enthusiasm but significant obstacles remain.

On how seeking the lowest-cost action is not always what happens in reality

Michael Inzlicht, Amitai Shenhav, Christopher Y. Olivola, The Effort Paradox: Effort Is Both Costly and Valued, Trends in Cognitive Sciences, Volume 22, Issue 4, 2018, Pages 337-349, DOI: 10.1016/j.tics.2018.01.007.

According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort’s role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time.

On how infants' progressively changing environments may form an ordered curriculum that statistically optimizes their learning

Linda B. Smith, Swapnaa Jayaraman, Elizabeth Clerkin, Chen Yu, The Developing Infant Creates a Curriculum for Statistical Learning, Trends in Cognitive Sciences, Volume 22, Issue 4, 2018, Pages 325-336, DOI: 10.1016/j.tics.2018.02.004.

New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning.
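The idea of a developmentally ordered curriculum has a direct analogue in machine learning: curriculum learning, in which a model trains on easy examples before hard ones. A minimal sketch, assuming a toy 1-D classification task and a hypothetical difficulty score (neither comes from the paper):

```python
import random

def difficulty(example):
    """Hypothetical difficulty score: distance of a point from the
    decision boundary x = 0 (closer to the boundary = harder)."""
    x, _ = example
    return -abs(x)  # higher value = harder example

def curriculum_batches(data, n_stages=3):
    """Order examples easy-to-hard and yield cumulative training sets,
    mimicking a developmentally ordered series of datasets."""
    ordered = sorted(data, key=difficulty)  # easiest first
    stage = max(1, len(ordered) // n_stages)
    for end in range(stage, len(ordered) + 1, stage):
        yield ordered[:end]

# Toy task: the label is the sign of x; a perceptron learns the boundary,
# seeing easy (far-from-boundary) examples before hard ones.
random.seed(2)
data = []
for _ in range(30):
    x = random.uniform(-1, 1)
    data.append((x, 1 if x > 0 else 0))

w, b, lr = 0.0, 0.0, 0.5
for batch in curriculum_batches(data):
    for _ in range(20):                   # a few passes per stage
        for x, y in batch:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x      # perceptron update
            b += lr * (y - pred)
```

The staging is the point here: each stage's dataset is selective, and later stages extend rather than replace the earlier ones, loosely mirroring how the infant's training sets grow with sensorimotor development.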

A theory that integrates motivation and control

Giovanni Pezzulo, Francesco Rigoli, Karl J. Friston, Hierarchical Active Inference: A Theory of Motivated Control, Trends in Cognitive Sciences, Volume 22, Issue 4, 2018, Pages 294-306, DOI: 10.1016/j.tics.2018.01.009.

Motivated control refers to the coordination of behaviour to achieve affectively valenced outcomes or goals. The study of motivated control traditionally assumes a distinction between control and motivational processes, which map to distinct (dorsolateral versus ventromedial) brain systems. However, the respective roles and interactions between these processes remain controversial. We offer a novel perspective that casts control and motivational processes as complementary aspects − goal propagation and prioritization, respectively − of active inference and hierarchical goal processing under deep generative models. We propose that the control hierarchy propagates prior preferences or goals, but their precision is informed by the motivational context, inferred at different levels of the motivational hierarchy. The ensuing integration of control and motivational processes underwrites action and policy selection and, ultimately, motivated behaviour, by enabling deep inference to prioritize goals in a context-sensitive way.

A critique of the “two types of thinking” myth (deliberative, slow, rational, optimal vs. reactive, quick, emotional, suboptimal)

David E. Melnikoff, John A. Bargh, The Mythical Number Two, Trends in Cognitive Sciences, Volume 22, Issue 4, 2018, Pages 280-293, DOI: 10.1016/j.tics.2018.02.001.

It is often said that there are two types of psychological processes: one that is intentional, controllable, conscious, and inefficient, and another that is unintentional, uncontrollable, unconscious, and efficient. Yet, there have been persistent and increasing objections to this widely influential dual-process typology. Critics point out that the ‘two types’ framework lacks empirical support, contradicts well-established findings, and is internally incoherent. Moreover, the untested and untenable assumption that psychological phenomena can be partitioned into two types, we argue, has the consequence of systematically thwarting scientific progress. It is time that we as a field come to terms with these issues. In short, the dual-process typology is a convenient and seductive myth, and we think cognitive science can do better.

Survey on the concept of affordance and its use in robotics (the rest of this issue of the journal also deals with affordances in robotics)

L. Jamone et al., Affordances in Psychology, Neuroscience, and Robotics: A Survey, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 1, pp. 4-25, March 2018, DOI: 10.1109/TCDS.2016.2594134.

The concept of affordances appeared in psychology during the late 60s as an alternative perspective on the visual perception of the environment. It was revolutionary in the intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Then, across the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of the affordance theory, then we report the most significant evidence in psychology and neuroscience that support it, and finally we review the most relevant applications of this concept in robotics.

Multi-agent reinforcement learning for working with high-dimensional action spaces

David L. Leottau, Javier Ruiz-del-Solar, Robert Babuška, Decentralized Reinforcement Learning of Robot Behaviors, Artificial Intelligence, Volume 256, 2018, Pages 130-159, DOI: 10.1016/j.artint.2017.12.001.

A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed with an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential drive robots. The experimental validation provides evidence that DRL implementations show better performance and faster learning times than their centralized counterparts, while using less computational resources. DRL-Lenient and DRL-CA algorithms achieve the best final performances for the four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of the DRL-Lenient and DRL-CA are more noticeable when the problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
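The DRL-Independent idea — one learner per action dimension, each updating in parallel on the shared reward — can be illustrated with a minimal sketch. The grid task, hyperparameters, and class names below are illustrative assumptions, not the paper's benchmark problems:

```python
import random

class IndependentQLearner:
    """One tabular Q-learner per action dimension (DRL-Independent style)."""
    def __init__(self, n_actions, alpha=0.5, gamma=0.9, eps=0.3):
        self.q = {}  # joint state -> list of action values
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def values(self, s):
        return self.q.setdefault(s, [0.0] * self.n_actions)

    def act(self, s):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        v = self.values(s)
        return v.index(max(v))

    def update(self, s, a, r, s2):
        target = r + self.gamma * max(self.values(s2))
        self.values(s)[a] += self.alpha * (target - self.values(s)[a])

# Toy 2-D problem: reach cell (3, 3) on a 4x4 grid.  Each learner controls
# one coordinate (actions: 0 = stay, 1 = advance); both observe the full
# joint state and receive the shared reward.
random.seed(0)
learners = [IndependentQLearner(2), IndependentQLearner(2)]
for episode in range(500):
    pos = [0, 0]
    for step in range(20):
        s = tuple(pos)
        acts = [l.act(s) for l in learners]
        for i, a in enumerate(acts):
            pos[i] = min(3, pos[i] + a)
        r = 1.0 if pos == [3, 3] else 0.0
        s2 = tuple(pos)
        for l, a in zip(learners, acts):
            l.update(s, a, r, s2)
        if r:
            break
```

Each learner searches a 2-action space instead of the 4-action joint space, which is exactly why the decentralized scheme scales better as action dimensionality grows; the price is that each learner's environment is nonstationary (it contains the other learners), which is what the Lenient and Cooperative-Adaptive variants address.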

Survey of the modelling of agents (intentions, goals, etc.)

Stefano V. Albrecht, Peter Stone, Autonomous agents modelling other agents: A comprehensive survey and open problems, Artificial Intelligence, Volume 258, 2018, Pages 66-95, DOI: 10.1016/j.artint.2018.01.002.

Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.

Using interactive reinforcement learning with the advisor being another reinforcement learning agent

Francisco Cruz, Sven Magg, Yukie Nagai & Stefan Wermter, Improving interactive reinforcement learning: What makes a good teacher?, Connection Science, DOI: 10.1080/09540091.2018.1443318.

Interactive reinforcement learning (IRL) has become an important apprenticeship approach to speed up convergence in classic reinforcement learning (RL) problems. In this regard, a variant of IRL is policy shaping, which uses a parent-like trainer to propose the next action to be performed and by doing so reduces the search space by advice. On some occasions, the trainer may be another artificial agent which in turn was trained using RL methods to afterward become an advisor for other learner-agents. In this work, we analyse internal representations and characteristics of artificial agents to determine which agent may outperform others to become a better trainer-agent. Using a polymath agent as an advisor, as compared to a specialist agent, leads to a larger reward and faster convergence of the reward signal and also to a more stable behaviour in terms of the state visit frequency of the learner-agents. Moreover, we analyse system interaction parameters in order to determine how influential they are in the apprenticeship process, where the consistency of feedback is much more relevant when dealing with different learner obedience parameters.
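The policy-shaping loop — a trainer agent occasionally proposing its greedy action, with a consistency parameter controlling how often the advice is reliable — can be sketched roughly as follows. The corridor task, parameter values, and function names are illustrative assumptions, not the authors' experimental setup:

```python
import random

N_STATES, N_ACTIONS = 5, 2   # toy corridor: action 1 moves right, 0 moves left

def greedy(qrow):
    return qrow.index(max(qrow))

def train(advisor_q=None, feedback_prob=0.3, consistency=1.0,
          episodes=300, alpha=0.5, gamma=0.9, eps=0.4):
    """Q-learning with optional policy shaping: with probability
    feedback_prob the advisor proposes its greedy action and the
    (fully obedient) learner follows it; an inconsistent advisor
    (consistency < 1) sometimes gives random advice instead."""
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):                      # step cap per episode
            if advisor_q is not None and random.random() < feedback_prob:
                if random.random() < consistency:
                    a = greedy(advisor_q[s])      # consistent advice
                else:
                    a = random.randrange(N_ACTIONS)
            elif random.random() < eps:
                a = random.randrange(N_ACTIONS)   # learner explores
            else:
                a = greedy(q[s])                  # learner exploits
            s2 = min(N_STATES - 1, max(0, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if r:
                break
    return q

random.seed(1)
trainer_q = train()                     # the trainer first learns alone...
learner_q = train(advisor_q=trainer_q)  # ...then shapes a learner's policy
```

The two knobs analysed in the paper map directly onto this sketch: feedback consistency is the `consistency` parameter, and learner obedience would correspond to sometimes ignoring the advised action rather than always following it.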

Using a memory of past input samples to improve the convergence of neural networks trained on small data sets

Zhang, S., Huang, K., Zhang, R. et al., Learning from Few Samples with Memory Network, Cogn Comput (2018) 10: 15, DOI: 10.1007/s12559-017-9507-z.

Neural networks (NN) have achieved great successes in pattern recognition and machine learning. However, the success of a NN usually relies on the provision of a sufficiently large number of data samples as training data. When fed with a limited data set, a NN’s performance may be degraded significantly. In this paper, a novel NN structure is proposed called a memory network. It is inspired by the cognitive mechanism of human beings, which can learn effectively, even from limited data. Taking advantage of the memory from previous samples, the new model achieves a remarkable improvement in performance when trained using limited data. The memory network is demonstrated here using the multi-layer perceptron (MLP) as a base model. However, it would be straightforward to extend the idea to other neural networks, e.g., convolutional neural networks (CNN). In this paper, the memory network structure is detailed, the training algorithm is presented, and a series of experiments are conducted to validate the proposed framework. Experimental results show that the proposed model outperforms traditional MLP-based models as well as other competitive algorithms in response to two real benchmark data sets.
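The core idea — augmenting a parametric model with a memory of previously seen samples, so predictions can fall back on raw experience when training data is scarce — can be sketched in miniature. The prototype classifier below stands in for the trained MLP, and the class name, blending rule, and toy data are illustrative assumptions, not the paper's architecture:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class MemoryAugmentedClassifier:
    """Blend a parametric summary (class-mean prototypes, standing in for
    a trained network) with a lookup over a memory of raw samples."""
    def __init__(self, blend=0.5):
        self.memory = []       # stored (features, label) pairs
        self.prototypes = {}   # label -> mean feature vector
        self.blend = blend     # weight on the parametric part

    def fit(self, X, y):
        self.memory = list(zip(X, y))
        sums, counts = {}, {}
        for xi, yi in self.memory:
            s = sums.setdefault(yi, [0.0] * len(xi))
            for j, v in enumerate(xi):
                s[j] += v
            counts[yi] = counts.get(yi, 0) + 1
        self.prototypes = {c: [v / counts[c] for v in s]
                           for c, s in sums.items()}

    def predict(self, x):
        best, best_score = None, float("-inf")
        for c, proto in self.prototypes.items():
            # parametric part: similarity to the class prototype
            proto_sim = 1.0 / (1e-9 + euclid(x, proto))
            # memory part: similarity to the closest stored sample of c
            mem_sim = max(1.0 / (1e-9 + euclid(x, xi))
                          for xi, yi in self.memory if yi == c)
            score = self.blend * proto_sim + (1 - self.blend) * mem_sim
            if score > best_score:
                best, best_score = c, score
        return best

clf = MemoryAugmentedClassifier()
clf.fit([(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0)],
        ["a", "a", "b", "b"])
```

With only two samples per class, the parametric summary alone is a poor estimate; the memory term lets individual stored samples pull the decision, which is the intuition behind the paper's gains in the small-sample regime.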