Category Archives: Cognitive Sciences

On how a calculus of the utility of actions drives many human behaviours

Julian Jara-Ettinger, Hyowon Gweon, Laura E. Schulz, Joshua B. Tenenbaum, The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology, Trends in Cognitive Sciences, Volume 20, Issue 8, 2016, Pages 589-604, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.05.011.

We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This ‘naïve utility calculus’ allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy.
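
The core assumption above can be sketched in a few lines: an agent picks the action with the highest expected utility (reward minus cost), and an observer can invert that choice to learn about the agent. The action names and numbers below are illustrative, not from the paper.

```python
# Minimal sketch of a naive utility calculus: choose the action whose
# expected reward minus expected cost is highest. An observer who sees
# the choice can infer that the agent weighs costs, not just raw rewards.

def choose(actions):
    """Return the action maximizing expected reward minus expected cost."""
    return max(actions, key=lambda a: a["reward"] - a["cost"])

actions = [
    {"name": "climb_hill_for_berries", "reward": 5.0, "cost": 4.0},  # net 1.0
    {"name": "pick_nearby_apples",     "reward": 3.0, "cost": 0.5},  # net 2.5
]

best = choose(actions)  # the apples win despite the lower raw reward
```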

Evidence that the brain encodes numbers on an internal continuous line and that the zero value is also represented

Luca Rinaldi, Luisa Girelli, A Place for Zero in the Brain, Trends in Cognitive Sciences, Volume 20, Issue 8, 2016, Pages 563-564, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.06.006.

It has long been thought that the primary cognitive and neural systems responsible for processing numerosities are not predisposed to encode empty sets (i.e., numerosity zero). A new study challenges this view by demonstrating that zero is translated into an abstract quantity along the numerical continuum by the primate parietofrontal magnitude system.

A formal study of the guarantees that deep neural networks offer for classification

R. Giryes, G. Sapiro and A. M. Bronstein, “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?,” in IEEE Transactions on Signal Processing, vol. 64, no. 13, pp. 3444-3457, July 2016. DOI: 10.1109/TSP.2016.2546221.

Three important properties of a classification machinery are i) the system preserves the core information of the input data; ii) the training examples convey information about unseen data; and iii) the system is able to treat differently points from different classes. In this paper, we show that these fundamental properties are satisfied by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have a similar output. The theoretical analysis of deep networks here presented exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure, as well as providing bounds on the required size of the training set such that the training examples would represent faithfully the unseen data. The results are validated with state-of-the-art trained networks.
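
The distance-preserving property at the heart of the paper can be illustrated with a single random Gaussian layer: a projection with properly scaled Gaussian weights approximately preserves pairwise distances (Johnson-Lindenstrauss style). The paper analyzes full deep networks with nonlinearities; this sketch shows only the linear core, with illustrative dimensions.

```python
import math
import random

# One random Gaussian layer: project 50-dimensional points into 400
# dimensions with i.i.d. Gaussian weights of variance 1/d_out, and check
# that pairwise distances are roughly preserved.
rng = random.Random(0)
d_in, d_out, n_points = 50, 400, 20

X = [[rng.gauss(0, 1) for _ in range(d_in)] for _ in range(n_points)]
W = [[rng.gauss(0, 1) / math.sqrt(d_out) for _ in range(d_out)]
     for _ in range(d_in)]

def project(x):
    return [sum(x[i] * W[i][j] for i in range(d_in)) for j in range(d_out)]

Y = [project(x) for x in X]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# Ratio of output distance to input distance for every pair of points.
ratios = [dist(Y[i], Y[j]) / dist(X[i], X[j])
          for i in range(n_points) for j in range(i + 1, n_points)]
mean_ratio = sum(ratios) / len(ratios)  # concentrates near 1
```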

A new theoretical framework for modelling concepts that allows them to combine in ways that reflect how humans do, with a good related-work section on other concept frameworks in AI

Martha Lewis, Jonathan Lawry, Hierarchical conceptual spaces for concept combination, Artificial Intelligence, Volume 237, August 2016, Pages 204-227, ISSN 0004-3702, DOI: 10.1016/j.artint.2016.04.008.

We introduce a hierarchical framework for conjunctive concept combination based on conceptual spaces and random set theory. The model has the flexibility to account for composition of concepts at various levels of complexity. We show that the conjunctive model includes linear combination as a special case, and that the more general model can account for non-compositional behaviours such as overextension, non-commutativity, preservation of necessity and impossibility of attributes and to some extent, attribute loss or emergence. We investigate two further aspects of human concept use, the conjunction fallacy and the “guppy effect”.
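
The linear-combination special case and the “guppy effect” mentioned above can be given a toy rendering: concepts as prototype points in a conceptual space, graded membership falling off with distance, and conjunction as a linear combination of prototypes. This is a hypothetical sketch, not the paper's random-set formalism; the coordinates are invented.

```python
import math

# Concepts as prototypes in a 2-D conceptual space (say, domesticity and
# size), with membership decaying exponentially in distance. A linear
# combination of prototypes models the conjunction, and can exhibit
# overextension: an item more typical of the conjunction than of either
# constituent alone (the "guppy effect").

def membership(x, prototype, sensitivity=1.0):
    return math.exp(-sensitivity * math.dist(x, prototype))

pet = (0.9, 0.2)    # hypothetical coordinates
fish = (0.1, 0.3)
pet_fish = tuple(0.5 * p + 0.5 * f for p, f in zip(pet, fish))

guppy = (0.5, 0.25)
m_pet = membership(guppy, pet)
m_fish = membership(guppy, fish)
m_pet_fish = membership(guppy, pet_fish)
# The guppy is a better example of "pet fish" than of "pet" or "fish".
```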

Interesting hypothesis that cognitive abilities can be modelled with closed control loops running in parallel (using hierarchies of abstraction and prediction), a mechanism traditionally reserved for low-level behaviours

Giovanni Pezzulo, Paul Cisek, Navigating the Affordance Landscape: Feedback Control as a Process Model of Behavior and Cognition, Trends in Cognitive Sciences, Volume 20, Issue 6, June 2016, Pages 414-424, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.03.013.

We discuss how cybernetic principles of feedback control, used to explain sensorimotor behavior, can be extended to provide a foundation for understanding cognition. In particular, we describe behavior as parallel processes of competition and selection among potential action opportunities (‘affordances’) expressed at multiple levels of abstraction. Adaptive selection among currently available affordances is biased not only by predictions of their immediate outcomes and payoffs but also by predictions of what new affordances they will make available. This allows animals to purposively create new affordances that they can later exploit to achieve high-level goals, resulting in intentional action that links across multiple levels of control. Finally, we discuss how such a ‘hierarchical affordance competition’ process can be mapped to brain structure.
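
The selection process described above can be caricatured in a few lines: each available affordance is scored not only by its immediate predicted payoff but also by the value of the new affordances it would make available. The affordance names, payoffs, and discount are illustrative, not from the paper.

```python
# Hypothetical sketch of affordance competition: score each affordance by
# its immediate payoff plus the discounted value of the affordances it
# opens up, then select the winner of the competition.

affordances = {
    "reach_fruit":  {"payoff": 3.0,  "opens": []},
    "climb_branch": {"payoff": -1.0, "opens": ["reach_fruit"]},  # costly now, enabling later
}

def score(name, table, discount=0.9):
    a = table[name]
    future = sum(score(n, table, discount) for n in a["opens"])
    return a["payoff"] + discount * future

selected = max(affordances, key=lambda n: score(n, affordances))
# Here climbing scores -1 + 0.9 * 3 = 1.7 < 3.0, so reaching directly wins;
# a richer payoff behind the enabling action would flip the selection.
```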

Physiological evidences that visual attention is based on predictions

Martin Rolfs, Martin Szinte, Remapping Attention Pointers: Linking Physiology and Behavior, Trends in Cognitive Sciences, Volume 20, Issue 6, 2016, Pages 399-401, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.04.003.

Our eyes rapidly scan visual scenes, displacing the projection on the retina with every move. Yet these frequent retinal image shifts do not appear to hamper vision. Two recent physiological studies shed new light on the role of attention in visual processing across saccadic eye movements.

“Nexting” (predicting events that occur next, possibly at different time scales) implemented in a robot through temporal-difference learning with a large number of learners

Joseph Modayil, Adam White, Richard S. Sutton (2011), Multi-timescale Nexting in a Reinforcement Learning Robot, arXiv:1112.1133 [cs.LG] (this version appeared in the Proceedings of the Conference on the Simulation of Adaptive Behavior, 2012).

The term “nexting” has been used by psychologists to refer to the propensity of people and many other animals to continually predict what will happen next in an immediate, local, and personal sense. The ability to “next” constitutes a basic kind of awareness and knowledge of one’s environment. In this paper we present results with a robot that learns to next in real time, predicting thousands of features of the world’s state, including all sensory inputs, at timescales from 0.1 to 8 seconds. This was achieved by treating each state feature as a reward-like target and applying temporal-difference methods to learn a corresponding value function with a discount rate corresponding to the timescale. We show that two thousand predictions, each dependent on six thousand state features, can be learned and updated online at better than 10Hz on a laptop computer, using the standard TD(lambda) algorithm with linear function approximation. We show that this approach is efficient enough to be practical, with most of the learning complete within 30 minutes. We also show that a single tile-coded feature representation suffices to accurately predict many different signals at a significant range of timescales. Finally, we show that the accuracy of our learned predictions compares favorably with the optimal off-line solution.
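
The mechanism in the abstract can be sketched compactly: several TD predictors share the same state features but use different discount rates, each discount corresponding to a timescale. This toy version uses tabular TD(0) on one periodic "sensor" signal, rather than the paper's TD(lambda) with tile-coded features and thousands of predictions; the phase count, step size, and signal are illustrative.

```python
import math

# Multi-timescale nexting in miniature: treat the sensory signal itself as
# a reward-like target and learn one value function per timescale, with
# discount gamma = 1 - dt / timescale.

dt = 0.1                      # 10 Hz updates, as in the paper
timescales = [0.1, 1.0, 8.0]  # seconds
gammas = [1.0 - dt / t for t in timescales]

n_phases = 20                 # one-hot state: phase of a periodic signal
weights = [[0.0] * n_phases for _ in gammas]
alpha = 0.1

def signal(step):             # toy periodic sensor reading
    return math.sin(2 * math.pi * (step % n_phases) / n_phases)

for step in range(20000):
    s, s_next = step % n_phases, (step + 1) % n_phases
    target = signal(step + 1)  # next sensor reading is the target
    for w, g in zip(weights, gammas):
        td_error = target + g * w[s_next] - w[s]
        w[s] += alpha * td_error

# weights[i][s] approximates the discounted future sum of the signal from
# phase s at timescale timescales[i]; gamma = 0 gives a one-step predictor.
```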

Theoretical models for explaining the human (quick) decision-making process

Roger Ratcliff, Philip L. Smith, Scott D. Brown, Gail McKoon, Diffusion Decision Model: Current Issues and History, Trends in Cognitive Sciences, Volume 20, Issue 4, April 2016, Pages 260-281, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.01.007.

There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this review, we relate the models to both earlier and more recent research in psychology.
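
The standard model described above is easy to simulate: evidence accumulates at a constant mean rate (the drift) plus Gaussian noise until it crosses an upper or lower boundary, which jointly determines the choice and the response time. The parameter values below are illustrative, not fitted.

```python
import math
import random

# Minimal drift-diffusion simulation: noisy evidence accumulation between
# two decision boundaries. Positive drift favors the upper (correct)
# boundary; the first-passage time is the decision time.

def simulate_trial(drift=0.3, boundary=1.0, noise=1.0, dt=0.001, rng=None):
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return (x >= boundary), t   # (chose upper boundary?, decision time in s)

rng = random.Random(42)
trials = [simulate_trial(rng=rng) for _ in range(500)]
accuracy = sum(upper for upper, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
# With these parameters, most decisions resolve within a second or two,
# consistent with the speeded tasks the review discusses.
```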

Cognitive Models as Bridge between Brain and Behavior

Bradley C. Love, Cognitive Models as Bridge between Brain and Behavior, Trends in Cognitive Sciences, Volume 20, Issue 4, April 2016, Pages 247-248, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.02.006.

How can disparate neural and behavioral measures be integrated? Turner and colleagues propose joint modeling as a solution. Joint modeling mutually constrains the interpretation of brain and behavioral measures by exploiting their covariation structure. Simultaneous estimation allows for more accurate prediction than would be possible by considering these measures in isolation.

The diverse roles of the hippocampus

Daniel Bendor, Hugo J. Spiers, Does the Hippocampus Map Out the Future?, Trends in Cognitive Sciences, Volume 20, Issue 3, March 2016, Pages 167-169, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.01.003.

Decades of research have established two central roles of the hippocampus – memory consolidation and spatial navigation. Recently, a third function of the hippocampus has been proposed: simulating future events. However, claims that the neural patterns underlying simulation occur without prior experience have come under fire in light of newly published data.