Category Archives: Cognitive Sciences

How a symbol can come to be related to things on which it is not grounded, plus a nice introduction to the symbolist/subsymbolist dilemma

Veale, Tony and Al-Najjar, Khalid (2016). Grounded for life: creative symbol-grounding for lexical invention. Connection Science 28(2). DOI: 10.1080/09540091.2015.1130025

One of the challenges of linguistic creativity is to use words in a way that is novel and striking and even whimsical, to convey meanings that remain stubbornly grounded in the very same world of familiar experiences as serves to anchor the most literal and unimaginative language. The challenge remains unmet by systems that merely shuffle or arrange words to achieve novel arrangements without concern as to how those arrangements are to spur the processes of meaning construction in a listener. In this paper we explore a problem of lexical invention that cannot be solved without a model – explicit or implicit – of the perceptual grounding of language: the invention of apt new names for colours. To solve this problem here we shall call upon the notion of a linguistic readymade, a phrase that is wrenched from its original context of use to be given new meaning and new resonance in new settings. To ensure that our linguistic readymades – which owe a great deal to Marcel Duchamp’s notion of found art – are anchored in a consensus model of perception, we introduce the notion of a lexicalised colour stereotype.

Limitations of the simulation of physical systems when used in AI reasoning processes for prediction

Ernest Davis, Gary Marcus, The scope and limits of simulation in automated reasoning, Artificial Intelligence, Volume 233, April 2016, Pages 60-72, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.12.003.

In scientific computing and in realistic graphic animation, simulation – that is, step-by-step calculation of the complete trajectory of a physical system – is one of the most common and important modes of calculation. In this article, we address the scope and limits of the use of simulation, with respect to AI tasks that involve high-level physical reasoning. We argue that, in many cases, simulation can play at most a limited role. Simulation is most effective when the task is prediction, when complete information is available, when a reasonably high quality theory is available, and when the range of scales involved, both temporal and spatial, is not extreme. When these conditions do not hold, simulation is less effective or entirely inappropriate. We discuss twelve features of physical reasoning problems that pose challenges for simulation-based reasoning. We briefly survey alternative techniques for physical reasoning that do not rely on simulation.

It seems that the human motor cortex contains not only a map of motions but also a map of basic behaviors (compositions of motions)

Michael S.A. Graziano, Ethological Action Maps: A Paradigm Shift for the Motor Cortex, Trends in Cognitive Sciences, Volume 20, Issue 2, February 2016, Pages 121-132, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.10.008.

The map of the body in the motor cortex is one of the most iconic images in neuroscience. The map, however, is not perfect. It contains overlaps, reversals, and fractures. The complex pattern suggests that a body plan is not the only organizing principle. Recently a second organizing principle was discovered: an action map. The motor cortex appears to contain functional zones, each of which emphasizes an ethologically relevant category of behavior. Some of these complex actions can be evoked by cortical stimulation. Although the findings were initially controversial, interest in the ethological action map has grown. Experiments on primates, mice, and rats have now confirmed and extended the earlier findings with a range of new methods.

How mood influences learning, specifically the perception of rewards in the context of reinforcement learning

Eran Eldar, Robb B. Rutledge, Raymond J. Dolan, Yael Niv, Mood as Representation of Momentum, Trends in Cognitive Sciences, Volume 20, Issue 1, January 2016, Pages 15-24, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.07.010.

Experiences affect mood, which in turn affects subsequent experiences. Recent studies suggest two specific principles. First, mood depends on how recent reward outcomes differ from expectations. Second, mood biases the way we perceive outcomes (e.g., rewards), and this bias affects learning about those outcomes. We propose that this two-way interaction serves to mitigate inefficiencies in the application of reinforcement learning to real-world problems. Specifically, we propose that mood represents the overall momentum of recent outcomes, and its biasing influence on the perception of outcomes ‘corrects’ learning to account for environmental dependencies. We describe potential dysfunctions of this adaptive mechanism that might contribute to the symptoms of mood disorders.
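The two principles above can be sketched as a toy learning loop. This is my own minimal illustration, not the authors' model: mood tracks the momentum (a running average) of recent prediction errors, and in turn biases how the next reward is perceived; the parameter names `alpha`, `eta`, and `bias` are assumptions.

```python
import random

def run(n_steps=1000, alpha=0.1, eta=0.1, bias=0.5, seed=0):
    """Toy mood-as-momentum loop (illustrative sketch only)."""
    rng = random.Random(seed)
    value = 0.0   # learned expectation of reward
    mood = 0.0    # momentum of recent prediction errors
    for _ in range(n_steps):
        reward = rng.gauss(1.0, 0.1)      # true reward from the environment
        perceived = reward + bias * mood  # principle 2: mood biases perception
        delta = perceived - value         # prediction error vs. expectation
        value += alpha * delta            # standard incremental RL update
        mood += eta * (delta - mood)      # principle 1: mood tracks error momentum
    return value, mood

v, m = run()
```

At equilibrium the prediction error vanishes, so mood returns to neutral and the learned value settles on the true mean reward; transiently, a run of better-than-expected outcomes lifts mood and inflates perceived rewards, which is the "correction" for environmental momentum the authors describe.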

Implementation of affect in artificial systems through POMDPs

Jesse Hoey, Tobias Schröder, Areej Alhothali, Affect control processes: Intelligent affective interaction using a partially observable Markov decision process, Artificial Intelligence, Volume 230, January 2016, Pages 134-172, DOI: 10.1016/j.artint.2015.09.004.

This paper describes a novel method for building affectively intelligent human-interactive agents. The method is based on a key sociological insight that has been developed and extensively verified over the last twenty years, but has yet to make an impact in artificial intelligence. The insight is that resource bounded humans will, by default, act to maintain affective consistency. Humans have culturally shared fundamental affective sentiments about identities, behaviours, and objects, and they act so that the transient affective sentiments created during interactions confirm the fundamental sentiments. Humans seek and create situations that confirm or are consistent with, and avoid and suppress situations that disconfirm or are inconsistent with, their culturally shared affective sentiments. This “affect control principle” has been shown to be a powerful predictor of human behaviour. In this paper, we present a probabilistic and decision-theoretic generalisation of this principle, and we demonstrate how it can be leveraged to build affectively intelligent artificial agents. The new model, called BayesAct, can maintain multiple hypotheses about sentiments simultaneously as a probability distribution, and can make use of an explicit utility function to make value-directed action choices. This allows the model to generate affectively intelligent interactions with people by learning about their identity, predicting their behaviours using the affect control principle, and taking actions that are simultaneously goal-directed and affect-sensitive. We demonstrate this generalisation with a set of simulations. We then show how our model can be used as an emotional “plug-in” for artificially intelligent systems that interact with humans in two different settings: an exam practice assistant (tutor) and an assistive device for persons with a cognitive disability.
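The affect control principle itself is easy to caricature in code. The sketch below is not BayesAct (which maintains probability distributions over sentiments); it is a deterministic toy in which sentiments are EPA (evaluation, potency, activity) vectors and the agent picks the behaviour whose transient impression deviates least from the fundamental sentiment. All the EPA numbers are invented for illustration.

```python
def deflection(fundamental, transient):
    """Sum of squared differences between two EPA vectors."""
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

# Hypothetical fundamental sentiment for the identity "tutor".
tutor_fundamental = (1.5, 1.2, 0.3)

# Candidate behaviours, with invented transient impressions they would create.
behaviours = {
    "encourage": (1.4, 1.0, 0.5),
    "scold":     (-0.8, 1.5, 1.0),
    "ignore":    (-0.5, -0.3, -1.0),
}

# Affect control principle: act so as to minimise deflection from
# the culturally shared fundamental sentiment.
best = min(behaviours, key=lambda b: deflection(tutor_fundamental, behaviours[b]))
```

BayesAct generalises this by treating the fundamental and transient sentiments as latent state in a POMDP and trading deflection minimisation off against an explicit utility function, which is what makes the resulting behaviour simultaneously goal-directed and affect-sensitive.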

The quick-intuition vs. slow-deliberation dilemma from a decision-making perspective

Y-Lan Boureau, Peter Sokol-Hessner, Nathaniel D. Daw, Deciding How To Decide: Self-Control and Meta-Decision Making, Trends in Cognitive Sciences, Volume 19, Issue 11, November 2015, Pages 700-710, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.08.013.

Many different situations related to self-control involve competition between two routes to decisions: default and frugal versus more resource-intensive. Examples include habits versus deliberative decisions, fatigue versus cognitive effort, and Pavlovian versus instrumental decision making. We propose that these situations are linked by a strikingly similar core dilemma, pitting the opportunity costs of monopolizing shared resources such as executive functions for some time, against the possibility of obtaining a better outcome. We offer a unifying normative perspective on this underlying rational meta-optimization, review how this may tie together recent advances in many separate areas, and connect several independent models. Finally, we suggest that the crucial mechanisms and meta-decision variables may be shared across domains.
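The core dilemma reduces to a simple inequality: engage the costly controller only when its expected gain exceeds the opportunity cost of tying up shared resources. The sketch below is my own schematic rendering of that trade-off, with invented values and parameter names.

```python
def choose_controller(v_habit, v_deliberate, deliberation_time, opportunity_cost_rate):
    """Pick the frugal default unless deliberation pays for its own resource cost."""
    net_deliberate = v_deliberate - opportunity_cost_rate * deliberation_time
    return "deliberate" if net_deliberate > v_habit else "habit"

# Small stakes: the habitual route wins despite its lower raw value,
# because deliberation would monopolise executive resources too long.
choice_stable = choose_controller(v_habit=0.8, v_deliberate=1.0,
                                  deliberation_time=2.0, opportunity_cost_rate=0.2)

# High stakes: the better outcome now justifies the opportunity cost.
choice_high_stakes = choose_controller(v_habit=0.8, v_deliberate=2.0,
                                       deliberation_time=2.0, opportunity_cost_rate=0.2)
```

The paper's point is that this one meta-optimization recurs across habits vs. deliberation, effort vs. fatigue, and Pavlovian vs. instrumental control, with the opportunity-cost term playing the role of the "self-control cost" in each domain.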

A possible framework for the relationship between culture, behavior and the brain

Shihui Han, Yina Ma, A Culture–Behavior–Brain Loop Model of Human Development, Trends in Cognitive Sciences, Volume 19, Issue 11, November 2015, Pages 666-676, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.08.010.

Increasing evidence suggests that cultural influences on brain activity are associated with multiple cognitive and affective processes. These findings prompt an integrative framework to account for dynamic interactions between culture, behavior, and the brain. We put forward a culture–behavior–brain (CBB) loop model of human development that proposes that culture shapes the brain by contextualizing behavior, and the brain fits and modifies culture via behavioral influences. Genes provide a fundamental basis for, and interact with, the CBB loop at both individual and population levels. The CBB loop model advances our understanding of the dynamic relationships between culture, behavior, and the brain, which are crucial for human phylogeny and ontogeny. Future brain changes due to cultural influences are discussed based on the CBB loop model.

On how morality can shape perception

Ana P. Gantman, Jay J. Van Bavel, Moral Perception, Trends in Cognitive Sciences, Volume 19, Issue 11, November 2015, Pages 631-633, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.08.004.

Based on emerging research, we propose that human perception is preferentially attuned to moral content. We describe how moral concerns enhance detection of morally relevant stimuli, and both command and direct attention. These perceptual processes, in turn, have important consequences for moral judgment and behavior.

Using MDPs when the transition probability matrix is only partially specified, thereby getting closer to a model-free approach

Karina V. Delgado, Leliane N. de Barros, Daniel B. Dias, Scott Sanner, Real-time dynamic programming for Markov decision processes with imprecise probabilities, Artificial Intelligence, Volume 230, January 2016, Pages 192-223, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.09.005.

Markov Decision Processes have become the standard model for probabilistic planning. However, when applied to many practical problems, the estimates of transition probabilities are inaccurate. This may be due to conflicting elicitations from experts or insufficient state transition information. The Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) was introduced to obtain a robust policy where there is uncertainty in the transitions. Although a symbolic dynamic programming algorithm for MDP-IPs (called SPUDD-IP) has been proposed that can solve problems with up to 22 state variables, in practice solving MDP-IP problems is time-consuming. In this paper we propose efficient algorithms for a more general class of MDP-IPs, called Stochastic Shortest Path MDP-IPs (SSP MDP-IPs), that use initial state information to solve complex problems by focusing on reachable states. The (L)RTDP-IP algorithm, a (Labeled) Real Time Dynamic Programming algorithm for SSP MDP-IPs, is proposed together with three different methods for sampling the next state. It is shown here that the convergence of (L)RTDP-IP can be obtained by using any of these three methods, although the Bellman backups for this class of problems prescribe a minimax optimization. As far as we are aware, this is the first asynchronous algorithm for SSP MDP-IPs given in terms of a general set of probability constraints that requires non-linear optimization over imprecise probabilities in the Bellman backup. Our results show up to three orders of magnitude speedup for (L)RTDP-IP when compared with the SPUDD-IP algorithm.
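The minimax Bellman backup mentioned above can be illustrated on a deliberately tiny case. This is not the paper's (L)RTDP-IP (which handles general, possibly non-linear probability constraints); in this sketch each action has two successor states, the success probability is constrained to an interval, nature adversarially picks the probability inside the interval that minimises value, and the agent maximises over actions. All numbers are invented.

```python
def worst_case_expectation(lo, hi, v_a, v_b):
    """Adversarial expectation p*v_a + (1-p)*v_b over p in [lo, hi].
    A linear function attains its minimum over an interval at an endpoint,
    so checking the two endpoints suffices."""
    return min(p * v_a + (1 - p) * v_b for p in (lo, hi))

def minimax_backup(actions, values, gamma=0.9):
    """One minimax Bellman backup: max over actions of
    immediate reward + gamma * nature's worst-case expected value."""
    return max(
        r + gamma * worst_case_expectation(lo, hi, values[sa], values[sb])
        for (r, lo, hi, sa, sb) in actions
    )

values = {"good": 10.0, "bad": 0.0}
# Each action: (immediate reward, lo, hi, success state, failure state).
actions = [
    (1.0, 0.6, 0.9, "good", "bad"),   # risky: success probability in [0.6, 0.9]
    (2.0, 0.3, 0.4, "good", "bad"),   # higher reward, lower success probability
]
backed_up = minimax_backup(actions, values)
```

Here the robust policy prefers the first action: even under the adversarial choice p = 0.6 it backs up to 1.0 + 0.9·6.0 = 6.4, beating the second action's 4.7. With general probability constraints this inner minimisation becomes the non-linear optimization the paper addresses.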

See also:

  • Karina Valdivia Delgado, Scott Sanner, Leliane Nunes de Barros, Efficient solutions to factored MDPs with imprecise transition probabilities, Artificial Intelligence 175 (9–10) (2011) 1498–1527.
  • J. K. Satia, R. E. Lave Jr., Markovian decision processes with uncertain transition probabilities, Operations Research 21 (3) (1973) 728–740.
  • C. C. White III, H. K. El-Deib, Markov decision processes with imprecise transition probabilities, Operations Research 42 (4) (1994) 739–749.

Modelling emotions in adaptive agents through the action selection part of reinforcement learning, plus some references on the neurophysiological bases of RL and a good review of literature on emotions

Joost Broekens, Elmer Jacobs, Catholijn M. Jonker, A reinforcement learning model of joy, distress, hope and fear, Connection Science, Vol. 27, Iss. 3, 2015, DOI: 10.1080/09540091.2015.1031081.

In this paper we computationally study the relation between adaptive behaviour and emotion. Using the reinforcement learning framework, we propose that learned state utility, V(s), models fear (negative) and hope (positive) based on the fact that both signals are about anticipation of loss or gain. Further, we propose that joy/distress is a signal similar to the error signal. We present agent-based simulation experiments that show that this model replicates psychological and behavioural dynamics of emotion. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework – coupling it to the literature on habituation, development, extinction and hope theory. Our results support the idea that the function of emotion is to provide a complex feedback signal for an organism to adapt its behaviour. Our work is relevant for understanding the relation between emotion and adaptation in animals, as well as for human–robot interaction, in particular how emotional signals can be used to communicate between adaptive agents and humans.
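The proposed mapping can be caricatured in a few lines. This is my own simplification of the idea, not the authors' exact model: the sign of the state value V(s) reads as hope (positive) or fear (negative), and the sign of the temporal-difference error reads as joy (positive) or distress (negative); the rectification via `max` is an assumption.

```python
def td_error(reward, v_next, v_now, gamma=0.9):
    """Standard temporal-difference error for a single transition."""
    return reward + gamma * v_next - v_now

def emotions(v_now, delta):
    """Map anticipation (state value) and surprise (TD error) to emotion signals."""
    return {
        "hope": max(v_now, 0.0),       # anticipated gain
        "fear": max(-v_now, 0.0),      # anticipated loss
        "joy": max(delta, 0.0),        # better than expected
        "distress": max(-delta, 0.0),  # worse than expected
    }

# An agent that expects little (v_now = 0.2) but receives a large reward
# experiences a positive TD error, i.e. joy.
delta = td_error(reward=1.0, v_next=0.5, v_now=0.2)
e = emotions(0.2, delta)
```

Under this mapping, habituation falls out naturally: as learning drives V(s) toward the true reward, the TD error shrinks and joy fades even though the reward itself is unchanged, matching the behavioural dynamics the paper reports.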