Category Archives: Cognitive Sciences

On rewards and values when RL theory is applied to the human brain

Keno Juechems, Christopher Summerfield, Where Does Value Come From? Trends in Cognitive Sciences, Volume 23, Issue 10, 2019, Pages 836-850, DOI: 10.1016/j.tics.2019.07.012.

The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain-imaging studies of value-guided decision-making.
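The core idea summarised above, that reward can be cast as the reduction of the distance between current and desired internal states across multiple value dimensions, can be sketched in a few lines. This is an illustrative toy, not the authors' model; the dimension names and values are made up:

```python
import math

def drive(state, setpoint):
    """Distance between the current internal state and the desired
    setpoint, aggregated across value dimensions (Euclidean here,
    purely for illustration)."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(state, setpoint)))

def reward(state_before, state_after, setpoint):
    """Reward as drive reduction: positive when an action moves the
    internal state closer to the goal across the dimensions jointly."""
    return drive(state_before, setpoint) - drive(state_after, setpoint)

# Two hypothetical internal dimensions, e.g. energy and hydration.
setpoint = (1.0, 1.0)
before, after = (0.2, 0.5), (0.6, 0.8)
print(round(reward(before, after, setpoint), 3))  # → 0.496
```

On this view the reward function is internal to the agent rather than externally designed, which is one way of addressing the question the authors pose.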

On the integer numbers in the brain

Susan Carey, David Barner, Ontogenetic Origins of Human Integer Representations. Trends in Cognitive Sciences, Volume 23, Issue 10, 2019, Pages 823-835, DOI: 10.1016/j.tics.2019.07.004.

Do children learn number words by associating them with perceptual magnitudes? Recent studies argue that approximate numerical magnitudes play a foundational role in the development of integer concepts. Against this, we argue that approximate number representations fail both empirically and in principle to provide the content required of integer concepts. Instead, we suggest that children’s understanding of integer concepts proceeds in two phases. In the first phase, children learn small exact number word meanings by associating words with small sets. In the second phase, children learn the meanings of larger number words by mastering the logic of exact counting algorithms, which implement the successor function and Hume’s principle (that one-to-one correspondence guarantees exact equality). In neither phase do approximate number representations play a foundational role.
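The second-phase logic the authors describe, the successor function and Hume's principle, is easy to make concrete (a toy sketch, not the authors' formalism):

```python
def successor(n):
    """Peano successor: each number word means 'one more than the last'."""
    return n + 1

def count_set(items):
    """Exact counting algorithm: tag each item in turn, applying the
    successor function once per item; the final tag is the cardinality."""
    n = 0
    for _ in items:
        n = successor(n)
    return n

def same_number(a, b):
    """Hume's principle: two sets have exactly equal number iff their
    items can be placed in one-to-one correspondence."""
    pairs = list(zip(a, b))
    return len(pairs) == len(a) == len(b)

print(count_set(["apple", "pear", "plum"]))       # counting yields 3
print(same_number([1, 2, 3], ["x", "y", "z"]))    # one-to-one → True
```

Nothing in these definitions appeals to approximate magnitudes, which is the point of the authors' argument.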

On the role and limitations of internal motor simulation as a way of predicting the effects of a future action in the brain

Myrthel Dogge, Ruud Custers, Henk Aarts, Moving Forward: On the Limits of Motor-Based Forward Models. Trends in Cognitive Sciences, Volume 23, Issue 9, 2019, Pages 743-753, DOI: 10.1016/j.tics.2019.06.008.

The human ability to anticipate the consequences that result from action is an essential building block for cognitive, emotional, and social functioning. A dominant view is that this faculty is based on motor predictions, in which a forward model uses a copy of the motor command to predict imminent sensory action-consequences. Although this account was originally conceived to explain the processing of action-outcomes that are tightly coupled to bodily movements, it has been increasingly extrapolated to effects beyond the body. Here, we critically evaluate this generalization and argue that, although there is ample evidence for the role of predictions in the processing of environment-related action-outcomes, there is hitherto little reason to assume that these predictions result from motor-based forward models.
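The motor-prediction account under discussion is often illustrated with a comparator: a forward model predicts the sensory consequence from an efference copy of the motor command, and the mismatch with the actual sensation flags externally caused input. A deliberately trivial sketch (the linear mapping and gain are assumptions for illustration only):

```python
def forward_model(motor_command, gain=1.0):
    """Predict the sensory consequence from an efference copy of the
    motor command (a trivial linear mapping, for illustration)."""
    return gain * motor_command

def prediction_error(motor_command, observed_sensation):
    """Comparator: mismatch between predicted and actual sensation.
    Small errors suggest a self-generated sensation (and are typically
    attenuated); large errors signal externally caused input."""
    return observed_sensation - forward_model(motor_command)

# Self-produced touch: the sensation matches the efference-copy prediction.
print(prediction_error(1.0, 1.0))   # → 0.0
# Externally produced touch: no matching motor command, large error.
print(prediction_error(0.0, 1.0))   # → 1.0
```

The authors' point is that evidence for predictions of environment-related outcomes does not by itself show that those predictions are produced by this kind of motor-based machinery.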

Numerosity in animals (insects)

Martin Giurfa, An Insect’s Sense of Number. Trends in Cognitive Sciences, Volume 23, Issue 9, 2019, Pages 720-722, DOI: 10.1016/j.tics.2019.06.010.

Recent studies revealed numerosity judgments in bees, which include the concept of zero, subtraction and addition, and matching symbols to numbers. Despite their distant origins, bees and vertebrates share similarities in their numeric competences, thus suggesting that numerosity is evolutionarily conserved and can be implemented in miniature brains without a neocortex.

Synthesizing a supervisor (a Finite State Machine) instead of finding a standard policy in MDPs, applied to multi-agent systems

B. Wu, X. Zhang and H. Lin, Permissive Supervisor Synthesis for Markov Decision Processes Through Learning. IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3332-3338, Aug. 2019, DOI: 10.1109/TAC.2018.2879505.

This paper considers the permissive supervisor synthesis for probabilistic systems modeled as Markov Decision Processes (MDP). Such systems are prevalent in power grids, transportation networks, communication networks, and robotics. We propose a novel supervisor synthesis framework using automata learning and compositional model checking to generate the permissive local supervisors in a distributed manner. With the recent advances in assume-guarantee reasoning verification for MDPs, constructing the composed system can be avoided to alleviate the state space explosion. Our framework learns the supervisors iteratively using counterexamples from the verification and is guaranteed to terminate in finite steps and to be correct.
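As a rough intuition for what a permissive supervisor does (the paper's learning-based, assume-guarantee framework is far more involved), one can compute, for a toy MDP, the maximal set of actions per state that avoids unsafe states, via a simple fixed-point iteration. The MDP below is invented for illustration:

```python
# Toy MDP: states 0..3, where state 3 is "unsafe".
# transitions[state][action] -> list of (next_state, probability)
transitions = {
    0: {"a": [(1, 1.0)], "b": [(3, 0.5), (0, 0.5)]},
    1: {"a": [(2, 1.0)], "b": [(0, 1.0)]},
    2: {"a": [(2, 1.0)]},
    3: {"a": [(3, 1.0)]},
}
unsafe = {3}

def permissive_supervisor(transitions, unsafe):
    """Per state, keep every action with zero probability of reaching an
    unsafe state (a greatest-fixed-point computation; a toy stand-in for
    the paper's distributed, automata-learning synthesis).

    Permissive means the supervisor returns a *set* of allowed actions
    per state rather than committing to a single policy.
    """
    safe = set(transitions) - unsafe
    changed = True
    while changed:
        changed = False
        allowed = {}
        for s in safe:
            allowed[s] = {a for a, succ in transitions[s].items()
                          if all(t in safe for t, _ in succ)}
        # States with no allowed action cannot remain safe; remove them.
        dead = {s for s in safe if not allowed[s]}
        if dead:
            safe -= dead
            changed = True
    return {s: allowed[s] for s in safe}

print(permissive_supervisor(transitions, unsafe))
```

In state 0 only action `"a"` survives, since `"b"` carries positive probability of entering the unsafe state; states 1 and 2 keep all their actions.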

On theories of human decision making and the role of affects

Ian D. Roberts, Cendri A. Hutcherson, Affect and Decision Making: Insights and Predictions from Computational Models. Trends in Cognitive Sciences, Volume 23, Issue 7, 2019, Pages 602-614, DOI: 10.1016/j.tics.2019.04.005.

In recent years interest in integrating the affective and decision sciences has skyrocketed. Immense progress has been made, but the complexities of each field, which can multiply when combined, present a significant obstacle. A carefully defined framework for integration is needed. The shift towards computational modeling in decision science provides a powerful basis and a path forward, but one whose synergistic potential will only be fully realized by drawing on the theoretical richness of the affective sciences. Reviewing research using a popular computational model of choice (the drift diffusion model), we discuss how mapping concepts to parameters reduces conceptual ambiguity and reveals novel hypotheses.
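The drift diffusion model mentioned here is simple to simulate: noisy evidence accumulates toward one of two bounds, and affective manipulations are hypothesized to map onto its parameters (drift rate, threshold, starting point). A minimal sketch; the parameter values are arbitrary:

```python
import random

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, seed=0):
    """Simulate one drift diffusion trial.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it crosses +threshold (choice "A") or
    -threshold (choice "B"). Returns (choice, reaction_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return ("A" if x > 0 else "B"), t

# With a strong positive drift, choice "A" dominates and responses are fast.
choices = [simulate_ddm(drift=2.0, threshold=1.0, seed=s)[0] for s in range(200)]
print(choices.count("A") / len(choices))
```

Mapping an affective construct onto a specific parameter (e.g. arousal onto threshold) is exactly the kind of concrete, falsifiable hypothesis the authors argue this framework makes possible.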

On the unclear distinction between fast/shallow and slow/deep cognitive processing

Adrianna C. Jenkins, Rethinking Cognitive Load: A Default-Mode Network Perspective. Trends in Cognitive Sciences, Volume 23, Issue 7, 2019, Pages 531-533, DOI: 10.1016/j.tics.2019.04.008.

Typical cognitive load tasks are now known to deactivate the brain’s default-mode network (DMN). This raises the possibility that apparent effects of cognitive load could arise from disruptions of DMN processes, including social cognition. Cognitive load studies are reconsidered, with reinterpretations of past research and implications for dual-process theory.

A brief (and relatively shallow) account of computer programming as a cognitive ability

Evelina Fedorenko, Anna Ivanova, Riva Dhamala, Marina Umaschi Bers, The Language of Programming: A Cognitive Perspective. Trends in Cognitive Sciences, Volume 23, Issue 7, 2019, Pages 525-528, DOI: 10.1016/j.tics.2019.04.010.

Computer programming is becoming essential across fields. Traditionally grouped with science, technology, engineering, and mathematics (STEM) disciplines, programming also bears parallels to natural languages. These parallels may translate into overlapping processing mechanisms. Investigating the cognitive basis of programming is important for understanding the human mind and could transform education practices.

A Survey of Knowledge Representation in Service Robotics

David Paulius, Yu Sun, A Survey of Knowledge Representation in Service Robotics. Robotics and Autonomous Systems, Volume 118, 2019, Pages 13-30, DOI: 10.1016/j.robot.2019.03.005.

Within the realm of service robotics, researchers have placed a great amount of effort into learning, understanding, and representing motions as manipulations for task execution by robots. The task of robot learning and problem-solving is very broad, as it integrates a variety of tasks such as object detection, activity recognition, task/motion planning, localization, knowledge representation and retrieval, and the intertwining of perception/vision and machine learning techniques. In this paper, we solely focus on knowledge representations and notably how knowledge is typically gathered, represented, and reproduced to solve problems, as researchers have done over the past decades. In accordance with the definition of knowledge representations, we discuss the key distinction between such representations and useful learning models that have extensively been introduced and studied in recent years, such as machine learning, deep learning, probabilistic modeling, and semantic graphical structures. Along with an overview of such tools, we discuss the problems that have arisen in robot learning and the solutions, technologies, or developments (if any) that have contributed to solving them. Finally, we discuss key principles that should be considered when designing an effective knowledge representation.

The concepts of agency and ownership in cognitive maps, and a nice survey of cognitive maps

Shahar Arzy, Daniel L. Schacter, Self-Agency and Self-Ownership in Cognitive Mapping, Trends in Cognitive Sciences, Volume 23, Issue 6, 2019, Pages 476-487, DOI: 10.1016/j.tics.2019.04.003.

The concepts of agency of one’s actions and ownership of one’s experience have proved useful in relating body representations to bodily consciousness. Here we apply these concepts to cognitive maps. Agency is defined as ‘the sense that I am the one who is generating the experience represented on a cognitive map’, while ownership is defined as ‘the sense that I am the one who is undergoing an experience, represented on a cognitive map’. The roles of agency and ownership are examined with respect to the transformation between egocentric and allocentric representations and the underlying neurocognitive and computational mechanisms; and within the neuropsychiatric domain, including Alzheimer’s disease (AD) and other memory-related disorders, in which the senses of agency and ownership may be disrupted.