Category Archives: Cognitive Sciences

It seems that consciousness is not a single one-dimensional scale, but multi-dimensional

Jonathan Birch, Alexandra K. Schnell, Nicola S. Clayton, Dimensions of Animal Consciousness. Trends in Cognitive Sciences, Volume 24, Issue 10, 2020, Pages 789-801 DOI: 10.1016/j.tics.2020.07.007.

How does consciousness vary across the animal kingdom? Are some animals ‘more conscious’ than others? This article presents a multidimensional framework for understanding interspecies variation in states of consciousness. The framework distinguishes five key dimensions of variation: perceptual richness, evaluative richness, integration at a time, integration across time, and self-consciousness. For each dimension, existing experiments that bear on it are reviewed and future experiments are suggested. By assessing a given species against each dimension, we can construct a consciousness profile for that species. On this framework, there is no single scale along which species can be ranked as more or less conscious. Rather, each species has its own distinctive consciousness profile.
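The "no single scale" claim can be made concrete: profiles over several dimensions form only a partial order, so some pairs of species are simply incomparable. A minimal sketch (the dimension names are from the paper; the species and all numbers are invented placeholders, not empirical estimates):

```python
# Five dimensions of the Birch et al. framework; scores below are invented.
DIMENSIONS = ("perceptual_richness", "evaluative_richness",
              "integration_at_a_time", "integration_across_time",
              "self_consciousness")

def dominates(a, b):
    """True if profile a scores >= b on every dimension and > on at least one."""
    ge = all(a[d] >= b[d] for d in DIMENSIONS)
    gt = any(a[d] > b[d] for d in DIMENSIONS)
    return ge and gt

# Two hypothetical profiles: neither dominates the other, so neither
# species can be ranked as "more conscious" overall.
octopus = dict(zip(DIMENSIONS, (0.9, 0.4, 0.5, 0.3, 0.2)))
corvid  = dict(zip(DIMENSIONS, (0.5, 0.6, 0.4, 0.8, 0.6)))

print(dominates(octopus, corvid), dominates(corvid, octopus))  # False False
```

Because `dominates` is only a partial order, ranking all species on one axis would require collapsing the profile into a single number, which is exactly what the framework argues against.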

It seems that our brain predicts semantic features of sensory stimuli to come

Friedemann Pulvermüller, Luigi Grisoni, Semantic Prediction in Brain and Mind. Trends in Cognitive Sciences, Volume 24, Issue 10, 2020, Pages 781-784 DOI: 10.1016/j.tics.2020.07.002.

We highlight a novel brain correlate of prediction, the prediction potential (or PP), a slow negative-going potential shift preceding visual, acoustic, and spoken or written verbal stimuli that can be predicted from their context. The cortical sources underlying the prediction potential reflect perceptual and semantic features of anticipated stimuli before these appear.

“Early exit” deep neural networks (i.e., networks that provide predictions at intermediate points of the layer stack)

Scardapane, S., Scarpiniti, M., Baccarelli, E. et al., Why Should We Add Early Exits to Neural Networks?, Cogn Comput 12, 954–966 (2020) DOI: 10.1007/s12559-020-09734-4.

Deep neural networks are generally designed as a stack of differentiable layers, in which a prediction is obtained only after running the full stack. Recently, some contributions have proposed techniques to endow the networks with early exits, allowing one to obtain predictions at intermediate points of the stack. These multi-output networks have a number of advantages, including (i) significant reductions of the inference time, (ii) reduced tendency to overfitting and vanishing gradients, and (iii) capability of being distributed over multi-tier computation platforms. In addition, they connect to the wider themes of biological plausibility and layered cognitive reasoning. In this paper, we provide a comprehensive introduction to this family of neural networks, by describing in a unified fashion the way these architectures can be designed, trained, and actually deployed in time-constrained scenarios. We also describe in-depth their application scenarios in 5G and Fog computing environments, as well as some of the open research questions connected to them.
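The inference-time saving comes from attaching a small classifier head at each layer and exiting as soon as one is confident enough. A minimal pure-Python sketch of that idea, with untrained random weights standing in for learned layers and a max-probability threshold as the exit rule (one common criterion; the paper surveys several):

```python
import math, random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def layer(x, seed):
    # Stand-in for a trained layer: a fixed seeded random linear map + tanh.
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in x] for _ in range(len(x))]
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

def exit_head(h, seed, n_classes=3):
    # Auxiliary classifier attached to this layer's hidden state.
    rng = random.Random(seed + 1000)
    w = [[rng.uniform(-1, 1) for _ in h] for _ in range(n_classes)]
    return softmax([sum(wi * hi for wi, hi in zip(row, h)) for row in w])

def early_exit_predict(x, n_layers=4, threshold=0.8):
    """Run the stack layer by layer; return as soon as an exit is confident."""
    h = x
    for i in range(n_layers):
        h = layer(h, seed=i)
        probs = exit_head(h, seed=i)
        if max(probs) >= threshold:           # confident enough: exit early
            return probs.index(max(probs)), i + 1
    return probs.index(max(probs)), n_layers  # fall through to the final exit

label, exits_used = early_exit_predict([0.2, -0.5, 0.9, 0.1])
print("class:", label, "layers run:", exits_used)
```

Easy inputs exit after few layers and hard ones fall through to the final head, which is what yields the average inference-time reduction; training such heads jointly also injects gradients at intermediate depths, which is the link to the vanishing-gradient advantage.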

A new theory: we are curious about tasks that increase our ability to solve as many future tasks as possible

Franziska Brändle, Charley M. Wu, Eric Schulz, What Are We Curious About?, Trends in Cognitive Sciences, Volume 24, Issue 9, 2020 DOI: 10.1016/j.tics.2020.05.010.

(no abstract).

Optimistic prediction seems to lead agents to respond in ways that better achieve their goals

Zekun Sun, Chaz Firestone, Optimism and Pessimism in the Predictive Brain, Trends in Cognitive Sciences, Volume 24, Issue 9, 2020 DOI: 10.1016/j.tics.2020.06.001.

(no abstract).

Interesting review of psychological motivation and the role of reinforcement learning (RL) in studying it

Randall C. O’Reilly, Unraveling the Mysteries of Motivation, Trends in Cognitive Sciences, Volume 24, Issue 6, 2020, Pages 425-434, DOI: 10.1016/j.tics.2020.03.001.

Motivation plays a central role in human behavior and cognition but is not well captured by widely used artificial intelligence (AI) and computational modeling frameworks. This Opinion article addresses two central questions regarding the nature of motivation: what are the nature and dynamics of the internal goals that drive our motivational system and how can this system be sufficiently flexible to support our ability to rapidly adapt to novel situations, tasks, etc.? In reviewing existing systems and neuroscience research and theorizing on these questions, a wealth of insights to constrain the development of computational models of motivation can be found.

Consciousness as a learning framework

Axel Cleeremans, Dalila Achoui, Arnaud Beauny, Lars Keuninckx, Jean-Remy Martin, Santiago Muñoz-Moldes, Laurène Vuillaume, Adélaïde de Heering, Learning to Be Conscious, Trends in Cognitive Sciences, Volume 24, Issue 2, 2020, Pages 112-123 DOI: 10.1016/j.tics.2019.11.011.

Consciousness remains a formidable challenge. Different theories of consciousness have proposed vastly different mechanisms to account for phenomenal experience. Here, appealing to aspects of global workspace theory, higher-order theories, social theories, and predictive processing, we introduce a novel framework: the self-organizing metarepresentational account (SOMA), in which consciousness is viewed as something that the brain learns to do. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of metarepresentations that qualify target first-order representations. Thus, experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. In this sense, consciousness is the brain’s (unconscious, embodied, enactive, nonconceptual) theory about itself.

A model of the psychomotor behaviour of humans intended to be useful for integration with robots

Stephen Fox, Adrian Kotelba, Ilari Marstio, Jari Montonen, Aligning human psychomotor characteristics with robots, exoskeletons and augmented reality, Robotics and Computer-Integrated Manufacturing, Volume 63, 2020, DOI: 10.1016/j.rcim.2019.101922.

In previous production literature, the uncertainty of human behaviour has been recognized as a source of productivity, quality, and safety problems. However, fundamental reasons for the uncertainty of human behavior have received little analysis in the production literature. Furthermore, potential for these fundamental reasons to be aligned with production technologies in order to improve production performance has not been addressed. By contrast, in this paper, fundamental reasons for the uncertainty of human behaviour are explained through a model of psychomotor characteristics that encompasses physiology, past experiences, personality, gender, culture, emotion, reasoning, and biocybernetics. Through reference to 10 action research cases, the formal model is applied to provide guidelines for planning production work that includes robots, exoskeletons, and augmented reality.

Symbol grounding through neural networks

Shridhar M, Mittal D, Hsu D., INGRESS: Interactive visual grounding of referring expressions, The International Journal of Robotics Research. January 2020, DOI: 10.1177/0278364919897133.

This article presents INGRESS, a robot system that follows human natural language instructions to pick and place everyday objects. The key question here is to ground referring expressions: understand expressions about objects and their relationships from image and natural language inputs. INGRESS allows unconstrained object categories and rich language expressions. Further, it asks questions to clarify ambiguous referring expressions interactively. To achieve these, we take the approach of grounding by generation and propose a two-stage neural-network model for grounding. The first stage uses a neural network to generate visual descriptions of objects, compares them with the input language expressions, and identifies a set of candidate objects. The second stage uses another neural network to examine all pairwise relations between the candidates and infers the most likely referred objects. The same neural networks are used for both grounding and question generation for disambiguation. Experiments show that INGRESS outperformed a state-of-the-art method on the RefCOCO dataset and in robot experiments with humans. The INGRESS source code is available at https://github.com/MohitShridhar/ingress.
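The two-stage structure of the grounding pipeline can be sketched without the neural networks: stage one compares generated object descriptions with the expression to shortlist candidates, and stage two scores relations between the candidates. Everything below is an invented toy (bag-of-words overlap in place of the paper's learned models; the scene, fields, and scores are all hypothetical):

```python
# Hypothetical toy sketch of two-stage grounding; INGRESS itself uses
# neural networks for both stages, not word overlap.
def stage1_candidates(objects, expression, keep=2):
    """Shortlist objects whose generated description matches the expression."""
    words = set(expression.split())
    def score(obj):
        return len(set(obj["description"].split()) & words)
    return sorted(objects, key=score, reverse=True)[:keep]

def stage2_resolve(candidates, expression):
    """Pick the candidate whose relation description matches the expression."""
    words = set(expression.split())
    def rel_score(obj):
        return len(set(obj["relations"].split()) & words)
    return max(candidates, key=rel_score)

scene = [
    {"name": "cup_1", "description": "red cup", "relations": "left of box"},
    {"name": "cup_2", "description": "red cup", "relations": "right of box"},
    {"name": "box_1", "description": "blue box", "relations": "center"},
]
expr = "the red cup on the right of the box"
target = stage2_resolve(stage1_candidates(scene, expr), expr)
print(target["name"])  # cup_2
```

Stage one alone cannot separate the two red cups (identical descriptions), which is exactly why the second, relational stage is needed; when even that stage ties, the real system falls back to asking a clarifying question.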

Do we prefer predictions that fit observations (validating our expectations) or that surprise us (providing new knowledge)?

Clare Press, Peter Kok, Daniel Yon, The Perceptual Prediction Paradox, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.003.

From the noisy information bombarding our senses, our brains must construct percepts that are veridical – reflecting the true state of the world – and informative – conveying what we did not already know. Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.
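The proposed resolution can be caricatured in a few lines: a first process biases interpretation toward the prior (standard Bayesian weighting), and a second process upweights events that are highly surprising under that prior. This is only an illustrative sketch, not the authors' model; the distributions, the surprisal cutoff, and the fixed boost factor are all invented:

```python
import math

def normalize(p):
    s = sum(p.values())
    return {k: v / s for k, v in p.items()}

def perceive(prior, likelihood, surprise_cutoff=4.0, boost=2.0):
    # Process 1: prior-weighted inference biases perception toward the expected.
    posterior = normalize({h: prior[h] * likelihood[h] for h in prior})
    # Process 2: events very surprising under the prior (high surprisal,
    # measured in bits) are upweighted by a fixed factor.
    boosted = {h: p * (boost if -math.log2(prior[h]) > surprise_cutoff else 1.0)
               for h, p in posterior.items()}
    return normalize(boosted)

prior = {"expected": 0.9, "rare": 0.06875, "very_rare": 0.03125}
likelihood = {"expected": 0.3, "rare": 0.3, "very_rare": 0.4}
print(perceive(prior, likelihood))
```

With these numbers, "expected" still dominates (the veridical bias), but the surprise boost flips the ordering of the two unlikely events, giving the informative upweighting its chance to shape the final percept.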