Category Archives: Psycho-physiological Bases Of Engineering

Consciousness as a learning framework

Axel Cleeremans, Dalila Achoui, Arnaud Beauny, Lars Keuninckx, Jean-Remy Martin, Santiago Muñoz-Moldes, Laurène Vuillaume, Adélaïde de Heering, Learning to Be Conscious, Trends in Cognitive Sciences, Volume 24, Issue 2, 2020, Pages 112-123, DOI: 10.1016/j.tics.2019.11.011.

Consciousness remains a formidable challenge. Different theories of consciousness have proposed vastly different mechanisms to account for phenomenal experience. Here, appealing to aspects of global workspace theory, higher-order theories, social theories, and predictive processing, we introduce a novel framework: the self-organizing metarepresentational account (SOMA), in which consciousness is viewed as something that the brain learns to do. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of metarepresentations that qualify target first-order representations. Thus, experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. In this sense, consciousness is the brain’s (unconscious, embodied, enactive, nonconceptual) theory about itself.

A model of the psychomotor behaviour of humans intended to be useful for integration with robots

Stephen Fox, Adrian Kotelba, Ilari Marstio, Jari Montonen, Aligning human psychomotor characteristics with robots, exoskeletons and augmented reality, Robotics and Computer-Integrated Manufacturing, Volume 63, 2020, DOI: 10.1016/j.rcim.2019.101922.

In previous production literature, the uncertainty of human behaviour has been recognized as a source of productivity, quality, and safety problems. However, fundamental reasons for the uncertainty of human behaviour have received little analysis in the production literature. Furthermore, the potential to align these fundamental reasons with production technologies in order to improve production performance has not been addressed. By contrast, in this paper, fundamental reasons for the uncertainty of human behaviour are explained through a model of psychomotor characteristics that encompasses physiology, past experiences, personality, gender, culture, emotion, reasoning, and biocybernetics. Through reference to 10 action research cases, the formal model is applied to provide guidelines for planning production work that includes robots, exoskeletons, and augmented reality.

Symbol grounding through neural networks

Shridhar M, Mittal D, Hsu D., INGRESS: Interactive visual grounding of referring expressions, The International Journal of Robotics Research. January 2020, DOI: 10.1177/0278364919897133.

This article presents INGRESS, a robot system that follows human natural language instructions to pick and place everyday objects. The key challenge is grounding referring expressions: understanding expressions about objects and their relationships from image and natural language inputs. INGRESS allows unconstrained object categories and rich language expressions. Further, it asks questions to clarify ambiguous referring expressions interactively. To achieve this, we take the approach of grounding by generation and propose a two-stage neural-network model for grounding. The first stage uses a neural network to generate visual descriptions of objects, compares them with the input language expressions, and identifies a set of candidate objects. The second stage uses another neural network to examine all pairwise relations between the candidates and infers the most likely referred objects. The same neural networks are used for both grounding and question generation for disambiguation. Experiments show that INGRESS outperformed a state-of-the-art method on the RefCOCO dataset and in robot experiments with humans. The INGRESS source code is available at https://github.com/MohitShridhar/ingress.
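The two-stage pipeline described in the abstract can be illustrated with a toy sketch. This is not the INGRESS code: the paper's neural networks for description generation and relational scoring are replaced here by a crude bag-of-words overlap, and the object names, descriptions, and relations are invented for illustration.

```python
# Toy sketch of "grounding by generation": score each candidate object's
# generated description against the input expression (stage 1), then break
# ties using pairwise relational descriptions (stage 2).

def similarity(expr, description):
    """Crude stand-in for a learned matching network: word overlap."""
    a, b = set(expr.lower().split()), set(description.lower().split())
    return len(a & b) / max(len(a), 1)

def ground(expression, objects, relations):
    """objects: {name: generated visual description}
    relations: {(name_a, name_b): generated relational description}"""
    # Stage 1: keep the objects whose descriptions best match the expression.
    scores = {name: similarity(expression, d) for name, d in objects.items()}
    best = max(scores.values())
    candidates = [n for n, s in scores.items() if s >= best - 0.1]
    if len(candidates) == 1:
        return candidates[0]
    # Stage 2: score pairwise relational descriptions between candidates.
    rel_scores = {
        a: similarity(expression, rel)
        for (a, b), rel in relations.items()
        if a in candidates and b in candidates
    }
    return max(rel_scores, key=rel_scores.get) if rel_scores else candidates[0]

objects = {"cup1": "the blue cup", "cup2": "the blue cup"}
relations = {("cup1", "cup2"): "the blue cup on the left of the blue cup",
             ("cup2", "cup1"): "the blue cup on the right of the blue cup"}
print(ground("the blue cup on the left", objects, relations))  # cup1
```

Stage 1 alone cannot distinguish the two identical cups, which is exactly the ambiguity the relational second stage (and, in the real system, interactive clarification questions) resolves.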

Do we prefer that our predictions fit observations, validating our expectations, or that they surprise us, providing new knowledge?

Clare Press, Peter Kok, Daniel Yon, The Perceptual Prediction Paradox, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.003.

From the noisy information bombarding our senses, our brains must construct percepts that are veridical – reflecting the true state of the world – and informative – conveying what we did not already know. Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.
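One numerical reading of the proposed two-process model can be sketched as follows. This is an illustration, not the authors' model: the Gaussian cue-combination, the surprise threshold, and the boost factor are all invented parameters.

```python
# Toy two-process percept: (1) a precision-weighted average biases the
# percept toward the expected value; (2) inputs that are very surprising
# under the prior are subsequently upweighted toward the actual input.

def perceive(sensory, prior_mean, prior_var=1.0, sensory_var=1.0,
             surprise_threshold=2.0, boost=0.5):
    # Process 1: probabilistic knowledge biases perception toward what is likely.
    w = prior_var / (prior_var + sensory_var)      # weight on the sensory input
    initial = w * sensory + (1 - w) * prior_mean
    # Process 2: particularly surprising events get upweighted.
    surprise = abs(sensory - prior_mean) / prior_var ** 0.5
    if surprise > surprise_threshold:
        return initial + boost * (sensory - initial)
    return initial

print(perceive(sensory=1.0, prior_mean=0.0))  # 0.5 -> pulled toward the prior
print(perceive(sensory=4.0, prior_mean=0.0))  # 3.0 -> surprise pushes it back out
```

The point of the sketch is the qualitative shape: mildly unexpected inputs are perceived as closer to what was expected, while strongly surprising inputs escape that bias, which is the resolution of the paradox the authors propose.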

Similarities between motor control and cognitive control

Harrison Ritz, Romy Frömer, Amitai Shenhav, Bridging Motor and Cognitive Control: It’s About Time!, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.005.

Is how we control our thoughts similar to how we control our movements? Egger et al. show that the neural dynamics underlying the control of internal states exhibit algorithmic properties similar to those underlying the control of movements. This experiment reveals a promising connection between how we control our brain and our body.

Interesting alternative to the classical “maximize expected utility” rule for decision making

Etienne Koechlin, Human Decision-Making beyond the Rational Decision Theory, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.001.

Two recent studies (Farashahi et al. and Rouault et al.) provide compelling evidence refuting the Subjective Expected Utility (SEU) hypothesis as a foundational model of human decision-making. Together, these studies pave the way towards a new model that subsumes decision-making and adaptive behavior into a single account.

On the importance of dynamics and diversity in (cognitive) symbol systems

Tadahiro Taniguchi, Emre Ugur, Matej Hoffmann, Lorenzo Jamone, Takayuki Nagai, Benjamin Rosman, Symbol Emergence in Cognitive Developmental Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, Volume 11, Issue 4, Dec. 2019, DOI: 10.1109/TCDS.2018.2867772.

Humans use signs, e.g., sentences in a spoken language, for communication and thought. Hence, symbol systems like language are crucial for our communication with other agents and adaptation to our real-world environment. The symbol systems we use in our human society adaptively and dynamically change over time. In the context of artificial intelligence (AI) and cognitive systems, the symbol grounding problem has been regarded as one of the central problems related to symbols. However, the symbol grounding problem was originally posed to connect symbolic AI with sensorimotor information, and did not consider many interdisciplinary phenomena in human communication and dynamic symbol systems in our society that semiotics has long considered. In this paper, we focus on the symbol emergence problem rather than the symbol grounding problem, addressing not only cognitive dynamics but also the dynamics of symbol systems in society. We first introduce the notion of a symbol in semiotics from the humanities, to move beyond the very narrow idea of symbols in symbolic AI. Furthermore, over the years it has become increasingly clear that symbol emergence has to be regarded as a multifaceted problem. Therefore, second, we review the history of the symbol emergence problem in different fields, including both biological and artificial systems, showing their mutual relations. We summarize the discussion and provide an integrative viewpoint and comprehensive overview of symbol emergence in cognitive systems. Additionally, we describe the challenges facing the creation of cognitive systems that can be part of symbol emergence systems.

Interesting related work on internal models for action prediction and on the exploration/exploitation trade-off

Simón C. Smith, J. Michael Herrmann, Evaluation of Internal Models in Autonomous Learning, IEEE Transactions on Cognitive and Developmental Systems, Volume 11, Issue 4, Dec. 2019, DOI: 10.1109/TCDS.2018.2865999.

Internal models (IMs) can represent relations between sensors and actuators in natural and artificial agents. In autonomous robots, the adaptation of IMs and the adaptation of the behavior are interdependent processes which have been studied under paradigms for self-organization of behavior such as homeokinesis. We compare the effect of various types of IMs on the generation of behavior in order to evaluate model quality across different behaviors. The considered IMs differ in their degree of flexibility and expressivity, related respectively to learning speed and structural complexity of the model. We show that the different IMs generate different error characteristics which in turn lead to variations of the self-generated behavior of the robot. Due to the tradeoff between error minimization and complexity of the explored environment, we compare the models in the sense of Pareto optimality. Among the linear and nonlinear models that we analyze, echo-state networks achieve a particularly high performance, which we explain as a result of the combination of fast learning and complex internal dynamics. More generally, we provide evidence that Pareto optimization is preferable in autonomous learning, as it allows a specific solution to be negotiated in any particular environment.
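The Pareto comparison mentioned in the abstract can be made concrete with a small sketch. Each internal model is summarised by two objectives: prediction error (to minimise) and the complexity of the explored behaviour (to maximise). The model names and numbers below are invented, chosen only so the outcome mirrors the paper's qualitative finding about echo-state networks.

```python
# Pareto front over internal models: a model is on the front if no other
# model is at least as good on both objectives and strictly better on one.

def pareto_front(models):
    """models: list of (name, prediction_error, explored_complexity)."""
    front = []
    for name, err, cplx in models:
        dominated = any(
            (e <= err and c >= cplx) and (e < err or c > cplx)
            for _, e, c in models
        )
        if not dominated:
            front.append(name)
    return front

models = [
    ("linear",        0.10, 0.3),  # fast learner, simple behaviour
    ("mlp",           0.08, 0.4),
    ("echo_state",    0.05, 0.9),  # low error AND rich internal dynamics
    ("big_nonlinear", 0.07, 0.5),
]
print(pareto_front(models))  # ['echo_state']
```

With these illustrative numbers the echo-state network dominates the alternatives on both axes at once, which is the sense in which the authors report it achieving particularly high performance.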

On rewards and values when the RL theory is applied to human brain

Keno Juechems, Christopher Summerfield, Where Does Value Come From?, Trends in Cognitive Sciences, Volume 23, Issue 10, 2019, Pages 836-850, DOI: 10.1016/j.tics.2019.07.012.

The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain-imaging studies of value-guided decision-making.
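The idea of tracking current and desired internal states and minimising distance to a goal across multiple value dimensions lends itself to a small sketch. The dimension names (energy, warmth), the setpoints, and the action effects below are all hypothetical illustrations, not taken from the paper.

```python
# Value as distance reduction: the value of an action is how much it moves
# the agent's internal state toward a desired goal state.

def distance(state, goal):
    """Euclidean distance to the goal across internal-state dimensions."""
    return sum((state[k] - goal[k]) ** 2 for k in goal) ** 0.5

def action_value(state, goal, effect):
    """Value = reduction in distance to goal produced by the action."""
    after = {k: state[k] + effect.get(k, 0.0) for k in state}
    return distance(state, goal) - distance(after, goal)

state = {"energy": 0.2, "warmth": 0.8}   # hungry but warm
goal = {"energy": 1.0, "warmth": 1.0}    # desired setpoints
eat = {"energy": 0.5}
shiver = {"warmth": 0.1, "energy": -0.1}
print(action_value(state, goal, eat) > action_value(state, goal, shiver))  # True
```

Note how context sensitivity falls out for free: the same action changes value as the internal state changes, since an action's worth depends on which dimension is currently furthest from its setpoint.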

On the integer numbers in the brain

Susan Carey, David Barner, Ontogenetic Origins of Human Integer Representations, Trends in Cognitive Sciences, Volume 23, Issue 10, 2019, Pages 823-835, DOI: 10.1016/j.tics.2019.07.004.

Do children learn number words by associating them with perceptual magnitudes? Recent studies argue that approximate numerical magnitudes play a foundational role in the development of integer concepts. Against this, we argue that approximate number representations fail both empirically and in principle to provide the content required of integer concepts. Instead, we suggest that children’s understanding of integer concepts proceeds in two phases. In the first phase, children learn small exact number word meanings by associating words with small sets. In the second phase, children learn the meanings of larger number words by mastering the logic of exact counting algorithms, which implement the successor function and Hume’s principle (that one-to-one correspondence guarantees exact equality). In neither phase do approximate number representations play a foundational role.
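The two formal ideas behind the second phase, the successor function and Hume's principle, are simple enough to state as code. A minimal sketch (the examples are illustrative, not from the paper):

```python
# Exact counting as repeated application of the successor function, and
# Hume's principle: two sets have the same number iff a one-to-one
# correspondence exists, which for finite sets reduces to equal counts.

def count(items):
    """Exact counting: apply the successor function once per item."""
    n = 0
    for _ in items:
        n = n + 1  # successor of n
    return n

def same_number(set_a, set_b):
    """Hume's principle for finite sets."""
    return count(set_a) == count(set_b)

print(count(["cat", "dog", "bird"]))        # 3
print(same_number({"a", "b"}, {1, 2}))      # True
print(same_number({"a", "b"}, {1, 2, 3}))   # False
```

The contrast with approximate magnitudes is visible here: nothing in the logic of `count` or `same_number` is noisy or ratio-dependent, which is the kind of exactness the authors argue approximate number representations cannot supply.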