Category Archives: Psycho-physiological Bases Of Engineering

On how the exploration-exploitation dichotomy shifts toward exploitation as humans get older

R. Nathan Spreng, Gary R. Turner, From exploration to exploitation: a shifting mental mode in late life development, Trends in Cognitive Sciences, Volume 25, Issue 12, 2021 DOI: 10.1016/j.tics.2021.09.001.

Changes in cognition, affect, and brain function combine to promote a shift in the nature of mentation in older adulthood, favoring exploitation of prior knowledge over exploratory search as the starting point for thought and action. Age-related exploitation biases result from the accumulation of prior knowledge, reduced cognitive control, and a shift toward affective goals. These are accompanied by changes in cortical networks, as well as attention and reward circuits. By incorporating these factors into a unified account, the exploration-to-exploitation shift offers an integrative model of cognitive, affective, and brain aging. Here, we review evidence for this model, identify determinants and consequences, and survey the challenges and opportunities posed by an exploitation-biased mental mode in later life.
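
The exploration-exploitation trade-off behind this model has a standard computational formulation in bandit problems. As a minimal sketch (my own illustration, not taken from the paper; all names and values are made up), the following epsilon-greedy agent lets its exploration rate decay over its "lifetime", reproducing the kind of exploitation bias described above.

import random

def epsilon_greedy_lifetime(arm_means, steps=10_000, eps_start=0.5, eps_end=0.02):
    """Epsilon-greedy bandit whose exploration rate decays with 'age'.

    arm_means: true mean reward of each arm (unknown to the agent).
    Returns the fraction of exploratory (random) choices in the first
    and second half of the run, to show the shift toward exploitation.
    """
    n_arms = len(arm_means)
    estimates = [0.0] * n_arms
    counts = [0] * n_arms
    explore_flags = []

    for t in range(steps):
        # Exploration probability decays linearly over the agent's lifetime.
        eps = eps_start + (eps_end - eps_start) * t / steps
        explore = random.random() < eps
        if explore:
            arm = random.randrange(n_arms)                            # exploratory choice
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])      # exploit prior knowledge
        reward = random.gauss(arm_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]     # incremental mean update
        explore_flags.append(explore)

    half = steps // 2
    early = sum(explore_flags[:half]) / half
    late = sum(explore_flags[half:]) / half
    return early, late

if __name__ == "__main__":
    early, late = epsilon_greedy_lifetime([0.2, 0.5, 0.8])
    print(f"exploratory choices: early life {early:.2%}, late life {late:.2%}")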

On how physical movements shape the perception of time

Rose De Kock, Keri Anne Gladhill, Minaz Numa Ali, Wilsaan Mychal Joiner, Martin Wiener, How movements shape the perception of time, Trends in Cognitive Sciences, Volume 25, Issue 11, 2021, Pages 950-963 DOI: 10.1016/j.tics.2021.08.002.

In order to keep up with a changing environment, mobile organisms must be capable of deciding both where and when to move. This precision necessitates a strong sense of time, as otherwise we would fail in many of our movement goals. Yet, despite this intrinsic link, only recently have researchers begun to understand how these two features interact. Primarily, two effects have been observed: movements can bias time estimates, but they can also make them more precise. Here we review this literature and propose that both effects can be explained by a Bayesian cue combination framework, in which movement itself affords the most precise representation of time, which can influence perception in either feedforward or active sensing modes.
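
The Bayesian cue combination framework invoked here has a simple closed form when each cue is modelled as a Gaussian: estimates are averaged with weights proportional to their precisions (inverse variances). The sketch below is an illustration under that Gaussian assumption (the numbers and names are made up, not the authors'), showing how a more precise movement-based estimate both biases and sharpens the fused duration percept.

def combine_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Precision-weighted (Bayesian) fusion of two Gaussian duration estimates.

    Returns the mean and variance of the combined estimate.
    """
    w_a = 1.0 / var_a          # precision of cue A (e.g., purely sensory timing)
    w_b = 1.0 / var_b          # precision of cue B (e.g., movement-based timing)
    var_c = 1.0 / (w_a + w_b)  # combined variance is smaller than either cue's alone
    mu_c = var_c * (w_a * mu_a + w_b * mu_b)
    return mu_c, var_c

# Example: a noisy sensory estimate of 1.0 s is pulled toward a more precise
# movement-based estimate of 1.2 s (bias), and the fused estimate is more precise than both.
mu, var = combine_gaussian_cues(mu_a=1.0, var_a=0.04, mu_b=1.2, var_b=0.01)
print(f"combined estimate: {mu:.3f} s, variance {var:.4f}")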

Solving the “self-recognition in a mirror” problem for robots

Arianna Pipitone, Antonio Chella, Robot passes the mirror test by inner speech, Robotics and Autonomous Systems, Volume 144, 2021 DOI: 10.1016/j.robot.2021.103838.

The mirror test is a well-known task in robotics. The existing strategies are based on kinesthetic-visual matching techniques and manipulate perceptual and motion data. The proposed work attempts to demonstrate that it is possible to implement a robust robotic self-recognition method through inner speech, i.e., the self-dialogue that enables reasoning on symbolic information. The robot talks to itself, reasons conceptually on the symbolic forms of its signals, and infers whether the robot it sees in the mirror is itself or not. The idea is supported by the existing literature in psychology, where the importance of inner speech in self-reflection and in the emergence of the self-concept for solving the mirror test was empirically demonstrated.
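
As a very rough illustration of the underlying inference (a toy sketch of my own, not the authors' inner-speech architecture), self-recognition can be cast as checking whether the motion seen in the mirror is contingent on the robot's own motor commands, and then drawing the symbolic conclusion "that robot is me".

def mirror_self_recognition(commanded, observed, tolerance=0.1):
    """Toy kinesthetic-visual matching: is the agent seen in the mirror myself?

    commanded: the robot's own motor commands (e.g., joint velocities).
    observed: motions extracted from the mirror image.
    Returns the symbolic conclusion plus a small 'inner dialogue' trace.
    """
    dialogue = ["I am moving my arm. Does the robot in the mirror move the same way?"]
    errors = [abs(c - o) for c, o in zip(commanded, observed)]
    mean_error = sum(errors) / len(errors)
    if mean_error < tolerance:
        dialogue.append("Its movements match mine. That robot is me.")
        is_self = True
    else:
        dialogue.append("Its movements do not match mine. That robot is not me.")
        is_self = False
    return is_self, dialogue

is_self, trace = mirror_self_recognition([0.3, -0.2, 0.5], [0.31, -0.18, 0.52])
print("\n".join(trace), "->", is_self)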

Trying to reach general AI through just decision-making (rewards) instead of using a diversity of paradigms

David Silver, Satinder Singh, Doina Precup, Richard S. Sutton, Reward is enough, Artificial Intelligence, Volume 299, 2021 DOI: 10.1016/j.artint.2021.103535.

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.

NOTES:

  • The agent's computational and physical limitations when coping with an overly complex world are the main reason to use learning instead of pre-built knowledge (evolution): learning lets the agent focus first on acquiring the skills most relevant to its own circumstances.
  • An argument for why classification (supervised learning) is less powerful and less efficient than RL.
  • The same argument applies to multi-agent settings vs. a single agent confronted with one complex environment (which contains the other agents).
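
A minimal instance of learning through trial and error to maximise reward is tabular Q-learning. The sketch below is the standard textbook algorithm applied to a made-up chain environment (not code from the paper); it shows how a single scalar reward is enough to drive the acquisition of a goal-reaching behaviour.

import random

# Toy chain environment: states 0..4, actions 0 (left) / 1 (right),
# reward 1 only when reaching the rightmost state.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            action = random.randrange(2) if random.random() < eps else int(q[state][1] >= q[state][0])
            next_state, reward, done = step(state, action)
            # Q-learning update: move the estimate toward reward + discounted best future value.
            target = reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    q = q_learning()
    policy = ["left" if values[0] > values[1] else "right" for values in q]
    print("learned policy per state:", policy)   # should prefer 'right' everywhere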

A nice survey on active learning, in particular for robotics

Annalisa T. Taylor, Thomas A. Berrueta, Todd D. Murphey, Active learning in robotics: A review of control principles, Mechatronics, Volume 77, 2021 DOI: 10.1016/j.mechatronics.2021.102576.

Active learning is a decision-making process. In both abstract and physical settings, active learning demands both analysis and action. This is a review of active learning in robotics, focusing on methods amenable to the demands of embodied learning systems. Robots must be able to learn efficiently and flexibly through continuous online deployment. This poses a distinct set of control-oriented challenges: one must choose suitable measures as objectives, synthesize real-time control, and produce analyses that guarantee performance and safety with limited knowledge of the environment or robot itself. In this work, we survey the fundamental components of robotic active learning systems. We discuss classes of learning tasks that robots typically encounter, measures with which they gauge the information content of observations, and algorithms for generating action plans. Moreover, we provide a variety of examples, from environmental mapping to nonparametric shape estimation, that highlight the qualitative differences between learning tasks, information measures, and control techniques. We conclude with a discussion of control-oriented open challenges, including safety-constrained learning and distributed learning.

NOTES:

  • RL can be considered one of the areas within computational learning theory, which usually ignores the physical embodiment of the learning agent. However, that is only so when RL explores through decision-making, not when it explores randomly, without much purpose of enhancing learning itself through its actions.
  • Caveats of RL (particularly deep RL): large data requirements, lack of generalizability between tasks, and the inability to learn incrementally and to guarantee safety.
  • Bayesian filters can be seen as learning systems: they learn parameters of objects (pose) or environments (maps) with the aid of some models. They become active learners when they use the robot's actions to improve that parameter learning (a minimal sketch of this idea follows the list).
  • Gaussian processes can be effective for learning those models when no parametric form or much first-principles knowledge is available, for instance when the robot has to learn the model by observing only a small (local) part of the environment.
  • Entropy/information, Fisher information (conditional information), and ergodicity are the main ways of measuring information gain in active learning.
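
To make the link between Bayes filters and information measures concrete, here is a minimal sketch (my own illustration, not from the survey): a discrete Bayes filter over a target's position in a one-dimensional corridor, where the robot chooses each sensing action by the expected reduction in belief entropy.

import math

P_HIT, P_FALSE = 0.9, 0.1   # sensor model: detection prob. if the target is / is not in the sensed cell

def entropy(belief):
    return -sum(p * math.log2(p) for p in belief if p > 0)

def likelihood(target_cell, sensed_cell, detected):
    p_detect = P_HIT if target_cell == sensed_cell else P_FALSE
    return p_detect if detected else 1.0 - p_detect

def bayes_update(belief, sensed_cell, detected):
    """Discrete Bayes filter update after sensing one grid cell."""
    posterior = [likelihood(i, sensed_cell, detected) * p for i, p in enumerate(belief)]
    total = sum(posterior)
    return [p / total for p in posterior]

def expected_information_gain(belief, sensed_cell):
    """Expected reduction in belief entropy from sensing a cell (the active-learning criterion)."""
    gain = entropy(belief)
    for detected in (True, False):
        p_outcome = sum(likelihood(i, sensed_cell, detected) * p for i, p in enumerate(belief))
        if p_outcome > 0:
            gain -= p_outcome * entropy(bayes_update(belief, sensed_cell, detected))
    return gain

# Greedy active sensing: always look where the expected information gain is highest.
belief = [1.0 / 10] * 10             # uniform prior over a 10-cell corridor
true_cell = 7                        # ground truth, unknown to the robot
for step in range(5):
    best = max(range(len(belief)), key=lambda c: expected_information_gain(belief, c))
    detected = (best == true_cell)   # deterministic world for brevity; the sensor model stays probabilistic
    belief = bayes_update(belief, best, detected)
    print(f"step {step}: sensed cell {best}, belief entropy now {entropy(belief):.2f} bits")

Swapping the entropy criterion for Fisher information or an ergodic metric would only change the expected_information_gain function; the filter itself stays the same.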

The Evolutionary History of Brains for Numbers

Andreas Nieder, The Evolutionary History of Brains for Numbers, Trends in Cognitive Sciences, Volume 25, Issue 7, 2021, Pages 608-621 DOI: 10.1016/j.tics.2021.03.012.

Humans and other animals share a ‘number sense’, an intuitive understanding of countable quantities. Having evolved independently from one another for hundreds of millions of years, the brains of these diverse species, including monkeys, crows, zebrafishes, bees, and squids, differ radically. However, in all vertebrates investigated, the pallium of the telencephalon has been implicated in number processing. This suggests that properties of the telencephalon make it ideally suited to host number representations that evolved by convergent evolution as a result of common selection pressures. In addition, promising candidate regions in the brains of invertebrates, such as insects, spiders, and cephalopods, can be identified, opening the possibility of even deeper commonalities for number sense.

Including attention mechanisms in long short-term memory (LSTM)

Lin, X., Zhong, G., Chen, K. et al., Attention-Augmented Machine Memory, Cogn Comput 13, 751–760 (2021) DOI: 10.1007/s12559-021-09854-5.

The attention mechanism plays an important role in the perception and cognition of human beings. Many machine learning models have been developed to memorize sequential data, such as the Long Short-Term Memory (LSTM) network and its extensions. However, due to the lack of an attention mechanism, they cannot pay special attention to the important parts of the sequences. In this paper, we present a novel machine learning method called attention-augmented machine memory (AAMM). It seamlessly integrates the attention mechanism into the memory cell of LSTM. As a result, it facilitates the network to focus on valuable information in the sequences and ignore irrelevant information during its learning. We have conducted experiments on two sequence classification tasks for pattern classification and sentiment analysis, respectively. The experimental results demonstrate the advantages of AAMM over LSTM and some other related approaches. Hence, AAMM can be considered as a substitute for LSTM in sequence learning applications.
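
As a rough sketch of attention over a recurrent memory (an illustration only; the AAMM cell in the paper integrates attention inside the memory cell, which this simplification does not), the snippet below lets a model attend over the hidden states an LSTM has produced for earlier timesteps before classifying the sequence, so that important parts can be weighted more heavily than irrelevant ones. It assumes PyTorch and made-up layer sizes.

import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    """LSTM whose stored hidden states are read out through an attention layer.

    An illustrative stand-in for attention-augmented memory, not the authors' AAMM cell:
    attention is applied over the sequence of hidden states rather than inside the cell.
    """

    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attn_score = nn.Linear(hidden_size, 1)    # scores each stored hidden state
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                              # x: (batch, seq_len, input_size)
        states, _ = self.lstm(x)                       # states: (batch, seq_len, hidden_size)
        weights = torch.softmax(self.attn_score(states), dim=1)   # attention over timesteps
        context = (weights * states).sum(dim=1)                   # weighted memory readout
        return self.classifier(context)

model = AttentiveLSTMClassifier(input_size=8, hidden_size=32, num_classes=2)
logits = model(torch.randn(4, 20, 8))                  # a batch of 4 sequences of length 20
print(logits.shape)                                    # torch.Size([4, 2])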

Physiological bases of navigation

Eva Zita Patai, Hugo J. Spiers, The Versatile Wayfinder: Prefrontal Contributions to Spatial Navigation, Trends in Cognitive Sciences, Volume 25, Issue 6, 2021, Pages 520-533 DOI: 10.1016/j.tics.2021.02.010.

The prefrontal cortex (PFC) supports decision-making, goal tracking, and planning. Spatial navigation is a behavior that taxes these cognitive processes, yet the role of the PFC in models of navigation has been largely overlooked. In humans, activity in the dorsolateral PFC (dlPFC) and ventrolateral PFC (vlPFC) during detours reveals a role in inhibition and replanning. The dorsal anterior cingulate cortex (dACC) is implicated in planning and spontaneous, internally generated changes of route. The orbitofrontal cortex (OFC) integrates representations of the environment with the value of actions, providing a ‘map’ of possible decisions. In rodents, medial frontal areas interact with the hippocampus during spatial decisions and switching between navigation strategies. In reviewing these advances, we provide a framework for how different prefrontal regions may contribute to different stages of navigation.

Studying magician tricks to understand decision making and how to influence it

Alice Pailhès, Gustav Kuhn, Mind Control Tricks: Magicians’ Forcing and Free Will, Trends in Cognitive Sciences, Volume 25, Issue 5, 2021, Pages 338-341 DOI: 10.1016/j.tics.2021.02.001.

A new research program has recently emerged that investigates magicians’ mind control tricks, also called forces. This research highlights the psychological processes that underpin decision-making, illustrates the ease by which our decisions can be covertly influenced, and helps answer questions about our sense of free will and agency over choices.

Formalization of “making sense” of sensory perceptions, and its use in several practical cases where, thanks to the use of induction, it compares favourably to neural network approaches

Richard Evans, José Hernández-Orallo, Johannes Welbl, Pushmeet Kohli, Marek Sergot, Making sense of sensory input, Artificial Intelligence, Volume 293, 2021 DOI: 10.1016/j.artint.2020.103438.

This paper attempts to answer a central question in unsupervised learning: what does it mean to “make sense” of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory – objects, properties, and laws – must be integrated into a coherent whole. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis. Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination. In fact, it is able to do all three tasks simultaneously. We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine’s ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly out-performing neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.

Continuation paper: https://doi.org/10.1016/j.artint.2021.103521

NOTES:

  • They use HMMs whose states are sets of atomic propositions and whose transition function is given by logical predicates, thereby mixing a non-symbolic framework (the HMM) with a completely symbolic one.
  • Perceptions are assumed to be previously discretized and modelled as ground atoms.
  • The system needs to be provided with both the (discretized) sensory input and commonsense knowledge about the predicates used for making sense.
  • The paper includes a very clear and simple representation of deduction, induction, and abduction (Fig. 1).
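
To convey the flavour of unsupervised theory induction in the simplest possible setting (a toy of my own, far simpler than the Apperception Engine and without its logical machinery or unity conditions), the sketch below searches for an elementary cellular-automaton rule that explains an observed sequence with a missing state, and then uses the induced rule to impute the gap and predict the next state.

def ca_step(state, rule):
    """One step of an elementary cellular automaton (circular boundary)."""
    n = len(state)
    out = []
    for i in range(n):
        left, center, right = state[(i - 1) % n], state[i], state[(i + 1) % n]
        index = left * 4 + center * 2 + right
        out.append((rule >> index) & 1)
    return tuple(out)

def induce_rules(trace):
    """Find every CA rule (a 'causal theory') that reproduces all observed states
    when run forward from the first observation; None entries are unobserved."""
    consistent = []
    for rule in range(256):
        state, ok = trace[0], True
        for observed in trace[1:]:
            state = ca_step(state, rule)
            if observed is not None and state != observed:
                ok = False
                break
        if ok:
            consistent.append(rule)
    return consistent

# A short trace generated by rule 110, with the middle observation missing.
t0 = (0, 0, 0, 1, 0, 0, 0)
t1 = ca_step(t0, 110)
t2 = ca_step(t1, 110)
t3 = ca_step(t2, 110)
trace = [t0, t1, None, t3]

rules = induce_rules(trace)
theory = rules[0]                                   # one consistent theory (possibly not unique)
imputed = ca_step(t1, theory)                       # fill in the missing observation
predicted = ca_step(t3, theory)                     # predict the next, unseen state
print(f"{len(rules)} consistent rules, e.g. rule {theory}; rule 110 recovered: {110 in rules}")
print("imputed missing state:", imputed)

The Apperception Engine does something far richer (synthesising first-order causal theories under unity conditions), but the basic loop is the same: induce a theory that explains the trace, then use it to predict, retrodict, and impute.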