Category Archives: Psycho-physiological Bases Of Engineering

On the existence of prior knowledge, “pre-wired” in animal brains, that guides further learning

Elisabetta Versace, Antone Martinho-Truswell, Alex Kacelnik, Giorgio Vallortigara, Priors in Animal and Artificial Intelligence: Where Does Learning Begin?, Trends in Cognitive Sciences, Volume 22, Issue 11, 2018, Pages 963-965, DOI: 10.1016/j.tics.2018.07.005.

A major goal for the next generation of artificial intelligence (AI) is to build machines that are able to reason and cope with novel tasks, environments, and situations in a manner that approaches the abilities of animals. Evidence from precocial species suggests that driving learning through suitable priors can help to successfully face this challenge.

A new model of reinforcement learning inspired by the human brain that copes with continuous spaces through continuous rewards, with a short but nice state-of-the-art review of RL applied to large, continuous spaces

Feifei Zhao, Yi Zeng, Guixiang Wang, Jun Bai, Bo Xu, A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations, Cognitive Computation, Volume 10, Issue 2, pp 296–306, DOI: 10.1007/s12559-017-9511-3.

Decision making is a fundamental ability for intelligent agents (e.g., humanoid robots and unmanned aerial vehicles). During the decision-making process, agents can improve their strategy for interacting with the dynamic environment through reinforcement learning. Many state-of-the-art reinforcement learning models, such as Q-learning and Actor-Critic algorithms, deal with relatively small numbers of state-action pairs, and the states are preferably discrete. In practice, however, the states in many scenarios are continuous and hard to discretize properly, so better autonomous decision-making methods are needed to handle these problems. Inspired by the mechanism of decision making in the human brain, we propose a general computational model, named the prefrontal cortex-basal ganglia (PFC-BG) algorithm. The proposed model draws on the biological reinforcement learning pathway and its mechanisms from the following perspectives: (1) dopamine signals continuously update reward-relevant information for both the basal ganglia and working memory in the prefrontal cortex; (2) contextual reward information is maintained in working memory, which has a top-down biasing effect on reinforcement learning in the basal ganglia. The proposed model separates the continuous states into smaller distinguishable states and introduces a continuous reward function for each state to obtain reward information at different times. To verify the performance of our model, we apply it to many UAV decision-making experiments, such as avoiding obstacles and flying through windows and doors, and the experiments support the effectiveness of the model. Compared with traditional Q-learning and Actor-Critic algorithms, the proposed model is more biologically inspired, and it makes decisions more accurately and faster.
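To make the two ingredients the abstract highlights more concrete, here is a minimal sketch (in Python, and emphatically not the authors' PFC-BG code): a continuous state is binned into smaller distinguishable states, and a continuous, distance-based reward drives a plain tabular Q-learning update. The bin count, reward shape, and every name here are illustrative assumptions.

```python
# Minimal sketch (not the authors' PFC-BG model): tabular Q-learning over a
# discretised continuous state space, with a continuous reward signal.
# All names, bin sizes and reward shapes are illustrative assumptions.
import numpy as np

N_BINS = 20                      # how finely each continuous dimension is split
N_ACTIONS = 4                    # e.g. up / down / left / right for a UAV in a plane
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = np.zeros((N_BINS, N_BINS, N_ACTIONS))

def discretise(pos, low=-10.0, high=10.0):
    """Map a continuous 2-D position onto a pair of bin indices."""
    idx = ((np.asarray(pos) - low) / (high - low) * N_BINS).astype(int)
    return tuple(np.clip(idx, 0, N_BINS - 1))

def continuous_reward(pos, goal=np.array([8.0, 8.0])):
    """Smooth reward that grows as the agent approaches the goal."""
    return -np.linalg.norm(np.asarray(pos) - goal)

def q_update(pos, action, next_pos):
    s, s_next = discretise(pos), discretise(next_pos)
    r = continuous_reward(next_pos)
    td_target = r + GAMMA * Q[s_next].max()
    Q[s + (action,)] += ALPHA * (td_target - Q[s + (action,)])

def choose_action(pos, rng=np.random.default_rng()):
    s = discretise(pos)
    if rng.random() < EPSILON:           # occasional exploration
        return int(rng.integers(N_ACTIONS))
    return int(Q[s].argmax())            # otherwise exploit the current table
```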

Z-numbers: an extension of fuzzy variables for cognitive decision making, and the concept of cognitive information

Hong-gang Peng, Jian-qiang Wang, Outranking Decision-Making Method with Z-Number Cognitive Information, Cognitive Computation, Volume 10, Issue 5, pp 752–768, DOI: 10.1007/s12559-018-9556-y.

The Z-number provides an adequate and reliable description of cognitive information. The nature of Z-numbers is complex, however, and important issues in Z-number computation remain to be addressed. This study focuses on developing a computationally simple method with Z-numbers to address multicriteria decision-making (MCDM) problems. Processing Z-numbers requires the direct computation of fuzzy and probabilistic uncertainties. We used an effective method to analyze the Z-number construct. Next, we proposed some outranking relations of Z-numbers and defined the dominance degree of discrete Z-numbers. Also, after analyzing the characteristics of elimination and choice translating reality III (ELECTRE III) and qualitative flexible multiple criteria method (QUALIFLEX), we developed an improved outranking method. To demonstrate this method, we provided an illustrative example concerning job-satisfaction evaluation. We further verified the validity of the method by a criteria test and comparative analysis. The results demonstrate that the method can be successfully applied to real-world decision-making problems, and it can identify more reasonable outcomes than previous methods. This study overcomes the high computational complexity in existing Z-number computation frameworks by exploring the pairwise comparison of Z-numbers. The method inherits the merits of the classical outranking method and considers the non-compensability of criteria. Therefore, it has remarkable potential to address practical decision-making problems involving Z-information.
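For readers unfamiliar with the construct: a Z-number is a pair Z = (A, B), where A is a fuzzy restriction on the value of a variable and B is a fuzzy estimate of the reliability of A. The sketch below encodes both components as triangular fuzzy numbers and compares two Z-numbers with a simple centroid-weighted score; this is a common simplification used only for illustration, not the outranking relations or dominance degree developed in the paper.

```python
# Illustrative sketch only: a Z-number Z = (A, B) pairs a fuzzy restriction A
# with a fuzzy reliability B. The scoring rule below (centroid of A weighted by
# the centroid of B) is a simplification, NOT the paper's outranking method.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float  # left end of the support
    b: float  # peak (membership = 1)
    c: float  # right end of the support

    def centroid(self) -> float:
        return (self.a + self.b + self.c) / 3.0

@dataclass
class ZNumber:
    A: TriangularFuzzyNumber  # restriction on the variable's value
    B: TriangularFuzzyNumber  # reliability of that restriction

    def score(self) -> float:
        # Crisp value: restriction centroid weighted by reliability centroid.
        return self.A.centroid() * self.B.centroid()

# Example: "about 7, with high confidence" vs. "about 8, with low confidence".
z1 = ZNumber(TriangularFuzzyNumber(6, 7, 8), TriangularFuzzyNumber(0.7, 0.8, 0.9))
z2 = ZNumber(TriangularFuzzyNumber(7, 8, 9), TriangularFuzzyNumber(0.2, 0.3, 0.4))
print(z1.score() > z2.score())  # True: the more reliable judgment wins here
```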

On how psychological time emerges from the execution of actions in the environment

Jennifer T. Coull, Sylvie Droit-Volet, Explicit Understanding of Duration Develops Implicitly through Action, Trends in Cognitive Sciences, Volume 22, Issue 10, 2018, Pages 923-937, DOI: 10.1016/j.tics.2018.07.011.

Time is relative. Changes in cognitive state or sensory context make it appear to speed up or slow down. Our perception of time is a rather fragile mental construct derived from the way events in the world are processed and integrated in memory. Nevertheless, the slippery concept of time can be structured by draping it over more concrete functional scaffolding. Converging evidence from developmental studies of children and neuroimaging in adults indicates that we can represent time in spatial or motor terms. We hypothesise that explicit processing of time is mediated by motor structures of the brain in adulthood because we implicitly learn about time through action during childhood. Future challenges will be to harness motor or spatial representations of time to optimise behaviour, potentially for therapeutic gain.

A very interesting analysis of how reinforcement learning depends on time, both for MDPs and for the psychological basis of RL in the human brain

Elijah A. Petter, Samuel J. Gershman, Warren H. Meck, Integrating Models of Interval Timing and Reinforcement Learning, Trends in Cognitive Sciences, Volume 22, Issue 10, 2018, Pages 911-922, DOI: 10.1016/j.tics.2018.08.004.

We present an integrated view of interval timing and reinforcement learning (RL) in the brain. The computational goal of RL is to maximize future rewards, and this depends crucially on a representation of time. Different RL systems in the brain process time in distinct ways. A model-based system learns ‘what happens when’, employing this internal model to generate action plans, while a model-free system learns to predict reward directly from a set of temporal basis functions. We describe how these systems are subserved by a computational division of labor between several brain regions, with a focus on the basal ganglia and the hippocampus, as well as how these regions are influenced by the neuromodulator dopamine.

Some quotes beyond the abstract:

The Markov assumption also makes explicit the requirements for temporal representation. All temporal dynamics must be captured by the state-transition function, which means that the state representation must encode the time-invariant structure of the environment.
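The "temporal basis functions" mentioned in the abstract can be illustrated with a small model-free simulation: elapsed time since a cue is represented by a bank of Gaussian features (in the spirit of "microstimulus" representations), and a TD(0) rule learns weights over them so that the predicted value ramps up toward the time of reward. This is a hedged sketch with made-up parameters, not the authors' model.

```python
# Sketch under stated assumptions: model-free value prediction where elapsed
# time since a cue is encoded by Gaussian temporal basis functions and the
# value is a learned linear combination of those features. Parameters are
# illustrative, not taken from the paper.
import numpy as np

N_BASIS = 10          # number of temporal basis functions
TRIAL_LEN = 50        # time steps per trial
REWARD_TIME = 40      # reward delivered at this step
ALPHA, GAMMA = 0.05, 0.98

centers = np.linspace(0, TRIAL_LEN, N_BASIS)   # where each basis peaks in time
width = TRIAL_LEN / N_BASIS

def features(t):
    """Gaussian temporal basis: activation of each 'microstimulus' at time t."""
    return np.exp(-0.5 * ((t - centers) / width) ** 2)

w = np.zeros(N_BASIS)                           # learned weights -> value estimate

for trial in range(200):
    for t in range(TRIAL_LEN - 1):
        r = 1.0 if t + 1 == REWARD_TIME else 0.0
        v_now, v_next = w @ features(t), w @ features(t + 1)
        delta = r + GAMMA * v_next - v_now      # TD error (dopamine-like signal)
        w += ALPHA * delta * features(t)        # move the estimate toward the target

# After learning, the value estimate ramps up as the reward time approaches:
print(np.round([w @ features(t) for t in (0, 20, 39)], 3))
```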

A summary of reward processing in psychophysiology

Dan Foti, Anna Weinberg, Reward and feedback processing: State of the field, best practices, and future directions, International Journal of Psychophysiology, Volume 132, Part B, 2018, Pages 171-174, DOI: 10.1016/j.ijpsycho.2018.08.006.

There is a long history of studies using event-related potentials (ERPs) to examine how the brain monitors performance. Many initial studies focused on error processing, both internal (i.e., neural activity elicited by error commission) (Falkenstein et al., 1991; Gehring et al., 1993) and external (i.e. neural activity elicited by feedback indicating an unfavorable outcome) (Gehring and Willoughby, 2002; Miltner et al., 1997). A frequent assumption in this line of research has been that correct performance and favorable outcomes served as reference conditions, and that any effects on ERP amplitudes specifically reflected error processing. This starting premise is at odds with the large human and animal neuroscience literatures on reward processing, which focus on the motivated pursuit of said favorable outcomes. In fact, reward and error processing are intrinsically linked, and both undergird effective task performance: the brain is highly sensitive to events that are better or worse than expected in order to continuously modulate behavior in line with task goals (Holroyd and Coles, 2002). In recent years, the ERP literature on feedback processing has broadened to explicitly incorporate reward processing, thereby enriching traditional studies focused on error processing. Specific developments in this regard include an expanded focus on multiple stages of reward processing (e.g., anticipation versus outcome), charting the development of reward processing across the lifespan, and the examination of aberrant sensitivity to reward in psychiatric illnesses. While these advances are highly promising, the general ERP literature on feedback processing continues to be fragmented with regard to terminology, analytic techniques, task designs, and interpretation of findings, ultimately limiting progress in the field.

The overarching goal of this special issue was to carefully examine the state of the art in our current understanding of feedback processing. The aim was to provide an integrative overview that covers multiple theoretical perspectives and methodological approaches. Consideration has been given in this collection of articles to both basic and applied research topics, and throughout the special issue there is an emphasis on providing specific recommendations for study design and the identification of important future research directions. In the remainder of this introductory editorial, we set the stage for these articles by highlighting complementary results and points of intersection across four themes: integrating perspectives on reward and error processing; experimental manipulations; psychometrics; and individual differences.

An interesting model of the basal ganglia that performs similarly to Q-learning when applied to a robot

Y. Zeng, G. Wang and B. Xu, A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 290-303, DOI: 10.1109/TCDS.2017.2649564.

Reinforcement learning brings flexibility and generality to machine learning, yet most approaches are driven by mathematical optimization and lack cognitive and neural evidence. In order to provide a foundation driven more by cognitive and neural mechanisms, and to validate its applicability to complex tasks, we develop a basal ganglia (BG) network centric reinforcement learning model. Compared to existing work on modeling the BG, this paper is unique from the following perspectives: 1) the orbitofrontal cortex (OFC) is taken into consideration. The OFC is critical in decision making because of its responsibility for reward representation and its role in controlling the learning process, yet most BG centric models do not include it; 2) to compensate for the inaccurate memory of numeric values, precise encoding is proposed to enable the working memory system to remember important values during the learning process. The method combines vector convolution with the idea of storage by digit bit and is efficient for accurate value storage; and 3) for information coding, the Hodgkin-Huxley model is used to obtain a more biologically plausible description of action potentials with their rich ionic dynamics. To validate the effectiveness of the proposed model, we apply it to the autonomous learning process of an unmanned aerial vehicle (UAV) in a 3-D environment. Experimental results show that our model gives the UAV the ability to explore the environment freely and learns at a speed comparable to the Q-learning algorithm, while its major advantage is its solid cognitive and neural basis.
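Since the model encodes information with Hodgkin-Huxley neurons, it may help to recall what that involves. Below is a minimal forward-Euler simulation of the classical single-compartment Hodgkin-Huxley equations with textbook parameters, showing the ionic currents that generate an action potential under a constant injected current; it is a generic sketch, not the paper's basal ganglia network.

```python
# Minimal sketch of the classical Hodgkin-Huxley point neuron (forward Euler),
# the spiking model the abstract refers to; textbook formulation and constants,
# not the paper's network. Units: mV, ms, uF/cm^2, mS/cm^2.
import numpy as np

C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate the membrane equation with a constant injected current."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # typical resting values
    trace = []
    for _ in np.arange(0.0, t_max, dt):
        i_na = G_NA * m**3 * h * (v - E_NA)      # sodium current
        i_k = G_K * n**4 * (v - E_K)             # potassium current
        i_l = G_L * (v - E_L)                    # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return np.array(trace)

spikes = simulate()
print("peak membrane potential (mV):", round(float(spikes.max()), 1))
```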

A new model of cognition

Howard, N. & Hussain, A., The Fundamental Code Unit of the Brain: Towards a New Model for Cognitive Geometry, Cognitive Computation (2018) 10: 426, DOI: 10.1007/s12559-017-9538-5.

This paper discusses the problems arising from the multidisciplinary nature of cognitive research and the need to conceptually unify insights from multiple fields into the phenomena that drive cognition. Specifically, the Fundamental Code Unit (FCU) is proposed as a means to better quantify the intelligent thought process at multiple levels of analysis, from the linguistic and behavioral output it produces to the chemical and physical processes within the brain that drive it. The proposed method efficiently models the most complex decision-making processes performed by the brain.

Adapting inverse reinforcement learning to include the risk aversion of the agent

Sumeet Singh, Jonathan Lacotte, Anirudha Majumdar, and Marco Pavone, Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods, The International Journal of Robotics Research, First Published May 22, 2018, DOI: 10.1177/0278364918772017.

The literature on inverse reinforcement learning (IRL) typically assumes that humans take actions to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive (RS) IRL to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk neutral to worst case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with 10 human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk averse to risk neutral in a data-efficient manner. Moreover, comparisons of the RS-IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
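The "coherent risk measures" the authors build on can be illustrated with Conditional Value-at-Risk (CVaR), the canonical example: the expected cost over the worst alpha-fraction of outcomes, which recovers the risk-neutral expectation at alpha = 1 and approaches the worst case as alpha goes to 0. The snippet below is an illustrative empirical estimator with made-up cost samples, not the paper's RS-IRL algorithm.

```python
# Sketch of Conditional Value-at-Risk (CVaR), a canonical coherent risk measure
# of the kind the RS-IRL framework builds on. This empirical estimator is an
# illustration only; the sample costs are invented.
import numpy as np

def cvar(costs, alpha):
    """Mean of the worst alpha-fraction of costs.

    alpha = 1.0 -> plain expectation (risk neutral)
    alpha -> 0  -> worst-case cost
    """
    costs = np.sort(np.asarray(costs))[::-1]          # worst outcomes first
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()

# Example: costs of a driving policy with a rare but catastrophic outcome (100).
rng = np.random.default_rng(0)
costs = np.concatenate([rng.normal(1.0, 0.2, 990), np.full(10, 100.0)])

for a in (1.0, 0.1, 0.01):
    print(f"CVaR_{a}: {cvar(costs, a):.2f}")
# The risk-neutral cost is about 2, but a risk-averse agent (small alpha) sees
# the catastrophic tail and evaluates the same policy as far more costly.
```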

On how sleep improves our problem-solving capabilities

Penelope A. Lewis, Günther Knoblich, Gina Poe, How Memory Replay in Sleep Boosts Creative Problem-Solving, Trends in Cognitive Sciences, Volume 22, Issue 6, 2018, Pages 491-503, DOI: 10.1016/j.tics.2018.03.009.

Creative thought relies on the reorganisation of existing knowledge. Sleep is known to be important for creative thinking, but there is a debate about which sleep stage is most relevant, and why. We address this issue by proposing that rapid eye movement sleep, or ‘REM’, and non-REM sleep facilitate creativity in different ways. Memory replay mechanisms in non-REM can abstract rules from corpuses of learned information, while replay in REM may promote novel associations. We propose that the iterative interleaving of REM and non-REM across a night boosts the formation of complex knowledge frameworks, and allows these frameworks to be restructured, thus facilitating creative thought. We outline a hypothetical computational model which will allow explicit testing of these hypotheses.