Category Archives: Psycho-physiological Bases Of Engineering

Empirical evidence of the negative correlation between cognitive workload and attentional reserve in humans

Kyle J. Jaquess, Rodolphe J. Gentili, Li-Chuan Lo, Hyuk Oh, Jing Zhang, Jeremy C. Rietschel, Matthew W. Miller, Ying Ying Tan, Bradley D. Hatfield, Empirical evidence for the relationship between cognitive workload and attentional reserve, International Journal of Psychophysiology, Volume 121, 2017, Pages 46-55, DOI: 10.1016/j.ijpsycho.2017.09.007.

While the concepts of cognitive workload and attentional reserve have been thought to have an inverse relationship for some time, such a relationship has never been empirically tested. This was the purpose of the present study. Aspects of the electroencephalogram were used to assess both cognitive workload and attentional reserve. Specifically, spectral measures of cortical activation were used to assess cognitive workload, while amplitudes of the event-related potential (ERP) from the presentation of unattended “novel” sounds were used to assess attentional reserve. The relationship between these two families of measures was assessed using canonical correlation. Twenty-seven participants performed a flight simulator task under three levels of challenge. Verification of manipulation was performed using self-report measures of task demand, objective task performance, and heart rate variability using electrocardiography. Results revealed a strong, negative relationship between the spectral measures of cortical activation, believed to be representative of cognitive workload, and ERP amplitudes, believed to be representative of attentional reserve. This finding provides support for the theoretical and intuitive notion that cognitive workload and attentional reserve are inversely related. The practical implications of this result include improved state classification using advanced machine learning techniques, enhanced personnel selection/recruitment/placement, and augmented learning/training.
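The canonical correlation analysis used in the study relates two families of measures through their most correlated linear combinations. A minimal sketch, using synthetic data with a shared latent factor standing in for "workload" (not the study's actual EEG measures), and a QR-based computation of the first canonical correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: one latent factor drives both measurement families,
# with opposite sign (mimicking the reported negative relationship).
latent = rng.standard_normal(n)
X = np.column_stack([latent + 0.5 * rng.standard_normal(n) for _ in range(3)])   # e.g. spectral measures
Y = np.column_stack([-latent + 0.5 * rng.standard_normal(n) for _ in range(2)])  # e.g. ERP amplitudes

def first_canonical_correlation(X, Y):
    """First canonical correlation between two centered data blocks.

    After a QR decomposition of each centered block, the canonical
    correlations are the singular values of Qx.T @ Qy. Note these are
    nonnegative by convention; the sign of the relationship lives in
    the canonical weights, not in the correlation itself.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

r = first_canonical_correlation(X, Y)
print(round(r, 3))  # a strong shared factor yields a large canonical correlation
```

With a strong shared latent factor, the first canonical correlation is close to 1 even though each individual column is noisy, which is why CCA suits relating whole measure families rather than single channels.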

On how humans run simulations for reasoning about physics

James R. Kubricht, Keith J. Holyoak, Hongjing Lu, Intuitive Physics: Current Research and Controversies, Trends in Cognitive Sciences, Volume 21, Issue 10, 2017, Pages 749-759, DOI: 10.1016/j.tics.2017.06.002.

Early research in the field of intuitive physics provided extensive evidence that humans succumb to common misconceptions and biases when predicting, judging, and explaining activity in the physical world. Recent work has demonstrated that, across a diverse range of situations, some biases can be explained by the application of normative physical principles to noisy perceptual inputs. However, it remains unclear how knowledge of physical principles is learned, represented, and applied to novel situations. In this review we discuss theoretical advances from heuristic models to knowledge-based, probabilistic simulation models, as well as recent deep-learning models. We also consider how recent work may be reconciled with earlier findings that favored heuristic models.

On the roots of human motivation in the ability to control outcomes

Justin M. Moscarello, Catherine A. Hartley, Agency and the Calibration of Motivated Behavior, Trends in Cognitive Sciences, Volume 21, Issue 10, 2017, Pages 725-735, DOI: 10.1016/j.tics.2017.06.008.

The controllability of positive or negative environmental events has long been recognized as a critical factor determining their impact on an organism. In studies across species, controllable and uncontrollable reinforcement have been found to yield divergent effects on subsequent behavior. Here we present a model of the organizing influence of control, or a lack thereof, on the behavioral repertoire. We propose that individuals derive a generalizable estimate of agency from controllable and uncontrollable outcomes, which serves to calibrate their behavioral strategies in a manner that is most likely to be adaptive given their prior experience.

Evidence of the reactive/predictive control dichotomy in the brain

Mattie Tops, Markus Quirin, Maarten A.S. Boksem, Sander L. Koole, Large-scale neural networks and the lateralization of motivation and emotion, International Journal of Psychophysiology, Volume 119, 2017, Pages 41-49, DOI: 10.1016/j.ijpsycho.2017.02.004.

Several lines of research in animals and humans converge on the distinction between two basic large-scale brain networks of self-regulation, giving rise to predictive and reactive control systems (PARCS). Predictive (internally-driven) and reactive (externally-guided) control are supported by dorsal versus ventral corticolimbic systems, respectively. Based on extant empirical evidence, we demonstrate how the PARCS produce frontal laterality effects in emotion and motivation. In addition, we explain how this framework gives rise to individual differences in appraising and coping with challenges. PARCS theory integrates separate fields of research, such as research on the motivational correlates of affect, EEG frontal alpha power asymmetry and implicit affective priming effects on cardiovascular indicators of effort during cognitive task performance. Across these different paradigms, converging evidence points to a qualitative motivational division between, on the one hand, angry and happy emotions, and, on the other hand, sad and fearful emotions. PARCS suggests that those two pairs of emotions are associated with predictive and reactive control, respectively. PARCS theory may thus generate important new insights on the motivational and emotional dynamics that drive autonomic and homeostatic control processes.

On how the physics simplifications made in computer games for real-time execution can explain the simplified physics infants use when understanding the world

Tomer D. Ullman, Elizabeth Spelke, Peter Battaglia, Joshua B. Tenenbaum, Mind Games: Game Engines as an Architecture for Intuitive Physics, Trends in Cognitive Sciences, Volume 21, Issue 9, 2017, Pages 649-665, DOI: 10.1016/j.tics.2017.05.012.

We explore the hypothesis that many intuitive physical inferences are based on a mental physics engine that is analogous in many ways to the machine physics engines used in building interactive video games. We describe the key features of game physics engines and their parallels in human mental representation, focusing especially on the intuitive physics of young infants where the hypothesis helps to unify many classic and otherwise puzzling phenomena, and may provide the basis for a computational account of how the physical knowledge of infants develops. This hypothesis also explains several ‘physics illusions’, and helps to inform the development of artificial intelligence (AI) systems with more human-like common sense.
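The game physics engines the authors draw on favor cheap, approximate, step-by-step simulation over exact closed-form physics. A toy sketch of that style of computation (all constants hypothetical), using semi-implicit Euler integration and a crude floor collision of the kind real engines use per frame:

```python
# Toy sketch of a game-style physics step: discrete-time approximation
# rather than exact analytical physics (all values hypothetical).
GRAVITY = -9.8        # m/s^2
DT = 1.0 / 60.0       # one frame at 60 Hz
RESTITUTION = 0.8     # fraction of speed kept after a bounce

def step(y, vy):
    """Advance a bouncing ball by one frame (semi-implicit Euler)."""
    vy += GRAVITY * DT           # update velocity first
    y += vy * DT                 # then position, using the new velocity
    if y < 0.0:                  # crude collision with a floor at y = 0
        y = 0.0
        vy = -vy * RESTITUTION   # bounce with energy loss
    return y, vy

y, vy = 2.0, 0.0                 # drop a ball from 2 m
for _ in range(600):             # simulate 10 seconds of frames
    y, vy = step(y, vy)
print(round(y, 3))               # the ball has essentially settled on the floor
```

The clamp-and-reflect collision is physically wrong in detail yet perceptually convincing, which is exactly the trade-off the mental-physics-engine hypothesis appeals to.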

Reinterpretation of evolutionary processes as algorithms for Bayesian inference

Jordan W. Suchow, David D. Bourgin, Thomas L. Griffiths, Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference, Trends in Cognitive Sciences, Volume 21, Issue 7, July 2017, Pages 522-530, ISSN 1364-6613, DOI: 10.1016/j.tics.2017.04.005.

Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity.
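The equivalence mentioned above can be made concrete in a few lines: a discrete replicator update, with each variant's fitness playing the role of a likelihood, is term-for-term Bayes' rule (the numbers below are illustrative, not from the paper):

```python
import numpy as np

prior = np.array([0.5, 0.3, 0.2])       # variant frequencies / prior P(h)
likelihood = np.array([0.1, 0.6, 0.3])  # variant fitness / likelihood P(data | h)

# Discrete replicator dynamics: new frequency of a variant is proportional
# to its old frequency times its fitness, renormalized over the population.
replicator = prior * likelihood / np.sum(prior * likelihood)

# Bayes' rule: posterior is proportional to prior times likelihood,
# renormalized over hypotheses -- the identical computation.
posterior = prior * likelihood / np.sum(prior * likelihood)

assert np.allclose(replicator, posterior)
print(replicator.round(3))
```

Selection plays the role of conditioning on data; mutation and drift correspond, on this reading, to other stochastic components of inference and memory.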

Evidence that the human brain has quantifying properties (i.e., the ability to discriminate between sets of different sizes) as a result of evolution, but that numerical cognition is a product of culture

Rafael E. Núñez, Is There Really an Evolved Capacity for Number?, Trends in Cognitive Sciences, Volume 21, Issue 6, June 2017, Pages 409-424, ISSN 1364-6613, DOI: 10.1016/j.tics.2017.03.005.

Humans and other species have biologically endowed abilities for discriminating quantities. A widely accepted view sees such abilities as an evolved capacity specific for number and arithmetic. This view, however, is based on an implicit teleological rationale, builds on inaccurate conceptions of biological evolution, downplays human data from non-industrialized cultures, overinterprets results from trained animals, and is enabled by loose terminology that facilitates teleological argumentation. A distinction between quantical (e.g., quantity discrimination) and numerical (exact, symbolic) cognition is needed: quantical cognition provides biologically evolved preconditions for numerical cognition but it does not scale up to number and arithmetic, which require cultural mediation. The argument has implications for debates about the origins of other special capacities – geometry, music, art, and language.

A summary of the CLARION cognitive architecture

Ron Sun, Anatomy of the Mind: a Quick Overview, Cognitive Computation, February 2017, Volume 9, Issue 1, pp 1–4, DOI: 10.1007/s12559-016-9444-2.

The recently published book, “Anatomy of the Mind,” explains psychological (cognitive) mechanisms, processes, and functionalities through a comprehensive computational theory of the human mind—that is, a cognitive architecture. The goal of the work has been to develop a unified framework and then to develop process-based mechanistic understanding of psychological phenomena within the unified framework. In this article, I will provide a quick overview of the work.

An approach to explaining gaze: gaze is directed to task- and goal-relevant scene regions

John M. Henderson, Gaze Control as Prediction, Trends in Cognitive Sciences, Volume 21, Issue 1, January 2017, Pages 15-23, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.11.003.

The recent study of overt attention during complex scene viewing has emphasized explaining gaze behavior in terms of image properties and image salience independently of the viewer’s intentions and understanding of the scene. In this Opinion article, I outline an alternative approach proposing that gaze control in natural scenes can be characterized as the result of knowledge-driven prediction. This view provides a theoretical context for integrating and unifying many of the disparate phenomena observed in active scene viewing, offers the potential for integrating the behavioral study of gaze with the neurobiological study of eye movements, and provides a theoretical framework for bridging gaze control and other related areas of perception and cognition at both computational and neurobiological levels of analysis.

Demonstration that a theory of cortical function (“predictive coding”) can perform Bayesian inference in some tasks, with a useful survey of related work on the physiological foundations of how neurons represent probability distributions and perform Bayesian inference

M. W. Spratling, A neural implementation of Bayesian inference based on predictive coding, Connection Science, Volume 28, 2016 – Issue 4, DOI: 10.1080/09540091.2016.1243655.

Predictive coding (PC) is a leading theory of cortical function that has previously been shown to explain a great deal of neurophysiological and psychophysical data. Here it is shown that PC can perform almost exact Bayesian inference when applied to computing with population codes. It is demonstrated that the proposed algorithm, based on PC, can: decode probability distributions encoded as noisy population codes; combine priors with likelihoods to calculate posteriors; perform cue integration and cue segregation; perform function approximation; be extended to perform hierarchical inference; simultaneously represent and reason about multiple stimuli; and perform inference with multi-modal and non-Gaussian probability distributions. PC thus provides a neural network-based method for performing probabilistic computation and provides a simple, yet comprehensive, theory of how the cerebral cortex performs Bayesian inference.
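One of the listed computations, cue integration, has a simple closed form in the Gaussian case: the optimal combined estimate is a precision-weighted average of the cues. A minimal sketch of that reference computation (the cue values are hypothetical, and this is the normative target, not the paper's PC network itself):

```python
def integrate_cues(mu1, var1, mu2, var2):
    """Optimally combine two Gaussian cues.

    The posterior mean is a precision-weighted average of the cue means,
    and the posterior variance is the inverse of the summed precisions
    (so the combined estimate is more reliable than either cue alone).
    """
    p1, p2 = 1.0 / var1, 1.0 / var2   # precisions
    mu = (p1 * mu1 + p2 * mu2) / (p1 + p2)
    var = 1.0 / (p1 + p2)
    return mu, var

# e.g. a visual cue at 10 deg (variance 4) and an auditory cue at 14 deg (variance 2):
mu, var = integrate_cues(10.0, 4.0, 14.0, 2.0)
print(mu, var)  # estimate is pulled toward the more reliable (auditory) cue
```

A population-code implementation such as the paper's must recover this same precision-weighted answer from noisy neural activity, which is what makes the demonstration of near-exact inference notable.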