Category Archives: Cognitive Sciences

On the need to integrate emotion into robotic architectures

Luiz Pessoa, Do Intelligent Robots Need Emotion?, Trends in Cognitive Sciences, Volume 21, Issue 11, 2017, Pages 817-819, DOI: 10.1016/j.tics.2017.06.010.

What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive–emotional integration should be a key design principle.

Cognitive informatics: simulation of cognition through direct simulation of neurons

Shivhare, R., Cherukuri, A.K. & Li, J., Establishment of Cognitive Relations Based on Cognitive Informatics, Cognitive Computation (2017) 9: 721, DOI: 10.1007/s12559-017-9498-9.

Cognitive informatics (CI) is an interdisciplinary study of modelling the brain in terms of knowledge and information processing. In CI, objects and attributes are treated as neurons connected to each other via synapses, and relations play the role of synapses. To represent new information, the brain generates new synapses, i.e. new relations between existing neurons; establishing cognitive relations is therefore essential for representing new information. We propose an algorithm that creates cognitive relations between pairs of objects and attributes using the relational attribute and relational object methods. Further, the cognitive relations between pairs of objects or attributes within a context can be checked against newly defined necessary and sufficient conditions, which evaluate whether the relational object or attribute is adequate to support a relation between a given pair. New information is thus represented without increasing the number of neurons in the brain; it is obtained solely by creating cognitive relations between pairs of objects and attributes. The results are useful for simulating intelligent behaviours of the brain such as learning and memorizing. Integrating the ideas of CI into cognitive relations is a promising and challenging research direction, which we discuss from the perspectives of cognitive mechanisms, cognitive computing and cognitive processes.
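As a loose illustration of the idea (not the paper's actual algorithm), one can picture a formal context as a binary object-attribute table and create a new "cognitive relation" between two objects only when their shared attributes pass some adequacy test; the Jaccard threshold below is a hypothetical stand-in for the paper's necessary-and-sufficient conditions:

```python
# Toy context: objects mapped to their attribute sets ("neurons").
context = {
    "sparrow": {"flies", "feathers", "small"},
    "eagle":   {"flies", "feathers", "predator"},
    "penguin": {"feathers", "swims"},
}

def cognitive_relation(a, b, threshold=0.3):
    """Create a relation (a 'synapse') between two objects when their
    shared attributes are substantial enough. The Jaccard criterion here
    is only a stand-in for the paper's necessary/sufficient conditions."""
    shared = context[a] & context[b]
    union = context[a] | context[b]
    return len(shared) / len(union) >= threshold

# New information is represented by adding relations, not neurons:
relations = [(a, b) for a in context for b in context
             if a < b and cognitive_relation(a, b)]
```

Note how the number of "neurons" (objects and attributes) never grows; only the relational structure over them does, which is the point the abstract makes.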

Empirical evidence of the negative correlation between cognitive workload and attentional reserve in humans

Kyle J. Jaquess, Rodolphe J. Gentili, Li-Chuan Lo, Hyuk Oh, Jing Zhang, Jeremy C. Rietschel, Matthew W. Miller, Ying Ying Tan, Bradley D. Hatfield, Empirical evidence for the relationship between cognitive workload and attentional reserve, International Journal of Psychophysiology, Volume 121, 2017, Pages 46-55, DOI: 10.1016/j.ijpsycho.2017.09.007.

While the concepts of cognitive workload and attentional reserve have been thought to have an inverse relationship for some time, such a relationship has never been empirically tested. This was the purpose of the present study. Aspects of the electroencephalogram were used to assess both cognitive workload and attentional reserve. Specifically, spectral measures of cortical activation were used to assess cognitive workload, while amplitudes of the event-related potential from the presentation of unattended “novel” sounds were used to assess attentional reserve. The relationship between these two families of measures was assessed using canonical correlation. Twenty-seven participants performed a flight simulator task under three levels of challenge. Verification of manipulation was performed using self-report measures of task demand, objective task performance, and heart rate variability using electrocardiography. Results revealed a strong, negative relationship between the spectral measures of cortical activation, believed to be representative of cognitive workload, and ERP amplitudes, believed to be representative of attentional reserve. This finding provides support for the theoretical and intuitive notion that cognitive workload and attentional reserve are inversely related. The practical implications of this result include improved state classification using advanced machine learning techniques, enhanced personnel selection/recruitment/placement, and augmented learning/training.
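The statistical workhorse here is canonical correlation, which finds the linear combinations of one family of measures (spectral power) and another (ERP amplitudes) that correlate most strongly. A minimal sketch, with synthetic data standing in for the EEG measures (the variable names are illustrative, not the paper's):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two column sets X (n x p) and Y (n x q):
    whiten each block, then take singular values of the cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy = X.T @ X / (n - 1), Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):  # S^{-1/2} via eigendecomposition (S symmetric PD)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

# Synthetic stand-in: a shared latent "state" drives both measure families,
# with opposite signs (high workload <-> low reserve).
rng = np.random.default_rng(0)
z = rng.normal(size=500)
workload = np.column_stack([z, 0.5 * z]) + 0.3 * rng.normal(size=(500, 2))
reserve = np.column_stack([-z, -0.8 * z]) + 0.3 * rng.normal(size=(500, 2))
rho = canonical_correlations(workload, reserve)   # rho[0] is near 1
```

Canonical correlations are reported as magnitudes (singular values), so the sign of the workload-reserve relationship comes from inspecting the canonical weights, not from rho itself.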

On how humans run simulations for reasoning about physics

James R. Kubricht, Keith J. Holyoak, Hongjing Lu, Intuitive Physics: Current Research and Controversies, Trends in Cognitive Sciences, Volume 21, Issue 10, 2017, Pages 749-759, DOI: 10.1016/j.tics.2017.06.002.

Early research in the field of intuitive physics provided extensive evidence that humans succumb to common misconceptions and biases when predicting, judging, and explaining activity in the physical world. Recent work has demonstrated that, across a diverse range of situations, some biases can be explained by the application of normative physical principles to noisy perceptual inputs. However, it remains unclear how knowledge of physical principles is learned, represented, and applied to novel situations. In this review we discuss theoretical advances from heuristic models to knowledge-based, probabilistic simulation models, as well as recent deep-learning models. We also consider how recent work may be reconciled with earlier findings that favored heuristic models.
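The "noisy Newton" claim — that correct physical principles applied to noisy perceptual inputs can produce systematic biases — has a simple worked illustration. Suppose an observer applies the exact projectile-range formula but perceives the launch speed with unbiased noise; because range is convex in speed, the average judgment overshoots (this toy setup is mine, not an experiment from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
g, theta = 9.8, np.pi / 4

def landing_range(v):
    """Normative projectile range for launch speed v at 45 degrees."""
    return v**2 * np.sin(2 * theta) / g

true_v = 10.0
# Unbiased perceptual noise on the observed launch speed
noisy_v = true_v + rng.normal(0.0, 2.0, size=100_000)
predictions = landing_range(noisy_v)
# E[v^2] = v^2 + sigma^2, so the mean prediction exceeds the true range
# even though both the physics and the noise are "correct"/unbiased.
bias = predictions.mean() - landing_range(true_v)
```

This is the general shape of the argument: an apparent misconception can fall out of a normative model plus input noise, with no faulty physical knowledge required.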

On how human motivation is rooted in the ability to control outcomes

Justin M. Moscarello, Catherine A. Hartley, Agency and the Calibration of Motivated Behavior, Trends in Cognitive Sciences, Volume 21, Issue 10, 2017, Pages 725-735, DOI: 10.1016/j.tics.2017.06.008.

The controllability of positive or negative environmental events has long been recognized as a critical factor determining their impact on an organism. In studies across species, controllable and uncontrollable reinforcement have been found to yield divergent effects on subsequent behavior. Here we present a model of the organizing influence of control, or a lack thereof, on the behavioral repertoire. We propose that individuals derive a generalizable estimate of agency from controllable and uncontrollable outcomes, which serves to calibrate their behavioral strategies in a manner that is most likely to be adaptive given their prior experience.
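A hedged toy rendering of the proposal (my simplification, not the authors' formal model): treat "agency" as the probability that one's actions control outcomes, maintain a Beta estimate of it from controllable versus uncontrollable experiences, and let that estimate gate the behavioral strategy:

```python
# Beta(alpha, beta) estimate of agency, starting from a uniform prior.
alpha, beta = 1, 1

# A history of events: True = outcome was controllable, False = it was not.
for controllable in [True, True, False, True, True]:
    if controllable:
        alpha += 1
    else:
        beta += 1

agency = alpha / (alpha + beta)          # posterior mean estimate of agency
# Calibrate the behavioral repertoire on the generalized estimate:
strategy = "proactive" if agency > 0.5 else "passive"
```

The key feature the abstract emphasizes is that the estimate generalizes: a history of uncontrollable outcomes in one setting would lower `agency` and bias the agent toward passive strategies elsewhere.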

Evidence for the dichotomy between reactive and predictive control in the brain

Mattie Tops, Markus Quirin, Maarten A.S. Boksem, Sander L. Koole, Large-scale neural networks and the lateralization of motivation and emotion, International Journal of Psychophysiology, Volume 119, 2017, Pages 41-49, DOI: 10.1016/j.ijpsycho.2017.02.004.

Several lines of research in animals and humans converge on the distinction between two basic large-scale brain networks of self-regulation, giving rise to predictive and reactive control systems (PARCS). Predictive (internally-driven) and reactive (externally-guided) control are supported by dorsal versus ventral corticolimbic systems, respectively. Based on extant empirical evidence, we demonstrate how the PARCS produce frontal laterality effects in emotion and motivation. In addition, we explain how this framework gives rise to individual differences in appraising and coping with challenges. PARCS theory integrates separate fields of research, such as research on the motivational correlates of affect, EEG frontal alpha power asymmetry and implicit affective priming effects on cardiovascular indicators of effort during cognitive task performance. Across these different paradigms, converging evidence points to a qualitative motivational division between, on the one hand, angry and happy emotions, and, on the other hand, sad and fearful emotions. PARCS suggests that those two pairs of emotions are associated with predictive and reactive control, respectively. PARCS theory may thus generate important new insights on the motivational and emotional dynamics that drive autonomic and homeostatic control processes.

On how the physics simplifications that computer games make for real-time execution can explain the simplified physics infants use to make sense of the world

Tomer D. Ullman, Elizabeth Spelke, Peter Battaglia, Joshua B. Tenenbaum, Mind Games: Game Engines as an Architecture for Intuitive Physics, Trends in Cognitive Sciences, Volume 21, Issue 9, 2017, Pages 649-665, DOI: 10.1016/j.tics.2017.05.012.

We explore the hypothesis that many intuitive physical inferences are based on a mental physics engine that is analogous in many ways to the machine physics engines used in building interactive video games. We describe the key features of game physics engines and their parallels in human mental representation, focusing especially on the intuitive physics of young infants where the hypothesis helps to unify many classic and otherwise puzzling phenomena, and may provide the basis for a computational account of how the physical knowledge of infants develops. This hypothesis also explains several ‘physics illusions’, and helps to inform the development of artificial intelligence (AI) systems with more human-like common sense.
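The flavor of a game physics engine — fixed time steps, cheap integration, approximate collision handling — is easy to convey in a few lines, and running it from noisy initial conditions gives the probabilistic "mental simulation" the authors have in mind (this sketch is mine; the paper's examples use richer engines):

```python
import numpy as np

def simulate_ball(y0, vy0, dt=0.01, restitution=0.8, g=9.8, steps=300):
    """Game-engine-style stepper: fixed dt, Euler integration, and a crude
    clamp-and-bounce collision with the floor at y = 0."""
    y, vy = y0, vy0
    for _ in range(steps):
        vy -= g * dt
        y += vy * dt
        if y < 0:
            y = 0.0
            vy = -vy * restitution
    return y

# A probabilistic "mental simulation": run the engine from noisy initial
# conditions and read off a distribution over outcomes, rather than a
# single deterministic prediction.
rng = np.random.default_rng(0)
heights = [simulate_ball(2.0 + rng.normal(0, 0.1), 0.0) for _ in range(200)]
```

The point of the analogy is exactly this trade-off: the stepper is physically approximate but fast enough to run many times, which is what supports graded, uncertainty-sensitive judgments.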

An interesting soft-partition method based on hierarchical graphs (trees, actually) applied to topic detection in documents

Peixian Chen, Nevin L. Zhang, Tengfei Liu, Leonard K.M. Poon, Zhourong Chen, Farhan Khawar, Latent tree models for hierarchical topic detection, Artificial Intelligence, Volume 250, 2017, Pages 105-124, DOI: 10.1016/j.artint.2017.06.004.

We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables that represent word co-occurrence patterns or co-occurrences of such patterns. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. In comparison with LDA-based methods, a key advantage of the new method is that it represents co-occurrence patterns explicitly using model structures. Extensive empirical results show that the new method significantly outperforms the LDA-based methods in terms of model quality and meaningfulness of topics and topic hierarchies.
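The "soft partition" idea can be illustrated with a single latent variable from such a tree: given word-presence probabilities under each state of a binary latent topic, the posterior P(Z=1 | document) assigns each document a graded membership rather than a hard label. The parameters below are hand-picked for illustration; a real HLTM learns both the tree structure and the parameters:

```python
import numpy as np

# One binary latent variable Z with word-presence probabilities per state.
# Row 0: P(word present | Z=1), a "sports" co-occurrence pattern.
# Row 1: P(word present | Z=0), a "politics" co-occurrence pattern.
words = ["ball", "goal", "court", "election", "senate"]
p_word = np.array([
    [0.70, 0.60, 0.50, 0.05, 0.02],
    [0.05, 0.10, 0.05, 0.60, 0.50],
])
p_z1 = 0.5  # prior P(Z=1)

def soft_assignment(doc):
    """P(Z=1 | doc) for a binary presence/absence vector over `words`."""
    doc = np.asarray(doc)
    l1 = np.prod(p_word[0]**doc * (1 - p_word[0])**(1 - doc))
    l0 = np.prod(p_word[1]**doc * (1 - p_word[1])**(1 - doc))
    return p_z1 * l1 / (p_z1 * l1 + (1 - p_z1) * l0)

sports_membership = soft_assignment([1, 1, 0, 0, 0])    # close to 1
politics_membership = soft_assignment([0, 0, 0, 1, 1])  # close to 0
```

In the full model every latent variable in the tree induces such a soft partition, which is why one document can belong, to different degrees, to topics at several levels of generality.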

Constrained MDPs with multiple criteria in the cost to optimize – a hierarchical approach

Seyedshams Feyzabadi, Stefano Carpin, Planning using hierarchical constrained Markov decision processes, Autonomous Robots, Volume 41, Issue 8, pp 1589–1607, DOI: 10.1007/s10514-017-9630-4.

Constrained Markov decision processes offer a principled method to determine policies for sequential stochastic decision problems where multiple costs are concurrently considered. Although they could be very valuable in numerous robotic applications, to date their use has been quite limited. Among the reasons for their limited adoption is their computational complexity, since policy computation requires the solution of constrained linear programs with an extremely large number of variables. To overcome this limitation, we propose a hierarchical method to solve large problem instances. States are clustered into macro states and the parameters defining the dynamic behavior and the costs of the clustered model are determined using a Monte Carlo approach. We show that the algorithm we propose to create clustered states maintains valuable properties of the original model, like the existence of a solution for the problem. Our algorithm is validated in various planning problems in simulation and on a mobile robot platform, and we experimentally show that the clustered approach significantly outperforms the non-hierarchical solution while experiencing only moderate losses in terms of objective functions.
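The Monte Carlo step of the hierarchy is straightforward to sketch: cluster the primitive states into macro-states, run rollouts in the original model, and count transitions between clusters to estimate the macro-level dynamics. The toy chain and hand-made clustering below are illustrative only; the paper builds the clusters algorithmically and also estimates the clustered costs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D chain of 12 states; the agent moves right with p = 0.8, else left.
n_states = 12
def step(s):
    return min(s + 1, n_states - 1) if rng.random() < 0.8 else max(s - 1, 0)

# Hand-made clustering into 3 macro-states of 4 primitive states each.
macro = lambda s: s // 4

# Monte Carlo estimation of the macro-level transition matrix: run rollouts
# and count only the moves that cross a cluster boundary.
counts = np.zeros((3, 3))
for _ in range(2000):
    s = rng.integers(0, n_states)
    for _ in range(30):
        s2 = step(s)
        if macro(s2) != macro(s):
            counts[macro(s), macro(s2)] += 1
        s = s2
P_macro = counts / counts.sum(axis=1, keepdims=True)
```

The payoff is that the constrained linear program is then solved over 3 macro-states instead of 12 (or, in realistic instances, over a small fraction of an enormous state space), which is where the computational savings come from.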

Reinterpretation of evolutionary processes as algorithms for Bayesian inference

Jordan W. Suchow, David D. Bourgin, Thomas L. Griffiths, Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference, Trends in Cognitive Sciences, Volume 21, Issue 7, July 2017, Pages 522-530, ISSN 1364-6613, DOI: 10.1016/j.tics.2017.04.005.

Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity.
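The core equivalence is easy to verify numerically: one generation of replicator (selection) dynamics, with fitness set to the likelihood of the observed data, produces exactly the Bayesian posterior. A minimal check (the numbers are arbitrary):

```python
import numpy as np

# Prior as a population distribution over "hypotheses" (types).
prior = np.array([0.5, 0.3, 0.2])
# Fitness of each type = likelihood of the observed data under that hypothesis.
likelihood = np.array([0.1, 0.6, 0.3])

# One generation of replicator dynamics: each type grows in proportion
# to its fitness, then the population is renormalized.
population = prior * likelihood
population /= population.sum()

# Bayes' rule gives the identical update.
posterior = prior * likelihood / (prior * likelihood).sum()
```

Under this reading, selection is conditioning: the population after selection is the posterior, and repeated generations correspond to sequential Bayesian updates on accumulating evidence.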