Category Archives: Cognitive Sciences

A survey on HTN planning

Ilche Georgievski, Marco Aiello, HTN planning: Overview, comparison, and beyond, Artificial Intelligence, Volume 222, 2015, Pages 124-156, DOI: 10.1016/j.artint.2015.02.002.

Hierarchies are one of the most common structures used to understand and conceptualise the world. Within the field of Artificial Intelligence (AI) planning, which deals with the automation of world-relevant problems, Hierarchical Task Network (HTN) planning is the branch that represents and handles hierarchies. In particular, the requirement for rich domain knowledge to characterise the world enables HTN planning to be very useful, and also to perform well. However, the history of almost 40 years obfuscates the current understanding of HTN planning in terms of accomplishments, planning models, similarities and differences among hierarchical planners, and its current and objective image. On top of these issues, the ability of hierarchical planning to truly cope with the requirements of real-world applications has been often questioned. As a remedy, we propose a framework-based approach where we first provide a basis for defining different formal models of hierarchical planning, and define two models that comprise a large portion of HTN planners. Second, we provide a set of concepts that helps in interpreting HTN planners from the aspect of their search space. Then, we analyse and compare the planners based on a variety of properties organised in five segments, namely domain authoring, expressiveness, competence, computation and applicability. Furthermore, we select Web service composition as a real-world and current application, and classify and compare the approaches that employ HTN planning to solve the problem of service composition. Finally, we conclude with our findings and present directions for future work. In summary, we provide a novel and comprehensive viewpoint on a core AI planning technique.
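
For readers new to the formalism, here is a minimal sketch of the core HTN idea, as a toy only: compound tasks are decomposed by methods into subtasks until only primitive operators remain. The travel domain, the task names and the naive depth-first strategy are invented for illustration and are not the formal models defined in the survey.

```python
# A minimal, illustrative HTN decomposition sketch (toy domain, not the survey's formal models).
PRIMITIVE = {"walk", "buy_ticket", "board_train"}      # primitive tasks (operators)

# Methods: each compound task maps to one or more ordered lists of subtasks.
METHODS = {
    "travel": [["walk"],                               # method 1: go on foot
               ["buy_ticket", "board_train"]],         # method 2: take the train
}

def decompose(task):
    """Recursively decompose a task into primitive tasks (first applicable method wins)."""
    if task in PRIMITIVE:
        return [task]
    for subtasks in METHODS.get(task, []):
        plan = []
        for sub in subtasks:
            sub_plan = decompose(sub)
            if sub_plan is None:
                break
            plan.extend(sub_plan)
        else:
            return plan
    return None                                        # no applicable method: the task is unsolvable

print(decompose("travel"))                             # ['walk']
```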

On how attention, modelled by Bayesian inference (for category learning), can structure the way reinforcement learning works

Angela Radulescu, Yael Niv, Ian Ballard, Holistic Reinforcement Learning: The Role of Structure and Attention, Trends in Cognitive Sciences, Volume 23, Issue 4, 2019, Pages 278-292, DOI: 10.1016/j.tics.2019.01.010.

Compact representations of the environment allow humans to behave efficiently in a complex world. Reinforcement learning models capture many behavioral and neural effects but do not explain recent findings showing that structure in the environment influences learning. In parallel, Bayesian cognitive models predict how humans learn structured knowledge but do not have a clear neurobiological implementation. We propose an integration of these two model classes in which structured knowledge learned via approximate Bayesian inference acts as a source of selective attention. In turn, selective attention biases reinforcement learning towards relevant dimensions of the environment. An understanding of structure learning will help to resolve the fundamental challenge in decision science: explaining why people make the decisions they do.
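
As a rough illustration of the proposed integration (my own toy sketch, not the authors' model): approximate Bayesian inference over which stimulus dimension is relevant yields attention weights, and those weights bias a feature-based reinforcement learning update. The Gaussian likelihood, the running means and all parameters are assumptions made for the example.

```python
import numpy as np

# Toy setting: each stimulus has 3 dimensions (e.g., colour, shape, texture) with 3 features;
# only one dimension actually predicts reward, and the learner must discover which.
rng = np.random.default_rng(0)
n_dims, n_feats, alpha = 3, 3, 0.3

w = np.zeros((n_dims, n_feats))      # RL feature values (used for prediction and choice)
means = np.zeros((n_dims, n_feats))  # per-dimension reward statistics for structure learning
counts = np.ones((n_dims, n_feats))
log_post = np.zeros(n_dims)          # log-belief that each dimension is the relevant one

def attention():
    """Approximate-Bayesian belief over dimensions, used as attention weights."""
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

def learn(stimulus, reward):
    a = attention()
    value = float(a @ w[np.arange(n_dims), stimulus])               # attention-weighted value
    w[np.arange(n_dims), stimulus] += alpha * a * (reward - value)  # RL biased toward attended dims

    # approximate Bayesian structure learning: score each dimension by how well its own
    # running feature means predict the reward, independently of the current attention
    pred = means[np.arange(n_dims), stimulus]
    log_post += -0.5 * (reward - pred) ** 2                         # Gaussian log-likelihood
    counts[np.arange(n_dims), stimulus] += 1
    means[np.arange(n_dims), stimulus] += (reward - pred) / counts[np.arange(n_dims), stimulus]

for _ in range(300):
    s = rng.integers(0, n_feats, size=n_dims)
    learn(s, float(s[0] == 0))          # dimension 0 is the truly relevant one

print(np.round(attention(), 3))         # the belief (attention) should concentrate on dimension 0
```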

On how the value of actions (in the RL sense) can be coded in the brain

Rory J. Bufacchi, Gian Domenico Iannetti, The Value of Actions, in Time and Space, Trends in Cognitive Sciences, Volume 23, Issue 4, 2019, Pages 270-271, DOI: 10.1016/j.tics.2019.01.011.

This value-output function can be a neural network, in which case the assumptions about the future are stored in the precise network configuration. The values that such a network outputs, or at least the intermediate steps necessary for calculating the final values, are the ‘action relevances’ we mention in our original paper (in the case of the brain, the inputs to such a value-calculating network should be state estimators, which likely include activity coming from the ventral stream, frontal areas, and limbic regions [3]). Our claim was thus that peripersonal space (PPS)-related measures reflect the instantaneous value of particular types of actions, and not that PPS measures explicitly reflect the value of any possible action at any given time (i.e., for any possible state): PPS measures reflect the instantaneous output of a function rather than the infinite array of values that the output of this function could take. We might have contributed to this misunderstanding when claiming that a field is ‘a quantity that has a magnitude for each point in space and time’. We should have clarified that the magnitude of a PPS measure can be seen as a specific sample from a field in the here and now rather than as a database containing all possible field values.
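
To make the distinction concrete, here is a toy value-output function for a single action; the features and parameters are invented, and the point is only that a PPS-like measure is one sample of such a function at the current state rather than a stored field of values.

```python
import numpy as np

# A toy "value-output function" for one action (e.g., withdrawing the hand): it maps the
# current state estimate (distance and approach speed of a stimulus) to an action value.
# All numbers are invented for illustration; the assumptions about the future are implicit
# in them, just as they would be in a trained network's weights.
def action_value(distance_m, speed_m_s):
    proximity = np.exp(-distance_m / 0.3)              # nearer stimuli -> more relevant
    threat = 1.0 / (1.0 + np.exp(-2.0 * speed_m_s))    # approaching stimuli -> more relevant
    return proximity * threat

# A PPS-like measure corresponds to one sample of this function at the current state
# (the "here and now"), not to a stored table of values for every point in space and time.
print(action_value(distance_m=0.2, speed_m_s=1.0))     # close, approaching: relatively high value
print(action_value(distance_m=2.0, speed_m_s=-0.5))    # far, receding: relatively low value
```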

Models of the brain based on artificial neural networks

James C.R. Whittington, Rafal Bogacz, Theories of Error Back-Propagation in the Brain, Trends in Cognitive Sciences, Volume 23, Issue 3, 2019, Pages 235-250, DOI: 10.1016/j.tics.2018.12.005.

This review article summarises recently proposed theories on how neural circuits in the brain could approximate the error back-propagation algorithm used by artificial neural networks. Computational models implementing these theories achieve learning as efficient as artificial neural networks, but they use simple synaptic plasticity rules based on activity of presynaptic and postsynaptic neurons. The models have similarities, such as including both feedforward and feedback connections, allowing information about error to propagate throughout the network. Furthermore, they incorporate experimental evidence on neural connectivity, responses, and plasticity. These models provide insights on how brain networks might be organised such that modification of synaptic weights on multiple levels of cortical hierarchy leads to improved performance on tasks.
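
One member of this family of approximations is feedback alignment, in which the error is carried back through fixed random feedback weights and every synaptic update depends only on presynaptic activity and a postsynaptic error signal. The sketch below is a toy demonstration on XOR under those assumptions, not a reimplementation of any specific model from the review.

```python
import numpy as np

# Toy feedback-alignment network learning XOR: forward weights W1, W2 are trained, while the
# error reaches the hidden layer through a fixed random feedback matrix B instead of the
# transpose of W2 (avoiding the "weight transport" problem of exact back-propagation).
rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 2, 16, 1, 0.05

W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
B = rng.normal(scale=0.5, size=(n_out, n_hid))        # fixed feedback pathway

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])                # XOR targets

for _ in range(10000):
    h = np.tanh(X @ W1)                               # feedforward pass
    y = h @ W2
    e_out = Y - y                                     # output error
    e_hid = (e_out @ B) * (1 - h ** 2)                # error carried back by feedback weights B
    W2 += lr * h.T @ e_out                            # local: presynaptic h times postsynaptic error
    W1 += lr * X.T @ e_hid                            # local: presynaptic x times postsynaptic error

print(np.round(h @ W2, 2))                            # outputs should be close to the XOR targets
```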

Weighting the relations between modalities and concepts to form further concepts (hierarchically)

T. Nakamura and T. Nagai, Ensemble-of-Concept Models for Unsupervised Formation of Multiple Categories, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1043-1057, DOI: 10.1109/TCDS.2017.2745502.

Recent studies have shown that robots can form concepts and understand the meanings of words through inference. The key idea underlying these studies is the “multimodal categorization” of a robot’s experiences. Despite the success in the formation of concepts by robots, a major drawback of previous studies is that they have mainly focused on object concepts. Human concepts, however, are not limited to object concepts; they also include other kinds, such as those connected to the tactile sense and color. In this paper, we propose a novel model called the ensemble-of-concept models (EoCMs) to form various kinds of concepts. In EoCMs, we introduce weights that represent the strength of the connections between modalities and concepts. By changing these weights, many concepts connected to particular modalities can be formed; however, these include concepts that are meaningless to humans. To communicate with humans, robots need to form concepts that are meaningful to us. Therefore, we utilize utterances taught by human users as the robot observes objects. The robot connects words included in the teaching utterances with the formed concepts and selects meaningful concepts to communicate with users. The experimental results show that the robot can form not only object concepts but also others, such as color-related concepts and haptic concepts. Furthermore, using word2vec, we compare the meanings of the words acquired by the robot by connecting them to the formed concepts.
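
A toy sketch of the weighting idea (my own simplification, not the EoCM model itself): each concept model applies its own weight vector over modalities, so the same objects can be categorised by their colour-like features, by their haptic features, or by all modalities at once. The synthetic data, the weights and the plain k-means routine are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 synthetic objects: 3 "colour" features and 2 "haptic" features (e.g., hard vs soft).
vision = np.repeat(np.eye(3), 7, axis=0)[:20] + 0.05 * rng.normal(size=(20, 3))
haptic = np.tile(np.eye(2), (10, 1)) + 0.05 * rng.normal(size=(20, 2))
X = np.hstack([vision, haptic])

def kmeans(X, k, iters=50):
    """Plain k-means; returns a cluster label per object."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels

# Each concept model is a weight vector over modalities plus a number of categories.
concept_models = {
    "colour-like": (np.array([1., 1., 1., 0., 0.]), 3),
    "haptic-like": (np.array([0., 0., 0., 1., 1.]), 2),
    "object-like": (np.ones(5), 6),
}
for name, (w, k) in concept_models.items():
    # the categories each concept model forms depend only on its weighted modalities
    print(name, kmeans(X * w, k))
```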

A definition of emergence and its application to symbol emergence in robots

R. L. Sturdivant and E. K. P. Chong, The Necessary and Sufficient Conditions for Emergence in Systems Applied to Symbol Emergence in Robots, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1035-1042, DOI: 10.1109/TCDS.2017.2731361.

A conceptual model for emergence with downward causation is developed. In addition, the necessary and sufficient conditions are identified for a phenomenon to be considered emergent in a complex system. The model is then applied to symbol emergence in robots. This paper is motivated by the usefulness of emergence for explaining a wide variety of phenomena in systems, and cognition in natural and artificial creatures. Downward causation is shown to be a critical requirement for potentially emergent phenomena to be considered actually emergent. Models of emergence with and without downward causation are described, as is how weak emergence can include downward causation. A process flow is developed for distinguishing emergence from nonemergence, based on the application of reductionism and the detection of downward causation. Examples show how to apply the necessary and sufficient conditions to separate actually emergent phenomena from nonemergent ones. Finally, this approach for detecting emergence is applied to complex projects and to symbol emergence in robots.
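
A hedged sketch of how such a process flow could be written down; the two boolean predicates are invented placeholders for the paper's actual criteria.

```python
# A toy rendering of the reductionism / downward-causation decision flow; the predicate
# names are invented for illustration and the paper's criteria are richer than two booleans.
def classify_phenomenon(reducible_to_components, downward_causation_detected):
    """Label a system-level phenomenon following the flow described in the abstract."""
    if not downward_causation_detected:
        # downward causation is the critical requirement: without it, the phenomenon
        # is at most apparently emergent
        return "nonemergent (or only apparently emergent)"
    if reducible_to_components:
        return "weakly emergent (reducible, but with downward causation)"
    return "strongly emergent (not reducible, with downward causation)"

print(classify_phenomenon(reducible_to_components=True, downward_causation_detected=True))
print(classify_phenomenon(reducible_to_components=True, downward_causation_detected=False))
```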

A cognitive architecture for self-development in robots that interact with humans, with a nice state-of-the-art review of robot cognitive architectures

C. Moulin-Frier et al., DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1005-1022, DOI: 10.1109/TCDS.2017.2754143.

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both human and robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
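
As a structural illustration only (the module names and interfaces are mine, not those defined for DAC-h3), an integrated architecture of this kind can be composed as a set of modules sharing an autobiographical memory, with a control loop that mixes reactive drives and goal-oriented plans:

```python
from dataclasses import dataclass, field

@dataclass
class AutobiographicalMemory:
    episodes: list = field(default_factory=list)
    def store(self, event): self.episodes.append(event)
    def narrate(self): return "; ".join(self.episodes) or "nothing yet"

class ReactiveEngine:
    def drive(self, percept):                     # low-level drives keep the robot proactive
        return "greet_human" if percept.get("human_present") else "explore"

class Planner:
    def plan(self, goal):                         # goal-oriented behaviour on top of the drives
        return [f"locate({goal})", f"reach_for({goal})", f"name({goal})"]

class CognitiveArchitecture:
    def __init__(self):
        self.memory = AutobiographicalMemory()
        self.reactive = ReactiveEngine()
        self.planner = Planner()
    def step(self, percept, goal=None):
        actions = self.planner.plan(goal) if goal else [self.reactive.drive(percept)]
        for a in actions:
            self.memory.store(a)                  # experience feeds the verbal self-narrative
        return actions

robot = CognitiveArchitecture()
print(robot.step({"human_present": True}))
print(robot.step({}, goal="red_ball"))
print("Narrative:", robot.memory.narrate())
```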

On the existence of prior knowledge, “pre-wired” in animal brains, that guides further learning

Elisabetta Versace, Antone Martinho-Truswell, Alex Kacelnik, Giorgio Vallortigara, Priors in Animal and Artificial Intelligence: Where Does Learning Begin?, Trends in Cognitive Sciences, Volume 22, Issue 11, 2018, Pages 963-965, DOI: 10.1016/j.tics.2018.07.005.

A major goal for the next generation of artificial intelligence (AI) is to build machines that are able to reason and cope with novel tasks, environments, and situations in a manner that approaches the abilities of animals. Evidence from precocial species suggests that driving learning through suitable priors can help to successfully face this challenge.

A new model of reinforcement learning, inspired by the human brain, that copes with continuous spaces through continuous rewards, with a short but nice state-of-the-art review of RL applied to large, continuous spaces

Feifei Zhao, Yi Zeng, Guixiang Wang, Jun Bai, Bo Xu, A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations, Cognitive Computation, Volume 10, Issue 2, pp 296–306, DOI: 10.1007/s12559-017-9511-3.

Decision making is a fundamental ability for intelligent agents (e.g., humanoid robots and unmanned aerial vehicles). During the decision-making process, agents can improve their strategy for interacting with a dynamic environment through reinforcement learning. Many state-of-the-art reinforcement learning models, such as Q-learning and Actor-Critic algorithms, deal with a relatively small number of state-action pairs and preferably with discrete states. In practice, however, in many scenarios the states are continuous and hard to discretize properly. Better autonomous decision-making methods are needed to handle these problems. Inspired by the mechanisms of decision making in the human brain, we propose a general computational model, named the prefrontal cortex-basal ganglia (PFC-BG) algorithm. The proposed model is inspired by the biological reinforcement learning pathway and its mechanisms from the following perspectives: (1) dopamine signals continuously update reward-relevant information for both the basal ganglia and working memory in the prefrontal cortex; (2) contextual reward information is maintained in working memory, which has a top-down biasing effect on reinforcement learning in the basal ganglia. The proposed model separates the continuous states into smaller distinguishable states and introduces a continuous reward function for each state to obtain reward information at different times. To verify the performance of our model, we apply it to several UAV decision-making experiments, such as avoiding obstacles and flying through windows and doors, and the experiments support the effectiveness of the model. Compared with traditional Q-learning and Actor-Critic algorithms, the proposed model is more biologically inspired, and it makes decisions more accurately and faster.
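
A simplified sketch of the ingredients listed in the abstract, not the published PFC-BG algorithm: continuous positions are mapped to discrete distinguishable states, reward is a continuous function of the distance to a goal, and a working-memory trace of recent reward context biases action selection top-down. The grid world, parameters and update rules are assumptions for the toy.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL = np.array([9.0, 9.0])
ACTIONS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
Q = {}                                    # discrete state -> action values ("basal ganglia")
wm_bias = np.zeros(len(ACTIONS))          # "prefrontal" working-memory trace of reward context
alpha, gamma, decay, eps = 0.2, 0.9, 0.9, 0.1

def discretize(pos, cell=1.0):
    """Map a continuous position to a smaller set of distinguishable states."""
    return tuple((pos // cell).astype(int))

def reward(pos):
    return -np.linalg.norm(GOAL - pos) / 10.0     # continuous, distance-based reward

pos = np.zeros(2)
for _ in range(2000):
    s = discretize(pos)
    q = Q.setdefault(s, np.zeros(len(ACTIONS)))
    biased = q + 0.5 * wm_bias                    # top-down biasing of action selection
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(biased))
    new_pos = np.clip(pos + ACTIONS[a], 0, 9)
    r = reward(new_pos)
    q_next = Q.setdefault(discretize(new_pos), np.zeros(len(ACTIONS)))
    q[a] += alpha * (r + gamma * q_next.max() - q[a])   # Q-learning update
    wm_bias = decay * wm_bias
    wm_bias[a] += (1 - decay) * r                 # recent reward context held in working memory
    pos = new_pos

print("final cell:", discretize(pos))             # the agent typically ends up near the goal cell (9, 9)
```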

Z-numbers: an extension of fuzzy variables for cognitive decision making, and the concept of cognitive information

Hong-gang Peng, Jian-qiang Wang, Outranking Decision-Making Method with Z-Number Cognitive Information, Cognitive Computation, Volume 10, Issue 5, pp 752–768, DOI: 10.1007/s12559-018-9556-y.

The Z-number provides an adequate and reliable description of cognitive information. The nature of Z-numbers is complex, however, and important issues in Z-number computation remain to be addressed. This study focuses on developing a computationally simple method with Z-numbers to address multicriteria decision-making (MCDM) problems. Processing Z-numbers requires the direct computation of fuzzy and probabilistic uncertainties. We used an effective method to analyze the Z-number construct. Next, we proposed some outranking relations of Z-numbers and defined the dominance degree of discrete Z-numbers. Also, after analyzing the characteristics of elimination and choice translating reality III (ELECTRE III) and qualitative flexible multiple criteria method (QUALIFLEX), we developed an improved outranking method. To demonstrate this method, we provided an illustrative example concerning job-satisfaction evaluation. We further verified the validity of the method by a criteria test and comparative analysis. The results demonstrate that the method can be successfully applied to real-world decision-making problems, and it can identify more reasonable outcomes than previous methods. This study overcomes the high computational complexity in existing Z-number computation frameworks by exploring the pairwise comparison of Z-numbers. The method inherits the merits of the classical outranking method and considers the non-compensability of criteria. Therefore, it has remarkable potential to address practical decision-making problems involving Z-information.
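
For intuition only, here is a heavily simplified pairwise comparison of Z-numbers; it is not the dominance degree defined in the paper. A Z-number is a pair (A, B), where A restricts the value of a variable and B describes the reliability of A; in this sketch both are triangular fuzzy numbers reduced to their centroids.

```python
import numpy as np

# A heavily simplified comparison of Z-numbers (illustrative only, not the paper's method):
# each Z = (A, B) is collapsed to a reliability-weighted score via triangular centroids.
def centroid(tri):
    a, b, c = tri
    return (a + b + c) / 3.0            # centroid of a triangular fuzzy number (a, b, c)

def z_score(A, B):
    return centroid(A) * centroid(B)    # crude defuzzification of Z = (A, B)

def dominance(z1, z2):
    """Degree to which z1 outranks z2, squashed to [0, 1]."""
    d = z_score(*z1) - z_score(*z2)
    return 1.0 / (1.0 + np.exp(-10.0 * d))

# Two job-satisfaction evaluations: "high, quite sure" vs "medium, very sure".
z_high_quite_sure = ((0.6, 0.8, 1.0), (0.5, 0.7, 0.9))
z_medium_very_sure = ((0.4, 0.5, 0.6), (0.8, 0.9, 1.0))
print(dominance(z_high_quite_sure, z_medium_very_sure))
print(dominance(z_medium_very_sure, z_high_quite_sure))
```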