Category Archives: Cognitive Sciences

Weighting relations between concepts to form further concepts (hierarchically)

T. Nakamura and T. Nagai, Ensemble-of-Concept Models for Unsupervised Formation of Multiple Categories, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1043-1057, DOI: 10.1109/TCDS.2017.2745502.

Recent studies have shown that robots can form concepts and understand the meanings of words through inference. The key idea underlying these studies is the “multimodal categorization” of a robot’s experiences. Despite this success, a major drawback of previous studies is that they have focused mainly on object concepts. Human concepts are obviously not limited to object concepts; they also include other kinds, such as those connected to the tactile sense and to color. In this paper, we propose a novel model, called ensemble-of-concept models (EoCMs), to form various kinds of concepts. In EoCMs, we introduce weights that represent the strength of the connection between modalities and concepts. By changing these weights, many concepts tied to particular modalities can be formed; however, these include concepts that are meaningless to humans. To communicate with humans, robots need to form concepts that are meaningful to us. We therefore utilize utterances provided by human users as the robot observes objects: the robot connects the words in these teaching utterances with the formed concepts and selects the meaningful ones for communicating with users. The experimental results show that the robot can form not only object concepts but also others, such as color-related and haptic concepts. Furthermore, using word2vec, we compare the meanings of the words acquired by the robot by connecting them to the formed concepts.
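To make the weighting idea more concrete, here is a minimal, hypothetical sketch (in Python, not the authors' implementation) of how per-modality weights can gate a concept model's response to a multimodal observation: a weight of zero makes a concept ignore a modality, so different weight settings yield object-like, color-like, or haptic-like concepts. All names, distributions, and parameters below are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the core idea in EoCMs:
# a concept model scores a multimodal observation by weighting the contribution
# of each modality. Names, distributions, and parameters are illustrative assumptions.
import numpy as np

MODALITIES = ["vision", "haptic", "audio", "word"]

class ConceptModel:
    """One member of the ensemble: per-modality Gaussians plus modality weights."""
    def __init__(self, means, stds, modality_weights):
        self.means = means         # dict: modality -> mean feature vector
        self.stds = stds           # dict: modality -> standard deviation
        self.w = modality_weights  # dict: modality -> weight in [0, 1]

    def log_score(self, observation):
        # Weighted sum of per-modality Gaussian log-likelihoods; a weight of 0
        # makes the concept ignore that modality (e.g., a purely color-related concept).
        total = 0.0
        for m in MODALITIES:
            x, mu, sd = observation[m], self.means[m], self.stds[m]
            loglik = -0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))
            total += self.w[m] * loglik
        return total

def classify(observation, ensemble):
    """Pick the concept (ensemble member) that best explains the observation."""
    scores = [c.log_score(observation) for c in ensemble]
    return int(np.argmax(scores))
```

In the paper the concepts themselves are formed in an unsupervised way through multimodal categorization, and the words in human utterances are then used to select which of the many weighted concepts are meaningful; the sketch only illustrates the weighting mechanism.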

A definition of emergence and its application to emergence in robots

R. L. Sturdivant and E. K. P. Chong, The Necessary and Sufficient Conditions for Emergence in Systems Applied to Symbol Emergence in Robots, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1035-1042, DOI: 10.1109/TCDS.2017.2731361.

A conceptual model for emergence with downward causation is developed. In addition, the necessary and sufficient conditions are identified for a phenomenon to be considered emergent in a complex system. The model is then applied to symbol emergence in robots. This paper is motivated by the usefulness of emergence for explaining a wide variety of phenomena in systems, and cognition in natural and artificial creatures. Downward causation is shown to be a critical requirement for potentially emergent phenomena to be considered actually emergent. Models of emergence with and without downward causation are described, as is the way weak emergence can include downward causation. A process flow is developed for distinguishing emergence from nonemergence, based upon the application of reductionism and the detection of downward causation. Examples show how the necessary and sufficient conditions separate actually emergent phenomena from nonemergent ones. Finally, this approach for detecting emergence is applied to complex projects and to symbol emergence in robots.

A cognitive architecture for self-development in robots that interact with humans, with a nice review of the state of the art in robot cognitive architectures

C. Moulin-Frier et al., DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1005-1022, DOI: 10.1109/TCDS.2017.2754143.

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both human and robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

On the existence of prior knowledge, “pre-wired” in animal brains, that guides further learning

Elisabetta Versace, Antone Martinho-Truswell, Alex Kacelnik, Giorgio Vallortigara, Priors in Animal and Artificial Intelligence: Where Does Learning Begin?, Trends in Cognitive Sciences, Volume 22, Issue 11, 2018, Pages 963-965, DOI: 10.1016/j.tics.2018.07.005.

A major goal for the next generation of artificial intelligence (AI) is to build machines that are able to reason and cope with novel tasks, environments, and situations in a manner that approaches the abilities of animals. Evidence from precocial species suggests that driving learning through suitable priors can help to successfully face this challenge.

A new model of reinforcement learning based on the human brain that copes with continuous spaces through continuous rewards, with a short but nice review of the state of the art in RL applied to large, continuous spaces

Feifei Zhao, Yi Zeng, Guixiang Wang, Jun Bai, Bo Xu, A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations, Cognitive Computation, Volume 10, Issue 2, pp 296–306, DOI: 10.1007/s12559-017-9511-3.

Decision making is a fundamental ability for intelligent agents (e.g., humanoid robots and unmanned aerial vehicles). During the decision making process, agents can improve their strategy for interacting with a dynamic environment through reinforcement learning. Many state-of-the-art reinforcement learning models, such as Q-learning and Actor-Critic algorithms, deal with relatively small numbers of state-action pairs and prefer discrete states. In practice, however, the states in many scenarios are continuous and hard to discretize properly, so better autonomous decision making methods are needed to handle these problems. Inspired by the mechanism of decision making in the human brain, we propose a general computational model, named the prefrontal cortex-basal ganglia (PFC-BG) algorithm. The proposed model is inspired by the biological reinforcement learning pathway and its mechanisms from the following perspectives: (1) dopamine signals continuously update reward-relevant information for both the basal ganglia and working memory in the prefrontal cortex; (2) contextual reward information is maintained in working memory, which has a top-down biasing effect on reinforcement learning in the basal ganglia. The proposed model separates the continuous states into smaller distinguishable states and introduces a continuous reward function for each state to obtain reward information at different times. To verify the performance of our model, we apply it to several UAV decision making experiments, such as avoiding obstacles and flying through windows and doors, and the experiments support the effectiveness of the model. Compared with traditional Q-learning and Actor-Critic algorithms, the proposed model is more biologically inspired and makes more accurate and faster decisions.
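As an illustration of the generic problem the paper addresses (rather than of the PFC-BG model itself), here is a minimal tabular Q-learning sketch over a discretized continuous state with a continuous, distance-based reward; the grid resolution, reward shape, and parameters are assumptions.

```python
# Illustrative sketch only: it shows the generic setting the paper targets
# (tabular RL over a discretized continuous state plus a continuous reward),
# not the PFC-BG model itself. Grid size, reward shape, and parameters are assumptions.
import numpy as np

N_BINS = 20                       # discretize each continuous dimension into 20 bins
N_ACTIONS = 4                     # e.g., up/down/left/right for a simple 2-D agent
Q = np.zeros((N_BINS, N_BINS, N_ACTIONS))
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
GOAL = np.array([0.9, 0.9])

def discretize(state):
    """Map a continuous state in [0, 1]^2 to a grid cell index."""
    return tuple(np.clip((state * N_BINS).astype(int), 0, N_BINS - 1))

def continuous_reward(state):
    """Smooth, distance-based reward instead of a sparse terminal reward."""
    return -np.linalg.norm(state - GOAL)

def q_update(state, action, next_state):
    s, s2 = discretize(state), discretize(next_state)
    r = continuous_reward(next_state)
    td_target = r + GAMMA * np.max(Q[s2])
    Q[s + (action,)] += ALPHA * (td_target - Q[s + (action,)])

def choose_action(state):
    s = discretize(state)
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[s]))
```

The coarser the discretization, the fewer state-action pairs the table holds but the less the agent can distinguish nearby situations, which is exactly the trade-off that motivates the paper's brain-inspired alternative.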

Z-numbers: an extension of fuzzy variables for cognitive decision making, and the concept of cognitive information

Hong-gang Peng, Jian-qiang Wang, Outranking Decision-Making Method with Z-Number Cognitive Information, Cognitive Computation, Volume 10, Issue 5, pp 752–768, DOI: 10.1007/s12559-018-9556-y.

The Z-number provides an adequate and reliable description of cognitive information. The nature of Z-numbers is complex, however, and important issues in Z-number computation remain to be addressed. This study focuses on developing a computationally simple method with Z-numbers to address multicriteria decision-making (MCDM) problems. Processing Z-numbers requires the direct computation of fuzzy and probabilistic uncertainties. We used an effective method to analyze the Z-number construct. Next, we proposed some outranking relations of Z-numbers and defined the dominance degree of discrete Z-numbers. Also, after analyzing the characteristics of elimination and choice translating reality III (ELECTRE III) and qualitative flexible multiple criteria method (QUALIFLEX), we developed an improved outranking method. To demonstrate this method, we provided an illustrative example concerning job-satisfaction evaluation. We further verified the validity of the method by a criteria test and comparative analysis. The results demonstrate that the method can be successfully applied to real-world decision-making problems, and it can identify more reasonable outcomes than previous methods. This study overcomes the high computational complexity in existing Z-number computation frameworks by exploring the pairwise comparison of Z-numbers. The method inherits the merits of the classical outranking method and considers the non-compensability of criteria. Therefore, it has remarkable potential to address practical decision-making problems involving Z-information.
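For readers unfamiliar with the construct: a Z-number is a pair Z = (A, B), where A is a fuzzy restriction on the value of a variable and B is a fuzzy measure of the reliability of A. The sketch below only shows this representation with triangular fuzzy numbers and a crude centroid-based comparison; it is not the dominance degree or the outranking relations developed in the paper.

```python
# Minimal sketch of the Z-number construct Z = (A, B): A is a fuzzy restriction
# on a variable's value, B a fuzzy measure of the reliability of A. The comparison
# below (centroid of A scaled by centroid of B) is a crude illustrative stand-in,
# not the dominance degree or outranking relations defined in the paper.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float   # left endpoint
    b: float   # peak (membership = 1)
    c: float   # right endpoint

    def centroid(self) -> float:
        return (self.a + self.b + self.c) / 3.0

@dataclass
class ZNumber:
    A: TriangularFuzzyNumber   # restriction, e.g., "about 0.7"
    B: TriangularFuzzyNumber   # reliability, e.g., "quite sure"

    def crude_score(self) -> float:
        # Weight the value estimate by how reliable it is claimed to be.
        return self.A.centroid() * self.B.centroid()

# Example: "job satisfaction is about 0.7, quite sure" vs. "about 0.8, not very sure"
z1 = ZNumber(TriangularFuzzyNumber(0.6, 0.7, 0.8), TriangularFuzzyNumber(0.7, 0.8, 0.9))
z2 = ZNumber(TriangularFuzzyNumber(0.7, 0.8, 0.9), TriangularFuzzyNumber(0.3, 0.4, 0.5))
print(z1.crude_score() > z2.crude_score())   # True: here the higher reliability wins
```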

On how psychological time emerges from execution of actions in the environment

Jennifer T. Coull, Sylvie Droit-Volet, Explicit Understanding of Duration Develops Implicitly through Action, Trends in Cognitive Sciences, Volume 22, Issue 10, 2018, Pages 923-937, DOI: 10.1016/j.tics.2018.07.011.

Time is relative. Changes in cognitive state or sensory context make it appear to speed up or slow down. Our perception of time is a rather fragile mental construct derived from the way events in the world are processed and integrated in memory. Nevertheless, the slippery concept of time can be structured by draping it over more concrete functional scaffolding. Converging evidence from developmental studies of children and neuroimaging in adults indicates that we can represent time in spatial or motor terms. We hypothesise that explicit processing of time is mediated by motor structures of the brain in adulthood because we implicitly learn about time through action during childhood. Future challenges will be to harness motor or spatial representations of time to optimise behaviour, potentially for therapeutic gain.

A very interesting analysis of how reinforcement learning depends on time, both for MDPs and for the psychological basis of RL in the human brain

Elijah A. Petter, Samuel J. Gershman, Warren H. Meck, Integrating Models of Interval Timing and Reinforcement Learning, Trends in Cognitive Sciences, Volume 22, Issue 10, 2018, Pages 911-922, DOI: 10.1016/j.tics.2018.08.004.

We present an integrated view of interval timing and reinforcement learning (RL) in the brain. The computational goal of RL is to maximize future rewards, and this depends crucially on a representation of time. Different RL systems in the brain process time in distinct ways. A model-based system learns ‘what happens when’, employing this internal model to generate action plans, while a model-free system learns to predict reward directly from a set of temporal basis functions. We describe how these systems are subserved by a computational division of labor between several brain regions, with a focus on the basal ganglia and the hippocampus, as well as how these regions are influenced by the neuromodulator dopamine.

Some quotes beyond the abstract:

The Markov assumption also makes explicit the requirements for temporal representation. All temporal dynamics must be captured by the state-transition function, which means that the state representation must encode the time-invariant structure of the environment.
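The "temporal basis function" idea mentioned in the abstract can be illustrated with a small TD(0) sketch in which the value of elapsed time since a cue is a weighted sum of Gaussian basis functions; the basis widths, learning rate, and reward schedule below are assumptions, not parameters from the paper.

```python
# Minimal sketch of value learning over temporal basis functions: the value of
# elapsed time since a cue is a weighted sum of Gaussian bases, and the weights
# are updated with a TD(0) prediction error (the dopamine-like signal discussed
# in the paper). Widths, learning rate, and reward timing are assumptions.
import numpy as np

N_BASIS, TRIAL_LEN = 10, 50          # 10 temporal basis functions over a 50-step trial
CENTERS = np.linspace(0, TRIAL_LEN, N_BASIS)
WIDTH = TRIAL_LEN / N_BASIS
ALPHA, GAMMA = 0.1, 0.98
w = np.zeros(N_BASIS)                # learned weights over the basis functions

def features(t):
    """Gaussian temporal basis: activity of each basis function at time t."""
    return np.exp(-0.5 * ((t - CENTERS) / WIDTH) ** 2)

def value(t):
    return features(t) @ w

def run_trial(reward_time=40, reward_size=1.0):
    """One trial: reward arrives at a fixed delay after the cue at t = 0."""
    global w
    for t in range(TRIAL_LEN - 1):
        r = reward_size if t + 1 == reward_time else 0.0
        td_error = r + GAMMA * value(t + 1) - value(t)   # prediction error
        w += ALPHA * td_error * features(t)

for _ in range(200):
    run_trial()
print(round(value(39), 3), round(value(10), 3))  # value ramps up toward the reward time
```

This is the model-free side of the story; the model-based system described in the abstract would instead learn "what happens when" as an explicit internal model and plan over it.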

A nice introduction to psychological time

Lindsey Drayton, Moran Furman, Thy Mind, Thy Brain and Time, Trends in Cognitive Sciences, Volume 22, Issue 10, 2018, Pages 841-843, DOI: 10.1016/j.tics.2018.08.007.

The passage of time has fascinated the human mind for millennia. Tools for measuring time emerged early in civilization: lunar calendars appear in the archeological record as far back as 10 000 years ago and water clocks some 6000 years ago. Later technological innovations such as mechanical clocks, and more recently atomic clocks, have allowed the tracking of time with ever-increasing precision. And yet, arguably, the most sophisticated ‘time piece’ is the brain. Our brains can not only track the duration and succession of events, but they can also coordinate complex motor movements at striking levels of precision; communicate effectively by generating and interpreting sounds and speech; determine how to maximize rewards over time in the face of uncertainty; reflect upon the past; plan for the future; respond to temporal regularities and irregularities in the environment; and adapt to change in temporal scales that range from millisecond resolution up to evolutionary processes spanning millions of years.

A new variant of A* that is more computationally efficient

Adam Niewola, Leszek Podsedkowski, L* Algorithm—A Linear Computational Complexity Graph Searching Algorithm for Path Planning, Journal of Intelligent & Robotic Systems, September 2018, Volume 91, Issue 3–4, pp 425–444, DOI: 10.1007/s10846-017-0748-6.

The state-of-the-art graph searching algorithm applied to the optimal global path planning problem for mobile robots is the A* algorithm with a heap-structured open list. In this paper, we present a novel algorithm, called the L* algorithm, which can be applied to global path planning and is faster than the A* algorithm. Structuring the open list as bidirectional sublists (buckets) ensures the linear computational complexity of the L* algorithm, because the nodes in the current bucket can be processed in any order and the bucket does not need to be sorted. Our approach maintains optimality and linear computational complexity even when costs are expressed as floating-point numbers. The paper presents the requirements for using the L* algorithm and a proof of its admissibility. The experiments confirmed that the L* algorithm is faster than the A* algorithm in various path planning scenarios. We also introduce a method for estimating the execution time of the A* and L* algorithms and compare its estimates with the experimental results.
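For reference, here is the baseline the paper measures against: a standard A* with a heap-structured open list on a small grid. L* itself replaces the sorted heap with bucketed sublists so that nodes within the current bucket can be expanded in any order; that data structure is not reproduced in this sketch, and the grid, costs, and heuristic below are illustrative assumptions.

```python
# Minimal heap-based A* on a grid: the baseline the paper compares against.
# L* replaces the sorted heap with bucketed sublists (not reproduced here).
# Grid, unit edge costs, and the Manhattan heuristic are illustrative assumptions.
import heapq

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    def h(n):                                   # admissible Manhattan heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_list = [(h(start), 0, start)]          # (f, g, node), ordered by f via the heap
    g_cost, parent = {start: 0}, {start: None}
    while open_list:
        f, g, node = heapq.heappop(open_list)
        if node == goal:                        # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0 and g + 1 < g_cost.get(nb, float("inf"))):
                g_cost[nb], parent[nb] = g + 1, node
                heapq.heappush(open_list, (g + 1 + h(nb), g + 1, nb))
    return None   # no path found

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # detours around the obstacles in the middle row
```

Every push into the heap costs O(log n), which is exactly the per-node overhead the bucketed open list of L* is designed to avoid.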