Category Archives: Cognitive Sciences

How hierarchical reinforcement learning resembles human creativity, i.e., how the psychological aspects of creativity map onto the engineering ones

Thomas R. Colin, Tony Belpaeme, Angelo Cangelosi, Nikolas Hemion, Hierarchical reinforcement learning as creative problem solving, Robotics and Autonomous Systems, Volume 86, 2016, Pages 196-206, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.08.021.

Although creativity is studied from philosophy to cognitive robotics, a definition has proven elusive. We argue for emphasizing the creative process (the cognition of the creative agent), rather than the creative product (the artifact or behavior). Owing to developments in experimental psychology, the process approach has become an increasingly attractive way of characterizing creative problem solving. In particular, the phenomenon of insight, in which an individual arrives at a solution through a sudden change in perspective, is a crucial component of the process of creativity. These developments resonate with advances in machine learning, in particular hierarchical and modular approaches, as the field of artificial intelligence aims for general solutions to problems that typically rely on creativity in humans or other animals. We draw a parallel between the properties of insight according to psychology and the properties of Hierarchical Reinforcement Learning (HRL) systems for embodied agents. Using the Creative Systems Framework developed by Wiggins and Ritchie, we analyze both insight and HRL, establishing that they are creative in similar ways. We highlight the key challenges to be met in order to call an artificial system “insightful”.
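To make the engineering side of the parallel concrete, here is a minimal sketch of the options framework, the standard formalization of temporally extended actions in HRL; the toy corridor task and all names are illustrative, not taken from the paper.

```python
# Minimal sketch of the "options" formalism commonly used in HRL
# (temporally extended actions). The corridor task and all names
# are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """A temporally extended action: where it may start, what it
    does while running, and when it stops."""
    name: str
    can_start: Callable[[int], bool]    # initiation set I
    policy: Callable[[int], int]        # intra-option policy pi
    terminates: Callable[[int], bool]   # termination condition beta

def step(state: int, action: int) -> int:
    """Toy 1-D corridor with states 0..10 and actions -1/+1."""
    return max(0, min(10, state + action))

# Two hand-coded options, each spanning many primitive steps.
go_left = Option("go_left", lambda s: s > 0, lambda s: -1, lambda s: s == 0)
go_right = Option("go_right", lambda s: s < 10, lambda s: +1, lambda s: s == 10)

def run_option(state: int, option: Option) -> int:
    """Execute an option's policy until its termination fires."""
    assert option.can_start(state)
    while not option.terminates(state):
        state = step(state, option.policy(state))
    return state

# One abstract decision at the higher level expands into a whole
# trajectory at the lower level -- the hierarchical part.
print(run_option(5, go_right))   # -> 10
```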

An interesting combination of automated planning and reinforcement learning

Matteo Leonetti, Luca Iocchi, Peter Stone, A synthesis of automated planning and reinforcement learning for efficient, robust decision-making, Artificial Intelligence, Volume 241, 2016, Pages 103-130, ISSN 0004-3702, DOI: 10.1016/j.artint.2016.07.004.

Automated planning and reinforcement learning are characterized by complementary views on decision making: the former relies on previous knowledge and computation, while the latter relies on interaction with the world and on experience. Planning allows robots to carry out different tasks in the same domain, without the need to acquire knowledge about each one of them, but relies strongly on the accuracy of the model. Reinforcement learning, on the other hand, does not require previous knowledge, and allows robots to robustly adapt to the environment, but often necessitates an infeasible amount of experience. We present Domain Approximation for Reinforcement LearnING (DARLING), a method that takes advantage of planning to constrain the behavior of the agent to reasonable choices, and of reinforcement learning to adapt to the environment, and increase the reliability of the decision making process. We demonstrate the effectiveness of the proposed method on a service robot, carrying out a variety of tasks in an office building. We find that when the robot makes decisions by planning alone on a given model it often fails, and when it makes decisions by reinforcement learning alone it often cannot complete its tasks in a reasonable amount of time. When employing DARLING, however, even when seeded with the same model that was used for planning alone, the robot can quickly learn a behavior to carry out all the tasks, improves over time, and adapts to the environment as it changes.
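As a rough illustration of the idea in the abstract (the paper's actual method is more involved), the sketch below restricts a Q-learner to the actions that a stand-in planner considers reasonable; the task, the planner, and all constants are invented.

```python
# Illustrative sketch of the planning-constrains-RL idea from the
# abstract: the learner may only pick actions that appear in some
# plan for the current state. Task, planner and constants are made up.
import random
from collections import defaultdict

def plan_actions(state: int) -> list:
    """Stand-in for an automated planner: from its (approximate)
    model it concludes that only moving right reaches goal state 5,
    so the RL agent never explores provably useless actions."""
    return [+1] if state < 5 else []

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != 5:
        allowed = plan_actions(s)                      # planner's constraint
        a = (random.choice(allowed) if random.random() < eps
             else max(allowed, key=lambda a: Q[(s, a)]))
        s2 = max(0, min(5, s + a))
        r = 1.0 if s2 == 5 else -0.1                   # learned from experience
        best = max((Q[(s2, b)] for b in plan_actions(s2)), default=0.0)
        Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
        s = s2

print({k: round(v, 2) for k, v in Q.items()})
```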

Survey of Cognitive Offloading

Evan F. Risko, Sam J. Gilbert, Cognitive Offloading, Trends in Cognitive Sciences, Volume 20, Issue 9, 2016, Pages 676-688, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.07.002.

If you have ever tilted your head to perceive a rotated image, or programmed a smartphone to remind you of an upcoming appointment, you have engaged in cognitive offloading: the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand. Despite the ubiquity of this type of behavior, it has only recently become the target of systematic investigation in and of itself. We review research from several domains that focuses on two main questions: (i) what mechanisms trigger cognitive offloading, and (ii) what are the cognitive consequences of this behavior? We offer a novel metacognitive framework that integrates results from diverse domains and suggests avenues for future research.

Learning concepts from graphs in robotics through first-order logic and subgraph discovery, with the learned concepts forming arbitrary hierarchies

Ana C. Tenorio-González, Eduardo F. Morales, Automatic discovery of relational concepts by an incremental graph-based representation, Robotics and Autonomous Systems, Volume 83, 2016, Pages 1-14, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.06.012.

Automatic discovery of concepts has been an elusive area in machine learning. In this paper, we describe a system, called ADC, that automatically discovers concepts in a robotics domain, performing predicate invention. Unlike traditional approaches of concept discovery, our approach automatically finds and collects instances of potential relational concepts. An agent, using ADC, creates an incremental graph-based representation with the information it gathers while exploring its environment, from which common subgraphs are identified. The subgraphs discovered are instances of potential relational concepts which are induced with Inductive Logic Programming and predicate invention. Several concepts can be induced concurrently and the learned concepts can form arbitrary hierarchies. The approach was tested for learning concepts of polygons, furniture, and floors of buildings with a simulated robot and compared with concepts suggested by users.
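The full ADC pipeline (incremental graph, common-subgraph detection, ILP with predicate invention) cannot be reproduced in a few lines, but the flavor of its first stage can: accumulate relational observations and surface recurring patterns as candidate concept instances. Everything below is an invented miniature, not the paper's algorithm.

```python
# Toy version of the first stage described in the abstract: the robot
# logs relational observations as labeled edges, and recurring
# sub-patterns become candidate concept instances. The observations
# and the pattern size (single edges) are illustrative.
from collections import Counter

# (object, relation, object) facts gathered while "exploring"
observations = [
    ("wall_1", "perpendicular_to", "wall_2"),
    ("wall_2", "perpendicular_to", "wall_3"),
    ("leg_1", "supports", "tabletop_1"),
    ("leg_2", "supports", "tabletop_1"),
    ("wall_3", "perpendicular_to", "wall_4"),
]

def arg_type(name: str) -> str:
    """Coarse object type from the instance name: wall_1 -> wall."""
    return name.rsplit("_", 1)[0]

# Frequent relational patterns = candidate concepts. Here a "pattern"
# is just the relation plus the coarse types of its arguments.
patterns = Counter(
    (arg_type(a), rel, arg_type(b)) for a, rel, b in observations
)

# Patterns seen repeatedly would be handed to ILP for induction.
for pattern, count in patterns.most_common():
    if count >= 2:
        print(pattern, "->", count, "instances")
# ('wall', 'perpendicular_to', 'wall') -> 3 instances
# ('leg', 'supports', 'tabletop') -> 2 instances
```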

A nice review of reinforcement learning from the perspective of its physiological foundations and its application to robotics

Cornelius Weber, Mark Elshaw, Stefan Wermter, Jochen Triesch and Christopher Willmot, Reinforcement Learning Embedded in Brains and Robots, in: Reinforcement Learning: Theory and Applications, edited by Cornelius Weber, Mark Elshaw and Norbert Michael Mayer, ISBN 978-3-902613-14-1, pp. 424, January 2008, I-Tech Education and Publishing, Vienna, Austria. (Local copy)

A computational cognitive architecture that models emotion

Ron Sun, Nick Wilson, Michael Lynch, Emotion: A Unified Mechanistic Interpretation from a Cognitive Architecture, Cognitive Computation, Volume 8, Issue 1, February 2016, Pages 1-14, DOI: 10.1007/s12559-015-9374-4.

This paper reviews a project that attempts to interpret emotion, a complex and multifaceted phenomenon, from a mechanistic point of view, facilitated by an existing comprehensive computational cognitive architecture—CLARION. This cognitive architecture consists of a number of subsystems: the action-centered, non-action-centered, motivational, and metacognitive subsystems. From this perspective, emotion is, first and foremost, motivationally based. It is also action-oriented. It involves many other identifiable cognitive functionalities within these subsystems. Based on these functionalities, we fit the pieces together mechanistically (computationally) within the CLARION framework and capture a variety of important aspects of emotion as documented in the literature.

On how an intuitive calculus of the utility of actions drives many human behaviours

Julian Jara-Ettinger, Hyowon Gweon, Laura E. Schulz, Joshua B. Tenenbaum, The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology, Trends in Cognitive Sciences, Volume 20, Issue 8, 2016, Pages 589-604, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.05.011.

We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This ‘naïve utility calculus’ allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy.
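The core assumption is easy to state formally: U(action) = expected reward − expected cost, and observers invert it to explain choices. The sketch below, with invented numbers and a softmax (noisily rational) choice model, shows how watching an agent pay a high cost supports the inference that it values the goal strongly.

```python
# Minimal Bayesian inversion of the naive utility calculus described
# in the abstract: an observer infers how much an agent values a goal
# from the cost it was willing to pay. All numbers are illustrative.
import math

costs = {"near_hill": 1.0, "far_hill": 4.0}   # effort to climb each hill
observed_choice = "far_hill"                  # the agent climbed the far one

def choice_prob(choice, reward, beta=1.0):
    """Softmax (noisily rational) choice: P(a) ~ exp(beta * U(a)),
    with U(a) = reward(a) - cost(a)."""
    utilities = {a: reward[a] - c for a, c in costs.items()}
    z = sum(math.exp(beta * u) for u in utilities.values())
    return math.exp(beta * utilities[choice]) / z

# Hypotheses about the agent's rewards, with a uniform prior.
hypotheses = {
    "prefers_far": {"near_hill": 2.0, "far_hill": 8.0},
    "indifferent": {"near_hill": 5.0, "far_hill": 5.0},
}
likelihood = {h: choice_prob(observed_choice, r) for h, r in hypotheses.items()}
z = sum(likelihood.values())
posterior = {h: l / z for h, l in likelihood.items()}
print(posterior)  # paying a higher cost is evidence of a stronger preference
```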

Evidence that the brain encodes numbers on an internal continuous line and that the zero value is also represented

Luca Rinaldi, Luisa Girelli, A Place for Zero in the Brain, Trends in Cognitive Sciences, Volume 20, Issue 8, 2016, Pages 563-564, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.06.006.

It has long been thought that the primary cognitive and neural systems responsible for processing numerosities are not predisposed to encode empty sets (i.e., numerosity zero). A new study challenges this view by demonstrating that zero is translated into an abstract quantity along the numerical continuum by the primate parietofrontal magnitude system.

A formal study of the guarantees that deep neural networks offer for classification

R. Giryes, G. Sapiro and A. M. Bronstein, “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?,” in IEEE Transactions on Signal Processing, vol. 64, no. 13, pp. 3444-3457, July 2016. DOI: 10.1109/TSP.2016.2546221.

Three important properties of a classification machinery are i) the system preserves the core information of the input data; ii) the training examples convey information about unseen data; and iii) the system is able to treat differently points from different classes. In this paper, we show that these fundamental properties are satisfied by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have a similar output. The theoretical analysis of deep networks here presented exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure, as well as providing bounds on the required size of the training set such that the training examples would represent faithfully the unseen data. The results are validated with state-of-the-art trained networks.
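The distance-preservation claim is easy to probe empirically. The sketch below builds an untrained ReLU network with i.i.d. Gaussian weights and compares input and output distances; the width, depth, and scaling convention are arbitrary choices, not the paper's construction.

```python
# Empirical illustration of the paper's claim: a deep net with random
# Gaussian weights behaves like a distance-preserving embedding, so
# nearby inputs stay nearby at the output. Width/depth are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(dim_in, width=512, depth=4):
    """Untrained ReLU network with i.i.d. Gaussian weights, scaled
    (He-style 2/n) so activations keep roughly constant norm."""
    dims = [dim_in] + [width] * depth
    weights = [rng.normal(0, np.sqrt(2.0 / d_in), size=(d_out, d_in))
               for d_in, d_out in zip(dims[:-1], dims[1:])]
    def forward(x):
        for W in weights:
            x = np.maximum(W @ x, 0.0)   # ReLU
        return x
    return forward

net = random_relu_net(dim_in=64)
x = rng.normal(size=64)
x_near = x + 0.01 * rng.normal(size=64)   # a nearby point
x_far = rng.normal(size=64)               # an unrelated point

for name, y in [("near", x_near), ("far", x_far)]:
    in_d = np.linalg.norm(x - y)
    out_d = np.linalg.norm(net(x) - net(y))
    print(f"{name}: input dist {in_d:.3f} -> output dist {out_d:.3f}")
# Output distances track input distances up to a modest distortion.
```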

A new theoretical framework for modeling concepts that allows them to be combined in the way humans combine them, with a good related-work section on other concept frameworks in AI

Martha Lewis, Jonathan Lawry, Hierarchical conceptual spaces for concept combination, Artificial Intelligence, Volume 237, August 2016, Pages 204-227, ISSN 0004-3702, DOI: 10.1016/j.artint.2016.04.008.

We introduce a hierarchical framework for conjunctive concept combination based on conceptual spaces and random set theory. The model has the flexibility to account for composition of concepts at various levels of complexity. We show that the conjunctive model includes linear combination as a special case, and that the more general model can account for non-compositional behaviours such as overextension, non-commutativity, preservation of necessity and impossibility of attributes, and, to some extent, attribute loss or emergence. We investigate two further aspects of human concept use: the conjunction fallacy and the “guppy effect”.
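As a flavor of the conceptual-spaces setting (the linear special case the abstract mentions, not the paper's random-set model), the sketch below represents concepts as prototypes with per-dimension attention weights and combines them by weighted averaging; dimensions, prototypes, and weights are all invented.

```python
# Toy conceptual-spaces combination: concepts are prototype points with
# per-dimension attention weights, and a conjunction like "pet fish" is
# a weighted blend of its parts (the linear special case mentioned in
# the abstract). Dimensions, prototypes and weights are all invented.
import math

# Quality dimensions: [size, cuddliness, lives_in_water], each in [0, 1].
def concept(prototype, weights):
    return {"p": prototype, "w": weights}

pet = concept([0.4, 0.9, 0.1], [0.5, 1.0, 0.3])
fish = concept([0.2, 0.1, 1.0], [0.5, 0.2, 1.0])

def combine(c1, c2):
    """Conjunction as a dimension-wise weighted average: each parent
    pulls the blend toward its prototype on the dimensions it weights."""
    w = [a + b for a, b in zip(c1["w"], c2["w"])]
    p = [(wa * pa + wb * pb) / (wa + wb)
         for wa, pa, wb, pb in zip(c1["w"], c1["p"], c2["w"], c2["p"])]
    return concept(p, w)

def membership(c, item, sharpness=3.0):
    """Graded membership decays with weighted distance from the prototype."""
    d2 = sum(w * (pi - xi) ** 2 for w, pi, xi in zip(c["w"], c["p"], item))
    return math.exp(-sharpness * d2)

pet_fish = combine(pet, fish)
guppy = [0.1, 0.3, 1.0]   # small, not very cuddly, aquatic
print(f"pet:      {membership(pet, guppy):.2f}")
print(f"fish:     {membership(fish, guppy):.2f}")
print(f"pet fish: {membership(pet_fish, guppy):.2f}")
# Overextension: the guppy rates higher in "pet fish" than in "pet",
# something a simple min-rule for conjunction could never produce.
```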