Category Archives: Psycho-physiological Bases Of Engineering

Reinforcement learning when a human provides the reward signal to the algorithm

W. Bradley Knox, Peter Stone, Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance, Artificial Intelligence, Volume 225, August 2015, Pages 24-50, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.03.009.

Several studies have demonstrated that reward from a human trainer can be a powerful feedback signal for control-learning algorithms. However, the space of algorithms for learning from such human reward has hitherto not been explored systematically. Using model-based reinforcement learning from human reward, this article investigates the problem of learning from human reward through six experiments, focusing on the relationships between reward positivity, which is how generally positive a trainer’s reward values are; temporal discounting, the extent to which future reward is discounted in value; episodicity, whether task learning occurs in discrete learning episodes instead of one continuing session; and task performance, the agent’s performance on the task the trainer intends to teach. This investigation is motivated by the observation that an agent can pursue different learning objectives, leading to different resulting behaviors. We search for learning objectives that lead the agent to behave as the trainer intends.
We identify and empirically support a “positive circuits” problem with low discounting (i.e., high discount factors) for episodic, goal-based tasks that arises from an observed bias among humans towards giving positive reward, resulting in an endorsement of myopic learning for such domains. We then show that converting simple episodic tasks to be non-episodic (i.e., continuing) reduces and in some cases resolves issues present in episodic tasks with generally positive reward and—relatedly—enables highly successful learning with non-myopic valuation in multiple user studies. The primary learning algorithm introduced in this article, which we call “vi-tamer”, is the first algorithm to successfully learn non-myopically from reward generated by a human trainer; we also empirically show that such non-myopic valuation facilitates higher-level understanding of the task. Anticipating the complexity of real-world problems, we perform further studies—one with a failure state added—that compare (1) learning when states are updated asynchronously with local bias—i.e., states quickly reachable from the agent’s current state are updated more often than other states—to (2) learning with the fully synchronous sweeps across each state in the vi-tamer algorithm. With these locally biased updates, we find that the general positivity of human reward creates problems even for continuing tasks, revealing a distinct research challenge for future work.
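The "positive circuits" problem can be seen in a toy value-iteration sketch (my own illustration, not the paper's vi-tamer algorithm): when every human reward is positive, a high discount factor makes an off-task loop more valuable than reaching the goal, while a myopic agent (discount factor 0) does what the trainer intends.

```python
def value_iteration(gamma, n_iters=1000):
    """Solve a tiny two-state MDP. From start state "s", action "advance"
    reaches the absorbing goal for +1.0 (the trainer's intended behaviour),
    while action "loop" stays in "s" for +0.2 (humans rarely give negative
    reward, so even off-task behaviour earns a little)."""
    V = {"s": 0.0, "goal": 0.0}  # goal is absorbing: its value stays 0
    for _ in range(n_iters):
        q_advance = 1.0 + gamma * V["goal"]
        q_loop = 0.2 + gamma * V["s"]
        V["s"] = max(q_advance, q_loop)
    best = "advance" if 1.0 + gamma * V["goal"] >= 0.2 + gamma * V["s"] else "loop"
    return V["s"], best

print(value_iteration(gamma=0.0))   # (1.0, 'advance'): myopic agent pursues the goal
print(value_iteration(gamma=0.95))  # (~4.0, 'loop'): the positive circuit dominates
```

With gamma = 0.95 the loop is worth 0.2 / (1 - 0.95) = 4.0, four times the goal's one-off reward, which is exactly why the article endorses myopic valuation for episodic, goal-based tasks with human reward.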

A Bayesian framework to explain magnitude estimation in the human mind

Frederike H. Petzschner, Stefan Glasauer, Klaas E. Stephan, A Bayesian perspective on magnitude estimation, Trends in Cognitive Sciences, Volume 19, Issue 5, May 2015, Pages 285-293, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.03.002.

Our representation of the physical world requires judgments of magnitudes, such as loudness, distance, or time. Interestingly, magnitude estimates are often not veridical but subject to characteristic biases. These biases are strikingly similar across different sensory modalities, suggesting common processing mechanisms that are shared by different sensory systems. However, the search for universal neurobiological principles of magnitude judgments requires guidance by formal theories. Here, we discuss a unifying Bayesian framework for understanding biases in magnitude estimation. This Bayesian perspective enables a re-interpretation of a range of established psychophysical findings, reconciles seemingly incompatible classical views on magnitude estimation, and can guide future investigations of magnitude estimation and its neurobiological mechanisms in health and in psychiatric diseases, such as schizophrenia.
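The central-tendency bias such a framework explains falls out of a one-line conjugate Gaussian update. A minimal sketch (plain linear Gaussians for simplicity; the paper itself argues for logarithmic internal scales):

```python
def posterior_mean(measurement, prior_mean, prior_var, noise_var):
    """Conjugate Gaussian update: the estimate is a precision-weighted
    average of the prior mean and the noisy measurement, so it is pulled
    toward the prior mean."""
    w = prior_var / (prior_var + noise_var)
    return prior_mean + w * (measurement - prior_mean)

# Prior centred on 50 with the same variance as the sensory noise.
prior_mean, prior_var, noise_var = 50.0, 100.0, 100.0
for true_mag in (10.0, 50.0, 90.0):
    est = posterior_mean(true_mag, prior_mean, prior_var, noise_var)
    print(true_mag, est)  # 10 -> 30 (overestimated), 90 -> 70 (underestimated)
```

Small magnitudes are overestimated and large ones underestimated, reproducing the characteristic regression-to-the-mean bias the review discusses.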

Reinforcement learning to explain emotions

Joost Broekens, Elmer Jacobs & Catholijn M. Jonker, A reinforcement learning model of joy, distress, hope and fear, Connection Science, DOI: 10.1080/09540091.2015.1031081.

In this paper we computationally study the relation between adaptive behaviour and emotion. Using the reinforcement learning framework, we propose that learned state utility, V(s), models fear (negative) and hope (positive) based on the fact that both signals are about anticipation of loss or gain. Further, we propose that joy/distress is a signal similar to the temporal-difference (TD) error signal. We present agent-based simulation experiments that show that this model replicates psychological and behavioural dynamics of emotion. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework – coupling it to the literature on habituation, development, extinction and hope theory. Our results support the idea that the function of emotion is to provide a complex feedback signal for an organism to adapt its behaviour. Our work is relevant for understanding the relation between emotion and adaptation in animals, as well as for human–robot interaction, in particular how emotional signals can be used to communicate between adaptive agents and humans.
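The proposed mapping can be sketched in a few lines of temporal-difference learning (variable names and this minimal setup are mine, not the authors' code):

```python
def td_step(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference update; the sign of the TD error is read
    as joy (positive) or distress (negative) in this mapping."""
    td_error = reward + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# V(s) > 0 is read as hope about state s; V(s) < 0 as fear.
V = {"cue": 0.0, "outcome": 0.0}

# Repeatedly rewarded transitions: the first surprises produce large joy
# signals, and as hope (the value of the cue state) grows, the error
# shrinks, a habituation-like dynamic of the kind the paper discusses.
errors = [td_step(V, "cue", "outcome", reward=1.0) for _ in range(20)]
print(errors[0], round(errors[-1], 3), round(V["cue"], 3))
```

The joy signal decays geometrically with repeated exposure while the hope value grows toward the reward, mirroring the extinction and habituation dynamics the simulations assess.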

An example of bottom-up and top-down processes integrated in a solution for shape recognition

Ching L. Teo, Cornelia Fermüller, and Yiannis Aloimonos, A Gestaltist approach to contour-based object recognition: Combining bottom-up and top-down cues, The International Journal of Robotics Research April 2015 34: 627-652, first published on March 25, 2015, DOI: 10.1177/0278364914558493.

This paper proposes a method for detecting generic classes of objects from their representative contours that can be used by a robot with vision to find objects in cluttered environments. The approach uses a mid-level image operator to group edges into contours which likely correspond to object boundaries. This mid-level operator is used in two ways, bottom-up on simple edges and top-down incorporating object shape information, thus acting as the intermediary between low-level and high-level information. First, the mid-level operator, called the image torque, is applied to simple edges to extract likely fixation locations of objects. Using the operator’s output, a novel contour-based descriptor is created that extends the shape context descriptor to include boundary ownership information and accounts for rotation. This descriptor is then used in a multi-scale matching approach to modulate the torque operator towards the target, so it indicates its location and size. Unlike other approaches that use edges directly to guide the independent edge grouping and matching processes for recognition, both of these steps are effectively combined using the proposed method. We evaluate the performance of our approach using four diverse datasets containing a variety of object categories in clutter, occlusion and viewpoint changes. Compared with current state-of-the-art approaches, our approach is able to detect the target with fewer false alarms in most object categories. The performance is further improved when we exploit depth information available from the Kinect RGB-Depth sensor by imposing depth consistency when applying the image torque.

A study of how probability and reasoning in the human mind can be explained through mental models, probability logic, and classical logic

P.N. Johnson-Laird, Sangeet S. Khemlani, Geoffrey P. Goodwin, Logic, probability, and human reasoning, Trends in Cognitive Sciences, Volume 19, Issue 4, April 2015, Pages 201-214, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.02.006.

This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn – not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction.

Neurological evidence for the hierarchical organization of motor skill learning

Jörn Diedrichsen, Katja Kornysheva, Motor skill learning between selection and execution, Trends in Cognitive Sciences, Volume 19, Issue 4, April 2015, Pages 227-233, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.02.003.

Learning motor skills evolves from the effortful selection of single movement elements to their combined fast and accurate production. We review recent trends in the study of skill learning which suggest a hierarchical organization of the representations that underlie such expert performance, with premotor areas encoding short sequential movement elements (chunks) or particular component features (timing/spatial organization). This hierarchical representation allows the system to utilize elements of well-learned skills in a flexible manner. One neural correlate of skill development is the emergence of specialized neural circuits that can produce the required elements in a stable and invariant fashion. We discuss the challenges in detecting these changes with fMRI.

Novelty detection as a way of enhancing a robot's learning capabilities, with a brief but interesting survey of motivational theories and how they differ from attention

Y. Gatsoulis, T.M. McGinnity, Intrinsically motivated learning systems based on biologically-inspired novelty detection, Robotics and Autonomous Systems, Volume 68, June 2015, Pages 12-20, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.02.006.

Intrinsic motivations play an important role in human learning, particularly in the early stages of childhood development, and ideas from this research field have influenced robotic learning and adaptability. In this paper we investigate one specific type of intrinsic motivation, that of novelty detection, and we discuss the reasons that make it a powerful facility for continuous learning. We formulate and present an original biologically inspired novelty detection architecture and implement it on a robotic system engaged in a perceptual classification task. The results of the real-world robot experiments we conducted show how this architecture conforms to behavioural observations and demonstrate its effectiveness in focusing the system's attention on areas with potential for effective learning.
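A hypothetical minimal habituation-based novelty detector, far simpler than the authors' architecture, illustrates the core idea: respond strongly to unfamiliar stimuli, and let the response decay as a stimulus is repeated.

```python
import math

class NoveltyDetector:
    """Toy novelty filter: stimuli are matched to stored prototypes, and
    repeated matches habituate (the response decays), so attention stays
    on stimuli the system has not yet learned about."""

    def __init__(self, radius=1.0, decay=0.5):
        self.prototypes = []     # list of (vector, habituation level)
        self.radius = radius     # match threshold
        self.decay = decay       # response decay per repeated exposure

    def response(self, stimulus):
        for i, (proto, level) in enumerate(self.prototypes):
            if math.dist(proto, stimulus) <= self.radius:
                level *= self.decay
                self.prototypes[i] = (proto, level)
                return level     # familiar: habituated response
        self.prototypes.append((stimulus, 1.0))
        return 1.0               # novel: full response

d = NoveltyDetector()
print(d.response((0.0, 0.0)))  # 1.0: novel, worth learning about
print(d.response((0.1, 0.0)))  # 0.5: familiar, response habituates
print(d.response((5.0, 5.0)))  # 1.0: a new region of stimulus space
```

The habituated response is what makes this an intrinsic motivation signal rather than plain anomaly detection: attention is released from stimuli once they stop being informative.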

Reinforcement learning used for an adaptive attention mechanism, and integrated in an architecture with both top-down and bottom-up vision processing

D. Ognibene, G. Baldassare, Ecological Active Vision: Four Bioinspired Principles to Integrate Bottom–Up and Adaptive Top–Down Attention Tested With a Simple Camera-Arm Robot, IEEE Transactions on Autonomous Mental Development, Volume 7, Issue 1, March 2015, Pages 3-25, DOI: 10.1109/TAMD.2014.2341351.

Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture (“BITPIC”) to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob “objects.” The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
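Principle 1, learning where to look from task reward, can be illustrated with a stripped-down sketch that is far simpler than the paper's architecture: epsilon-greedy value learning over candidate fixation locations, where fixating the location containing the target is what lets the subsequent reach succeed and earn reward. (The locations, target, and parameters below are my own toy setup.)

```python
import random

random.seed(0)
locations = ["left", "centre", "right"]
target = "right"                      # where the target blob actually is
Q = {loc: 0.0 for loc in locations}  # learned top-down attention values
alpha, epsilon = 0.2, 0.2

for _ in range(500):
    # Epsilon-greedy gaze selection: mostly exploit the learned values,
    # occasionally explore another fixation location.
    if random.random() < epsilon:
        fixation = random.choice(locations)
    else:
        fixation = max(Q, key=Q.get)
    # The reach only succeeds (and yields reward) if the agent looked
    # at the target first.
    reward = 1.0 if fixation == target else 0.0
    Q[fixation] += alpha * (reward - Q[fixation])

print(max(Q, key=Q.get))  # the learned gaze policy favours the target
```

Even this bandit-style simplification shows the coupling the paper stresses: visual actions earn no reward by themselves, they only become valuable through the manipulation outcomes they enable.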

On how the brain detects similarities, with a proposal for the structure and timing of the process

Qingfei Chen, Xiuling Liang, Peng Li, Chun Ye, Fuhong Li, Yi Lei, Hong Li, The processing of perceptual similarity with different features or spatial relations as revealed by P2/P300 amplitude, International Journal of Psychophysiology, Volume 95, Issue 3, March 2015, Pages 379-387, ISSN 0167-8760, DOI: 10.1016/j.ijpsycho.2015.01.009.

Visual features such as “color” and spatial relations such as “above” or “beside” have complex effects on similarity and difference judgments. We examined the relative impact of features and spatial relations on similarity and difference judgments via ERPs in an S1–S2 paradigm. Subjects were required to compare a remembered geometric shape (S1) with a second one (S2), and made a “high” or “low” judgment of either similarity or difference in separate blocks of trials. We found three main differences that suggest that the processing of features and spatial relations engages distinct neural processes. The first difference is a P2 effect in fronto-central regions which is sensitive to the presence of a feature difference. The second difference is a P300 in centro-parietal regions that is larger for difference judgments than for similarity judgments. Finally, the P300 effect elicited by feature differences was larger relative to spatial relation differences. These results supported the view that similarity judgments involve structural alignment rather than simple feature and relation matches, and furthermore, indicate the similarity judgment could be divided into three phases: feature or relation comparison (P2), structural alignment (P3 at 300–400 ms), and categorization (P3 at 450–550 ms).

On the not-so-domain-generic nature of statistical learning in the human brain

Ram Frost, Blair C. Armstrong, Noam Siegelman, Morten H. Christiansen, Domain generality versus modality specificity: the paradox of statistical learning, Trends in Cognitive Sciences, Volume 19, Issue 3, March 2015, Pages 117-125, DOI: 10.1016/j.tics.2014.12.010.

Statistical learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. However, recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal modality and stimulus specificity. Therefore, important questions are how and why a hypothesized domain-general learning mechanism systematically produces such effects. Here, we offer a theoretical framework according to which SL is not a unitary mechanism, but a set of domain-general computational principles that operate in different modalities and, therefore, are subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility.
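The distributional computation usually meant by "statistical learning" can be made concrete with the classic transitional-probability cue for word segmentation (a toy Saffran-style example of my own, not from this paper): transitions are perfectly predictable within words and much less so across word boundaries.

```python
import random
from collections import Counter

# A toy syllable stream built from three artificial "words".
random.seed(1)
words = ["ba-bi-bu", "da-di-du", "ga-gi-gu"]
stream = [random.choice(words) for _ in range(300)]
syllables = "-".join(stream).split("-")

# Count syllable bigrams and how often each syllable occurs as a predecessor.
pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    # P(b | a): how predictable syllable b is given the preceding syllable a.
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("ba", "bi"))  # within a word: exactly 1.0
print(transitional_probability("bu", "da"))  # across a boundary: about 1/3
```

The framework's point is that the same computation, applied in different modalities, is carried out by different modality-specific circuits, which is how a domain-general principle can yield stimulus-specific learning.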