Category Archives: Cognitive Sciences

A new variant of A* that is more computationally efficient

Adam Niewola, Leszek Podsedkowski, L* Algorithm—A Linear Computational Complexity Graph Searching Algorithm for Path Planning, Journal of Intelligent & Robotic Systems, September 2018, Volume 91, Issue 3–4, pp 425–444, DOI: 10.1007/s10846-017-0748-6.

The state-of-the-art graph searching algorithm applied to the optimal global path planning problem for mobile robots is the A* algorithm with a heap-structured open list. In this paper, we present a novel algorithm, called the L* algorithm, which can be applied to global path planning and is faster than the A* algorithm. The structure of the open list, which uses bidirectional sublists (buckets), ensures the linear computational complexity of the L* algorithm, because the nodes in the current bucket can be processed in any sequence and the bucket does not need to be sorted. Our approach maintains optimality and linear computational complexity even when costs are expressed as floating-point numbers. The paper presents the requirements for using the L* algorithm and a proof of its admissibility. Experiments confirmed that the L* algorithm is faster than the A* algorithm in various path planning scenarios. We also introduce a method for estimating the execution times of the A* and L* algorithms, and compare its predictions with the experimental results.
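As a rough illustration of the bucket idea (not the authors' exact L* algorithm), the sketch below keeps the open list as a dictionary of cost buckets: nodes whose f-values fall in the lowest bucket are expanded in arbitrary order, so no per-node sorting is needed. The `neighbors`, `cost`, and `heuristic` callables are caller-supplied placeholders.

```python
import math
from collections import defaultdict

def bucket_search(start, goal, neighbors, cost, heuristic, bucket_width=1.0):
    """Best-first search with a bucketed open list (illustration of the
    bucket idea; not the paper's exact algorithm). Nodes in the same
    bucket are processed in any order instead of being sorted."""
    g = {start: 0.0}
    buckets = defaultdict(list)                       # bucket index -> nodes
    buckets[int(heuristic(start) // bucket_width)].append(start)
    closed = set()
    while buckets:
        b = min(buckets)                              # lowest non-empty bucket
        for node in buckets.pop(b):                   # any order inside bucket
            if node in closed:
                continue
            closed.add(node)
            if node == goal:
                return g[node]
            for nxt in neighbors(node):
                ng = g[node] + cost(node, nxt)
                if ng < g.get(nxt, math.inf):
                    g[nxt] = ng
                    f = ng + heuristic(nxt)
                    buckets[int(f // bucket_width)].append(nxt)
    return math.inf
```

On a uniform-cost grid with a Manhattan heuristic, every node on an optimal path stays in the same bucket, so the search degenerates into plain list appends with no sorting at all.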

A summary on reward processing in psychophysiology

Dan Foti, Anna Weinberg, Reward and feedback processing: State of the field, best practices, and future directions, International Journal of Psychophysiology, Volume 132, Part B, 2018, Pages 171-174, DOI: 10.1016/j.ijpsycho.2018.08.006.

There is a long history of studies using event-related potentials (ERPs) to examine how the brain monitors performance. Many initial studies focused on error processing, both internal (i.e., neural activity elicited by error commission) (Falkenstein et al., 1991; Gehring et al., 1993) and external (i.e., neural activity elicited by feedback indicating an unfavorable outcome) (Gehring and Willoughby, 2002; Miltner et al., 1997). A frequent assumption in this line of research has been that correct performance and favorable outcomes served as reference conditions, and that any effects on ERP amplitudes specifically reflected error processing. This starting premise is at odds with the large human and animal neuroscience literatures on reward processing, which focus on the motivated pursuit of those favorable outcomes. In fact, reward and error processing are intrinsically linked, and both undergird effective task performance: the brain is highly sensitive to events that are better or worse than expected in order to continuously modulate behavior in line with task goals (Holroyd and Coles, 2002). In recent years, the ERP literature on feedback processing has broadened to explicitly incorporate reward processing, thereby enriching traditional studies focused on error processing. Specific developments in this regard include an expanded focus on multiple stages of reward processing (e.g., anticipation versus outcome), charting the development of reward processing across the lifespan, and the examination of aberrant sensitivity to reward in psychiatric illnesses. While these advances are highly promising, the general ERP literature on feedback processing continues to be fragmented with regard to terminology, analytic techniques, task designs, and interpretation of findings, ultimately limiting progress in the field.

The overarching goal of this special issue was to carefully examine the state of the art in our current understanding of feedback processing. The aim was to provide an integrative overview that covers multiple theoretical perspectives and methodological approaches. Consideration has been given in this collection of articles to both basic and applied research topics, and throughout the special issue there is an emphasis on providing specific recommendations for study design and the identification of important future research directions. In the remainder of this introductory editorial, we set the stage for these articles by highlighting complementary results and points of intersection across four themes: integrating perspectives on reward and error processing; experimental manipulations; psychometrics; and individual differences.

A survey on decision making for multiagent systems, including multirobot systems

Y. Rizk, M. Awad and E. W. Tunstel, Decision Making in Multiagent Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, DOI: 10.1109/TCDS.2018.2840971.

Intelligent transport systems, efficient electric grids, and sensor networks for data collection and analysis are some examples of the multiagent systems (MAS) that cooperate to achieve common goals. Decision making is an integral part of intelligent agents and MAS that will allow such systems to accomplish increasingly complex tasks. In this survey, we investigate state-of-the-art work within the past five years on cooperative MAS decision making models, including Markov decision processes, game theory, swarm intelligence, and graph theoretic models. We survey algorithms that result in optimal and suboptimal policies, such as reinforcement learning, dynamic programming, evolutionary computing, and neural networks. We also discuss the application of these models to robotics, wireless sensor networks, cognitive radio networks, intelligent transport systems, and smart electric grids. In addition, we define key terms in the area and discuss remaining challenges, which include incorporating big data advances into decision making, developing autonomous, scalable, and computationally efficient algorithms, tackling more complex tasks, and developing standardized evaluation metrics. While recent surveys have been published on this topic, we present a broader discussion of related models and applications.

Note to Practitioners: Future smart cities will rely on cooperative MAS that make decisions about what actions to perform that will lead to the completion of their tasks. Decision making models and algorithms have been developed and reported in the literature to generate such sequences of actions. These models are based on a wide variety of principles, including human decision making and social animal behavior. In this paper, we survey existing decision making models and algorithms that generate optimal and suboptimal sequences of actions.
We also discuss some of the remaining challenges faced by the research community before more effective MAS deployment can be achieved in this age of Internet of Things, robotics, and mobile devices. These challenges include developing more scalable and efficient algorithms, utilizing the abundant sensory data available, tackling more complex tasks, and developing evaluation standards for decision making.
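To make one of the surveyed model classes concrete, here is a minimal sketch of dynamic programming (value iteration) on an invented two-state MDP; the transition probabilities and rewards are purely illustrative numbers, not drawn from the survey.

```python
import numpy as np

# Toy 2-state, 2-action MDP (all numbers invented for illustration):
# P[s, a, s'] are transition probabilities, R[s, a] are rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until the value
    function converges, then read off the greedy policy."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V            # Q[s, a], batched matrix-vector
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

The same Bellman backup underlies many of the reinforcement learning algorithms the survey covers; value iteration is simply the case where the model (P, R) is known.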

Interpreting time series patterns through reasoning

T. Teijeiro, P. Félix, On the adoption of abductive reasoning for time series interpretation, Artificial Intelligence, Volume 262, 2018, Pages 163-188, DOI: 10.1016/j.artint.2018.06.005.

Time series interpretation aims to provide an explanation of what is observed in terms of its underlying processes. The present work is based on the assumption that the common classification-based approaches to time series interpretation suffer from a set of inherent weaknesses, whose ultimate cause lies in the monotonic nature of the deductive reasoning paradigm. In this document we propose a new approach to this problem, based on the initial hypothesis that abductive reasoning properly accounts for the human ability to identify and characterize the patterns appearing in a time series. The result of this interpretation is a set of conjectures in the form of observations, organized into an abstraction hierarchy and explaining what has been observed. A knowledge-based framework and a set of algorithms for the interpretation task are provided, implementing a hypothesize-and-test cycle guided by an attentional mechanism. As a representative application domain, interpretation of the electrocardiogram allows us to highlight the strengths of the proposed approach in comparison with traditional classification-based approaches.
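The hypothesize-and-test cycle can be illustrated, very loosely, with a toy abductive loop: each hypothesis proposes an explanation (a fit) for the observed series, and the one with the smallest residual is retained as the best conjecture. This is a didactic sketch, not the authors' knowledge-based framework; the two hypotheses here are invented examples.

```python
import numpy as np

def hypothesize_and_test(ts, hypotheses):
    """Toy abductive loop: each hypothesis proposes a predicted series
    (an explanation of the observations); the residual is the test, and
    the best-explaining conjecture is kept."""
    best_name, best_err = None, np.inf
    for name, propose in hypotheses.items():
        fit = propose(ts)                 # hypothesize: predicted observations
        err = np.mean((ts - fit) ** 2)    # test: how well is ts explained?
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err

hypotheses = {
    "constant": lambda ts: np.full_like(ts, ts.mean()),
    "linear trend": lambda ts: np.polyval(
        np.polyfit(np.arange(len(ts)), ts, 1), np.arange(len(ts))),
}
```

In the paper's framework the conjectures are organized into an abstraction hierarchy and refined by an attentional mechanism; here they are simply compared flat, which is enough to show the non-monotonic flavor: a conjecture is held only until a better explanation displaces it.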

An interesting model of Basal Ganglia that performs similarly to Q learning when applied to a robot

Y. Zeng, G. Wang and B. Xu, A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 290-303, DOI: 10.1109/TCDS.2017.2649564.

Reinforcement learning brings flexibility and generality to machine learning, but most approaches are driven by mathematical optimization and lack cognitive and neural evidence. In order to provide a foundation driven by cognitive and neural mechanisms, and to validate its applicability to complex tasks, we develop a basal ganglia (BG) network centric reinforcement learning model. Compared to existing work on modeling the BG, this paper is unique in the following respects: 1) the orbitofrontal cortex (OFC) is taken into consideration. The OFC is critical in decision making because of its role in reward representation and in controlling the learning process, yet most BG-centric models do not include it; 2) to compensate for inaccurate memory of numeric values, precise encoding is proposed to enable the working memory system to remember important values during the learning process. The method combines vector convolution with the idea of digit-wise storage and is efficient for accurate value storage; and 3) for information coding, the Hodgkin-Huxley model is used to obtain a more biologically plausible description of the action potential, with its full range of ionic activities. To validate the effectiveness of the proposed model, we apply it to the autonomous learning process of an unmanned aerial vehicle (UAV) in a 3-D environment. Experimental results show that our model gives the UAV the ability to explore the environment freely and learns at a speed comparable to the Q-learning algorithm, while its major advantage is its solid cognitive and neural basis.
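For reference, the Q-learning baseline the model is compared against can be sketched as a generic tabular learner; the `step` environment interface and all hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random

def q_learning(n_states, n_actions, step, episodes=300,
               alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    `step(s, a) -> (s2, reward, done)` is a user-supplied environment."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:                          # explore
                a = rng.randrange(n_actions)
            else:                                           # exploit
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])  # TD target
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

For example, on a 4-state chain where action 1 moves right and reaching the last state yields reward 1, the learned table prefers moving right from the start state.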

Relation between optimization and reinforcement learning

Megumi Miyashita, Shiro Yano, Toshiyuki Kondo, Mirror descent search and its acceleration, Robotics and Autonomous Systems, Volume 106, 2018, Pages 107-116, DOI: 10.1016/j.robot.2018.04.009.

In recent years, attention has been focused on the relationship between black-box optimization problems and reinforcement learning problems. In this research, we propose the Mirror Descent Search (MDS) algorithm, which is applicable to both black-box optimization problems and reinforcement learning problems. Our method is based on the mirror descent method, a general optimization algorithm. The contribution of this research is roughly twofold. We propose two essential algorithms, called MDS and Accelerated Mirror Descent Search (AMDS), and two approximate algorithms: Gaussian Mirror Descent Search (G-MDS) and Gaussian Accelerated Mirror Descent Search (G-AMDS). This research shows that advanced methods developed in the context of mirror descent research can be applied to reinforcement learning problems. We also clarify the relationship between an existing reinforcement learning algorithm and our method. In two evaluation experiments, we show that our proposed algorithms converge faster than some state-of-the-art methods.
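The base method, mirror descent, can be sketched in a few lines. With the negative-entropy mirror map on the probability simplex the update becomes multiplicative (exponentiated gradient), which keeps iterates on the simplex automatically. This illustrates only the underlying optimizer, not the MDS algorithm itself; step size and iteration count are arbitrary.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, steps=200):
    """Mirror descent with the negative-entropy mirror map on the
    probability simplex: gradient step in the dual, then map back,
    which works out to a multiplicative update plus normalization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # dual step + inverse mirror map
        x /= x.sum()                     # Bregman projection onto simplex
    return x

# Minimize a linear loss <c, x> over the simplex: the mass should
# concentrate on the coordinate with the smallest cost.
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
```

The choice of mirror map is the degree of freedom that the acceleration techniques in the paper build upon; with the Euclidean map the same scheme reduces to projected gradient descent.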

A new model of cognition

Howard, N. & Hussain, A., The Fundamental Code Unit of the Brain: Towards a New Model for Cognitive Geometry, Cognitive Computation (2018) 10: 426, DOI: 10.1007/s12559-017-9538-5.

This paper discusses the problems arising from the multidisciplinary nature of cognitive research and the need to conceptually unify insights from multiple fields into the phenomena that drive cognition. Specifically, the Fundamental Code Unit (FCU) is proposed as a means to better quantify the intelligent thought process at multiple levels of analysis, from the linguistic and behavioral output it produces to the chemical and physical processes within the brain that drive it. The proposed method efficiently models the most complex decision-making processes performed by the brain.

Adapting inverse reinforcement learning for including the risk-aversion of the agent

Sumeet Singh, Jonathan Lacotte, Anirudha Majumdar, and Marco Pavone, Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods, The International Journal of Robotics Research, First Published May 22, 2018, DOI: 10.1177/0278364918772017.

The literature on inverse reinforcement learning (IRL) typically assumes that humans take actions to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive (RS) IRL to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk neutral to worst case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with 10 human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk averse to risk neutral in a data-efficient manner. Moreover, comparisons of the RS-IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
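A standard example of a coherent risk measure spanning this spectrum is the conditional value-at-risk (CVaR): the mean of the worst α-fraction of costs, which recovers the risk-neutral expectation as α → 1 and approaches the worst case as α → 0. The sample-based formula below is a minimal illustration of the concept, not the paper's estimator.

```python
import numpy as np

def cvar(costs, alpha):
    """Empirical conditional value-at-risk of a cost sample: the mean
    of the worst ceil(alpha * n) outcomes. alpha = 1 gives the plain
    mean (risk neutral); small alpha approaches the worst case."""
    costs = np.sort(np.asarray(costs, dtype=float))[::-1]  # worst first
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()
```

Because CVaR is coherent, it always sits between the mean and the maximum cost, which is what lets a single parameter sweep out the whole range of risk preferences the RS-IRL framework infers.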

All the information about our cognitive processes that can be deduced from our mouse movements

Paul E. Stillman, Xi Shen, Melissa J. Ferguson, How Mouse-tracking Can Advance Social Cognitive Theory, Trends in Cognitive Sciences, Volume 22, Issue 6, 2018, Pages 531-543, DOI: 10.1016/j.tics.2018.03.012.

Mouse-tracking – measuring computer-mouse movements made by participants while they choose between response options – is an emerging tool that offers an accessible, data-rich, and real-time window into how people categorize and make decisions. In the present article we review recent research in social cognition that uses mouse-tracking to test models and advance theory. In particular, mouse-tracking allows examination of nuanced predictions about both the nature of conflict (e.g., its antecedents and consequences) as well as how this conflict is resolved (e.g., how decisions evolve). We demonstrate how mouse-tracking can further our theoretical understanding by highlighting research in two domains − social categorization and self-control. We conclude with future directions and a discussion of the limitations of mouse-tracking as a method.
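One widely used conflict index in this literature is the maximum deviation of the cursor path from the straight line between its start and end points; larger deviations indicate stronger attraction toward the unchosen option. The sketch below is a generic implementation of that standard metric, not tied to any particular mouse-tracking toolbox.

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum deviation (MD): the largest perpendicular distance of a
    cursor trajectory from the straight line joining its first and last
    points. A straight movement gives 0."""
    p = np.column_stack([xs, ys]).astype(float)
    start, end = p[0], p[-1]
    d = end - start
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the line
    return np.abs((p - start) @ n).max()
```

In practice trajectories are first time-normalized to a fixed number of samples so that MD values are comparable across trials and participants.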

On how sleep improves our problem-solving capabilities

Penelope A. Lewis, Günther Knoblich, Gina Poe, How Memory Replay in Sleep Boosts Creative Problem-Solving, Trends in Cognitive Sciences, Volume 22, Issue 6, 2018, Pages 491-503, DOI: 10.1016/j.tics.2018.03.009.

Creative thought relies on the reorganisation of existing knowledge. Sleep is known to be important for creative thinking, but there is a debate about which sleep stage is most relevant, and why. We address this issue by proposing that rapid eye movement sleep, or 'REM', and non-REM sleep facilitate creativity in different ways. Memory replay mechanisms in non-REM can abstract rules from corpora of learned information, while replay in REM may promote novel associations. We propose that the iterative interleaving of REM and non-REM across a night boosts the formation of complex knowledge frameworks, and allows these frameworks to be restructured, thus facilitating creative thought. We outline a hypothetical computational model which will allow explicit testing of these hypotheses.