Category Archives: Cognitive Sciences

Mathematically defining and measuring the levels of knowledge, ignorance and uncertainty

Fujun Hou, Evangelos Triantaphyllou, Juri Yanase, Knowledge, ignorance, and uncertainty: An investigation from the perspective of some differential equations, Expert Systems with Applications, Volume 191, 2022, DOI: 10.1016/j.eswa.2021.116325.

People use knowledge in several cognitive tasks, such as recognizing objects, ranking entities (e.g., the alternatives in multi-criteria decision making), or classifying items in decision-making / expert / intelligent systems. When people have sufficient relevant knowledge, they can make well-differentiated assessments among entities; otherwise, they may exhibit some uncertainty. This paper establishes two differential equations, one for the interaction between the knowledge level and the uncertainty level, and the other for the interaction between the ignorance level and the uncertainty level. By solving these two differential equations under certain boundary conditions, one can derive that the proposed knowledge level indicator is equivalent to Wierman’s knowledge granularity measure up to a constant (exactly, ln 2). Moreover, the knowledge level indicator and the ignorance level indicator are found to be complementary to each other: more knowledge implies less ignorance, and vice versa. The results of this study bridge a critical gap in the understanding of the concepts of knowledge and ignorance.
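The paper's actual equations are behind the DOI; still, the complementarity claim is easy to play with numerically. Below is a minimal sketch assuming a hypothetical coupled pair in which knowledge K and ignorance I respond oppositely to the uncertainty level U (the exp(-u) functional form is my placeholder, not the authors'):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(u, y):
    k, i = y
    dk = -np.exp(-u)   # placeholder: knowledge erodes as uncertainty grows
    return [dk, -dk]   # ignorance gains exactly what knowledge loses

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True)
u = np.linspace(0.0, 5.0, 6)
k, i = sol.sol(u)
print(np.allclose(k + i, 1.0))  # complementarity: K(U) + I(U) stays constant

Because dK/dU + dI/dU = 0 by construction, the integrator preserves K + I, which is the "more knowledge implies less ignorance" relation in miniature.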

A nice survey on knowledge graphs for representing, well, knowledge, focused on the explainability of AI, although they are interesting for many more things

Ilaria Tiddi, Stefan Schlobach, Knowledge graphs as tools for explainable machine learning: A survey, Artificial Intelligence, Volume 302, 2022, DOI: 10.1016/j.artint.2021.103627.

This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. As of late, explainable AI has become a very active field of research by addressing the limitations of the latest machine learning solutions that often provide highly accurate, but hardly scrutable and interpretable decisions. An increasing interest has also been shown in the integration of Knowledge Representation techniques in Machine Learning applications, mostly motivated by the complementary strengths and weaknesses that could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated in Explainable Machine Learning approaches to help them provide more meaningful, insightful and trustworthy explanations. Using a systematic literature review methodology we designed an analytical framework to explore the current landscape of Explainable Machine Learning. We focus particularly on the integration with structured knowledge at large scale, and use our framework to analyse a variety of Machine Learning domains, identifying the main characteristics of such knowledge-based, explainable systems from different perspectives. We then summarise the strengths of such hybrid systems, such as improved understandability, reactivity, and accuracy, as well as their limitations, e.g. in handling noise or extracting knowledge efficiently. We conclude by discussing a list of open challenges left for future research.
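To see why a knowledge graph is a natural substrate for explanations, here is a toy sketch (mine, not from the survey): background knowledge as (subject, predicate, object) triples, forward-chained into a human-readable trace that could accompany a model's prediction:

TRIPLES = [
    ("sparrow", "is_a", "bird"),
    ("bird", "has_part", "wings"),
    ("bird", "can", "fly"),
]

def explanation_chain(entity):
    """Forward-chain from `entity`, collecting the triples that fire."""
    chain, frontier, changed = [], {entity}, True
    while changed:
        changed = False
        for triple in TRIPLES:
            s, p, o = triple
            if s in frontier and triple not in chain:
                chain.append(triple)
                frontier.add(o)
                changed = True
    return chain

for s, p, o in explanation_chain("sparrow"):
    print(f"{s} --{p}--> {o}")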

A general framework for modelling state abstraction based on graph transformations

Christer Bäckström, Peter Jonsson, A framework for analysing state-abstraction methods, Artificial Intelligence, Volume 302, 2022, DOI: 10.1016/j.artint.2021.103608.

Abstraction has been used in combinatorial search and action planning from the very beginning of AI. Many different methods and formalisms for state abstraction have been proposed in the literature, but they have been designed from various points of view and with varying purposes. Hence, these methods have been notoriously difficult to analyse and compare in a structured way. In order to improve upon this situation, we present a coherent and flexible framework for modelling abstraction (and abstraction-like) methods based on graph transformations. The usefulness of the framework is demonstrated by applying it to problems in both search and planning. We model six different abstraction methods from the planning literature and analyse their intrinsic properties. We show how to capture many search abstraction concepts (such as avoiding backtracking between levels) and how to put them into a broader context. We also use the framework to identify and investigate connections between refinement and heuristics—two concepts that have usually been considered as unrelated in the literature. This provides new insights into various topics, e.g. Valtorta’s theorem and spurious states. We finally extend the framework with composition of transformations to accommodate for abstraction hierarchies, and other multi-level concepts. We demonstrate the latter by modelling and analysing the merge-and-shrink abstraction method.
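As a flavor of what "abstraction as graph transformation" means (a minimal sketch of my own, not the paper's formalism): map concrete states to abstract ones, induce the abstract transition graph, and observe that shortest paths can only shrink, which is what makes such abstractions usable as admissible heuristics in search:

from collections import deque

CONCRETE_EDGES = [("s0", "s1"), ("s1", "s2"), ("s2", "g")]
ALPHA = {"s0": "A", "s1": "A", "s2": "B", "g": "B"}   # abstraction mapping

def abstract_edges(edges, alpha):
    """Induced abstract graph: one edge (alpha(u), alpha(v)) per concrete edge."""
    return {(alpha[u], alpha[v]) for (u, v) in edges if alpha[u] != alpha[v]}

def bfs_dist(edges, src, dst):
    """Unit-cost shortest-path length by breadth-first search."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

print(bfs_dist(CONCRETE_EDGES, "s0", "g"))                                       # 3
print(bfs_dist(abstract_edges(CONCRETE_EDGES, ALPHA), ALPHA["s0"], ALPHA["g"]))  # 1

The abstract distance (1) never exceeds the concrete one (3), so solving the abstract problem underestimates the real cost.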

On how the exploration-exploitation dichotomy shifts toward exploitation as humans get older

R. Nathan Spreng, Gary R. Turner, From exploration to exploitation: a shifting mental mode in late life development, Trends in Cognitive Sciences, Volume 25, Issue 12, 2021, DOI: 10.1016/j.tics.2021.09.001.

Changes in cognition, affect, and brain function combine to promote a shift in the nature of mentation in older adulthood, favoring exploitation of prior knowledge over exploratory search as the starting point for thought and action. Age-related exploitation biases result from the accumulation of prior knowledge, reduced cognitive control, and a shift toward affective goals. These are accompanied by changes in cortical networks, as well as attention and reward circuits. By incorporating these factors into a unified account, the exploration-to-exploitation shift offers an integrative model of cognitive, affective, and brain aging. Here, we review evidence for this model, identify determinants and consequences, and survey the challenges and opportunities posed by an exploitation-biased mental mode in later life.
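The thesis is about human development, not algorithms, but the underlying trade-off is easy to caricature computationally. In the sketch below (entirely mine, with made-up numbers), an epsilon-greedy bandit agent's exploration rate decays over its "lifespan", so later choices increasingly exploit accumulated value estimates:

import random

random.seed(0)
TRUE_MEANS = [0.3, 0.5, 0.8]          # hypothetical reward rates of 3 options
values = [0.0] * 3                    # running value estimates ("prior knowledge")
counts = [0] * 3

for t in range(1, 2001):
    epsilon = max(0.05, 1.0 - t / 1000)       # exploration fades with "age"
    if random.random() < epsilon:
        arm = random.randrange(3)             # explore
    else:
        arm = values.index(max(values))       # exploit prior knowledge
    reward = 1.0 if random.random() < TRUE_MEANS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print([round(v, 2) for v in values], counts)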

Analysis of the suboptimality of path lengths when path planning is carried out on a grid instead of in the continuous world

James P. Bailey, Alex Nash, Craig A. Tovey, Sven Koenig, Path-length analysis for grid-based path planning, Artificial Intelligence, Volume 301, 2021, DOI: 10.1016/j.artint.2021.103560.

In video games and robotics, one often discretizes a continuous 2D environment into a regular grid with blocked and unblocked cells and then finds shortest paths for the agents on the resulting grid graph. Shortest grid paths, of course, are not necessarily true shortest paths in the continuous 2D environment. In this article, we therefore study how much longer a shortest grid path can be than a corresponding true shortest path on all regular grids with blocked and unblocked cells that tessellate continuous 2D environments. We study 5 different vertex connectivities that result from both different tessellations and different definitions of the neighbors of a vertex. Our path-length analysis yields either tight or asymptotically tight worst-case bounds in a unified framework. Our results show that the percentage by which a shortest grid path can be longer than a corresponding true shortest path decreases as the vertex connectivity increases. Our path-length analysis is topical because it determines the largest path-length reduction possible for any-angle path-planning algorithms (and thus their benefit), a class of path-planning algorithms in artificial intelligence and robotics that has become popular.
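The classic obstacle-free square-grid cases, which this paper generalizes to other tessellations and to blocked cells, can be reproduced in a few lines: the worst-case excess is about 41% for 4-neighbor grids (at 45 degrees) and about 8% for 8-neighbor grids (at 22.5 degrees):

import math

def ratio_4(theta):
    """Best 4-neighbor grid path length / Euclidean length for direction theta."""
    return math.cos(theta) + math.sin(theta)

def ratio_8(theta):
    """Best 8-neighbor (octile) path length / Euclidean length for direction theta."""
    dx, dy = math.cos(theta), math.sin(theta)
    return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)

angles = [math.radians(a) for a in range(0, 46)]
print(max(ratio_4(t) for t in angles))   # -> sqrt(2) ~ 1.414, i.e. 41% longer
print(max(ratio_8(t) for t in angles))   # -> ~1.082, i.e. 8% longer

This matches the trend in the paper's results: the higher the vertex connectivity, the smaller the worst-case penalty relative to the true shortest path.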

Building explanations for AI plans by modifying the user’s model so as to make those plans optimal within it

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati, Foundations of explanations as model reconciliation, Artificial Intelligence, Volume 301, 2021, DOI: 10.1016/j.artint.2021.103558.

Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where users have domain and task models that differ from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a “model reconciliation problem” (MRP), where the AI system in effect suggests changes to the user’s mental model so as to make its plan be optimal with respect to that changed user model. We will study the properties of such explanations, present algorithms for automatically computing them, discuss relevant extensions to the basic framework, and evaluate the performance of the proposed algorithms both empirically and through controlled user studies.
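A toy rendering of the idea (much simpler than the paper's planning formalism, and entirely my own): search for the smallest set of updates to the user's model under which the system's plan becomes optimal:

from itertools import combinations

SYSTEM_COSTS = {"shortcut": 10, "detour": 3}   # the robot's (true) model
USER_COSTS   = {"shortcut": 2,  "detour": 3}   # the user thinks the shortcut is cheap
SYSTEM_PLAN  = "detour"

def optimal(plan, costs):
    return costs[plan] == min(costs.values())

def minimal_explanation(system, user, plan):
    """Smallest set of model corrections that makes `plan` optimal for the user."""
    diffs = [k for k in system if system[k] != user[k]]
    for size in range(len(diffs) + 1):
        for subset in combinations(diffs, size):
            candidate = dict(user, **{k: system[k] for k in subset})
            if optimal(plan, candidate):
                return {k: system[k] for k in subset}

print(minimal_explanation(SYSTEM_COSTS, USER_COSTS, SYSTEM_PLAN))
# -> {'shortcut': 10}: telling the user the shortcut's true cost makes
#    the detour plan optimal in their updated model.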

On how physical movements shape the perception of time

Rose De Kock, Keri Anne Gladhill, Minaz Numa Ali, Wilsaan Mychal Joiner, Martin Wiener, How movements shape the perception of time, Trends in Cognitive Sciences, Volume 25, Issue 11, 2021, Pages 950-963, DOI: 10.1016/j.tics.2021.08.002.

In order to keep up with a changing environment, mobile organisms must be capable of deciding both where and when to move. This precision necessitates a strong sense of time, as otherwise we would fail in many of our movement goals. Yet, despite this intrinsic link, only recently have researchers begun to understand how these two features interact. Primarily, two effects have been observed: movements can bias time estimates, but they can also make them more precise. Here we review this literature and propose that both effects can be explained by a Bayesian cue combination framework, in which movement itself affords the most precise representation of time, which can influence perception in either feedforward or active sensing modes.
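The Bayesian cue-combination rule the authors invoke is the standard precision-weighted fusion of Gaussian estimates; with made-up numbers, a movement-based duration estimate (precise) pulls a noisy sensory estimate toward it and tightens the result:

def fuse(mu_a, var_a, mu_b, var_b):
    """Precision-weighted average of two Gaussian estimates."""
    w_a, w_b = 1 / var_a, 1 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1 / (w_a + w_b)
    return mu, var

# Movement cue: low variance; visual cue: noisy. Durations in seconds.
mu, var = fuse(mu_a=1.00, var_a=0.01, mu_b=1.30, var_b=0.09)
print(round(mu, 3), round(var, 4))   # -> 1.03 0.009

The fused estimate sits close to the movement cue and its variance (0.009) is smaller than either cue's alone, which is exactly the "movements bias time estimates but also make them more precise" pattern.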

Safety in MDPs by measuring the probability of reaching dangerous states

Rafal Wisniewski, Luminita-Manuela Bujorianu, Safety of stochastic systems: An analytic and computational approach, Automatica, Volume 133, 2021, DOI: 10.1016/j.automatica.2021.109839.

We refine the concept of stochastic reach avoidance for a general class of Markov processes introducing a threshold of p for the reaching probability. This new problem is called p-safety, and it aims to ensure that the given process reaches a forbidden set before leaving its ‘working’ state space with a probability of less than p. In the situation when an initial probability measure characterizes the initial states, a variant of p-safety is put forward. We call this form of safety weak p-safety. In this work, we characterize both p-safety and weak p-safety and show how to compute them. We employ semi-definite programming to compute p-safety and linear programming to compute weak p-safety. To get to this point, we use certificates of positivity of polynomials translated into the sum of squares and the Bernstein forms.
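The paper certifies p-safety analytically (sum-of-squares and linear programming); as a quick empirical counterpart (my sketch, on an illustrative toy chain), one can Monte Carlo estimate the probability of hitting the forbidden set before leaving the working space and compare it to the threshold p:

import random

random.seed(1)
P = {  # toy Markov chain; 'bad' is the forbidden set, 'exit' leaves the working space
    "s0": [("s0", 0.5), ("s1", 0.2), ("bad", 0.05), ("exit", 0.25)],
    "s1": [("s0", 0.1), ("s1", 0.4), ("bad", 0.1), ("exit", 0.4)],
}

def hits_bad_first(start):
    """Simulate until absorption; True if the forbidden set is reached first."""
    state = start
    while state not in ("bad", "exit"):
        r, acc = random.random(), 0.0
        for nxt, prob in P[state]:
            acc += prob
            if r < acc:
                state = nxt
                break
    return state == "bad"

N = 100_000
estimate = sum(hits_bad_first("s0") for _ in range(N)) / N
print(estimate)        # analytically ~0.179 for this chain
print(estimate < 0.3)  # so the chain is p-safe for a threshold of p = 0.3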

Solving the “self-recognition in a mirror” problem for robots

Arianna Pipitone, Antonio Chella, Robot passes the mirror test by inner speech, Robotics and Autonomous Systems, Volume 144, 2021, DOI: 10.1016/j.robot.2021.103838.

The mirror test is a well-known task in robotics. The existing strategies are based on kinesthetic-visual matching techniques and manipulate perceptual and motion data. The proposed work attempts to demonstrate that it is possible to implement a robust robotic self-recognition method by means of inner speech, i.e. the self-dialogue that enables reasoning on symbolic information. The robot talks to itself, conceptually reasons on the symbolic forms of signals, and infers whether the robot it sees in the mirror is itself or not. The idea is supported by the existing literature in psychology, where the importance of inner speech in self-reflection and in the emergence of the self-concept for solving the mirror test has been empirically demonstrated.
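A caricature of the inner-speech idea (mine, not the authors' architecture): the robot verbalizes its own commands and its observations symbolically, then infers from their agreement whether the agent in the mirror is itself:

def mirror_test(commands, observations):
    inner_speech, matches = [], 0
    for cmd, obs in zip(commands, observations):
        inner_speech.append(f"I commanded '{cmd}' and I see '{obs}'.")
        matches += cmd == obs
    it_is_me = matches == len(commands)
    inner_speech.append("That robot moves exactly when I move: it is me."
                        if it_is_me else
                        "Its movements differ from mine: it is another robot.")
    return it_is_me, inner_speech

ok, monologue = mirror_test(["raise arm", "tilt head"], ["raise arm", "tilt head"])
print("\n".join(monologue))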

Trying to reach general AI through just decision-making (rewards) instead of using a diversity of paradigms

David Silver, Satinder Singh, Doina Precup, Richard S. Sutton, Reward is enough, Artificial Intelligence, Volume 299, 2021, DOI: 10.1016/j.artint.2021.103535.

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.

NOTES:

  • The computational and physical limitations of the agent in coping with an overly complex world are the main reason to use learning instead of pre-built knowledge (evolution): learning lets the agent focus first on acquiring the skills that matter most for its own circumstances.
  • An argument for why classification (supervised learning) is less powerful and efficient than RL.
  • The same argument applies to multi-agent formulations versus a single agent confronting one complex environment (that contains other agents).
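As a minimal illustration of the premise (nothing below is from the paper beyond the reward-maximisation framing), a tabular Q-learning agent given nothing but a scalar reward learns to walk a one-dimensional corridor to its goal:

import random

random.seed(0)
N_STATES, GOAL = 6, 5                      # corridor states 0..5, reward only at 5
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[s][a], actions: 0 = left, 1 = right

def pick(qs, eps=0.1):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps:
        return random.randrange(2)
    best = max(qs)
    return random.choice([a for a, v in enumerate(qs) if v == best])

for episode in range(500):
    s = 0
    while s != GOAL:
        a = pick(Q[s])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0     # the only learning signal
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q[:GOAL]])  # -> [1, 1, 1, 1, 1]: always go right

No labels, no task-specific objective, just reward: the greedy policy that falls out is the competent behaviour, which is the article's point in its smallest possible form.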