Category Archives: Artificial Intelligence

Interesting summary of photovoltaic modelling

Serhat Duman, Hamdi Tolga Kahraman, Yusuf Sonmez, Ugur Guvenc, Mehmet Kati, Sefa Aras, A powerful meta-heuristic search algorithm for solving global optimization and real-world solar photovoltaic parameter estimation problems, Engineering Applications of Artificial Intelligence, Volume 111, 2022, DOI: 10.1016/j.engappai.2022.104763.

The teaching-learning-based artificial bee colony (TLABC) is a new hybrid swarm-based metaheuristic search algorithm. It combines the exploitation of teaching-learning-based optimization (TLBO) with the exploration of the artificial bee colony (ABC). With the hybridization of these two nature-inspired swarm intelligence algorithms, a robust method has been proposed to solve global optimization problems. However, as with other swarm-based algorithms, it is a great challenge for the TLABC method to effectively simulate the selection process. Fitness-distance balance (FDB) is a powerful, recently developed method to effectively imitate the selection process in nature. In this study, the three search phases of the TLABC algorithm were redesigned using the FDB method. In this way, the FDB-TLABC algorithm, which imitates nature more effectively and has a robust search performance, was developed. To investigate the exploitation, exploration, and balanced search capabilities of the proposed algorithm, it was tested on standard and complex benchmark suites (Classic, IEEE CEC 2014, IEEE CEC 2017, and IEEE CEC 2020). In order to verify the performance of the proposed FDB-TLABC on global optimization problems and on the photovoltaic parameter estimation problem (a constrained real-world engineering problem), a very comprehensive and qualified experimental study was carried out according to IEEE CEC standards. Statistical analysis results confirmed that the proposed FDB-TLABC provided the best optimum solution and yielded a superior performance compared to other optimization methods.
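To make the fitness-distance balance idea concrete, here is a minimal sketch (my own toy code, not the authors' implementation) of FDB-style guide selection for a minimization problem: each candidate is scored by normalized fitness plus normalized distance to the current best, and the highest-scoring candidate is chosen as the guide instead of simply the best or a random individual.

```python
import numpy as np

def fdb_select(population, fitness):
    """Pick a guide solution by fitness-distance balance (FDB).

    Toy sketch for a minimization problem: each candidate is scored by the sum
    of its normalized fitness (lower cost -> higher score) and its normalized
    Euclidean distance to the current best solution; the candidate with the
    highest combined score becomes the guide.
    """
    fitness = np.asarray(fitness, dtype=float)
    best = population[np.argmin(fitness)]
    dist = np.linalg.norm(population - best, axis=1)   # distance to the best

    def norm(v):                                       # min-max normalization
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)

    score = (1.0 - norm(fitness)) + norm(dist)         # FDB score
    return population[np.argmax(score)]

# toy usage on the sphere function
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 3))
fit = np.sum(pop ** 2, axis=1)
guide = fdb_select(pop, fit)
```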

A nice survey on knowledge graphs for representing, well, knowledge, focused on explainability of AI, but whatever, they are interesting for many more things

Ilaria Tiddi, Stefan Schlobach, Knowledge graphs as tools for explainable machine learning: A survey, Artificial Intelligence, Volume 302, 2022, DOI: 10.1016/j.artint.2021.103627.

This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. As of late, explainable AI has become a very active field of research by addressing the limitations of the latest machine learning solutions that often provide highly accurate, but hardly scrutable and interpretable decisions. An increasing interest has also been shown in the integration of Knowledge Representation techniques in Machine Learning applications, mostly motivated by the complementary strengths and weaknesses that could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated in Explainable Machine Learning approaches to help them provide more meaningful, insightful and trustworthy explanations. Using a systematic literature review methodology we designed an analytical framework to explore the current landscape of Explainable Machine Learning. We focus particularly on the integration with structured knowledge at large scale, and use our framework to analyse a variety of Machine Learning domains, identifying the main characteristics of such knowledge-based, explainable systems from different perspectives. We then summarise the strengths of such hybrid systems, such as improved understandability, reactivity, and accuracy, as well as their limitations, e.g. in handling noise or extracting knowledge efficiently. We conclude by discussing a list of open challenges left for future research.
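As a toy illustration of the integration idea (entirely hypothetical, not taken from the survey): domain background knowledge stored as machine-readable triples can be retrieved to contextualise a classifier's output, which is the raw material a knowledge-based explanation builds on.

```python
# toy background knowledge as (subject, relation, object) triples
kg = {("golden_retriever", "subclass_of", "dog"),
      ("dog", "subclass_of", "mammal"),
      ("golden_retriever", "has_trait", "floppy_ears")}

def facts_about(entity, kg):
    """Return the background-knowledge triples that mention an entity."""
    return [t for t in kg if entity in (t[0], t[2])]

def explain(prediction, salient_feature, kg):
    """Attach machine-readable background knowledge to a model's prediction."""
    return {"prediction": prediction,
            "salient_feature": salient_feature,
            "background": facts_about(prediction, kg) + facts_about(salient_feature, kg)}

print(explain("golden_retriever", "floppy_ears", kg))
```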

A general model of abstraction of graphs

Christer Bäckström, Peter Jonsson, A framework for analysing state-abstraction methods, Artificial Intelligence, Volume 302, 2022, DOI: 10.1016/j.artint.2021.103608.

Abstraction has been used in combinatorial search and action planning from the very beginning of AI. Many different methods and formalisms for state abstraction have been proposed in the literature, but they have been designed from various points of view and with varying purposes. Hence, these methods have been notoriously difficult to analyse and compare in a structured way. In order to improve upon this situation, we present a coherent and flexible framework for modelling abstraction (and abstraction-like) methods based on graph transformations. The usefulness of the framework is demonstrated by applying it to problems in both search and planning. We model six different abstraction methods from the planning literature and analyse their intrinsic properties. We show how to capture many search abstraction concepts (such as avoiding backtracking between levels) and how to put them into a broader context. We also use the framework to identify and investigate connections between refinement and heuristics—two concepts that have usually been considered as unrelated in the literature. This provides new insights into various topics, e.g. Valtorta’s theorem and spurious states. We finally extend the framework with composition of transformations to accommodate abstraction hierarchies and other multi-level concepts. We demonstrate the latter by modelling and analysing the merge-and-shrink abstraction method.
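A minimal sketch of the basic ingredient, under my own simplifying assumptions: a state space seen as a transition graph, and an abstraction seen as a mapping from concrete to abstract states, which induces an abstract graph in which every concrete path has an abstract counterpart (the converse failing is exactly where spurious states come from).

```python
def abstract_graph(edges, alpha):
    """Induce an abstract transition graph from a state-abstraction mapping.

    `edges` is a set of (s, t) transitions between concrete states and `alpha`
    maps each concrete state to its abstract state.  Every concrete transition
    yields the abstract edge (alpha(s), alpha(t)); transitions inside a block
    become self-loops on the corresponding abstract state.
    """
    return {(alpha[s], alpha[t]) for (s, t) in edges}

# toy 4-state chain abstracted into two blocks A = {0, 1} and B = {2, 3}
edges = {(0, 1), (1, 2), (2, 3)}
alpha = {0: "A", 1: "A", 2: "B", 3: "B"}
print(abstract_graph(edges, alpha))   # contains ('A', 'B') plus the self-loops
```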

Analysis of the suboptimality of path lengths when path planning is carried out on a grid instead of the continuous world

James P. Bailey, Alex Nash, Craig A. Tovey, Sven Koenig, Path-length analysis for grid-based path planning, Artificial Intelligence, Volume 301, 2021, DOI: 10.1016/j.artint.2021.103560.

In video games and robotics, one often discretizes a continuous 2D environment into a regular grid with blocked and unblocked cells and then finds shortest paths for the agents on the resulting grid graph. Shortest grid paths, of course, are not necessarily true shortest paths in the continuous 2D environment. In this article, we therefore study how much longer a shortest grid path can be than a corresponding true shortest path on all regular grids with blocked and unblocked cells that tessellate continuous 2D environments. We study 5 different vertex connectivities that result from both different tessellations and different definitions of the neighbors of a vertex. Our path-length analysis yields either tight or asymptotically tight worst-case bounds in a unified framework. Our results show that the percentage by which a shortest grid path can be longer than a corresponding true shortest path decreases as the vertex connectivity increases. Our path-length analysis is topical because it determines the largest path-length reduction possible for any-angle path-planning algorithms (and thus their benefit), a class of path-planning algorithms in artificial intelligence and robotics that has become popular.
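A quick way to get a feel for these bounds is to compare a shortest grid path against the straight-line distance in an obstacle-free environment. The sketch below (my own toy code, not the paper's analysis) runs Dijkstra on an 8-connected grid with unit and sqrt(2) edge costs and reports the ratio, which exceeds 1 whenever the goal is off-axis and off-diagonal.

```python
import heapq, math

def grid_path_length(width, height, start, goal, blocked=frozenset()):
    """Dijkstra on an 8-connected grid (cost 1 orthogonal, sqrt(2) diagonal)."""
    moves = [(dx, dy, math.hypot(dx, dy))
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, (x, y) = heapq.heappop(queue)
        if (x, y) == goal:
            return d
        if d > dist.get((x, y), math.inf):
            continue
        for dx, dy, cost in moves:
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in blocked:
                nd = d + cost
                if nd < dist.get(nxt, math.inf):
                    dist[nxt] = nd
                    heapq.heappush(queue, (nd, nxt))
    return math.inf

start, goal = (0, 0), (10, 4)
grid_len = grid_path_length(20, 20, start, goal)
true_len = math.hypot(goal[0] - start[0], goal[1] - start[1])
print(f"grid/true length ratio: {grid_len / true_len:.3f}")  # > 1 for off-axis goals
```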

Building explanations for AI plans by modifying the user’s model so that those plans become optimal within it

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati, Foundations of explanations as model reconciliation, Artificial Intelligence, Volume 301, 2021, DOI: 10.1016/j.artint.2021.103558.

Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where users have domain and task models that differ from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a “model reconciliation problem” (MRP), where the AI system in effect suggests changes to the user’s mental model so as to make its plan be optimal with respect to that changed user model. We will study the properties of such explanations, present algorithms for automatically computing them, discuss relevant extensions to the basic framework, and evaluate the performance of the proposed algorithms both empirically and through controlled user studies.
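A toy sketch of the model reconciliation idea, under my own drastic simplifications (models reduced to action-cost tables and plans to action sequences, nothing like the paper's algorithms): an explanation is a smallest set of cost corrections copied from the system model into the user model that makes the system's plan optimal among the plans the user is considering.

```python
from itertools import combinations

def minimal_explanation(system_costs, user_costs, system_plan, user_plans):
    """Return a smallest set of cost corrections that makes the system's plan optimal."""
    def plan_cost(plan, costs):
        return sum(costs[a] for a in plan)

    differing = [a for a in system_costs if system_costs[a] != user_costs[a]]
    for k in range(len(differing) + 1):
        for update in combinations(differing, k):
            patched = dict(user_costs, **{a: system_costs[a] for a in update})
            if all(plan_cost(system_plan, patched) <= plan_cost(p, patched)
                   for p in user_plans):
                return update          # smallest model change that reconciles the plans
    return None

system = {"fly": 3, "drive": 10}
user = {"fly": 20, "drive": 10}        # the user believes flying is expensive
print(minimal_explanation(system, user, ["fly"], [["fly"], ["drive"]]))
# ('fly',) i.e. "in my model, fly costs 3, not 20"
```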

Safety in MDPs by measuring the probability of reaching dangerous states

Rafal Wisniewski, Luminita-Manuela Bujorianu, Safety of stochastic systems: An analytic and computational approach, Automatica, Volume 133, 2021, DOI: 10.1016/j.automatica.2021.109839.

We refine the concept of stochastic reach avoidance for a general class of Markov processes by introducing a threshold p for the reaching probability. This new problem is called p-safety, and it aims to ensure that the given process reaches a forbidden set before leaving its ‘working’ state space with a probability of less than p. In the situation when an initial probability measure characterizes the initial states, a variant of p-safety is put forward. We call this form of safety weak p-safety. In this work, we characterize both p-safety and weak p-safety and show how to compute them. We employ semi-definite programming to compute p-safety and linear programming to compute weak p-safety. To get to this point, we use certificates of positivity of polynomials translated into the sum of squares and the Bernstein forms.
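For intuition, in the special case of a finite-state Markov chain the p-safety question reduces to a linear system for hitting probabilities (the paper of course works with general Markov processes and sum-of-squares certificates; this simplified sketch is mine).

```python
import numpy as np

def hitting_probabilities(P, forbidden, exits):
    """Probability of reaching `forbidden` before leaving through `exits`.

    For a finite-state chain the hitting probabilities h satisfy h = 1 on the
    forbidden states, h = 0 on the safe exit states, and h(s) = sum_t P[s, t] h(t)
    on the remaining (transient) states; this is just a linear system.
    """
    n = P.shape[0]
    A = np.eye(n) - P
    b = np.zeros(n)
    for s in forbidden:
        A[s] = 0.0
        A[s, s] = 1.0
        b[s] = 1.0
    for s in exits:
        A[s] = 0.0
        A[s, s] = 1.0
        b[s] = 0.0
    return np.linalg.solve(A, b)

# 4-state random walk: state 3 is forbidden, state 0 is the safe exit
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
h = hitting_probabilities(P, forbidden=[3], exits=[0])
print(h[1], h[1] < 0.3)   # h[1] = 1/3, so state 1 is not p-safe for p = 0.3
```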

Example of a non-NN approach that produces better results in classification tasks than NNs

Jiang, Zhiying and Yang, Matthew and Tsirlin, Mikhail and Tang, Raphael and Dai, Yiqin and Lin, Jimmy, Low-Resource Text Classification: A Parameter-Free Classification Method with Compressors, Findings of the Association for Computational Linguistics: ACL 2023. URL.

Deep neural networks (DNNs) are often used for text classification due to their high accuracy. However, DNNs can be computationally intensive, requiring millions of parameters and large amounts of labeled data, which can make them expensive to use, to optimize, and to transfer to out-of-distribution (OOD) cases in practice. In this paper, we propose a non-parametric alternative to DNNs that’s easy, lightweight, and universal in text classification: a combination of a simple compressor like gzip with a k-nearest-neighbor classifier. Without any training parameters, our method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also excels in the few-shot setting, where labeled data are too scarce to train DNNs effectively.
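The method itself is short enough to sketch in a few lines (my own minimal version, following the description in the abstract): the normalized compression distance computed with gzip plays the role of the metric for a k-nearest-neighbor vote.

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    """Normalized compression distance with gzip as the compressor."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text, train, k=3):
    """k-nearest-neighbor vote over NCD to the labeled training texts."""
    neighbours = sorted(train, key=lambda item: ncd(text, item[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

train = [("the match ended two nil after a late goal", "sport"),
         ("the striker scored twice in the final", "sport"),
         ("the central bank raised interest rates again", "finance"),
         ("markets fell as inflation data surprised investors", "finance")]
print(classify("the keeper saved a penalty in extra time", train))
```

On such tiny toy data the vote is of course noisy; the paper's point is that with realistic training sets this parameter-free pipeline becomes competitive with trained DNNs.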

Building supervisors for POMDPs under logical constraints

Bo Wu, Xiaobin Zhang, Hai Lin, Supervisor synthesis of POMDP via automata learning, Automatica, Volume 129, 2021, DOI: 10.1016/j.automatica.2021.109654.

Partially observable Markov decision process (POMDP) is a comprehensive modeling framework that captures uncertainties from sensing noises, actuation errors, and environments. Traditional POMDP planning finds an optimal policy for reward maximization. However, for safety-critical applications, it is often necessary to guarantee system performance described by high-level temporal logic specifications. Hence, we are motivated to develop a supervisor synthesis framework for POMDP with respect to given formal specifications. We propose an iterative learning-based algorithm, which can learn a permissive policy in the form of a deterministic finite automaton. A human–robot collaboration case study validates the proposed algorithm.
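A minimal sketch of the kind of artifact such a synthesis loop produces, i.e. the shape of a permissive policy as a deterministic finite automaton (this is not the paper's learning algorithm, and the scenario and names below are made up for illustration).

```python
class DFASupervisor:
    """Permissive policy in the form of a deterministic finite automaton.

    The DFA tracks the observation history; in each automaton state only a
    subset of the POMDP's actions is permitted.
    """
    def __init__(self, transitions, allowed, initial):
        self.transitions = transitions   # (dfa_state, observation) -> dfa_state
        self.allowed = allowed           # dfa_state -> set of permitted actions
        self.state = initial

    def observe(self, observation):
        self.state = self.transitions[(self.state, observation)]

    def permitted_actions(self):
        return self.allowed[self.state]

# toy human-robot scenario: after observing "near_human", only the slow action stays allowed
sup = DFASupervisor(
    transitions={("q0", "clear"): "q0", ("q0", "near_human"): "q1",
                 ("q1", "clear"): "q0", ("q1", "near_human"): "q1"},
    allowed={"q0": {"move_fast", "move_slow"}, "q1": {"move_slow"}},
    initial="q0")
sup.observe("near_human")
print(sup.permitted_actions())   # {'move_slow'}
```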

State of the art on the convergence of the Monte Carlo Exploring Starts RL method, a kind of policy iteration

Jun Liu, On the convergence of reinforcement learning with Monte Carlo Exploring Starts, Automatica, Volume 129, 2021, DOI: 10.1016/j.automatica.2021.109693.

A basic simulation-based reinforcement learning algorithm is the Monte Carlo Exploring Starts (MCES) method, also known as optimistic policy iteration, in which the value function is approximated by simulated returns and a greedy policy is selected at each iteration. The convergence of this algorithm in the general setting has been an open question. In this paper, we investigate the convergence of this algorithm for the case with undiscounted costs, also known as the stochastic shortest path problem. The results complement existing partial results on this topic and thereby help further settle the open problem.
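For reference, the textbook form of MCES whose convergence is being studied looks roughly like this (my own sketch, with a tiny assumed environment interface `states` / `actions` / `step`, and an episode-length cap since termination of the greedy policy is precisely the delicate point in the undiscounted setting).

```python
import random
from collections import defaultdict

def mces(env, episodes=2000, gamma=1.0, max_steps=200):
    """Monte Carlo Exploring Starts: exploring starts + greedy policy + return averaging."""
    Q = defaultdict(float)
    counts = defaultdict(int)
    policy = {s: random.choice(env.actions) for s in env.states}

    for _ in range(episodes):
        s, a = random.choice(env.states), random.choice(env.actions)  # exploring start
        episode = []
        for _ in range(max_steps):        # cap: the greedy policy may never terminate
            s_next, reward, done = env.step(s, a)
            episode.append((s, a, reward))
            if done:
                break
            s, a = s_next, policy[s_next]
        G = 0.0
        for s, a, reward in reversed(episode):
            G = reward + gamma * G        # accumulate the (undiscounted if gamma = 1) return
            counts[(s, a)] += 1
            Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]
            policy[s] = max(env.actions, key=lambda b: Q[(s, b)])
    return policy

class ChainEnv:
    """Tiny shortest-path-style chain: states 0..3, goal at 4, cost 1 per step."""
    states = [0, 1, 2, 3]
    actions = [-1, +1]
    def step(self, s, a):
        s_next = min(max(s + a, 0), 4)
        return s_next, -1.0, s_next == 4

print(mces(ChainEnv()))   # typically learns to move right (+1) in every state
```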

Approximating the value function of RL through Max-Plus algebra

Vinicius Mariano Gonçalves, Max-plus approximation for reinforcement learning, Automatica, Volume 129, 2021, DOI: 10.1016/j.automatica.2021.109623.

Max-Plus Algebra has been applied in several contexts, especially in the control of discrete-event systems. In this article, we discuss another application closely related to control: the use of Max-Plus algebra concepts in the context of reinforcement learning. Max-Plus Algebra and reinforcement learning are strongly linked due to the latter’s dependence on the Bellman Equation which, in some cases, is a linear Max-Plus equation. This fact motivates the application of Max-Plus algebra to approximate the value function, central to the Bellman Equation and thus also to reinforcement learning. This article proposes conditions under which this approach can be carried out in a simple way that follows the philosophy of reinforcement learning: explore the environment, receive the rewards and use this information to improve the knowledge of the value function. The proposed conditions are related to two matrices and impose on them a relationship that is analogous to the concept of weak inverses in traditional algebra.
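To make the notation concrete, here is a small sketch (my own, with an illustrative choice of basis functions rather than the paper's construction) of a max-plus "linear" value-function approximation, V(x) ≈ max_i (w_i + phi_i(x)), together with the max-plus matrix-vector product that replaces the usual one.

```python
import numpy as np

def maxplus_matvec(A, w):
    """Max-plus matrix-vector product: (A ⊗ w)_i = max_j (A[i, j] + w[j])."""
    return np.max(A + w[None, :], axis=1)

def approx_value(x, centers, widths, w):
    """Value-function approximation as a max-plus combination of basis functions.

    V(x) ≈ max_i (w[i] + phi_i(x)), here with concave quadratic bumps phi_i
    centered at `centers` (an illustrative choice of basis).
    """
    phi = -((x - centers) ** 2) / widths
    return np.max(w + phi)

centers = np.linspace(-1.0, 1.0, 5)
widths = np.full(5, 0.2)
w = np.array([-2.0, -0.5, 0.0, -0.5, -2.0])      # max-plus coefficients
print(approx_value(0.1, centers, widths, w))

A = np.array([[0.0, -1.0], [-1.0, 0.0]])
print(maxplus_matvec(A, np.array([2.0, 5.0])))   # [4. 5.]
```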