
Evidence that the human brain has quantifying properties (i.e., an ability to discriminate between sets of different sizes) as a result of evolution, but that numerical cognition is a result of culture

Rafael E. Núñez, Is There Really an Evolved Capacity for Number?, Trends in Cognitive Sciences, Volume 21, Issue 6, June 2017, Pages 409-424, ISSN 1364-6613, DOI: 10.1016/j.tics.2017.03.005.

Humans and other species have biologically endowed abilities for discriminating quantities. A widely accepted view sees such abilities as an evolved capacity specific for number and arithmetic. This view, however, is based on an implicit teleological rationale, builds on inaccurate conceptions of biological evolution, downplays human data from non-industrialized cultures, overinterprets results from trained animals, and is enabled by loose terminology that facilitates teleological argumentation. A distinction between quantical (e.g., quantity discrimination) and numerical (exact, symbolic) cognition is needed: quantical cognition provides biologically evolved preconditions for numerical cognition but it does not scale up to number and arithmetic, which require cultural mediation. The argument has implications for debates about the origins of other special capacities – geometry, music, art, and language.

Simultaneous localization and synchronization (SLAS) for multiple agents, with a nice state-of-the-art review covering SLAS for both individual agents and multiple agents

B. Etzlinger, F. Meyer, F. Hlawatsch, A. Springer and H. Wymeersch, “Cooperative Simultaneous Localization and Synchronization in Mobile Agent Networks,” in IEEE Transactions on Signal Processing, vol. 65, no. 14, pp. 3587-3602, 15 July 2017. DOI: 10.1109/TSP.2017.2691665.

Cooperative localization in agent networks based on interagent time-of-flight measurements is closely related to synchronization. To leverage this relation, we propose a Bayesian factor graph framework for cooperative simultaneous localization and synchronization (CoSLAS). This framework is suited to mobile agents and time-varying local clock parameters. Building on the CoSLAS factor graph, we develop a distributed (decentralized) belief propagation algorithm for CoSLAS in the practically important case of an affine clock model and asymmetric time stamping. Our algorithm is compatible with real-time operation and a time-varying network connectivity. To achieve high accuracy at reduced complexity and communication cost, the algorithm combines particle implementations with parametric message representations and takes advantage of a conditional independence property. Simulation results demonstrate the good performance of the proposed algorithm in a challenging scenario with time-varying network connectivity.
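To see why localization and synchronization are coupled, consider a generic affine clock model (the notation below is an assumption of this sketch, not necessarily the paper's):

c_i(t) = \alpha_i t + \beta_i

A message sent by agent i at true time t and received by agent j after the time of flight is time-stamped with

c_i(t) \ \text{at the sender}, \qquad c_j\!\left(t + \frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert}{v}\right) \ \text{at the receiver},

where p_i and p_j are the agents' positions and v is the propagation speed. Each asymmetric time-stamp pair therefore constrains positions and clock parameters jointly, which is the relation the CoSLAS factor graph builds on.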

Modelling the intrinsic complexity of problem solving in exams

A. Shoufan, “Toward Modeling the Intrinsic Complexity of Test Problems,” in IEEE Transactions on Education, vol. 60, no. 2, pp. 157-163, May 2017. DOI: 10.1109/TE.2016.2611666.

The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of solving time. Such a model allows prediction of the time to solve new problems in the same class and helps instructors develop more reliable test problems. A complexity model, furthermore, enhances understanding of the problem and can point to new aspects interesting for education and research. Students can use complexity models to assess and improve their learning level. The approach is explained using the K-map minimization problem as a case study. The implications of this research for other problems in electrical and computer engineering education are highlighted. An important aim of this paper is to stimulate future research in this area. An ideal outcome of such research is to provide complexity models for many, or even all, relevant problem classes in various electrical and computer engineering courses.
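As a rough illustration of the kind of model such an approach aims at (the complexity features and the data below are assumptions of this sketch, not the paper's actual model), one can regress solving time on candidate features of each problem instance, e.g., for K-map minimization, the number of variables and the number of prime implicants:

# Sketch: fit a linear model of solving time vs. assumed complexity features.
# The features (n_vars, n_prime_implicants) and the timing data are
# illustrative stand-ins, not the paper's complexity model.
import numpy as np

# One row per test problem: [number of variables, number of prime implicants].
features = np.array([[3, 2], [3, 4], [4, 3], [4, 6], [5, 5], [5, 8]], dtype=float)
# Average observed solving time per problem, in minutes (made-up data).
times = np.array([2.1, 3.4, 3.0, 5.2, 4.8, 7.5])

# Least-squares fit of time ~ w0 + w1*n_vars + w2*n_implicants.
X = np.hstack([np.ones((len(features), 1)), features])
w, *_ = np.linalg.lstsq(X, times, rcond=None)

# Predict the solving time of a new problem with 4 variables and 5 prime implicants.
new_problem = np.array([1.0, 4.0, 5.0])
print("predicted minutes:", new_problem @ w)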

Personalizing automatically generated assessments for students in order to minimize plagiarism: the case of programming

S. Manoharan, “Personalized Assessment as a Means to Mitigate Plagiarism,” in IEEE Transactions on Education, vol. 60, no. 2, pp. 112-119, May 2017. DOI: 10.1109/TE.2016.2604210.

Although every educational institution has a code of academic honesty, they still encounter incidents of plagiarism. These are difficult and time-consuming to detect and deal with. This paper explores the use of personalized assessments with the goal of reducing incidents of plagiarism, proposing a personalized assessment software framework through which each student receives a unique problem set. The framework not only auto-generates the problem set but also auto-marks the solutions when submitted. The experience of using this framework is discussed, from the perspective of both students and staff, particularly with respect to its ability to mitigate plagiarism. A comparison of personalized and traditional assignments in the same class confirms that the former had far fewer observed plagiarism incidents. Although personalized assessment may not be cost-effective in all courses (such as language courses), it still can be effective in areas such as mathematics, engineering, science, and computing. This paper concludes that personalized assessment is a promising approach to counter plagiarism.
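A minimal sketch of how such a framework might derive a unique but reproducible problem instance per student and auto-mark it (the seeding scheme and the toy problem are assumptions of this sketch, not the paper's framework):

# Sketch: derive per-student problem parameters deterministically from the
# student ID, so each student gets a different instance that the marker can
# regenerate when grading. The problem itself is a toy placeholder.
import hashlib
import random

def student_rng(student_id: str, assignment: str) -> random.Random:
    # Stable seed from student ID + assignment name.
    seed = int(hashlib.sha256(f"{student_id}:{assignment}".encode()).hexdigest(), 16)
    return random.Random(seed)

def generate_problem(student_id: str):
    rng = student_rng(student_id, "assignment-1")
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    return f"Compute {a} * {b}", a * b   # (statement, expected answer)

def auto_mark(student_id: str, submitted_answer: int) -> bool:
    _, expected = generate_problem(student_id)   # regenerate, then compare
    return submitted_answer == expected

statement, _ = generate_problem("s1234567")
print(statement, "->", auto_mark("s1234567", 42))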

Intrinsically motivated reinforcement learning for learning a model of the world

Todd Hester, Peter Stone, Intrinsically motivated model learning for developing curious robots, Artificial Intelligence, Volume 247, June 2017, Pages 170-186, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.05.002.

Reinforcement Learning (RL) agents are typically deployed to learn a specific, concrete task based on a pre-defined reward function. However, in some cases an agent may be able to gain experience in the domain prior to being given a task. In such cases, intrinsic motivation can be used to enable the agent to learn a useful model of the environment that is likely to help it learn its eventual tasks more efficiently. This paradigm fits robots particularly well, as they need to learn about their own dynamics and affordances which can be applied to many different tasks. This article presents the texplore with Variance-And-Novelty-Intrinsic-Rewards algorithm (texplore-vanir), an intrinsically motivated model-based RL algorithm. The algorithm learns models of the transition dynamics of a domain using random forests. It calculates two different intrinsic motivations from this model: one to explore where the model is uncertain, and one to acquire novel experiences that the model has not yet been trained on. This article presents experiments demonstrating that the combination of these two intrinsic rewards enables the algorithm to learn an accurate model of a domain with no external rewards and that the learned model can be used afterward to perform tasks in the domain. While learning the model, the agent explores the domain in a developing and curious way, progressively learning more complex skills. In addition, the experiments show that combining the agent’s intrinsic rewards with external task rewards enables the agent to learn faster than using external rewards alone. We also present results demonstrating the applicability of this approach to learning on robots.
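A rough sketch of the two intrinsic-reward signals the abstract describes, using the disagreement among the forest's trees as the uncertainty term and the distance to previously visited experiences as the novelty term (the combination weights and distance measure are illustrative choices of this sketch, not texplore-vanir's exact formulation):

# Sketch: variance- and novelty-based intrinsic rewards from a learned
# forward model, in the spirit of texplore-vanir.
import numpy as np

def variance_reward(tree_predictions: np.ndarray) -> float:
    # tree_predictions: (n_trees, state_dim) next-state predictions from the
    # random-forest model; disagreement among trees signals model uncertainty.
    return float(np.mean(np.var(tree_predictions, axis=0)))

def novelty_reward(state_action: np.ndarray, visited: np.ndarray) -> float:
    # Distance from the current (state, action) vector to the nearest
    # experience the model was trained on; far away means novel.
    if len(visited) == 0:
        return 1.0
    return float(np.min(np.linalg.norm(visited - state_action, axis=1)))

def intrinsic_reward(tree_predictions, state_action, visited, w_var=1.0, w_nov=1.0):
    return w_var * variance_reward(tree_predictions) + w_nov * novelty_reward(state_action, visited)

# Toy usage: 5 trees predicting a 3-dimensional next state.
preds = np.random.randn(5, 3)
sa = np.array([0.2, -0.1, 0.5, 1.0])    # state + action features
seen = np.random.randn(20, 4)           # previously visited (state, action) pairs
print(intrinsic_reward(preds, sa, seen))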

State of the art and historical background of the classical divergence between AI and robotics

Kanna Rajan, Alessandro Saffiotti, Towards a science of integrated AI and Robotics, Artificial Intelligence, Volume 247, June 2017, Pages 1-9, ISSN 0004-3702, DOI: 10.1016/j.artint.2017.03.003.

The early promise of the impact of machine intelligence did not involve the partitioning of the nascent field of Artificial Intelligence. The founders of AI envisioned the notion of embedded intelligence as being conjoined between perception, reasoning and actuation. Yet over the years the fields of AI and Robotics drifted apart. Practitioners of AI focused on problems and algorithms abstracted from the real world. Roboticists, generally with a background in mechanical and electrical engineering, concentrated on sensori-motor functions. That divergence is slowly being bridged with the maturity of both fields and with the growing interest in autonomous systems. This special issue brings together the state of the art and practice of the emergent field of integrated AI and Robotics, and highlights the key areas along which this current evolution of machine intelligence is heading.

How “behaviour trees” generalize the subsumption architecture and some other control architecture frameworks

M. Colledanchise and P. Ögren, “How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees,” in IEEE Transactions on Robotics, vol. 33, no. 2, pp. 372-389, April 2017. DOI: 10.1109/TRO.2016.2633567.

Behavior trees (BTs) are a way of organizing the switching structure of a hybrid dynamical system (HDS), which was originally introduced in the computer game programming community. In this paper, we analyze how the BT representation increases the modularity of an HDS and how key system properties are preserved over compositions of such systems, in terms of combining two BTs into a larger one. We also show how BTs can be seen as a generalization of sequential behavior compositions, the subsumption architecture, and decision trees. These three tools are powerful but quite different, and the fact that they are unified in a natural way in BTs might be a reason for their popularity in the gaming community. We conclude the paper by giving a set of examples illustrating how the proposed analysis tools can be applied to robot control BTs.
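As a quick illustration of the switching structure being discussed, here is a minimal behavior-tree skeleton with the two classical composition nodes (a generic textbook-style sketch, not the paper's formalism):

# Sketch: minimal behavior tree with Sequence and Fallback compositions.
# Each node's tick() returns one of SUCCESS, FAILURE, RUNNING.
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    # Ticks children left to right; fails or keeps running as soon as a child does.
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    # Ticks children left to right; succeeds or keeps running as soon as a child does.
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Toy tree: recharge if the battery is low, otherwise do the task.
battery_low = Action("battery_low?", lambda: SUCCESS)   # condition modelled as an action
recharge    = Action("recharge",     lambda: RUNNING)
do_task     = Action("do_task",      lambda: SUCCESS)
root = Fallback([Sequence([battery_low, recharge]), do_task])
print(root.tick())   # -> RUNNING (still recharging)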

Modelling hierarchical stochastic signals (i.e., signals hierarchically decomposable into sub-signals)

Truyen Tran, Dinh Phung, Hung Bui, Svetha Venkatesh, Hierarchical semi-Markov conditional random fields for deep recursive sequential data, Artificial Intelligence, Volume 246, May 2017, Pages 53-85, ISSN 0004-3702, DOI: 10.1016/j.artint.2017.02.003.

We present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of linear-chain conditional random fields to model deep nested Markov processes. It is parameterised as a conditional log-linear model and has polynomial time algorithms for learning and inference. We derive algorithms for partially-supervised learning and constrained inference. We develop numerical scaling procedures that handle the overflow problem. We show that when depth is two, the HSCRF can be reduced to the semi-Markov conditional random fields. Finally, we demonstrate the HSCRF on two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. The HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
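For background, the linear-chain conditional random field that the HSCRF generalises is the conditional log-linear model (standard CRF notation, not necessarily the paper's exact parameterisation):

p(y_{1:T} \mid x) = \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \Big),
\qquad
Z(x) = \sum_{y'_{1:T}} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, x, t) \Big)

Roughly speaking, the HSCRF keeps this log-linear form but defines its features over nested segments of states at several levels of a hierarchy rather than over single adjacent labels.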

A summary of the Clarion cognitive architecture

Ron Sun, Anatomy of the Mind: a Quick Overview, Cognitive Computation, February 2017, Volume 9, Issue 1, pp. 1-4, DOI: 10.1007/s12559-016-9444-2.

The recently published book, “Anatomy of the Mind,” explains psychological (cognitive) mechanisms, processes, and functionalities through a comprehensive computational theory of the human mind—that is, a cognitive architecture. The goal of the work has been to develop a unified framework and then to develop process-based mechanistic understanding of psychological phenomena within the unified framework. In this article, I will provide a quick overview of the work.

How very simple digital signal processing techniques, such as numerical filtering and linear interpolation, may provide PDF estimates with better statistical properties than the histogram and close to, or better than, those obtained with kernel-based estimators

P. Carbone, D. Petri and K. Barbé, “Nonparametric Probability Density Estimation via Interpolation Filtering,” in IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 4, pp. 681-690, April 2017. DOI: 10.1109/TIM.2017.2657398.

In this paper, we discuss nonparametric estimation of the probability density function (PDF) of a univariate random variable. This problem has been the subject of a vast amount of scientific literature in many domains, while statisticians are mainly interested in the analysis of the properties of proposed estimators, and engineers treat the histogram as a ready-to-use tool for a data set analysis. By considering histogram data as a numerical sequence, a simple approach for PDF estimation is presented in this paper. It is based on basic notions related to the reconstruction of a continuous-time signal from a sequence of samples. When estimating continuous PDFs, it is shown that the proposed approach is as accurate as kernel-based estimators, widely adopted in the statistical literature. Conversely, it can provide better accuracy when the PDF to be estimated exhibits a discontinuous behavior. The main statistical properties of the proposed estimators are derived and then verified by simulations related to the common cases of normal and uniform density functions. The obtained results are also used to derive optimal, i.e., minimum integral of the mean square error, estimators.
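A minimal numerical sketch of the underlying idea, treating histogram data as a sampled sequence and interpolating it into a continuous PDF estimate (the bin count, interpolation choice, and data are illustrative assumptions, not the paper's optimal design):

# Sketch: estimate a PDF by building a histogram, treating the normalized bin
# heights as samples of the density at the bin centers, and linearly
# interpolating between them (a crude stand-in for interpolation filtering).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=5000)

counts, edges = np.histogram(data, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Continuous estimate: linear interpolation of the bin-center density values.
x = np.linspace(edges[0], edges[-1], 1000)
pdf_hat = np.interp(x, centers, counts)

# Compare against the true standard-normal density.
true_pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print("max abs error:", np.max(np.abs(pdf_hat - true_pdf)))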