Monthly Archives: June 2017


State of the art and historical background of the classical divergence between AI and robotics

Kanna Rajan, Alessandro Saffiotti, Towards a science of integrated AI and Robotics, Artificial Intelligence, Volume 247, June 2017, Pages 1-9, ISSN 0004-3702, DOI: 10.1016/j.artint.2017.03.003.

The early promise of the impact of machine intelligence did not involve the partitioning of the nascent field of Artificial Intelligence. The founders of AI envisioned the notion of embedded intelligence as being conjoined between perception, reasoning and actuation. Yet over the years the fields of AI and Robotics drifted apart. Practitioners of AI focused on problems and algorithms abstracted from the real world. Roboticists, generally with a background in mechanical and electrical engineering, concentrated on sensori-motor functions. That divergence is slowly being bridged with the maturity of both fields and with the growing interest in autonomous systems. This special issue brings together the state of the art and practice of the emergent field of integrated AI and Robotics, and highlights the key areas along which this current evolution of machine intelligence is heading.

How “behaviour trees” generalize the subsumption architecture and some other control architecture frameworks

M. Colledanchise and P. Ögren, “How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees,” in IEEE Transactions on Robotics, vol. 33, no. 2, pp. 372-389, April 2017. DOI: 10.1109/TRO.2016.2633567.

Behavior trees (BTs) are a way of organizing the switching structure of a hybrid dynamical system (HDS), which was originally introduced in the computer game programming community. In this paper, we analyze how the BT representation increases the modularity of an HDS and how key system properties are preserved over compositions of such systems, in terms of combining two BTs into a larger one. We also show how BTs can be seen as a generalization of sequential behavior compositions, the subsumption architecture, and decision trees. These three tools are powerful but quite different, and the fact that they are unified in a natural way in BTs might be a reason for their popularity in the gaming community. We conclude the paper by giving a set of examples illustrating how the proposed analysis tools can be applied to robot control BTs.
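To make the composition idea concrete, here is a minimal behaviour-tree sketch in Python, not the paper's formal hybrid-dynamical-system treatment: leaves return one of three statuses, Sequence and Fallback composites tick their children in order, and a subsumption-style priority layering falls out of the Fallback node. The avoid_obstacle and go_to_goal actions are hypothetical placeholders.

```python
# Minimal behaviour-tree sketch: three-valued node statuses plus two composites.
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Succeeds only if all children succeed, ticked left to right."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status          # FAILURE or RUNNING propagates up
        return Status.SUCCESS

class Fallback:
    """Tries children in priority order until one does not fail."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status          # SUCCESS or RUNNING propagates up
        return Status.FAILURE

# Subsumption-style layering: the higher-priority behaviour "subsumes" the lower
# one simply by being an earlier child of a Fallback node. Composing two trees
# amounts to inserting one as a subtree of the other.
avoid_obstacle = Action(lambda: Status.FAILURE)   # hypothetical: no obstacle detected
go_to_goal = Action(lambda: Status.RUNNING)       # hypothetical: still moving
controller = Fallback([avoid_obstacle, go_to_goal])
print(controller.tick())                          # Status.RUNNING
```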

Modelling hierarchical stochastic signals (i.e., signals that decompose hierarchically into sub-signals)

Truyen Tran, Dinh Phung, Hung Bui, Svetha Venkatesh, Hierarchical semi-Markov conditional random fields for deep recursive sequential data, Artificial Intelligence, Volume 246, May 2017, Pages 53-85, ISSN 0004-3702, DOI: 10.1016/j.artint.2017.02.003.

We present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of linear-chain conditional random fields to model deep nested Markov processes. It is parameterised as a conditional log-linear model and has polynomial time algorithms for learning and inference. We derive algorithms for partially-supervised learning and constrained inference. We develop numerical scaling procedures that handle the overflow problem. We show that when depth is two, the HSCRF can be reduced to the semi-Markov conditional random fields. Finally, we demonstrate the HSCRF on two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. The HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
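As a rough illustration of the kind of object being generalised, the toy below scores segmentations of a short sequence with a flat (depth-one) log-linear semi-Markov model in Python. The label set and features are invented for the example, and the partition function is computed by brute force over segmentations rather than with the paper's polynomial-time algorithms.

```python
# Toy log-linear semi-Markov scorer: labels are assigned to contiguous spans
# rather than single positions, and the score is linear in segment features.
import itertools, math

LABELS = ["NP", "VP"]          # hypothetical label set for illustration

def segment_score(tokens, start, end, label, prev_label, w):
    """Hypothetical segment-level features: span length and label transition."""
    feats = {
        ("len", label, end - start): 1.0,
        ("trans", prev_label, label): 1.0,
    }
    return sum(w.get(f, 0.0) * v for f, v in feats.items())

def all_segmentations(n):
    """Yield all ways to cut positions 0..n into contiguous segments."""
    for cuts in itertools.product([0, 1], repeat=n - 1):
        bounds, start = [], 0
        for i, c in enumerate(cuts, 1):
            if c:
                bounds.append((start, i))
                start = i
        bounds.append((start, n))
        yield bounds

def log_partition(tokens, w):
    """Brute-force log Z over all segmentations and labelings (exponential cost;
    the paper derives polynomial-time inference instead)."""
    scores = []
    n = len(tokens)
    for bounds in all_segmentations(n):
        for labels in itertools.product(LABELS, repeat=len(bounds)):
            s, prev = 0.0, "<s>"
            for (a, b), lab in zip(bounds, labels):
                s += segment_score(tokens, a, b, lab, prev, w)
                prev = lab
            scores.append(s)
    m = max(scores)
    return m + math.log(sum(math.exp(s - m) for s in scores))

print(log_partition(["the", "dog", "barks"], {("trans", "NP", "VP"): 1.0}))
```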

A summary of the Clarion cognitive architecture

Ron Sun, Anatomy of the Mind: a Quick Overview, Cognitive Computation, February 2017, Volume 9, Issue 1, pp 1–4, DOI: 10.1007/s12559-016-9444-2.

The recently published book, “Anatomy of the Mind,” explains psychological (cognitive) mechanisms, processes, and functionalities through a comprehensive computational theory of the human mind—that is, a cognitive architecture. The goal of the work has been to develop a unified framework and then to develop process-based mechanistic understanding of psychological phenomena within the unified framework. In this article, I will provide a quick overview of the work.

How very simple digital signal processing techniques, such as numerical filtering and linear interpolation, can provide PDF estimates with better statistical properties than the histogram and close to, or better than, those obtained with kernel-based estimators

P. Carbone, D. Petri and K. Barbé, “Nonparametric Probability Density Estimation via Interpolation Filtering,” in IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 4, pp. 681-690, April 2017. DOI: 10.1109/TIM.2017.2657398.

In this paper, we discuss nonparametric estimation of the probability density function (PDF) of a univariate random variable. This problem has been the subject of a vast amount of scientific literature in many domains: statisticians are mainly interested in the analysis of the properties of proposed estimators, while engineers treat the histogram as a ready-to-use tool for data set analysis. By considering histogram data as a numerical sequence, a simple approach for PDF estimation is presented in this paper. It is based on basic notions related to the reconstruction of a continuous-time signal from a sequence of samples. When estimating continuous PDFs, it is shown that the proposed approach is as accurate as kernel-based estimators, widely adopted in the statistical literature. Conversely, it can provide better accuracy when the PDF to be estimated exhibits a discontinuous behavior. The main statistical properties of the proposed estimators are derived and then verified by simulations related to the common cases of normal and uniform density functions. The obtained results are also used to derive optimal, i.e., minimum integral of the mean square error, estimators.
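The basic idea can be sketched in a few lines of Python (this is only the generic histogram-plus-interpolation recipe, not the paper's specific filter design or optimal estimator): treat the normalized histogram as a sampled sequence, reconstruct a continuous estimate by interpolation, and compare it with a kernel estimator on a known density.

```python
# Hedged sketch: histogram treated as a sampled sequence, then interpolated.
import numpy as np
from scipy.interpolate import interp1d
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
x = rng.normal(size=2000)

# Histogram as a uniformly sampled sequence of density values.
counts, edges = np.histogram(x, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Linear interpolation between bin centres gives a continuous estimate.
pdf_interp = interp1d(centers, counts, kind="linear",
                      bounds_error=False, fill_value=0.0)

# Reference kernel estimator.
pdf_kde = gaussian_kde(x)

grid = np.linspace(-4, 4, 401)
true_pdf = norm.pdf(grid)
print("interp grid MSE ≈", np.mean((pdf_interp(grid) - true_pdf) ** 2))
print("kde    grid MSE ≈", np.mean((pdf_kde(grid) - true_pdf) ** 2))
```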

On the current limitations of robotics research concerning the generalization of reported results to different set-ups

Francesco Amigoni, Matteo Luperto, Viola Schiaffonati, Toward generalization of experimental results for autonomous robots, Robotics and Autonomous Systems, Volume 90, April 2017, Pages 4-14, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.08.016.

In this paper we discuss some issues in the experimental evaluation of intelligent autonomous systems, focusing on systems, like autonomous robots, operating in physical environments. We argue that one of the weaknesses of current experimental practices is the low degree of generalization of experimental results, meaning that knowing the performance a robot system obtains in a test setting does not provide much information about the performance the same system could achieve in other settings. We claim that one of the main obstacles to achieve generalization of experimental results in autonomous robotics is the low degree of representativeness of the selected experimental settings. We survey and discuss the degree of representativeness of experimental settings used in a significant sample of current research and we propose some strategies to overcome the emerging limitations.

Robots that pre-compute a number of possible behaviours (in simulation), learn how those behaviours actually perform (propagating the performance measures to similar behaviours through Gaussian Process regression), and select the best one in each situation (through Bayesian optimization), thus coping with varying environments and with damage to the robot

A. Cully, et al. Robots that can adapt like animals, Nature, 521 (2015), pp. 503–507, DOI: 10.1038/nature14422.

Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot think outside the box to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
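A hedged sketch of the adaptation loop in Python, with toy numbers rather than the paper's MAP-Elites behaviour map or its exact kernel and acquisition settings: a simulated performance map serves as the prior, a Gaussian process models how the damaged robot's real performance deviates from it, and an upper-confidence-bound rule picks the next behaviour to try. The behaviour descriptors, damage model, and stopping threshold below are all hypothetical.

```python
# Toy adaptation loop: prior map from simulation + GP over observed residuals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Behaviour descriptors (e.g. gait parameters) and their simulated performance.
behaviours = rng.uniform(0, 1, size=(200, 2))                  # hypothetical descriptors
prior_map = 1.0 - np.linalg.norm(behaviours - 0.5, axis=1)     # simulated scores

def real_performance(b):
    """Hypothetical damaged robot: a whole region of gaits now performs badly."""
    penalty = 0.8 if b[0] > 0.6 else 0.0
    return 1.0 - np.linalg.norm(b - 0.5) - penalty + rng.normal(0, 0.02)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
tried_idx, tried_residual = [], []

for trial in range(10):
    if tried_idx:
        gp.fit(behaviours[tried_idx], np.array(tried_residual))
        mu, sigma = gp.predict(behaviours, return_std=True)
    else:
        mu, sigma = np.zeros(len(behaviours)), np.ones(len(behaviours))
    # Expected performance = simulated prior + learned correction; UCB acquisition.
    ucb = prior_map + mu + 0.5 * sigma
    i = int(np.argmax(ucb))
    perf = real_performance(behaviours[i])
    tried_idx.append(i)
    tried_residual.append(perf - prior_map[i])
    if perf > 0.4:        # hypothetical stopping threshold
        print(f"trial {trial}: behaviour {i} works, performance {perf:.2f}")
        break
```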

How to improve statistical results obtained from limited set-ups through active sampling, and a nice review of possible pitfalls in conducting statistical research (including a mention of “pre-registration” of hypotheses and analysis plans to be peer-reviewed before the results are submitted)

Romy Lorenz, Adam Hampshire, Robert Leech, Neuroadaptive Bayesian Optimization and Hypothesis Testing, Trends in Cognitive Sciences, Volume 21, Issue 3, March 2017, Pages 155-167, ISSN 1364-6613, DOI: 10.1016/j.tics.2017.01.006.

Cognitive neuroscientists are often interested in broad research questions, yet use overly narrow experimental designs by considering only a small subset of possible experimental conditions. This limits the generalizability and reproducibility of many research findings. Here, we propose an alternative approach that resolves these problems by taking advantage of recent developments in real-time data analysis and machine learning. Neuroadaptive Bayesian optimization is a powerful strategy to efficiently explore more experimental conditions than is currently possible with standard methodology. We argue that such an approach could broaden the hypotheses considered in cognitive science, improving the generalizability of findings. In addition, Bayesian optimization can be combined with preregistration to cover exploration, mitigating researcher bias more broadly and improving reproducibility.
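As a toy version of the closed-loop idea in Python (nothing like the authors' real-time neuroimaging pipeline): each point on a grid of experimental conditions is a candidate stimulus, a Gaussian process models the observed response, and expected improvement chooses which condition to present next. The design space and response function are invented for the example.

```python
# Toy closed-loop experiment selection via Bayesian optimization.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
conditions = np.linspace(0, 1, 50).reshape(-1, 1)   # hypothetical 1-D design space

def run_trial(c):
    """Hypothetical noisy response of the participant to condition c."""
    return np.exp(-((c - 0.7) ** 2) / 0.02) + rng.normal(0, 0.1)

X, y = [], []
gp = GaussianProcessRegressor(kernel=Matern(length_scale=0.2, nu=2.5), alpha=0.01)
for trial in range(15):
    if X:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(conditions, return_std=True)
        best = max(y)
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        c = conditions[int(np.argmax(ei))]
    else:
        c = conditions[rng.integers(len(conditions))]          # random first trial
    X.append(c)
    y.append(run_trial(float(c[0])))

print("condition with highest observed response:", float(X[int(np.argmax(y))][0]))
```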

Value iteration applied to control systems when the model of the plant is replaced by data acquired from the plant

Yongqiang Li, Zhongsheng Hou, Yuanjing Feng, Ronghu Chi, Data-driven approximate value iteration with optimality error bound analysis, Automatica, Volume 78, April 2017, Pages 79-87, ISSN 0005-1098, DOI: 10.1016/j.automatica.2016.12.019.

Features of the data-driven approximate value iteration (AVI) algorithm, proposed in Li et al. (2014) for dealing with the optimal stabilization problem, include that only process data is required and that the estimate of the domain of attraction for the closed-loop is enlarged. However, the controller generated by the data-driven AVI algorithm is an approximate solution for the optimal control problem. In this work, a quantitative analysis result on the error bound between the optimal cost and the cost under the designed controller is given. This error bound is determined by the approximation error of the estimation for the optimal cost and the approximation error of the controller function estimator. The first one is concretely determined by the approximation error of the data-driven dynamic programming (DP) operator to the DP operator and the approximation error of the value function estimator. These three approximation errors are zero when the data set of the plant is sufficient and infinitely complete, and the number of samples in the state space of interest is infinite. This means that the cost under the designed controller equals the optimal cost when the number of iterations is infinite.

NOTE: Another paper on the same issue in the same journal.
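A hedged toy of data-driven (fitted) value iteration in Python, not the specific AVI algorithm of Li et al. (2014) analysed above: the plant model is replaced by a batch of sampled transitions, the value function is approximated on a state grid, and each sweep backs up costs using only the data. The plant, stage cost, and neighbourhood radius are chosen purely for illustration.

```python
# Toy fitted value iteration driven only by sampled transitions.
import numpy as np

rng = np.random.default_rng(3)

# Plant is unknown to the algorithm; it is used here only to generate data.
def plant(x, u):
    return 0.9 * x + u

states = rng.uniform(-1, 1, 5000)
actions = rng.uniform(-1, 1, 5000)
next_states = plant(states, actions)
costs = states ** 2 + 0.1 * actions ** 2      # stage cost recorded with the data

grid = np.linspace(-1, 1, 101)
V = np.zeros_like(grid)
gamma = 0.95

def V_of(x):
    """Piecewise-linear value-function approximation on the grid."""
    return np.interp(np.clip(x, -1, 1), grid, V)

for sweep in range(200):
    # For every grid state, back up over the data points whose state is nearby.
    newV = np.empty_like(V)
    for i, xg in enumerate(grid):
        near = np.abs(states - xg) < 0.05
        q = costs[near] + gamma * V_of(next_states[near])
        newV[i] = q.min() if q.size else V[i]
    if np.max(np.abs(newV - V)) < 1e-6:
        break
    V = newV

def controller(x):
    """Greedy controller from data: pick the nearby sampled action with lowest Q."""
    near = np.abs(states - x) < 0.05
    q = costs[near] + gamma * V_of(next_states[near])
    return actions[near][int(np.argmin(q))]

print("u(0.8) ≈", controller(0.8))
```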

A study of the influence of uncertain, stochastic delays on the stability of LTI SISO systems

T. Qi, J. Zhu and J. Chen, “Fundamental Limits on Uncertain Delays: When Is a Delay System Stabilizable by LTI Controllers?,” in IEEE Transactions on Automatic Control, vol. 62, no. 3, pp. 1314-1328, March 2017. DOI: 10.1109/TAC.2016.2584007.

This paper concerns the stabilization of linear time-invariant (LTI) systems subject to uncertain, possibly time-varying delays. The fundamental issue under investigation, referred to as the delay margin problem, addresses the question: What is the largest range of delay such that there exists a single LTI feedback controller capable of stabilizing all the plants for delays within that range? Drawing upon analytic interpolation and rational approximation techniques, we derive fundamental bounds on the delay margin, within which the delay plant is guaranteed to be stabilizable by a certain LTI output feedback controller. Our contribution is threefold. First, for single-input single-output (SISO) systems with an arbitrary number of plant unstable poles and nonminimum phase zeros, we provide an explicit, computationally efficient bound on the delay margin, which requires computing only the largest real eigenvalue of a constant matrix. Second, for multi-input multi-output (MIMO) systems, we show that estimates on the variation ranges of multiple delays can be obtained by solving LMI problems, and further, by finding bounds on the radius of delay variations. Third, we show that these bounds and estimates can be extended to systems subject to time-varying delays. When specialized to more specific cases, e.g., to plants with one unstable pole but possibly multiple nonminimum phase zeros, our results give rise to analytical expressions exhibiting explicit dependence of the bounds and estimates on the pole and zeros, thus demonstrating how, fundamentally, unstable poles and nonminimum phase zeros may limit the range of delays over which a plant can be stabilized by an LTI controller.
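A numerical illustration of the flavour of the problem in Python, restricted to proportional control and a second-order Padé approximation of the delay (so it only gives a conservative picture of what a general LTI controller could achieve, and is not the paper's eigenvalue bound): for the unstable plant P(s) = 1/(s - p), scan delays and gains and report the largest delay for which some gain still keeps all closed-loop poles in the left half plane.

```python
# Illustrative delay scan for P(s) = 1/(s - p) under proportional feedback.
import numpy as np

def stabilizable(p, tau, gains):
    """True if some gain k stabilizes 1/(s-p) with delay tau (2nd-order Padé)."""
    # e^{-s tau} ~ (tau^2 s^2 - 6 tau s + 12) / (tau^2 s^2 + 6 tau s + 12)
    num_d = np.array([tau**2, -6 * tau, 12.0])
    den_d = np.array([tau**2, 6 * tau, 12.0])
    for k in gains:
        # Characteristic polynomial of the closed loop: (s - p) * den_d + k * num_d
        char = np.polyadd(np.polymul([1.0, -p], den_d), k * num_d)
        if np.all(np.roots(char).real < 0):
            return True
    return False

p = 1.0                                    # unstable pole (illustrative value)
gains = np.linspace(0.1, 20.0, 400)
for tau in np.arange(0.05, 2.0, 0.05):
    if not stabilizable(p, tau, gains):
        print(f"proportional control fails beyond delay ≈ {tau:.2f} for p = {p}")
        break
```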