Category Archives: Artificial Intelligence

Abstracting and representing tasks taught through Learning from Demonstration, using Bayesian non-parametric time-series analysis (also a good review of both LfD and HMMs for time-series)

Scott Niekum, Sarah Osentoski, George Konidaris, Sachin Chitta, Bhaskara Marthi, Andrew G. Barto (2015), Learning grounded finite-state representations from unstructured demonstrations, The International Journal of Robotics Research, vol. 34, pp. 131-157. DOI: 10.1177/0278364914554471

Robots exhibit flexible behavior largely in proportion to their degree of knowledge about the world. Such knowledge is often meticulously hand-coded for a narrow class of tasks, limiting the scope of possible robot competencies. Thus, the primary limiting factor of robot capabilities is often not the physical attributes of the robot, but the limited time and skill of expert programmers. One way to deal with the vast number of situations and environments that robots face outside the laboratory is to provide users with simple methods for programming robots that do not require the skill of an expert. For this reason, learning from demonstration (LfD) has become a popular alternative to traditional robot programming methods, aiming to provide a natural mechanism for quickly teaching robots. By simply showing a robot how to perform a task, users can easily demonstrate new tasks as needed, without any special knowledge about the robot. Unfortunately, LfD often yields little knowledge about the world, and thus lacks robust generalization capabilities, especially for complex, multi-step tasks. We present a series of algorithms that draw from recent advances in Bayesian non-parametric statistics and control theory to automatically detect and leverage repeated structure at multiple levels of abstraction in demonstration data. The discovery of repeated structure provides critical insights into task invariants, features of importance, high-level task structure, and appropriate skills for the task. This culminates in the discovery of a finite-state representation of the task, composed of grounded skills that are flexible and reusable, providing robust generalization and transfer in complex, multi-step robotic tasks. These algorithms are tested and evaluated using a PR2 mobile manipulator, showing success on several complex real-world tasks, such as furniture assembly.
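A minimal sketch (not the paper's actual segmentation pipeline) of the final step the abstract describes: once demonstrations have been segmented into repeated skills, the transitions between those skills can be collected into a finite-state task graph. The Bayesian non-parametric segmentation itself is stubbed out here with hand-written skill labels, and the skill names are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical skill-label sequences, standing in for the output of the
# paper's Bayesian non-parametric segmentation of demonstration trajectories.
# Each list is one demonstration; each symbol is one discovered skill.
demonstrations = [
    ["reach", "grasp", "align", "screw", "release"],
    ["reach", "grasp", "screw", "release"],
    ["reach", "grasp", "align", "screw", "release"],
]

def build_task_graph(demos):
    """Count skill-to-skill transitions and return them as a finite-state graph."""
    graph = defaultdict(lambda: defaultdict(int))
    for demo in demos:
        for current_skill, next_skill in zip(demo, demo[1:]):
            graph[current_skill][next_skill] += 1
    return graph

task_graph = build_task_graph(demonstrations)
for skill, successors in task_graph.items():
    total = sum(successors.values())
    options = ", ".join(f"{nxt} ({count/total:.2f})" for nxt, count in successors.items())
    print(f"{skill} -> {options}")
```

Branch points such as the optional "align" step above are a toy analogue of the repeated-but-variable structure that the learned finite-state representation is meant to capture and reuse across task instances.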

Scientific objections to the unscientific idea that a superintelligence is coming (to exterminate humanity)

Ernest Davis, Ethical guidelines for a superintelligence, Artificial Intelligence, Volume 220, March 2015, Pages 121-124, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.12.003.

Nick Bostrom, in his new book Superintelligence, argues that the creation of an artificial intelligence with human-level intelligence will be followed fairly soon by the existence of an almost omnipotent superintelligence, with consequences that may well be disastrous for humanity. He considers that it is therefore a top priority for mankind to figure out how to imbue such a superintelligence with a sense of morality; however, he considers that this task is very difficult. I discuss a number of flaws in his analysis, particularly the viewpoint that implementing ethical behavior is an especially difficult problem in AI research.

How to sidestep the NP-hardness of inferring the best explanation for observed data (posed as MAP, i.e., Maximum A Posteriori, rather than maximum likelihood) in discrete Bayesian networks, by distinguishing relevant from irrelevant intermediate variables

Johan Kwisthout, Most frugal explanations in Bayesian networks, Artificial Intelligence, Volume 218, January 2015, Pages 56-73, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.10.001

Inferring the most probable explanation to a set of variables, given a partial observation of the remaining variables, is one of the canonical computational problems in Bayesian networks, with widespread applications in AI and beyond. This problem, known as MAP, is computationally intractable (NP-hard) and remains so even when only an approximate solution is sought. We propose a heuristic formulation of the MAP problem, denoted as Inference to the Most Frugal Explanation (MFE), based on the observation that many intermediate variables (that are neither observed nor to be explained) are irrelevant with respect to the outcome of the explanatory process. An explanation based on few samples (often even a singleton sample) from these irrelevant variables is typically almost as good as an explanation based on (the computationally costly) marginalization over these variables. We show that while MFE is computationally intractable in general (as is MAP), it can be tractably approximated under plausible situational constraints, and its inferences are fairly robust with respect to which intermediate variables are considered to be relevant.
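As a hedged toy illustration of the idea (not Kwisthout's actual algorithm, network, or sampling scheme), the Python sketch below compares exact MAP inference, which marginalizes over an intermediate variable, with a "frugal" shortcut that instead samples that variable a few times and keeps the explanation that wins most often. The network structure, the probabilities, and the decision to treat the intermediate variable as irrelevant are all assumptions made up for the example.

```python
import random

# Toy discrete Bayesian network H -> I -> E (all binary) with made-up CPTs.
# H is the explanation variable, I an intermediate variable, E the evidence.
P_H = {0: 0.3, 1: 0.7}
P_I_given_H = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.4, 1: 0.6}}  # P(I=i | H=h)
P_E_given_I = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}  # P(E=e | I=i)

evidence_e = 1  # observed value of E

def joint(h, i, e):
    return P_H[h] * P_I_given_H[h][i] * P_E_given_I[i][e]

def map_explanation():
    """Exact MAP: marginalize the intermediate variable I out, then argmax over H."""
    scores = {h: sum(joint(h, i, evidence_e) for i in (0, 1)) for h in (0, 1)}
    return max(scores, key=scores.get)

def frugal_explanation(num_samples=3):
    """Frugal shortcut: treat I as irrelevant, sample it a few times (here crudely,
    uniformly at random), pick the best H for each sample, return the majority winner."""
    votes = {0: 0, 1: 0}
    for _ in range(num_samples):
        i = random.choice((0, 1))
        best_h = max((0, 1), key=lambda h: joint(h, i, evidence_e))
        votes[best_h] += 1
    return max(votes, key=votes.get)

# With these numbers the winning H is the same for either value of I, i.e. I is
# "irrelevant" to the outcome of the explanation, which is when the shortcut is safe.
print("MAP explanation:    H =", map_explanation())
print("Frugal explanation: H =", frugal_explanation())
```

In this toy case the frugal answer coincides with the exact MAP answer no matter how I is sampled, which is precisely the situation the abstract points to: when intermediate variables barely influence which explanation is best, a handful of samples (or even a single one) can replace an exponentially expensive marginalization.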