Author Archives: Juan-Antonio Fernández-Madrigal

Estimating the execution time of programs before compiling

Peter Altenbernd, Jan Gustafsson, Björn Lisper, Friedhelm Stappert, Early execution time-estimation through automatically generated timing models, Real-Time Systems, November 2016, Volume 52, Issue 6, pp 731–760, DOI: 10.1007/s11241-016-9250-7.

Traditional timing analysis, such as worst-case execution time analysis, is normally applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed in early stages of system development as an essential prerequisite for the configuration of the hardware setup and dimensioning of the system. During this phase the hardware is often not available, and the code might not be ready to link. This article describes an approach to predict the execution time of software through an early, source-level timing analysis. A timing model for source code is automatically derived from a given combination of hardware architecture and compiler. The model is identified from measured execution times for a set of synthetic training programs, compiled for the hardware platform in question. It can be used to estimate the execution time for code running on the platform: the estimation is then done directly from the source code, without compiling and running it. Our experiments show that, using this model, we can predict the execution times of the final, compiled code surprisingly well. For instance, we achieve an average deviation of 8 % for a set of benchmark programs for the ARM7 architecture.
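
To make the idea concrete, here is a minimal sketch of how such a source-level timing model could be identified, assuming a simple linear cost model over counts of source-level constructs. The construct names, counts and measured times are invented for illustration; this is not the paper's actual identification procedure.

```python
# A linear source-level timing model, identified by least squares from measured
# execution times of training programs (illustrative numbers, hypothetical
# construct names).
import numpy as np

CONSTRUCTS = ["int_add", "int_mul", "mem_load", "mem_store", "branch"]

# Rows: training programs; columns: executed counts of each construct.
train_counts = np.array([
    [1200,  50,  800,  300,  400],
    [ 300, 400,  200,  100,  150],
    [2500,  10, 1900,  700, 1100],
    [ 800, 220,  650,  260,  380],
    [1500, 120, 1100,  420,  600],
    [ 600,  80,  450,  180,  260],
], dtype=float)

# Measured execution times (e.g., microseconds) on the target platform.
train_times = np.array([310.0, 190.0, 720.0, 330.0, 450.0, 205.0])

# Identify per-construct costs: this is the "timing model".
costs, *_ = np.linalg.lstsq(train_counts, train_times, rcond=None)

def estimate_time(counts):
    """Predict execution time directly from source-level construct counts."""
    return float(np.dot(counts, costs))

new_program = np.array([900.0, 120.0, 700.0, 280.0, 350.0])
print(dict(zip(CONSTRUCTS, np.round(costs, 4))))
print("estimated time:", round(estimate_time(new_program), 1))
```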

Survey of Cognitive Offloading

Evan F. Risko, Sam J. Gilbert, Cognitive Offloading, Trends in Cognitive Sciences, Volume 20, Issue 9, 2016, Pages 676-688, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.07.002.

If you have ever tilted your head to perceive a rotated image, or programmed a smartphone to remind you of an upcoming appointment, you have engaged in cognitive offloading: the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand. Despite the ubiquity of this type of behavior, it has only recently become the target of systematic investigation in and of itself. We review research from several domains that focuses on two main questions: (i) what mechanisms trigger cognitive offloading, and (ii) what are the cognitive consequences of this behavior? We offer a novel metacognitive framework that integrates results from diverse domains and suggests avenues for future research.

Mapping (and navigating in) outdoor unstructured environments with few, low-cost sensors, using relations between landmarks instead of absolute or metric positions

Mark McClelland, Mark Campbell, Tara Estlin, Qualitative relational mapping and navigation for planetary rovers, Robotics and Autonomous Systems, Volume 83, 2016, Pages 73-86, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.05.017.

This paper presents a novel method for qualitative mapping of large scale spaces which decouples the mapping problem from that of position estimation. The proposed framework makes use of a graphical representation of the world in order to build a map consisting of qualitative constraints on the geometric relationships between landmark triplets. This process allows a mobile robot to extract information about landmark positions using a set of minimal sensors in the absence of GPS. A novel measurement method based on camera imagery is presented which extends previous work from the field of Qualitative Spatial Reasoning. A Branch-and-Bound approach is taken to solve a set of non-convex feasibility problems required for generating off-line operator lookup tables and on-line measurements, which are fused into the map using an iterative graph update. A navigation approach for travel between distant landmarks is developed, using estimates of the Relative Neighborhood Graph extracted from the qualitative map in order to generate a sequence of landmark objectives based on proximity. Average and asymptotic performance of the mapping algorithm is evaluated using Monte Carlo tests on randomly generated maps, and a data-driven simulation is presented for a robot traversing the Jet Propulsion Laboratory Mars Yard while building a relational map. These results demonstrate that the system can be effectively used to build a map sufficiently complete and accurate for long-distance navigation as well as other applications.
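
As a toy illustration of the kind of qualitative constraint such a map can store, the sketch below classifies a landmark C with respect to the directed pair of landmarks (A, B). It is a simplified stand-in for the qualitative calculus actually used in the paper, and the metric positions appear only to generate the relation.

```python
# Qualitative relation of landmark C with respect to the directed pair (A, B):
# side of the line A->B, and position along it (behind A / between / beyond B).
import numpy as np

def qualitative_relation(a, b, c, eps=1e-9):
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ab = b - a
    # Sign of the 2D cross product gives the side of the directed line A->B.
    cross = ab[0] * (c[1] - a[1]) - ab[1] * (c[0] - a[0])
    side = "left" if cross > eps else "right" if cross < -eps else "on-line"
    # Normalized projection of C onto A->B gives the position along the line.
    t = np.dot(c - a, ab) / np.dot(ab, ab)
    depth = "beyond-B" if t > 1 else "behind-A" if t < 0 else "between"
    return side, depth

# Three landmarks with ground-truth 2D positions, used here only to
# generate the qualitative relation.
print(qualitative_relation((0, 0), (10, 0), (5, 3)))    # ('left', 'between')
print(qualitative_relation((0, 0), (10, 0), (12, -1)))  # ('right', 'beyond-B')
```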

Sample-based approximation to POMDPs integrated with forward simulation for robot active exploration, with a nice related-work section on active exploration in robotics

Mikko Lauri, Risto Ritala, Planning for robotic exploration based on forward simulation, Robotics and Autonomous Systems, Volume 83, 2016, Pages 15-31, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.06.008.

We address the problem of controlling a mobile robot to explore a partially known environment. The robot’s objective is the maximization of the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it applying forward simulation algorithms with an open-loop approximation. We present a new sample-based approximation for mutual information useful in mobile robotics. The approximation can be seamlessly integrated with forward simulation planning algorithms. We investigate the usefulness of POMDP-based planning for exploration, and to alleviate some of its weaknesses propose a combination with frontier-based exploration. Experimental results in simulated and real environments show that, depending on the environment, applying POMDP-based planning for exploration can improve performance over frontier exploration.
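
The sketch below shows the general flavour of open-loop forward simulation with a sample-based information measure, on a toy cell-occupancy belief. The sensor model, action set and numbers are assumptions and do not reproduce the authors' algorithm.

```python
# Open-loop exploration planning on a toy cell-occupancy belief: candidate
# observation sequences are scored by a Monte Carlo (sample-based) estimate of
# the information gained, and the best sequence is selected.
import itertools
import numpy as np

rng = np.random.default_rng(0)
HIT, MISS = 0.9, 0.2                                  # P(detection | occupied / free)
belief = np.array([0.5, 0.1, 0.9, 0.5, 0.05, 0.6])    # P(cell is occupied)

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def update(b, cell, z):
    """Bayes update of one cell after a binary detection z."""
    b = b.copy()
    like_occ = HIT if z else 1 - HIT
    like_free = MISS if z else 1 - MISS
    b[cell] = like_occ * b[cell] / (like_occ * b[cell] + like_free * (1 - b[cell]))
    return b

def expected_info_gain(b0, cells, n_samples=200):
    """Sample-based estimate of entropy reduction when observing 'cells' in order."""
    gains = []
    for _ in range(n_samples):
        b = b0.copy()
        for c in cells:
            occupied = rng.random() < b[c]                    # sample a world
            z = rng.random() < (HIT if occupied else MISS)    # sample a reading
            b = update(b, c, z)
        gains.append(entropy(b0) - entropy(b))
    return float(np.mean(gains))

# Forward simulation over all length-2 observation sequences (open loop).
candidates = list(itertools.permutations(range(len(belief)), 2))
best = max(candidates, key=lambda seq: expected_info_gain(belief, seq))
print("most informative observation sequence:", best)
```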

Learning concepts from graphs in robotics, through first-order logic and discovery of subgraphs, forming arbitrary hierarchies

Ana C. Tenorio-González, Eduardo F. Morales, Automatic discovery of relational concepts by an incremental graph-based representation, Robotics and Autonomous Systems, Volume 83, 2016, Pages 1-14, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.06.012.

Automatic discovery of concepts has been an elusive area in machine learning. In this paper, we describe a system, called ADC, that automatically discovers concepts in a robotics domain, performing predicate invention. Unlike traditional approaches of concept discovery, our approach automatically finds and collects instances of potential relational concepts. An agent, using ADC, creates an incremental graph-based representation with the information it gathers while exploring its environment, from which common subgraphs are identified. The subgraphs discovered are instances of potential relational concepts which are induced with Inductive Logic Programming and predicate invention. Several concepts can be induced concurrently and the learned concepts can form arbitrary hierarchies. The approach was tested for learning concepts of polygons, furniture, and floors of buildings with a simulated robot and compared with concepts suggested by users.
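
A very reduced sketch of the "find repeated substructures" step could look like the following. It only counts frequent abstracted edge patterns as concept candidates and leaves out the ILP and predicate-invention machinery of ADC; all labels are invented for illustration.

```python
# Counting recurring abstracted patterns in a relational graph of observations;
# frequent patterns become candidate relational concepts.
from collections import Counter

# (instance, relation, instance) facts gathered while exploring an environment.
observations = [
    ("wall_1", "perpendicular_to", "wall_2"),
    ("wall_2", "perpendicular_to", "wall_3"),
    ("wall_3", "perpendicular_to", "wall_4"),
    ("wall_4", "perpendicular_to", "wall_1"),
    ("table_1", "on_top_of", "floor_1"),
    ("chair_1", "next_to", "table_1"),
]

def node_type(name):
    """Abstract an instance name like 'wall_3' to its type 'wall'."""
    return name.rsplit("_", 1)[0]

# Count abstracted edge patterns; those with enough support are candidates.
patterns = Counter((node_type(s), rel, node_type(t)) for s, rel, t in observations)
MIN_SUPPORT = 3
candidates = [p for p, n in patterns.items() if n >= MIN_SUPPORT]
print("candidate relational concepts:", candidates)
```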

Survey of model-based reinforcement learning (and of reinforcement learning in general), for its application to improving learning time in robotics; many references, but few clear explanations

Athanasios S. Polydoros, Lazaros Nalpantidis, Survey of Model-Based Reinforcement Learning: Applications on Robotics, Journal of Intelligent & Robotic Systems, May 2017, Volume 86, Issue 2, pp 153–173, DOI: 10.1007/s10846-017-0468-y.

Reinforcement learning is an appealing approach for allowing robots to learn new tasks. Relevant literature reveals a plethora of methods, but at the same time makes clear the lack of implementations for dealing with real-life challenges. Current expectations raise the demand for adaptable robots. We argue that, by employing model-based reinforcement learning, the currently limited adaptability characteristics of robotic systems can be expanded. Also, model-based reinforcement learning exhibits advantages that make it more applicable to real-life use cases compared to model-free methods. Thus, in this survey, model-based methods that have been applied in robotics are covered. We categorize them based on the derivation of an optimal policy, the definition of the returns function, the type of the transition model and the learned task. Finally, we discuss the applicability of model-based reinforcement learning approaches in new applications, taking into consideration the state of the art in both algorithms and hardware.
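
To recall what "model-based" buys you in this context, here is a minimal Dyna-Q-style sketch: real transitions both update the value function and fit a transition model, and the model then generates extra simulated updates, which is where the gain in sample efficiency comes from. The toy chain environment is an assumption, not an example from the survey.

```python
# Dyna-Q on a toy chain world: direct RL updates from real experience, plus
# planning updates replayed from a learned (deterministic, tabular) model.
import random

N_STATES, ACTIONS, GOAL = 8, (-1, +1), 7
ALPHA, GAMMA, EPS, PLAN_STEPS = 0.5, 0.95, 0.1, 20
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                                 # (s, a) -> (reward, next_state)

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return (1.0 if s2 == GOAL else 0.0), s2

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for episode in range(50):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        r, s2 = step(s, a)
        # Direct RL update from the real transition.
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, greedy(s2))] - Q[(s, a)])
        model[(s, a)] = (r, s2)            # learn the transition model
        # Planning: replay simulated experience drawn from the learned model.
        for _ in range(PLAN_STEPS):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * Q[(ps2, greedy(ps2))] - Q[(ps, pa)])
        s = s2

print("greedy action per state:", {s: greedy(s) for s in range(N_STATES)})
```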

Emergence of symbols in robotics as a “new” area of research in developmental robotics: a survey

Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata, Hideki Asoh, Symbol Emergence in Robotics: A Survey, arXiv:1509.08973.

Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory–motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
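
As a toy illustration of one of the listed topics, unsupervised multimodal categorization, the sketch below clusters concatenated features from several modalities so that categories emerge without labels. Plain k-means stands in for the richer probabilistic latent-variable models typically used in this line of work, and all feature values are synthetic.

```python
# Unsupervised categorization of objects from concatenated multimodal features
# (here: made-up [visual | haptic | auditory] values), using a tiny k-means.
import numpy as np

rng = np.random.default_rng(1)

def make_objects(center, n=20):
    """Synthetic feature rows scattered around a prototype object."""
    return center + 0.1 * rng.standard_normal((n, len(center)))

data = np.vstack([
    make_objects([1.0, 0.0, 0.2, 0.9]),   # one kind of object
    make_objects([0.0, 1.0, 0.8, 0.1]),   # another kind of object
])

def kmeans(x, k, iters=50):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(data, k=2)
print("emergent category sizes:", np.bincount(labels))
```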

A nice review of reinforcement learning from the perspective of its physiological foundations and its application to robotics

Cornelius Weber, Mark Elshaw, Stefan Wermter, Jochen Triesch and Christopher Willmot, Reinforcement Learning Embedded in Brains and Robots, Reinforcement Learning: Theory and Applications, Book edited by Cornelius Weber, Mark Elshaw and Norbert Michael Mayer, ISBN 978-3-902613-14-1, pp.424, January 2008, I-Tech Education and Publishing, Vienna, Austria. (Local copy)

A computational cognitive architecture that models emotion

Ron Sun, Nick Wilson, Michael Lynch, Emotion: A Unified Mechanistic Interpretation from a Cognitive Architecture, Cognitive Computation, February 2016, Volume 8, Issue 1, pp 1–14, DOI: 10.1007/s12559-015-9374-4.

This paper reviews a project that attempts to interpret emotion, a complex and multifaceted phenomenon, from a mechanistic point of view, facilitated by an existing comprehensive computational cognitive architecture—CLARION. This cognitive architecture consists of a number of subsystems: the action-centered, non-action-centered, motivational, and metacognitive subsystems. From this perspective, emotion is, first and foremost, motivationally based. It is also action-oriented. It involves many other identifiable cognitive functionalities within these subsystems. Based on these functionalities, we fit the pieces together mechanistically (computationally) within the CLARION framework and capture a variety of important aspects of emotion as documented in the literature.

Combination of several mobile robot localization methods to achieve high accuracy in industrial environments, with interesting figures on the localization accuracy currently achievable with standard solutions

Goran Vasiljević, Damjan Miklić, Ivica Draganjac, Zdenko Kovačić, Paolo Lista, High-accuracy vehicle localization for autonomous warehousing, Robotics and Computer-Integrated Manufacturing, Volume 42, December 2016, Pages 1-16, ISSN 0736-5845, DOI: 10.1016/j.rcim.2016.05.001.

The research presented in this paper aims to bridge the gap between the latest scientific advances in autonomous vehicle localization and the industrial state of the art in autonomous warehousing. Notwithstanding great scientific progress in the past decades, industrial autonomous warehousing systems still rely on external infrastructure for obtaining their precise location. This approach increases warehouse installation costs and decreases system reliability, as it is sensitive to measurement outliers and the external localization infrastructure can get dirty or damaged. Several approaches, well studied in scientific literature, are capable of determining vehicle position based only on information provided by on board sensors, most commonly wheel encoders and laser scanners. However, scientific results published to date either do not provide sufficient accuracy for industrial applications, or have not been extensively tested in realistic, industrial-like operating conditions. In this paper, we combine several well established algorithms into a high-precision localization pipeline, capable of computing the pose of an autonomous forklift to sub-centimeter precision. The algorithms use only odometry information from wheel encoders and range readings from an on board laser scanner. The effectiveness of the proposed solution is evaluated by an extensive experiment that lasted for several days, and was performed in a realistic industrial-like environment.
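
The generic structure behind such a pipeline can be sketched as odometry prediction plus a Kalman-style correction from an absolute pose produced by scan matching against a known map. The noise values and data below are placeholders, the scan matcher itself is omitted, and this is not the paper's actual pipeline or its accuracy figures.

```python
# Odometry prediction of a 2D pose, corrected by an absolute pose fix from a
# (not shown) scan matcher, fused with a Kalman-style weighted update (H = I).
import numpy as np

pose = np.array([0.0, 0.0, 0.0])          # x [m], y [m], heading [rad]
P = np.diag([1e-4, 1e-4, 1e-5])           # pose covariance
Q_ODOM = np.diag([2e-4, 2e-4, 1e-5])      # odometry noise added per step
R_SCAN = np.diag([5e-5, 5e-5, 1e-6])      # scan-matcher noise

def predict(pose, P, d, dtheta):
    """Dead-reckoning step from wheel encoders: travel d, turn dtheta."""
    x, y, th = pose
    pose = np.array([x + d * np.cos(th), y + d * np.sin(th), th + dtheta])
    return pose, P + Q_ODOM

def correct(pose, P, scan_pose):
    """Fuse an absolute pose produced by scan matching."""
    K = P @ np.linalg.inv(P + R_SCAN)
    return pose + K @ (scan_pose - pose), (np.eye(3) - K) @ P

for _ in range(10):                        # drive forward, drifting slightly
    pose, P = predict(pose, P, d=0.1, dtheta=0.002)
scan_fix = np.array([1.0, 0.0, 0.0])       # pretend the scan matcher returned this
pose, P = correct(pose, P, scan_fix)
print("fused pose:", np.round(pose, 4))
```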