Category Archives: Robotics

A method to model trajectories that captures their essential parameters (for comparisons, clustering, etc.)

W. Lin et al., “A Tube-and-Droplet-Based Approach for Representing and Analyzing Motion Trajectories,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1489-1503, Aug. 1 2017, DOI: 10.1109/TPAMI.2016.2608884.

Trajectory analysis is essential in many applications. In this paper, we address the problem of representing motion trajectories in a highly informative way, and consequently utilize it for analyzing trajectories. Our approach first leverages the complete information from given trajectories to construct a thermal transfer field which provides a context-rich way to describe the global motion pattern in a scene. Then, a 3D tube is derived which depicts an input trajectory by integrating its surrounding motion patterns contained in the thermal transfer field. The 3D tube effectively: 1) maintains the movement information of a trajectory, 2) embeds the complete contextual motion pattern around a trajectory, 3) visualizes information about a trajectory in a clear and unified way. We further introduce a droplet-based process. It derives a droplet vector from a 3D tube, so as to characterize the high-dimensional 3D tube information in a simple but effective way. Finally, we apply our tube-and-droplet representation to trajectory analysis applications including trajectory clustering, trajectory classification & abnormality detection, and 3D action recognition. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our approach.
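
As a rough illustration of the general idea (not the authors' actual thermal transfer field or droplet construction), the Python sketch below accumulates a motion “heat” field from a set of reference trajectories and then samples that field along a query trajectory to obtain a fixed-length descriptor that could feed clustering or comparison; the Gaussian field, the sampling scheme and all names are my own simplifications.

```python
import numpy as np

def motion_field(trajectories, grid_shape=(100, 100), sigma=3.0):
    """Accumulate a simple 2D 'heat' field from (x, y) trajectories.
    Illustrative stand-in for the paper's thermal transfer field."""
    field = np.zeros(grid_shape)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    for traj in trajectories:
        for x, y in traj:
            field += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return field / field.max()

def descriptor(traj, field, samples=16):
    """Sample the context field at evenly spaced points of a query trajectory,
    giving a fixed-length vector usable for clustering or classification."""
    idx = np.linspace(0, len(traj) - 1, samples).astype(int)
    pts = np.asarray(traj, dtype=float)[idx]
    context = [field[int(round(y)), int(round(x))] for x, y in pts]
    return np.concatenate([pts.ravel(), context])

# Two synthetic trajectories and a descriptor for the first one.
trajs = [[(i, 50) for i in range(100)], [(50, j) for j in range(100)]]
d = descriptor(trajs[0], motion_field(trajs))
```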

Several strategies for exploring unknown environments based on graphs extracted from Voronoi diagrams

E. G. Tsardoulias, A. Iliakopoulou, A. Kargakos, L. Petrou, Cost-Based Target Selection Techniques Towards Full Space Exploration and Coverage for USAR applications in a Priori Unknown Environments, J Intell Robot Syst (2017) 87:313–340, DOI: 10.1007/s10846-016-0434-0.

Full coverage and exploration of an environment is essential in robot rescue operations where victim identification is required. Three methods of target selection towards full exploration and coverage of an unknown space oriented for Urban Search and Rescue (USAR) applications have been developed. These are the Selection of the closest topological node, the Selection of the minimum cost topological node and the Selection of the minimum cost sub-graph. All methods employ a topological graph extracted from the Generalized Voronoi Diagram (GVD), in order to select the next best target during exploration. The first method utilizes a distance metric for determining the next best target whereas the Selection of the minimum cost topological node method assigns four different weights on the graph’s nodes, based on certain environmental attributes. The Selection of the minimum cost sub-graph uses a similar technique, but instead of single nodes, sets of graph nodes are examined. In addition, a modification of A* algorithm for biased path creation towards uncovered areas, aiming at a faster spatial coverage, is introduced. The proposed methods’ performance is verified by experiments conducted in two heterogeneous simulated environments. Finally, the results are compared with two common exploration methods.
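
As a minimal sketch of the first strategy (selection of the closest topological node), assuming the topological graph extracted from the GVD is available as a weighted adjacency dict; the graph, node names and costs below are hypothetical. The minimum-cost variants would replace the plain path cost with a weighted combination of node attributes.

```python
import heapq

def path_costs(graph, start):
    """Dijkstra over a weighted adjacency dict {node: {neighbour: cost}}."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist

def next_target_closest(graph, robot_node, unexplored):
    """Pick the unexplored topological node with the smallest path cost
    from the robot's current node (the 'closest node' strategy)."""
    dist = path_costs(graph, robot_node)
    return min(unexplored, key=lambda n: dist.get(n, float("inf")))

# Hypothetical topological graph extracted from a GVD.
g = {"a": {"b": 1.0, "c": 4.0}, "b": {"a": 1.0, "c": 1.5}, "c": {"a": 4.0, "b": 1.5}}
print(next_target_closest(g, "a", {"b", "c"}))  # -> b
```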

Prediction of changes in the behavior of cars for autonomous driving, based on POMDPs made tractable by restricting decisions to a discrete set of policies

Enric Galceran, Alexander G. Cunningham, Ryan M. Eustice, Edwin Olson, Multipolicy decision-making for autonomous driving via changepoint-based behavior prediction: Theory and experiment, Autonomous Robots, August 2017, Volume 41, Issue 6, pp 1367–1382, DOI: 10.1007/s10514-017-9619-z.

This paper reports on an integrated inference and decision-making approach for autonomous driving that models vehicle behavior for both our vehicle and nearby vehicles as a discrete set of closed-loop policies. Each policy captures a distinct high-level behavior and intention, such as driving along a lane or turning at an intersection. We first employ Bayesian changepoint detection on the observed history of nearby cars to estimate the distribution over potential policies that each nearby car might be executing. We then sample policy assignments from these distributions to obtain high-likelihood actions for each participating vehicle, and perform closed-loop forward simulation to predict the outcome for each sampled policy assignment. After evaluating these predicted outcomes, we execute the policy with the maximum expected reward value. We validate behavioral prediction and decision-making using simulated and real-world experiments.
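
A hedged sketch of that decision loop, under my own simplifications: the per-vehicle policy distributions are assumed to come from the changepoint-based estimation step, while simulate (a closed-loop rollout) and reward are application-supplied callables.

```python
import random

def choose_ego_policy(ego_policies, nearby_policy_dists, simulate, reward, n_samples=50):
    """For each candidate ego policy, sample policy assignments for nearby
    vehicles from their estimated distributions, forward-simulate each
    assignment, and return the ego policy with the highest expected reward."""
    best_policy, best_value = None, float("-inf")
    for ego in ego_policies:
        total = 0.0
        for _ in range(n_samples):
            # One policy per nearby vehicle, drawn according to its posterior weight.
            assignment = {
                vid: random.choices(list(dist.keys()), weights=list(dist.values()))[0]
                for vid, dist in nearby_policy_dists.items()
            }
            outcome = simulate(ego, assignment)   # closed-loop forward simulation
            total += reward(outcome)
        if total / n_samples > best_value:
            best_policy, best_value = ego, total / n_samples
    return best_policy
```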

Interesting review of approaches for visually detecting loop closures in robotics, plus a novel, very efficient method that is independent of the image representation and avoids the typical l2 norm (least squares), which leads to dense optimization problems, in favour of a sparsity-inducing l1 formulation

Yasir Latif, Guoquan Huang, John Leonard, José Neira, Sparse optimization for robust and efficient loop closing, Robotics and Autonomous Systems, Volume 93, July 2017, Pages 13-26, ISSN 0921-8890, DOI: 10.1016/j.robot.2017.03.016.

It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, i.e., the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex ℓ1-minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are “close” to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
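
As a small numerical sketch of this kind of formulation (min ½||Bx − q||² + λ||x||₁, with the columns of B holding the descriptors of previously seen images and q the current one), the code below uses a plain ISTA proximal-gradient solver rather than the faster convex optimization the authors rely on; the ratio test for declaring a unique match is my own stand-in.

```python
import numpy as np

def ista_l1(B, q, lam=0.1, iters=200):
    """Minimize 0.5 * ||B x - q||^2 + lam * ||x||_1 with ISTA
    (basic proximal gradient; the paper uses faster solvers)."""
    L = np.linalg.norm(B, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(B.shape[1])
    for _ in range(iters):
        z = x - B.T @ (B @ x - q) / L             # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def detect_loop_closure(prev_descriptors, current, lam=0.1, ratio=0.5):
    """Columns of B are previous image descriptors; a solution dominated by a
    single column is taken as a unique loop-closure hypothesis."""
    B = np.column_stack(prev_descriptors)
    weights = np.abs(ista_l1(B, current, lam))
    if weights.sum() == 0.0:
        return None
    best = int(np.argmax(weights))
    return best if weights[best] / weights.sum() > ratio else None
```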

Intrinsically motivated reinforcement learning to learn a model of the world

Todd Hester, Peter Stone, Intrinsically motivated model learning for developing curious robots, Artificial Intelligence, Volume 247, June 2017, Pages 170-186, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.05.002.

Reinforcement Learning (RL) agents are typically deployed to learn a specific, concrete task based on a pre-defined reward function. However, in some cases an agent may be able to gain experience in the domain prior to being given a task. In such cases, intrinsic motivation can be used to enable the agent to learn a useful model of the environment that is likely to help it learn its eventual tasks more efficiently. This paradigm fits robots particularly well, as they need to learn about their own dynamics and affordances which can be applied to many different tasks. This article presents the texplore with Variance-And-Novelty-Intrinsic-Rewards algorithm (texplore-vanir), an intrinsically motivated model-based RL algorithm. The algorithm learns models of the transition dynamics of a domain using random forests. It calculates two different intrinsic motivations from this model: one to explore where the model is uncertain, and one to acquire novel experiences that the model has not yet been trained on. This article presents experiments demonstrating that the combination of these two intrinsic rewards enables the algorithm to learn an accurate model of a domain with no external rewards and that the learned model can be used afterward to perform tasks in the domain. While learning the model, the agent explores the domain in a developing and curious way, progressively learning more complex skills. In addition, the experiments show that combining the agent’s intrinsic rewards with external task rewards enables the agent to learn faster than using external rewards alone. We also present results demonstrating the applicability of this approach to learning on robots.
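
A hedged sketch of the two intrinsic rewards (prediction variance across the forest and novelty with respect to past experience), using scikit-learn's random forest as a stand-in for the paper's model learner; the reward scaling and the nearest-neighbour novelty measure are my own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class IntrinsicRewards:
    """Variance- and novelty-based intrinsic rewards over a learned forward model.
    Illustrative only: texplore-vanir's forest model and reward scaling differ."""

    def __init__(self, v_coef=1.0, n_coef=1.0):
        self.model = RandomForestRegressor(n_estimators=20)
        self.memory = []                      # visited (state, action) vectors
        self.v_coef, self.n_coef = v_coef, n_coef

    def fit(self, state_actions, next_states):
        """Learn the transition model from (state, action) -> next-state pairs."""
        self.memory = [np.asarray(sa, dtype=float) for sa in state_actions]
        self.model.fit(np.asarray(state_actions, dtype=float), np.asarray(next_states))

    def reward(self, state_action):
        """Intrinsic reward for a candidate (state, action); call fit() first."""
        sa = np.asarray(state_action, dtype=float).reshape(1, -1)
        # Variance reward: disagreement among the forest's trees about the next state.
        preds = np.array([tree.predict(sa)[0] for tree in self.model.estimators_])
        variance = float(np.mean(np.var(preds, axis=0)))
        # Novelty reward: distance to the nearest experience the model was trained on.
        novelty = min(float(np.linalg.norm(sa.ravel() - m)) for m in self.memory)
        return self.v_coef * variance + self.n_coef * novelty
```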

State of the art and historical background of the classical divergence between AI and robotics

Kanna Rajan, Alessandro Saffiotti, Towards a science of integrated AI and Robotics, Artificial Intelligence, Volume 247, June 2017, Pages 1-9, ISSN 0004-3702, DOI: 10.1016/j.artint.2017.03.003.

The early promise of the impact of machine intelligence did not involve the partitioning of the nascent field of Artificial Intelligence. The founders of AI envisioned the notion of embedded intelligence as being conjoined between perception, reasoning and actuation. Yet over the years the fields of AI and Robotics drifted apart. Practitioners of AI focused on problems and algorithms abstracted from the real world. Roboticists, generally with a background in mechanical and electrical engineering, concentrated on sensori-motor functions. That divergence is slowly being bridged with the maturity of both fields and with the growing interest in autonomous systems. This special issue brings together the state of the art and practice of the emergent field of integrated AI and Robotics, and highlights the key areas along which this current evolution of machine intelligence is heading.

How “behaviour trees” generalize the subsumption architecture and some other control architecture frameworks

M. Colledanchise and P. Ögren, “How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees,” in IEEE Transactions on Robotics, vol. 33, no. 2, pp. 372-389, April 2017, DOI: 10.1109/TRO.2016.2633567.

Behavior trees (BTs) are a way of organizing the switching structure of a hybrid dynamical system (HDS), which was originally introduced in the computer game programming community. In this paper, we analyze how the BT representation increases the modularity of an HDS and how key system properties are preserved over compositions of such systems, in terms of combining two BTs into a larger one. We also show how BTs can be seen as a generalization of sequential behavior compositions, the subsumption architecture, and decision trees. These three tools are powerful but quite different, and the fact that they are unified in a natural way in BTs might be a reason for their popularity in the gaming community. We conclude the paper by giving a set of examples illustrating how the proposed analysis tools can be applied to robot control BTs.
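
A minimal behavior-tree sketch with the two standard composites, Sequence and Fallback, ticked at a fixed rate; the leaf behaviours are hypothetical stand-ins for real sensing and control code. Ordering the children of a Fallback by priority gives the subsumption-style arbitration the paper relates BTs to.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a callable that returns one of the three statuses."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Tick children left to right; stop at the first child that is not SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Tick children left to right; stop at the first child that is not FAILURE."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Hypothetical leaf behaviours (stand-ins for real sensing/control code).
def obstacle_close():
    return FAILURE          # no obstacle in this toy example
def avoid_obstacle():
    return RUNNING
def follow_path():
    return SUCCESS

tree = Fallback(
    Sequence(Action(obstacle_close), Action(avoid_obstacle)),
    Action(follow_path),
)
status = tree.tick()        # would be called at a fixed rate by the robot's control loop
```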

On the current limitations of robotics research concerning the generalization of reported results to different set-ups

Francesco Amigoni, Matteo Luperto, Viola Schiaffonati, Toward generalization of experimental results for autonomous robots, Robotics and Autonomous Systems, Volume 90, April 2017, Pages 4-14, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.08.016.

In this paper we discuss some issues in the experimental evaluation of intelligent autonomous systems, focusing on systems, like autonomous robots, operating in physical environments. We argue that one of the weaknesses of current experimental practices is the low degree of generalization of experimental results, meaning that knowing the performance a robot system obtains in a test setting does not provide much information about the performance the same system could achieve in other settings. We claim that one of the main obstacles to achieve generalization of experimental results in autonomous robotics is the low degree of representativeness of the selected experimental settings. We survey and discuss the degree of representativeness of experimental settings used in a significant sample of current research and we propose some strategies to overcome the emerging limitations.

Robots that pre-compute a repertoire of possible behaviours (in simulation) and then learn their actual performance with them (propagating those performance measures to similar behaviours through Gaussian Process Regression), selecting the best one in each situation (through Bayesian Optimization), thus coping with varying environments and damage to the robot

A. Cully et al., Robots that can adapt like animals, Nature, 521 (2015), pp. 503–507, DOI: 10.1038/nature14422.

Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot think outside the box to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
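
A hedged sketch of the adaptation step as summarized in the note above (a Gaussian process over a precomputed behaviour-performance map plus a Bayesian-optimization-style acquisition rule); the UCB acquisition, the stopping criterion and the try_on_robot callable are my own simplifications rather than the paper's exact algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def adapt(behavior_descriptors, prior_performance, try_on_robot,
          n_trials=20, kappa=0.3, good_enough=0.9):
    """Trial-and-error adaptation over a precomputed behaviour map: a GP models
    how measured performance deviates from the simulated prior, and an
    upper-confidence-bound rule picks the next behaviour to test."""
    X = np.asarray(behavior_descriptors, dtype=float)   # one row per candidate behaviour
    prior = np.asarray(prior_performance, dtype=float)  # simulated performance of each
    tried_idx, tried_perf = [], []
    best_idx, best_perf = None, -np.inf
    for _ in range(n_trials):
        if tried_idx:
            gp = GaussianProcessRegressor().fit(
                X[tried_idx], np.asarray(tried_perf) - prior[tried_idx])
            mu, std = gp.predict(X, return_std=True)
        else:
            mu, std = np.zeros(len(X)), np.ones(len(X))
        ucb = prior + mu + kappa * std
        ucb[tried_idx] = -np.inf                        # never repeat a trial
        i = int(np.argmax(ucb))
        perf = try_on_robot(i)                          # run behaviour i on the damaged robot
        tried_idx.append(i)
        tried_perf.append(perf)
        if perf > best_perf:
            best_idx, best_perf = i, perf
        if best_perf >= good_enough * prior.max():      # stop once performance is acceptable
            break
    return best_idx, best_perf
```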

Qualitative robot navigation

Sergio Miguel-Tomé, Navigation through unknown and dynamic open spaces using topological notions, Connection Science, DOI: 10.1080/09540091.2016.1277691.

Until now, most algorithms used for navigation have had the purpose of directing a system towards one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans carry out their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by the sensors and interpret and express spatial relations. Furthermore, when one person is asked to perform a task in an environment, this task is communicated given a category of goals so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations and how to accomplish this without supervision. In this article, a new architecture to address the two cited problems is presented, called the topological qualitative navigation architecture. In previous works, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) has been developed to establish and identify spatial relations. However, that heuristic only allows for establishing one spatial relation with a specific object. In contrast, navigation requires a temporal sequence of goals with different objects. The new architecture attains continuous generation of goals and resolves them using HTQS. Thus, the new architecture achieves autonomous navigation in dynamic or unknown open environments.