Author Archives: Juan-Antonio Fernández-Madrigal

The problem of interdependence among particles in particle filters (PF) after the resampling step, and an approach to solve it

R. Lamberti, Y. Petetin, F. Desbouvries and F. Septier, Independent Resampling Sequential Monte Carlo Algorithms, IEEE Transactions on Signal Processing, vol. 65, no. 20, pp. 5318-5333, DOI: 10.1109/TSP.2017.2726971.

Sequential Monte Carlo algorithms, or particle filters, are Bayesian filtering algorithms, which propagate in time a discrete and random approximation of the a posteriori distribution of interest. Such algorithms are based on importance sampling with a bootstrap resampling step, which aims to combat weight degeneracy. However, in some situations (informative measurements, high-dimensional model), the resampling step can prove inefficient. In this paper, we revisit the fundamental resampling mechanism, which leads us back to Rubin’s static resampling mechanism. We propose an alternative rejuvenation scheme in which the resampled particles share the same marginal distribution as in the classical setup, but are now independent. This set of independent particles provides a new alternative for computing moments of the target distribution, and the resulting estimate is analyzed through a CLT. We next adapt our results to the dynamic case and propose a particle filtering algorithm based on independent resampling. This algorithm can be seen as a particular auxiliary particle filter algorithm with a relevant choice of the first-stage weights and instrumental distributions. Finally, we validate our results via simulations, which carefully take into account the computational budget.
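
As a quick illustration of what changes, here is a minimal Python sketch (my own reading of the abstract, not the authors' code): in classical multinomial resampling all outputs are drawn from the same weighted empirical measure and are therefore dependent, whereas the independent variant selects each output particle from its own fresh set of proposals, at roughly quadratic cost.

```python
# Hedged sketch of one resampling step; the model and the O(n^2) cost of the
# independent variant are illustrative assumptions based on the abstract.
import numpy as np

rng = np.random.default_rng(0)

def classical_resample(particles, weights):
    """All outputs come from the same weighted empirical measure, so the
    resampled particles are statistically dependent."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def independent_resample(propose, log_likelihood, n):
    """Each output particle is selected from its own fresh set of n weighted
    proposals, so the n outputs are mutually independent (O(n^2) cost)."""
    out = np.empty(n)
    for i in range(n):
        cand = propose(n)                       # fresh proposals for slot i
        w = np.exp(log_likelihood(cand))
        w /= w.sum()
        out[i] = cand[rng.choice(n, p=w)]
    return out

# Toy example: prior N(0,1) proposals, observation y = 1 with unit noise.
propose = lambda n: rng.normal(0.0, 1.0, size=n)
loglik = lambda x: -0.5 * (x - 1.0) ** 2

particles = propose(200)
w = np.exp(loglik(particles))
w /= w.sum()
print(classical_resample(particles, w).mean())        # dependent resampled set
print(independent_resample(propose, loglik, 200).mean())  # independent resampled set
```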

Using bad results during policy search, and not only good ones, to improve the learning process

A. Colomé and C. Torras, Dual REPS: A Generalization of Relative Entropy Policy Search Exploiting Bad Experiences, IEEE Transactions on Robotics, vol. 33, no. 4, pp. 978-985, DOI: 10.1109/TRO.2017.2679202.

Policy search (PS) algorithms are widely used for their simplicity and effectiveness in finding solutions for robotic problems. However, most current PS algorithms derive policies by statistically fitting the data from the best experiments only. This means that experiments yielding a poor performance are usually discarded or given too little influence on the policy update. In this paper, we propose a generalization of the relative entropy policy search (REPS) algorithm that takes bad experiences into consideration when computing a policy. The proposed approach, named dual REPS (DREPS) following the philosophical interpretation of the duality between good and bad, finds clusters of experimental data yielding a poor behavior and adds them to the optimization problem as a repulsive constraint. Thus, considering that there is a duality between good and bad data samples, both are taken into account in the stochastic search for a policy. Additionally, a cluster with the best samples may be included as an attractor to enforce faster convergence to a single optimal solution in multimodal problems. We first tested our proposed approach in a simulated reinforcement learning setting and found that DREPS considerably speeds up the learning process, especially during the early optimization steps and in cases where other approaches get trapped in between several alternative maxima. Further experiments in which a real robot had to learn a task with a multimodal reward function confirm the advantages of our proposed approach with respect to REPS.
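
For context, a hedged sketch of the idea in equations (my paraphrase, not the paper's exact formulation): standard episodic REPS bounds how far the new search distribution may move from the old one, and DREPS additionally adds repulsive constraints that push it away from distributions fitted to clusters of bad samples.

```latex
% Illustration only, not the paper's exact optimization problem.
\begin{align*}
  \max_{\pi} \quad & \int \pi(\theta)\, R(\theta)\, d\theta \\
  \text{s.t.} \quad
  & D_{\mathrm{KL}}\!\left(\pi(\theta)\,\|\,q(\theta)\right) \le \epsilon
      && \text{(stay close to the old search distribution } q\text{)}\\
  & D_{\mathrm{KL}}\!\left(\pi(\theta)\,\|\,\hat{p}_k(\theta)\right) \ge \beta_k,
      \quad k = 1,\dots,K
      && \text{(stay away from each bad-sample cluster } \hat{p}_k\text{)}\\
  & \textstyle\int \pi(\theta)\, d\theta = 1 .
\end{align*}
```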

Explicitly taking into account the dynamics of the environment, and in particular the diverse frequencies of change, for mobile robot mapping

T. Krajník, J. P. Fentanes, J. M. Santos and T. Duckett, FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments, IEEE Transactions on Robotics, vol. 33, no. 4, pp. 964-977, DOI: 10.1109/TRO.2017.2665664.

We present a new approach to long-term mobile robot mapping in dynamic indoor environments. Unlike traditional world models that are tailored to represent static scenes, our approach explicitly models environmental dynamics. We assume that some of the hidden processes that influence the dynamic environment states are periodic and model the uncertainty of the estimated state variables by their frequency spectra. The spectral model can represent arbitrary timescales of environment dynamics with low memory requirements. Transformation of the spectral model to the time domain allows for the prediction of the future environment states, which improves the robot’s long-term performance in changing environments. Experiments performed over time periods of months to years demonstrate that the approach can efficiently represent large numbers of observations and reliably predict future environment states. The experiments indicate that the model’s predictive capabilities improve mobile robot localization and navigation in changing environments.
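
A minimal Python sketch of the underlying idea as I read it (assumed details, not the authors' implementation): fit a mean plus the few strongest frequency components of a binary state's history, then evaluate that spectral model at a future time to predict the state's probability. The compactness is the point: a handful of (frequency, amplitude, phase) triples can summarize months of observations.

```python
import numpy as np

def fremen_fit(observations, dt=1.0, n_components=2):
    """Fit mean + the n strongest periodic components of a 0/1 time series
    sampled every dt time units (uniform sampling assumed)."""
    obs = np.asarray(observations, dtype=float)
    mean = obs.mean()
    spectrum = np.fft.rfft(obs - mean) / len(obs)
    freqs = np.fft.rfftfreq(len(obs), d=dt)
    order = np.argsort(np.abs(spectrum))[::-1][:n_components]
    return mean, freqs[order], 2.0 * np.abs(spectrum[order]), np.angle(spectrum[order])

def fremen_predict(t, mean, freqs, amps, phases):
    """Predicted probability of the binary state at (possibly future) time t."""
    p = mean + np.sum(amps * np.cos(2.0 * np.pi * freqs * t + phases))
    return float(np.clip(p, 0.0, 1.0))

# Toy example: a "door open" signal with a 24-hour period, sampled hourly.
hours = np.arange(24 * 14)
door_open = (np.sin(2.0 * np.pi * hours / 24.0) > 0).astype(int)
model = fremen_fit(door_open, dt=1.0, n_components=1)
print(fremen_predict(24 * 20 + 6, *model))   # six hours into a day, weeks ahead
```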

Interleaving segmentation (semantics) and dense 3D reconstruction (metrics)

C. Häne, C. Zach, A. Cohen and M. Pollefeys, Dense Semantic 3D Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1730-1743, DOI: 10.1109/TPAMI.2016.2613051.

Both image segmentation and dense 3D modeling from images are intrinsically ill-posed problems. Strong regularizers are therefore required to constrain the solutions from being “too noisy”. These priors generally yield overly smooth reconstructions and/or segmentations in certain regions while they fail to constrain the solution sufficiently in other areas. In this paper, we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other’s task. As a consequence, we propose a mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. On the one hand, knowing about the semantic class of the geometry provides information about the likelihood of the surface direction. On the other hand, the surface direction provides information about the likelihood of the semantic class. Experimental results on several data sets highlight the advantages of our joint formulation. We show how weakly observed surfaces are reconstructed more faithfully compared to a geometry-only reconstruction. Thanks to the volumetric nature of our formulation, we also infer surfaces which cannot be directly observed, for example the surface between the ground and a building. Finally, our method returns a semantic segmentation which is consistent across the whole dataset.
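
Schematically, and only as my paraphrase of the abstract (the paper works with a continuous convex formulation), the joint problem can be thought of as a volumetric labeling energy whose smoothness term depends both on the pair of semantic classes and on the direction of the transition between voxels:

```latex
% Notation mine: x assigns each voxel s a class (free space, ground, building, ...);
% \rho_s is the data term from segmentation and depth evidence; \phi depends on the
% class pair AND on the transition direction n_{st}, so class knowledge informs
% likely surface orientation and vice versa.
\[
  E(x) \;=\; \sum_{s} \rho_s(x_s)
  \;+\; \sum_{(s,t)} \phi_{x_s,\,x_t}\!\big(n_{st}\big)
\]
```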

Clustering in hypergraphs

P. Purkait, T. J. Chin, A. Sadri and D. Suter, Clustering with Hypergraphs: The Case for Large Hyperedges, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1697-1711, DOI: 10.1109/TPAMI.2016.2614980.

The extension of conventional clustering to hypergraph clustering, which involves higher order similarities instead of pairwise similarities, is increasingly gaining attention in computer vision. This is due to the fact that many clustering problems require an affinity measure that must involve a subset of data of size more than two. In the context of hypergraph clustering, the calculation of such higher order similarities on data subsets gives rise to hyperedges. Almost all previous work on hypergraph clustering in computer vision, however, has considered the smallest possible hyperedge size, due to a lack of study into the potential benefits of large hyperedges and effective algorithms to generate them. In this paper, we show that large hyperedges are better from both a theoretical and an empirical standpoint. We then propose a novel guided sampling strategy for large hyperedges, based on the concept of random cluster models. Our method can generate large pure hyperedges that significantly improve grouping accuracy without exponential increases in sampling costs. We demonstrate the efficacy of our technique on various higher-order grouping problems. In particular, we show that our approach improves the accuracy and efficiency of motion segmentation from dense, long-term trajectories.
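
To make the "guided sampling of large hyperedges" concrete, here is an illustrative Python sketch (mine, not the authors' algorithm) for the simple case of line clustering in 2D: a hyperedge is grown from a seed, and each new member is drawn with probability that decays with its residual to the model fitted to the members selected so far, which tends to keep even large hyperedges pure.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_line(points):
    """Total-least-squares line through points; returns (unit normal, offset)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def sample_large_hyperedge(data, size, sigma=0.05):
    """Grow one hyperedge: seed with a point and its nearest neighbour, then
    keep adding points that fit the line of the current members well."""
    seed = int(rng.integers(len(data)))
    nearest = int(np.argsort(np.linalg.norm(data - data[seed], axis=1))[1])
    members = [seed, nearest]
    while len(members) < size:
        normal, offset = fit_line(data[members])
        residuals = np.abs(data @ normal + offset)
        weights = np.exp(-residuals ** 2 / (2.0 * sigma ** 2))
        weights[members] = 0.0                 # no repeated members
        weights /= weights.sum()
        members.append(int(rng.choice(len(data), p=weights)))
    return members

# Toy data: two noisy lines; a hyperedge seeded on one of them tends to stay pure.
xs = np.linspace(0.0, 1.0, 50)
noise = 0.01 * rng.normal(size=(2, 50))
data = np.vstack([np.c_[xs, xs + noise[0]], np.c_[xs, 1.0 - xs + noise[1]]])
print(sample_large_hyperedge(data, size=10))
```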

An interesting soft-partition method based on hierarchical graphs (trees, actually) applied to topic detection in documents

Peixian Chen, Nevin L. Zhang, Tengfei Liu, Leonard K.M. Poon, Zhourong Chen, Farhan Khawar, Latent tree models for hierarchical topic detection, Artificial Intelligence, Volume 250, 2017, Pages 105-124, DOI: 10.1016/j.artint.2017.06.004.

We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables that represent word co-occurrence patterns or co-occurrences of such patterns. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. In comparison with LDA-based methods, a key advantage of the new method is that it represents co-occurrence patterns explicitly using model structures. Extensive empirical results show that the new method significantly outperforms the LDA-based methods in terms of model quality and meaningfulness of topics and topic hierarchies.
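
In notation of my own (the abstract gives no formulas), the structure being learned looks as follows: observed binary word variables sit at the leaves of a tree of binary latent variables, the joint distribution factorizes over that tree, and each latent variable induces a soft two-way partition of the documents via its posterior:

```latex
% W_1..W_n: observed word-presence variables; Z_j: binary latent variables; T: the tree.
\[
  P(\mathbf{W}, \mathbf{Z}) \;=\; \prod_{X \in T} P\big(X \mid \mathrm{pa}(X)\big),
  \qquad
  \text{soft partition by } Z_j:\; P\big(Z_j = 1 \mid \mathbf{w}^{(d)}\big)
  \text{ for each document } d.
\]
```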

Constrained MDPs with multiple criteria in the cost to optimize – a hierarchical approach

Seyedshams Feyzabadi, Stefano Carpin, Planning using hierarchical constrained Markov decision processes, Autonomous Robots, Volume 41, Issue 8, pp 1589–1607, DOI: 10.1007/s10514-017-9630-4.

Constrained Markov decision processes offer a principled method to determine policies for sequential stochastic decision problems where multiple costs are concurrently considered. Although they could be very valuable in numerous robotic applications, to date their use has been quite limited. Among the reasons for their limited adoption is their computational complexity, since policy computation requires the solution of constrained linear programs with an extremely large number of variables. To overcome this limitation, we propose a hierarchical method to solve large problem instances. States are clustered into macro states and the parameters defining the dynamic behavior and the costs of the clustered model are determined using a Monte Carlo approach. We show that the algorithm we propose to create clustered states maintains valuable properties of the original model, like the existence of a solution for the problem. Our algorithm is validated in various planning problems in simulation and on a mobile robot platform, and we experimentally show that the clustered approach significantly outperforms the non-hierarchical solution while experiencing only moderate losses in terms of objective functions.
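
For readers unfamiliar with the constrained linear programs mentioned in the abstract, here is a hedged Python sketch of the textbook (non-hierarchical) construction, with made-up numbers: occupancy-measure variables, flow-conservation equalities, a budget inequality on a secondary cost, and the randomized policy read off at the end. It is exactly this LP whose size the hierarchical method is designed to tame.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
S, A, gamma = 3, 2, 0.95
P = rng.dirichlet(np.ones(S), size=(S, A))      # transition model P[s, a, s']
c = rng.uniform(0.0, 1.0, size=(S, A))          # primary cost to minimize
d = rng.uniform(0.0, 1.0, size=(S, A))          # secondary cost to keep bounded
mu = np.full(S, 1.0 / S)                        # initial state distribution
budget = 15.0                                   # made-up bound on the secondary cost

# Flow conservation: sum_a x(s',a) - gamma * sum_{s,a} P[s,a,s'] x(s,a) = mu(s')
A_eq = np.zeros((S, S * A))
for s in range(S):
    for a in range(A):
        col = s * A + a
        A_eq[s, col] += 1.0
        A_eq[:, col] -= gamma * P[s, a, :]

res = linprog(c=c.ravel(),
              A_ub=d.ravel()[None, :], b_ub=[budget],   # secondary-cost budget
              A_eq=A_eq, b_eq=mu,
              bounds=[(0.0, None)] * (S * A))

x = res.x.reshape(S, A)                          # occupancy measures x(s, a)
policy = x / x.sum(axis=1, keepdims=True)        # randomized policy pi(a | s)
print(policy)
```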

A new robotic middleware that exposes “resources” to the network instead of functionality

Marcus V. D. Veloso, José Tarcísio C. Filho, Guilherme A. Barreto, SOM4R: a Middleware for Robotic Applications Based on the Resource-Oriented Architecture, Journal of Intelligent & Robotic Systems, Volume 87, Issue 3–4, pp 487–506, DOI: 10.1007/s10846-017-0504-y.

This paper relies on the resource-oriented architecture (ROA) to propose a middleware that shares resources (sensors, actuators and services) of one or more robots through the TCP/IP network, providing greater efficiency in the development of software applications for robotics. The proposed middleware consists of a set of web services that provides access to representational state of resources through simple and high-level interfaces to implement a software architecture for autonomous robots. The benefits of the proposed approach are manifold: i) full abstraction of complexity and heterogeneity of robotic devices through web services and uniform interfaces, ii) scalability and independence of the operating system and programming language, iii) secure control of resources for local or remote applications through the TCP/IP network, iv) the adoption of the Resource Description Framework (RDF), XML language and HTTP protocol, and v) dynamic configuration of the connections between services at runtime. The middleware was developed using the Linux operating system (Ubuntu), with some applications built as proofs of concept for the Android operating system. The architecture specification and the open source implementation of the proposed middleware are detailed in this article, as well as applications for robot remote control via wireless networks, voice command functionality, and obstacle detection and avoidance.
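
As a toy illustration of the resource-oriented idea (this is not the SOM4R API, just a minimal stand-in using Python's standard library), a hypothetical range sensor is exposed as an HTTP resource that any client on the TCP/IP network can GET:

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class ResourceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/resources/range0":
            # Representational state of the resource (a fake reading here).
            body = json.dumps({"resource": "range0",
                               "unit": "m",
                               "value": round(random.uniform(0.2, 4.0), 3)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "unknown resource")

if __name__ == "__main__":
    # e.g.  curl http://localhost:8080/resources/range0
    HTTPServer(("", 8080), ResourceHandler).serve_forever()
```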

Real-time modification of user inputs in the teleoperation of a UAV in order to avoid obstacles with a reactive algorithm, transparently to the user

Daman Bareiss, Joseph R. Bourne & Kam K. Leang, On-board model-based automatic collision avoidance: application in remotely-piloted unmanned aerial vehicles, Auton Robot (2017) 41:1539–1554, DOI: 10.1007/s10514-017-9614-4.

This paper focuses on real-world implementation and verification of a local, model-based stochastic automatic collision avoidance algorithm, with application in remotely-piloted (tele-operated) unmanned aerial vehicles (UAVs). Automatic collision detection and avoidance for tele-operated UAVs can reduce the workload of pilots to allow them to focus on the task at hand, such as searching for victims in a search and rescue scenario following a natural disaster. The proposed algorithm takes the pilot’s input and exploits the robot’s dynamics to predict the robot’s trajectory for determining whether a collision will occur. Using on-board sensors for obstacle detection, if a collision is imminent, the algorithm modifies the pilot’s input to avoid the collision while attempting to maintain the pilot’s intent. The algorithm is implemented using a low-cost on-board computer, flight-control system, and a two-dimensional laser illuminated detection and ranging sensor for obstacle detection along the trajectory of the robot. The sensor data is processed using a split-and-merge segmentation algorithm and an approximate Minkowski difference. Results from flight tests demonstrate the algorithm’s capabilities for teleoperated collision-free control of an experimental UAV.
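
A much-simplified Python sketch of the control idea (my own simplification; the paper modifies the input more cleverly and segments the laser scan with split-and-merge): forward-simulate the vehicle under the pilot's command, test the predicted trajectory against obstacle points inflated by the robot radius, and attenuate the command until the prediction is collision-free.

```python
import numpy as np

def predict_trajectory(pos, vel_cmd, horizon=2.0, dt=0.1):
    """Kinematic rollout: constant commanded velocity over the horizon."""
    steps = int(horizon / dt)
    return pos + np.outer(np.arange(1, steps + 1) * dt, vel_cmd)

def safe_command(pos, pilot_cmd, obstacles, radius=0.5):
    """Return the largest scaling of the pilot command whose predicted
    trajectory stays at least `radius` away from every obstacle point."""
    for scale in np.linspace(1.0, 0.0, 11):
        traj = predict_trajectory(pos, scale * pilot_cmd)
        dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=2)
        if dists.min() > radius:
            return scale * pilot_cmd
    return np.zeros(2)                         # stop if nothing is safe

pos = np.array([0.0, 0.0])
pilot_cmd = np.array([1.0, 0.0])               # pilot flies straight ahead at 1 m/s
obstacles = np.array([[1.5, 0.1], [1.6, -0.2]])
print(safe_command(pos, pilot_cmd, obstacles))
```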

Learning basic motion skills by modeling them as parameterized modules (learned by demonstration and babbling), plus a nice state-of-the-art review of the development of motion skills

René Felix Reinhart, Autonomous exploration of motor skills by skill babbling, Auton Robot (2017) 41:1521–1537, DOI: 10.1007/s10514-016-9613-x.

Autonomous exploration of motor skills is a key capability of learning robotic systems. Learning motor skills can be formulated as an inverse modeling problem, which aims at finding an inverse model that maps desired outcomes in some task space, e.g., via points of a motion, to appropriate actions, e.g., motion control policy parameters. In this paper, autonomous exploration of motor skills is achieved by incrementally learning inverse models starting from an initial demonstration. The algorithm is referred to as skill babbling, features sample-efficient learning, and scales to high-dimensional action spaces. Skill babbling extends ideas of goal-directed exploration, which organizes exploration in the space of goals. The proposed approach provides a modular framework for autonomous skill exploration by separating the learning of the inverse model from the exploration mechanism and a model of achievable targets, i.e., the workspace. The effectiveness of skill babbling is demonstrated for a range of motor tasks comprising the autonomous bootstrapping of inverse kinematics and parameterized motion primitives.
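
A minimal sketch of the babbling loop under my own assumptions (toy two-link arm, nearest-neighbour inverse model; not the paper's implementation): starting from a single demonstration, goals are sampled near outcomes reached so far, the current inverse model proposes an action, exploration noise is added, and the executed outcome/action pair extends the dataset and thus the known workspace.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(q):                               # toy forward model: 2-link planar arm
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def inverse(goal, outcomes, actions):         # nearest-neighbour inverse model
    return actions[np.argmin(np.linalg.norm(outcomes - goal, axis=1))]

# Initial demonstration: one joint configuration and the outcome it achieves.
actions = [np.array([0.3, 0.4])]
outcomes = [forward(actions[0])]

for _ in range(200):                          # babbling iterations
    seed = outcomes[rng.integers(len(outcomes))]
    goal = seed + rng.normal(0.0, 0.1, size=2)          # goal near known outcomes
    q = inverse(goal, np.array(outcomes), np.array(actions))
    q = q + rng.normal(0.0, 0.05, size=2)               # exploration noise
    actions.append(q)
    outcomes.append(forward(q))                          # execute and record

print(len(outcomes), "outcome/action pairs collected")
```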