Category Archives: Robotics

Including the dynamics of the environment in robot motion planning (navigation)

María-Teresa Lorente, Eduardo Owen, and Luis Montano, Model-based robocentric planning and navigation for dynamic environments, The International Journal of Robotics Research, Vol 37, Issue 8, pp. 867–889, DOI: 10.1177/0278364918775520.

This work presents a new technique for motion planning and navigation for differential-drive robots in dynamic environments. Static and dynamic objects are represented directly on the control space of the robot, where decisions on the best motion are made. A new model, the dynamic object velocity space (DOVS), is defined to represent the dynamism of the environment and predict its future behavior. A formal definition of this model is provided, establishing the properties for its characterization. An analysis of its complexity, compared with other methods, is performed. The model contains information about the future behavior of obstacles, mapped on the robot control space. It allows planning of near-time-optimal safe motions within the visibility space horizon, not only for the current sampling period. Navigation strategies are developed based on the identification of situations in the model. The planned strategy is applied and updated for each sampling time, adapting to changes occurring in the scenario. The technique is evaluated in randomly generated simulated scenarios, based on metrics defined using safety and time-to-goal criteria. An evaluation in real-world experiments is also presented.
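
The core idea is to reason about moving obstacles directly in the robot's velocity (control) space. As a very rough, hedged illustration of that idea (not the DOVS construction from the paper), the sketch below samples (v, w) commands of a differential-drive robot and keeps only those whose constant-command arc stays clear of a constant-velocity prediction of a moving obstacle; all names, parameters, and the simplistic constant-velocity prediction are assumptions of mine.

```python
import numpy as np

def collides(v, w, obstacle_pos, obstacle_vel, radius, horizon=3.0, dt=0.1):
    """Return True if the constant (v, w) arc of a differential-drive robot
    gets closer than `radius` to an obstacle extrapolated at constant velocity."""
    x, y, th = 0.0, 0.0, 0.0              # robot starts at the origin of its own frame
    ox, oy = obstacle_pos
    vx, vy = obstacle_vel
    for t in np.arange(dt, horizon, dt):
        x += v * np.cos(th) * dt          # unicycle kinematics, forward Euler
        y += v * np.sin(th) * dt
        th += w * dt
        if np.hypot(x - (ox + vx * t), y - (oy + vy * t)) < radius:
            return True
    return False

# Keep only the velocity commands whose arcs stay clear of the predicted obstacle.
candidates = [(v, w) for v in np.linspace(0.0, 1.0, 11)
                     for w in np.linspace(-1.0, 1.0, 11)]
free = [c for c in candidates
        if not collides(*c, obstacle_pos=(2.0, 0.5),
                        obstacle_vel=(-0.3, 0.0), radius=0.4)]
```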

A probabilistically rigorous formulation of the estimation of grid maps in dynamic scenarios, and a nice review of the state of the art in grid mapping, both for static and dynamic scenarios

Dominik Nuss, Stephan Reuter, Markus Thom, Ting Yuan, Gunther Krehl, Michael Maile, Axel Gern, and Klaus Dietmayer, A random finite set approach for dynamic occupancy grid maps with real-time application, The International Journal of Robotics Research, Vol 37, Issue 8, pp. 841–866, DOI: 10.1177/0278364918775523.

Grid mapping is a well-established approach for environment perception in robotic and automotive applications. Early work suggests estimating the occupancy state of each grid cell in a robot’s environment using a Bayesian filter to recursively combine new measurements with the current posterior state estimate of each grid cell. This filter is often referred to as the binary Bayes filter. A basic assumption of classical occupancy grid maps is a stationary environment. Recent publications describe bottom-up approaches using particles to represent the dynamic state of a grid cell and outline prediction-update recursions in a heuristic manner. This paper defines the state of multiple grid cells as a random finite set, which makes it possible to model the environment as a stochastic, dynamic system with multiple obstacles, observed by a stochastic measurement system. It motivates an original filter called the probability hypothesis density / multi-instance Bernoulli (PHD/MIB) filter in a top-down manner. The paper presents a real-time application serving as a fusion layer for laser and radar sensor data and describes in detail a highly efficient parallel particle filter implementation. A quantitative evaluation shows that parameters of the stochastic process model affect the filter results as theoretically expected and that appropriate process and observation models provide consistent state estimation results.
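
For reference, the "binary Bayes filter" that the abstract takes as the classical baseline is the standard log-odds occupancy update. A minimal sketch of that baseline (not of the PHD/MIB filter proposed in the paper) looks roughly like this; class and parameter names are my own.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Classical static occupancy grid with per-cell log-odds (binary Bayes filter)."""
    def __init__(self, shape, p_prior=0.5):
        self.l0 = logit(p_prior)
        self.l = np.full(shape, self.l0)       # log-odds of occupancy per cell

    def update_cell(self, idx, p_meas):
        """Fuse an inverse-sensor-model probability for one observed cell."""
        self.l[idx] += logit(p_meas) - self.l0

    def probabilities(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.l))

grid = OccupancyGrid((100, 100))
grid.update_cell((10, 20), 0.9)   # this cell looked occupied in the current scan
grid.update_cell((10, 21), 0.3)   # this one looked free
```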

Considering the robot and all the intermediate objects that participate in the manipulation of another object as an MDP

Yilun Zhou, Benjamin Burchfiel, George Konidaris, Representing, learning, and controlling complex object interactions, Autonomous Robots, Volume 42, Issue 7, pp 1355–1367, DOI: 10.1007/s1051.

We present a framework for representing scenarios with complex object interactions, where a robot cannot directly interact with the object it wishes to control and must instead influence it via intermediate objects. For instance, a robot learning to drive a car can only change the car’s pose indirectly via the steering wheel, and must represent and reason about the relationship between its own grippers and the steering wheel, and the relationship between the steering wheel and the car. We formalize these interactions as chains and graphs of Markov decision processes (MDPs) and show how such models can be learned from data. We also consider how they can be controlled given known or learned dynamics. We show that our complex model can be collapsed into a single MDP and solved to find an optimal policy for the combined system. Since the resulting MDP may be very large, we also introduce a planning algorithm that efficiently produces a potentially suboptimal policy. We apply these models to two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game using a joystick, and using a hot water dispenser to heat a cup of water.
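
To see how a chain of MDPs can be collapsed into a single MDP and solved, here is a hedged toy sketch: the robot acts on an intermediate object (MDP A) whose state in turn drives the object of interest (MDP B), the product MDP is built explicitly, and plain value iteration recovers a policy. Sizes, names, and the specific composition rule are assumptions for illustration, not the paper's formulation or planner.

```python
import itertools
import numpy as np

nA, nB, n_act = 3, 3, 2
rng = np.random.default_rng(0)

def random_stochastic(shape):
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

T_A = random_stochastic((nA, n_act, nA))   # P(sA' | sA, robot action)
T_B = random_stochastic((nB, nA, nB))      # P(sB' | sB, state of intermediate object)
R_B = np.array([0.0, 0.0, 1.0])            # reward depends on the downstream object

# Collapse the chain into one MDP over the product state (sA, sB).
states = list(itertools.product(range(nA), range(nB)))
idx = {s: i for i, s in enumerate(states)}
T = np.zeros((len(states), n_act, len(states)))
R = np.array([R_B[sB] for (_, sB) in states])
for (sA, sB), i in idx.items():
    for a in range(n_act):
        for sA2 in range(nA):
            for sB2 in range(nB):
                T[i, a, idx[(sA2, sB2)]] = T_A[sA, a, sA2] * T_B[sB, sA2, sB2]

# Plain value iteration on the collapsed MDP.
gamma, V = 0.95, np.zeros(len(states))
for _ in range(200):
    V = R + gamma * np.max(T @ V, axis=-1)
policy = np.argmax(T @ V, axis=-1)          # one action per combined state
```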

Loop closure detection by optimization of finite sets of images that correspond to each place

Han, F., Wang, H., Huang, G. et al, Sequence-based sparse optimization methods for long-term loop closure detection in visual SLAM, Autonomous Robots, Volume 42, Issue 7, pp 1323–1335, DOI: 10.1007/s1051.

Loop closure detection is one of the most important modules in Simultaneous Localization and Mapping (SLAM) because it enables finding the global topology among different places. A loop closure is detected when the current place is recognized as matching a previously visited place. When SLAM is executed over a long-term period, there are additional challenges for loop closure detection. The illumination, weather, and vegetation conditions can often change significantly during life-long SLAM, resulting in severe perceptual aliasing and appearance variation problems in loop closure detection. In order to address this problem, we propose a new Robust Multimodal Sequence-based (ROMS) method for robust loop closure detection in long-term visual SLAM. A sequence of images is used as the representation of places in our ROMS method, where each image in the sequence is encoded by multiple feature modalities so that different places can be recognized discriminatively. We formulate the robust place recognition problem as a convex optimization problem with structured sparsity regularization, due to the fact that only a small set of template places can match the query place. In addition, we also develop a new algorithm to solve the formulated optimization problem efficiently, which is theoretically guaranteed to converge to the global optimum. Our ROMS method is evaluated through extensive experiments on three large-scale benchmark datasets, which record scenes ranging across different times of the day, months, and seasons. Experimental results demonstrate that our ROMS method outperforms the existing loop closure detection methods in long-term SLAM, and achieves state-of-the-art performance.
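
The optimization at the heart of the method selects a sparse set of template places that reconstruct the query place. As a simplified stand-in (plain l1 sparsity instead of the paper's structured sparsity, and generic ISTA instead of the authors' tailored solver), a sketch could look like this; dimensions and names are assumptions.

```python
import numpy as np

def ista(D, q, lam=0.1, step=None, iters=500):
    """Solve min_w 0.5*||D w - q||^2 + lam*||w||_1 by proximal gradient (ISTA).
    D stacks template-place feature vectors column-wise, q is the query place."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - q)
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # soft threshold
    return w

rng = np.random.default_rng(1)
D = rng.standard_normal((128, 50))                    # 50 template places, 128-dim features
q = D[:, 7] + 0.05 * rng.standard_normal(128)         # query resembles template 7
w = ista(D, q)
match = int(np.argmax(np.abs(w)))                     # loop closure candidate: index 7
```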

Distributing a neural network among the robots of a swarm

Michael Otte, An emergent group mind across a swarm of robots: Collective cognition and distributed sensing via a shared wireless neural network, The International Journal of Robotics Research, DOI: 10.1177/0278364918779704.

We pose the “trained-at-runtime heterogeneous swarm response problem,” in which a swarm of robots must do the following three things: (1) Learn to differentiate between multiple classes of environmental feature patterns (where the feature patterns are distributively sensed across all robots in the swarm). (2) Perform the particular collective behavior that is the appropriate response to the feature pattern that the swarm recognizes in the environment at runtime (where a collective behavior is defined by a mapping of robot actions to robots). (3) The data required for both (1) and (2) is uploaded to the swarm after it has been deployed, i.e., also at runtime (the data required for (1) is the specific environmental feature patterns that the swarm should learn to differentiate between, and the data required for (2) is the mapping from feature classes to swarm behaviors). To solve this problem, we propose a new form of emergent distributed neural network that we call an “artificial group mind.” The group mind transforms a robotic swarm into a single meta-computer that can be programmed at runtime. In particular, the swarm-spanning artificial neural network emerges as each robot maintains a slice of neurons and forms wireless neural connections between its neurons and those on nearby robots. The nearby robots are discovered at runtime. Experiments on real swarms containing up to 316 robots demonstrate that the group mind enables collective decision-making based on distributed sensor data, and solves the trained-at-runtime heterogeneous swarm response problem. The group mind is a new tool that can be used to create more complex emergent swarm behaviors. The group mind also enables swarm behaviors to be a function of global patterns observed across the environment—where the patterns are orders of magnitude larger than the robots themselves.
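
A tiny, single-process caricature of partitioning a network's neurons across robots (no wireless layer, no runtime training, all names and sizes assumed) might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
n_robots, n_in, neurons_per_robot, n_classes = 4, 16, 8, 3

# Each robot holds one slice of the hidden layer; an output layer pools all slices.
robot_weights = [rng.standard_normal((neurons_per_robot, n_in)) for _ in range(n_robots)]
W_out = rng.standard_normal((n_classes, n_robots * neurons_per_robot))

def swarm_forward(distributed_inputs):
    """Each robot senses part of the environment and evaluates its own neuron slice;
    concatenating the slices gives the hidden layer of the collective network."""
    slices = [np.tanh(W @ x) for W, x in zip(robot_weights, distributed_inputs)]
    hidden = np.concatenate(slices)
    logits = W_out @ hidden
    return int(np.argmax(logits))          # index of the collective behavior to execute

readings = [rng.standard_normal(n_in) for _ in range(n_robots)]
behavior = swarm_forward(readings)
```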

Convergence in reference tracking by a nonlinear system, with a known model, remotely controlled through WiFi

Ali Parsa, Alireza Farhadi, Measurement and control of nonlinear dynamic systems over the internet (IoT): Applications in remote control of autonomous vehicles, Automatica, Volume 95, 2018, Pages 93-103 DOI: 10.1016/j.automatica.2018.05.016.

This paper presents a new technique for almost sure asymptotic state tracking, stability and reference tracking of nonlinear dynamic systems by a remote controller over the packet erasure channel, which is an abstract model for transmission via WiFi and the Internet. By implementing a suitable linearization method, a proper encoder and decoder are presented for tracking the state trajectory of nonlinear systems at the end of the communication link when the measurements are sent through the packet erasure channel. Then, a controller for reference tracking of the system is designed. In the proposed technique, linearization is applied when the error between the states and an estimate of these states at the decoder increases. It is shown that the proposed technique results in almost sure asymptotic reference tracking (and hence stability) over the packet erasure channel. The satisfactory performance of the proposed state trajectory and reference tracking technique is illustrated by computer simulations applying this technique to the unicycle model, which represents the dynamics of autonomous vehicles.
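
A hedged toy simulation of the setting (not the paper's encoder/decoder construction or its stability proof) is easy to write: a unicycle sends its state over a packet erasure channel, and when a packet is dropped the remote side propagates its own copy of the known model instead of holding the last received value. Parameters and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, p_drop, steps = 0.05, 0.3, 400
v_cmd, w_cmd = 0.5, 0.2                        # control inputs known at both ends

def unicycle_step(state, v, w):
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

true_state = np.zeros(3)
remote_estimate = np.zeros(3)
errors = []
for _ in range(steps):
    true_state = unicycle_step(true_state, v_cmd, w_cmd)
    if rng.random() > p_drop:                  # packet arrives: sync the remote estimate
        remote_estimate = true_state.copy()
    else:                                      # packet lost: predict with the known model
        remote_estimate = unicycle_step(remote_estimate, v_cmd, w_cmd)
    errors.append(np.linalg.norm(true_state - remote_estimate))
```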

A review on mobile robot navigation

Tzafestas, S.G., Mobile Robot Control and Navigation: A Global Overview, J Intell Robot Syst (2018) 91: 35, DOI: 10.1007/s10846-018-0805-9.

The aim of this paper is to provide a global overview of mobile robot control and navigation methodologies developed over the last decades. Mobile robots have been a substantial contributor to the welfare of modern society over the years, including the industrial, service, medical, and socialization sectors. The paper starts with a list of books on autonomous mobile robots and an overview of survey papers that cover a wide range of decision, control and navigation areas. The organization of the material follows the structure of the author’s recent book on mobile robot control. Thus, the following aspects of wheeled mobile robots are considered: kinematic modeling, dynamic modeling, conventional control, affine model-based control, invariant manifold-based control, model reference adaptive control, sliding-mode control, fuzzy and neural control, vision-based control, path and motion planning, localization and mapping, and control and software architectures.
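
As a taste of the first topic in that list, the textbook differential-drive kinematic model (wheel rates to body twist to pose) can be written in a few lines; this is the generic model, not anything specific to the survey, and the wheel radius and axle length are assumed values.

```python
import numpy as np

def diff_drive_step(pose, w_left, w_right, r=0.05, L=0.3, dt=0.1):
    """Integrate the unicycle-equivalent kinematics one step from wheel rates."""
    x, y, th = pose
    v = r * (w_right + w_left) / 2.0        # linear velocity of the body frame
    w = r * (w_right - w_left) / L          # angular velocity
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

pose = np.array([0.0, 0.0, 0.0])
for _ in range(100):                        # slight wheel-speed difference: a gentle arc
    pose = diff_drive_step(pose, w_left=4.0, w_right=5.0)
```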

High performance robotic computing (HPRC) vs. High performance computing, and its application to multirobot systems

Leonardo Camargo-Forero, Pablo Royo, Xavier Prats, Towards high performance robotic computing, Robotics and Autonomous Systems, Volume 107, 2018, Pages 167-181, DOI: 10.1016/j.robot.2018.05.011.

Embedding a robot with a companion computer is becoming a common practice nowadays. Such a computer is installed with an operating system, often a Linux distribution. Moreover, Graphic Processing Units (GPUs) can be embedded on a robot, giving it the capacity to perform complex on-board computing tasks while executing a mission. A next logical transition consists of deploying a cluster of computers among embedded computing cards. With this approach, a multi-robot system can be set up as a High Performance Computing (HPC) cluster. The advantages of such an infrastructure are many, from providing higher computing power to setting up scalable multi-robot systems. While HPC has always been seen as a speeding-up tool, we believe that HPC in the world of robotics can do much more than simply accelerating the execution of complex computing tasks. In this paper, we introduce the novel concept of High Performance Robotic Computing (HPRC), an augmentation of the ideas behind traditional HPC to fit and enhance the world of robotics. As a proof of concept, we introduce novel HPC software developed to control the motion of a set of robots using the standard parallel MPI (Message Passing Interface) library. The parallel motion software includes two operation modes: parallel motion to a specific target and swarm-like behavior. Furthermore, the HPC software is virtually scalable to control any quantity of moving robots, including Unmanned Aerial Vehicles, Unmanned Ground Vehicles, etc.
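
In the spirit of the MPI-based motion software described above (but not the authors' code), a minimal mpi4py sketch with one rank per robot could look like this; the proportional motion rule, the target, and the script name in the run command are assumptions.

```python
import numpy as np
from mpi4py import MPI

# Run with, e.g.: mpiexec -n 4 python swarm.py
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 chooses the target and broadcasts it to every robot.
target = comm.bcast(np.array([10.0, 5.0]) if rank == 0 else None, root=0)
position = np.array([float(rank), 0.0])        # each robot starts somewhere different

for _ in range(50):                            # simple proportional step toward the target
    position += 0.1 * (target - position)

positions = comm.gather(position, root=0)      # rank 0 collects the final positions
if rank == 0:
    print("final robot positions:", positions)
```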

Filters with quaternions for localization

Rangaprasad Arun Srivatsan, Mengyun Xu, Nicolas Zevallos, and Howie Choset, Probabilistic pose estimation using a Bingham distribution-based linear filter, The International Journal of Robotics Research DOI: 10.1177/0278364918778353.

Pose estimation is central to several robotics applications such as registration, hand–eye calibration, and simultaneous localization and mapping (SLAM). Online pose estimation methods typically use Gaussian distributions to describe the uncertainty in the pose parameters. Such a description can be inadequate when using parameters such as unit quaternions that are not unimodally distributed. A Bingham distribution can effectively model the uncertainty in unit quaternions, as it has antipodal symmetry, and is defined on a unit hypersphere. A combination of Gaussian and Bingham distributions is used to develop a truly linear filter that accurately estimates the distribution of the pose parameters. The linear filter, however, comes at the cost of state-dependent measurement uncertainty. Using results from stochastic theory, we show that the state-dependent measurement uncertainty can be evaluated exactly. To show the broad applicability of this approach, we derive linear measurement models for applications that use position, surface-normal, and pose measurements. Experiments assert that this approach is robust to initial estimation errors as well as sensor noise. Compared with state-of-the-art methods, our approach takes fewer iterations to converge onto the correct pose estimate. The efficacy of the formulation is illustrated with a number of examples on standard datasets as well as real-world experiments.
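
For readers unfamiliar with the Bingham distribution: its unnormalized density on unit quaternions is exp(q^T M Z M^T q), with M orthogonal and Z a diagonal matrix of non-positive concentration parameters, and it satisfies p(q) = p(-q), the antipodal symmetry mentioned in the abstract. A small sketch (normalization constant omitted, all names assumed) follows.

```python
import numpy as np

def bingham_unnormalized(q, M, Z):
    """Unnormalized Bingham density exp(q^T M Z M^T q) on the unit hypersphere."""
    q = q / np.linalg.norm(q)                  # keep q a unit quaternion
    return np.exp(q @ M @ Z @ M.T @ q)

M = np.eye(4)                                  # columns are dispersion axes; last column is the mode
Z = np.diag([-50.0, -50.0, -50.0, 0.0])        # non-positive concentration parameters

q_mode = np.array([0.0, 0.0, 0.0, 1.0])
q_off = np.array([0.1, 0.0, 0.0, 1.0])
print(bingham_unnormalized(q_mode, M, Z),      # highest density at the mode
      bingham_unnormalized(q_off, M, Z),       # lower density away from the mode
      bingham_unnormalized(-q_mode, M, Z))     # same as the mode: antipodal symmetry
```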