A theoretical framework based on hybrid models and logical verification for proving obstacle-avoidance guarantees in mobile robot navigation

Stefan Mitsch, Khalil Ghorbal, David Vogelbacher, and André Platzer, Formal verification of obstacle avoidance and navigation of ground robots, The International Journal of Robotics Research, Vol 36, Issue 12, pp. 1312-1340, DOI: 10.1177/0278364917733549.

This article answers fundamental safety questions for ground robot navigation: under which circumstances does which control decision make a ground robot safely avoid obstacles? Unsurprisingly, the answer depends on the exact formulation of the safety objective, as well as the physical capabilities and limitations of the robot and the obstacles. Because uncertainties about the exact future behavior of a robot’s environment make this a challenging problem, we formally verify corresponding controllers and provide rigorous safety proofs justifying why the robots can never collide with the obstacle in the respective physical model. To account for ground robots in which different physical phenomena are important, we analyze a series of increasingly strong properties of controllers for increasingly rich dynamics and identify the impact that the additional model parameters have on the required safety margins. We analyze and formally verify: (i) static safety, which ensures that no collisions can happen with stationary obstacles; (ii) passive safety, which ensures that no collisions can happen with stationary or moving obstacles while the robot moves; (iii) the stronger passive-friendly safety, in which the robot further maintains sufficient maneuvering distance for obstacles to avoid collision as well; and (iv) passive orientation safety, which allows for imperfect sensor coverage of the robot, i.e., the robot is aware that not everything in its environment will be visible. We formally prove that safety can be guaranteed despite sensor uncertainty and actuator perturbation. We complement these provably correct safety properties with liveness properties: we prove that provably safe motion is flexible enough to let the robot navigate waypoints and pass intersections. To account for the mixed influence of discrete control decisions and the continuous physical motion of the ground robot, we develop corresponding hybrid system models and use differential dynamic logic theorem-proving techniques to formally verify their correctness. Since these models identify a broad range of conditions under which control decisions are provably safe, our results apply to any control algorithm for ground robots with the same dynamics. As a demonstration, we also synthesize provably correct runtime monitor conditions that check the compliance of any control algorithm with the verified control decisions.
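
To make the runtime-monitor idea concrete, here is a minimal Python sketch of a passive-safety distance check. The formula follows the shape of the paper's passive-safety bound (braking distance, obstacle travel during braking, plus what one control cycle can add), but the simplification to a scalar distance and the variable names are ours, not the verified model itself.

```python
def passive_safety_margin(v, V, A, b, eps):
    """Worst-case distance the robot needs to come to a stop safely.

    v   : current robot speed            (m/s)
    V   : maximum obstacle speed         (m/s)
    A   : maximum robot acceleration     (m/s^2)
    b   : minimum guaranteed braking     (m/s^2), b > 0
    eps : maximum control-cycle latency  (s)
    """
    # Distance covered while braking from v to 0, plus the distance the
    # obstacle may travel during that braking time ...
    brake = v**2 / (2 * b) + V * v / b
    # ... plus what robot and obstacle can add during one control cycle
    # in which the robot may still accelerate with up to A.
    cycle = (A / b + 1) * (A / 2 * eps**2 + eps * (v + V))
    return brake + cycle

def control_is_safe(dist_to_obstacle, v, V, A, b, eps):
    # A monitor in this spirit admits a control choice only if the
    # measured distance exceeds the worst-case stopping margin.
    return dist_to_obstacle > passive_safety_margin(v, V, A, b, eps)
```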

Extending Bayesian fusion from Euclidean spaces to Lie groups

Kevin C. Wolfe, Michael Mashner, Gregory S. Chirikjian, Bayesian Fusion on Lie Groups, Journal of Algebraic Statistics, Vol. 2, No. 1, 2011, pp. 75-97, DOI: 10.18409/jas.v2i1.11.

An increasing number of real-world problems involve the measurement of data, and the computation of estimates, on Lie groups. Moreover, establishing confidence in the resulting estimates is important. This paper therefore seeks to contribute to a larger theoretical framework that generalizes classical multivariate statistical analysis from Euclidean space to the setting of Lie groups. The particular focus here is on extending Bayesian fusion, based on exponential families of probability densities, from the Euclidean setting to Lie groups. The definition and properties of a new kind of Gaussian distribution for connected unimodular Lie groups are articulated, and explicit formulas and algorithms are given for finding the mean and covariance of the fusion model based on the means and covariances of the constituent probability densities. The Lie groups that find the most applications in engineering are rotation groups and groups of rigid-body motions. Orientational (rotation-group) data and associated algorithms for estimation arise in problems including satellite attitude, molecular spectroscopy, and global geological studies. In robotics and manufacturing, quantifying errors in the position and orientation of tools and parts are important for task performance and quality control. Developing a general way to handle problems on Lie groups can be applied to all of these problems. In particular, we study the issue of how to ‘fuse’ two such Gaussians and how to obtain a new Gaussian of the same form that is ‘close to’ the fused density. This is done at two levels of approximation that result from truncating the Baker-Campbell-Hausdorff formula with different numbers of terms. Algorithms are developed and numerical results are presented that are shown to generate the equivalent fused density with good accuracy.
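
To make the first-order case concrete (truncating the Baker-Campbell-Hausdorff series after the linear term), here is a hedged Python sketch of fusing two such Gaussians on SO(3) using scipy's rotation utilities. Linearizing in the tangent space at the first mean and the information-weighted averaging are our simplifications for illustration, not the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_so3(R1, P1, R2, P2):
    """First-order fusion of two 'Gaussians' on SO(3).

    Each input is a mean rotation R_i (a scipy Rotation) with a 3x3
    covariance P_i expressed in the Lie algebra so(3) at that mean.
    With the BCH series cut after the linear term, fusion reduces to
    an information-weighted average in the tangent space at R1.
    """
    # Log of the relative rotation: R2's mean as seen from R1.
    delta = (R1.inv() * R2).as_rotvec()
    # Information (inverse-covariance) weighting, as in Euclidean fusion.
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    # Fused mean: move from R1 toward R2 by the weighted tangent vector.
    R = R1 * Rotation.from_rotvec(P @ I2 @ delta)
    return R, P
```

With equal covariances this returns the rotation "halfway" between the two means, which matches the Euclidean intuition the paper generalizes.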

First end-to-end implementation of (monocular) visual odometry with deep neural networks, including uncertainty estimates for its outputs

Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni, End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks, The International Journal of Robotics Research, Vol 37, Issue 4-5, pp. 513-542, DOI: 10.1177/0278364917734298.

This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. An end-to-end, sequence-to-sequence probabilistic visual odometry (ESP-VO) framework is proposed for the monocular VO based on deep recurrent convolutional neural networks. It is trained and deployed in an end-to-end manner, that is, directly inferring poses and uncertainties from a sequence of raw images (video) without adopting any modules from the conventional VO pipeline. It can not only automatically learn effective feature representation encapsulating geometric information through convolutional neural networks, but also implicitly model sequential dynamics and relation for VO using deep recurrent neural networks. Uncertainty is also derived along with the VO estimation without introducing much extra computation. Extensive experiments on several datasets representing driving, flying and walking scenarios show competitive performance of the proposed ESP-VO to the state-of-the-art methods, demonstrating a promising potential of the deep learning technique for VO and verifying that it can be a viable complement to current VO systems.
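
As a rough illustration of the ingredients (convolutional features, recurrence over the image sequence, and a learned uncertainty), here is a toy PyTorch sketch. The layer sizes and the heteroscedastic Gaussian loss are generic choices of ours, not the authors' ESP-VO network.

```python
import torch
import torch.nn as nn

class TinyVOSketch(nn.Module):
    """Minimal CNN+LSTM pose regressor in the spirit of ESP-VO (not the
    authors' architecture): stacked image pairs -> CNN features -> LSTM
    over the sequence -> 6-DoF pose plus a log-variance per component."""

    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                   # consumes 2 stacked RGB frames
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 64*4*4 features
        self.rnn = nn.LSTM(64 * 16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 12)           # 6 pose + 6 log-variance

    def forward(self, frames):                      # frames: (B, T, 6, H, W)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.rnn(feats)
        pose, logvar = self.head(out).chunk(2, dim=-1)
        return pose, logvar

def nll_loss(pose, logvar, target):
    # Heteroscedastic negative log-likelihood: errors on which the
    # network reports high uncertainty are penalized less, so the
    # uncertainty comes out of training at almost no extra cost.
    return ((pose - target) ** 2 * torch.exp(-logvar) + logvar).mean()
```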

Automatically learned hierarchies of spatial scales for recognizing places in images

Chen Fan, Zetao Chen, Adam Jacobson, Xiaoping Hu, Michael Milford, Biologically-inspired visual place recognition with adaptive multiple scales, Robotics and Autonomous Systems, Volume 96, 2017, Pages 224-237, DOI: 10.1016/j.robot.2017.07.015.

In this paper we present a novel adaptive multi-scale system for performing visual place recognition. Unlike recent previous multi-scale place recognition systems that use manually pre-fixed scales, we present a system that adaptively selects the spatial scales. This approach differs from previous multi-scale methods, where place recognition is performed through a non-optimized distance metric in a fixed and pre-determined scale space. Instead, we learn an optimized distance metric which creates a new recognition space for clustering images with similar features while separating those with different features. Consequently, the method exploits the natural spatial scales present in the operating environment. With these adaptive scales, a hierarchical recognition mechanism with multiple parallel channels is then proposed. Each channel performs place recognition from a coarse match to a fine match. We present specific techniques for training each channel to recognize places at varying spatial scales and for combining the place recognition hypotheses from these parallel channels. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of using different numbers of combined recognition channels. The results demonstrate that the adaptive multi-scale approach outperforms the previous fixed multi-scale approach and is capable of producing better than state-of-the-art performance compared to existing robotic navigation algorithms. The system complexity is linear in the number of places in the reference static map and can realize online place recognition in mobile robotics on typical dataset sizes. We analyze the results and provide theoretical analysis of the performance improvements. Finally, we discuss interesting insights gained with respect to future work in robotics and neuroscience in this area.
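
A loose Python sketch of the two ideas, data-driven scales and coarse-to-fine parallel channels, might look as follows. The clustering heuristic and the voting rule are our illustrative choices, not the paper's learned metric or training procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_scale_channels(features, n_scales=3):
    """Instead of hand-picked spatial scales, cluster the reference
    images' descriptors so each channel covers a data-driven scale:
    coarser channels use fewer, larger clusters of places."""
    channels = []
    for k in range(1, n_scales + 1):
        n_clusters = max(2, len(features) // (10 * k))
        channels.append(KMeans(n_clusters=n_clusters, n_init=10).fit(features))
    return channels

def recognize(query, channels, features):
    """Coarse-to-fine matching: each channel first narrows the search to
    one cluster, then ranks that cluster's members; hypotheses are
    combined by simple voting across the parallel channels."""
    votes = np.zeros(len(features))
    for km in channels:
        members = np.flatnonzero(km.predict(query[None]) == km.labels_)
        dists = np.linalg.norm(features[members] - query, axis=1)
        votes[members[np.argsort(dists)[:5]]] += 1.0  # top-5 per channel
    return int(np.argmax(votes))
```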

On the need of integrating emotions in robotic architectures

Luiz Pessoa, Do Intelligent Robots Need Emotion?, Trends in Cognitive Sciences, Volume 21, Issue 11, 2017, Pages 817-819, DOI: 10.1016/j.tics.2017.06.010.

What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive–emotional integration should be a key design principle.

The Kalman filter as the limiting case of finite impulse response filters as the horizon grows in length

Shunyi Zhao, Biao Huang, Yuriy S. Shmaliy, Bayesian state estimation on finite horizons: The case of linear state-space model, Automatica, Volume 85, 2017, Pages 91-99, DOI: 10.1016/j.automatica.2017.07.043.

The finite impulse response (FIR) filter and infinite impulse response filter including the Kalman filter (KF) are generally considered as two different types of state estimation methods. In this paper, the sequential Bayesian philosophy is extended to a filter design using a fixed amount of most recent measurements, with the aim of exploiting the FIR structure and unifying some basic FIR filters with the KF. Specifically, the conditional mean and covariance of the posterior probability density functions are first derived to show the FIR counterpart of the KF. To remove the dependence on initial states, the corresponding likelihood is further maximized and realized iteratively. It shows that the maximum likelihood modification unifies the existing unbiased FIR filters by tuning a weighting matrix. Moreover, it converges to the Kalman estimate with the increase of horizon length, and can thus be considered as a link between the FIR filtering and the KF. Several important properties including stability and robustness against errors of noise statistics are illustrated. Finally, a moving target tracking example and an experiment with a three degrees-of-freedom helicopter system are introduced to demonstrate effectiveness.
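
The link between the two filter families is easy to see on a toy linear model: a standard KF recursion next to a batch FIR estimate that uses only the last N measurements and no initial state. As N grows, the FIR estimate approaches the Kalman estimate, which is the convergence the paper proves for its maximum-likelihood FIR filter; the model below is our own toy example, not the paper's.

```python
import numpy as np

# Toy linear state-space model: near-constant-velocity target.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

def kalman_step(x, P, y):
    # Standard predict/update recursion of the KF.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (y - H @ x), (np.eye(2) - K @ H) @ P

def fir_estimate(ys):
    """Batch estimate from the last N measurements only, ignoring any
    initial state: a minimal unbiased-FIR-style estimator. Stacking
    y_k = H F^(k-(N-1)) x_end over the horizon and solving least
    squares recovers the state at the end of the horizon."""
    N, Finv = len(ys), np.linalg.inv(F)
    # F^(-(N-1-k)) maps the state at the horizon's end back to step k.
    A = np.vstack([H @ np.linalg.matrix_power(Finv, N - 1 - k)
                   for k in range(N)])
    x_end, *_ = np.linalg.lstsq(A, np.asarray(ys).ravel(), rcond=None)
    return x_end
```

Running both on the same measurement stream and increasing N shows the FIR estimate drifting toward the KF output, while staying independent of any (possibly wrong) initial-state guess.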

Using EKF to estimate the state of a quadcopter in SE(3)

Goodarzi, F.A. & Lee, T., Global Formulation of an Extended Kalman Filter on SE(3) for Geometric Control of a Quadrotor UAV, J Intell Robot Syst (2017) 88: 395, DOI: 10.1007/s10846-017-0525-6.

An extended Kalman filter (EKF) is developed on the special Euclidean group, SE(3), for geometric control of a quadrotor UAV. It is obtained by performing an intrinsic form of linearization on SE(3) to estimate the state of the quadrotor from noisy measurements. The proposed estimator considers all of the coupling effects between rotational and translational dynamics, and it is developed in a coordinate-free fashion. The desirable features of the proposed EKF are illustrated by numerical examples and experimental results for several scenarios. The proposed estimation scheme on SE(3) has been unprecedented and these results can be particularly useful for aggressive maneuvers in GPS denied environments or in situations where parts of onboard sensors fail.
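
To illustrate what "intrinsic" and "coordinate-free" mean here, a minimal Python sketch of one SE(3) prediction step: the mean stays a 4x4 homogeneous transform updated through the exponential map, so no global parametrization (e.g., Euler angles) with its singularities is ever needed. The dynamics input and error-state Jacobian below are placeholders, not the paper's quadrotor model.

```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """se(3) hat map: 6-vector (omega, v) -> 4x4 twist matrix."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def predict(T, P, xi, Phi, Q, dt):
    """One EKF prediction step with the state kept globally on SE(3):
    the mean T is a homogeneous transform propagated through the
    exponential map, while the 6x6 covariance P lives in the Lie
    algebra and is propagated linearly. Phi and Q are the error-state
    Jacobian and noise of whatever dynamics model is assumed."""
    T_next = T @ expm(hat(xi) * dt)     # intrinsic mean update
    P_next = Phi @ P @ Phi.T + Q * dt   # covariance in the Lie algebra
    return T_next, P_next
```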

Cognitive informatics: representing new information through relations between neuron-like objects and attributes

Shivhare, R., Cherukuri, A.K. & Li, J., Establishment of Cognitive Relations Based on Cognitive Informatics, Cogn Comput (2017) 9: 721, DOI: 10.1007/s12559-017-9498-9.

Cognitive informatics (CI) is an interdisciplinary study on modelling the brain in terms of knowledge and information processing. In CI, objects/attributes are considered as neurons connected to each other via synapses, and relations represent those synapses. To represent new information, the brain generates new synapses, i.e., new relations between existing neurons; the establishment of cognitive relations is therefore essential for representing new information. To this end, we propose an algorithm which creates cognitive relations between pairs of objects and attributes by using the relational attribute and object method. Further, the cognitive relations between pairs of objects or attributes within the context can be checked with newly defined conditions, i.e., necessary and sufficient conditions, which evaluate whether the relational object and attribute are adequate for a relation to hold between the pair of objects and attributes. New information is thus obtained without increasing the number of neurons in the brain; it is achieved by creating cognitive relations between pairs of objects and attributes. The obtained results are beneficial for simulating intelligent behaviours of the brain such as learning and memorizing. Integrating the idea of CI into cognitive relations is a promising and challenging research direction; in this paper, we discuss it from the aspects of cognitive mechanism, cognitive computing and cognitive process.
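
One very loose way to picture the object/attribute view is a binary incidence matrix in which a "cognitive relation" (a new synapse) appears when two objects share attributes. The Python sketch below is only our illustration of that picture, not the paper's algorithm or its necessary-and-sufficient conditions.

```python
import numpy as np

# Toy object-attribute context: rows are objects, columns are
# attributes; a 1 means "object has attribute".
context = np.array([[1, 1, 0],
                    [1, 0, 1],
                    [0, 1, 1]])

def related_objects(context, i, j, min_shared=1):
    """Relate two objects (create a 'synapse') when they share at
    least min_shared attributes -- a crude stand-in for the paper's
    relational attribute and object method."""
    shared = int(np.sum(context[i] & context[j]))
    return shared >= min_shared

# Objects 0 and 1 share attribute 0, so a relation is established
# without adding any new object ('neuron') to the context.
print(related_objects(context, 0, 1))  # True
```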

On the problem of the future limits of information storage

Cambria, E., Chattopadhyay, A., Linn, E., et al., Storages Are Not Forever, Cogn Comput (2017) 9: 646, DOI: 10.1007/s12559-017-9482-4.

Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. We chose to look closely into one concern in this paper, namely the limited amount of data storage. A simple extrapolative analysis shows that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. We also propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
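
The extrapolation itself is one line of arithmetic: with data volume growing geometrically against a fixed capacity ceiling, the exhaustion time is a ratio of logarithms. All the numbers in the Python sketch below are illustrative assumptions of ours, not the paper's figures.

```python
from math import log

def years_until_exhaustion(data_now_zb, capacity_zb, growth_rate):
    """Solve data_now * (1 + rate)^t = capacity for t:
    t = log(capacity / data_now) / log(1 + rate)."""
    return log(capacity_zb / data_now_zb) / log(1.0 + growth_rate)

# e.g. 50 ZB of data today, a hypothetical 10^6 ZB ceiling, 25%/yr growth:
print(round(years_until_exhaustion(50, 1e6, 0.25)))  # ~44 years
```

The point of the exercise is sensitivity: because growth is geometric, even a vastly larger capacity ceiling only buys a few more doubling periods.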

An open-source implementation of visual SLAM with a very nice related-work section

R. Mur-Artal and J. D. Tardós, ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras, IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017, DOI: 10.1109/TRO.2017.2705103.

We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments, from small hand-held indoor sequences to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.
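
For readers who want to poke at the feature front-end that gives the system its name, here is a minimal OpenCV sketch of ORB extraction and matching (the full system layers tracking, local mapping, loop closing, and bundle adjustment on top of correspondences like these). The image paths are placeholders.

```python
import cv2

# Load two consecutive frames in grayscale (example paths).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute their binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking
# keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences")
```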