Category Archives: Robotics

Integrating humans and robots in factories

Andrea Cherubini, Robin Passama, André Crosnier, Antoine Lasnier, Philippe Fraisse, Collaborative manufacturing with physical human–robot interaction, Robotics and Computer-Integrated Manufacturing, Volume 40, August 2016, Pages 1-13, ISSN 0736-5845, DOI: 10.1016/j.rcim.2015.12.007.

Although the concept of industrial cobots dates back to 1999, most present-day hybrid human–machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human–robot manufacturing cell for homokinetic joint assembly. The robot alternates between active and passive behaviours during assembly, to lighten the burden on the operator in the former case, and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (rather than torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Moreover, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified.

Very interesting survey on visual place recognition, including historical background, physio-psychological bases, and a definition of “place” in robotics

S. Lowry et al., Visual Place Recognition: A Survey, in IEEE Transactions on Robotics, vol. 32, no. 1, pp. 1-19, Feb. 2016. DOI: 10.1109/TRO.2015.2496823.

Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines (particularly recognition in computer vision and animal navigation in neuroscience) have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition: the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.

Incorporating spatial information into the symbolic (bag-of-words) representation used for loop closure detection

Nishant Kejriwal, Swagat Kumar, Tomohiro Shibata, High performance loop closure detection using bag of word pairs, Robotics and Autonomous Systems, Volume 77, March 2016, Pages 55-65, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.12.003.

In this paper, we look into the problem of loop closure detection in topological mapping. The bag of words (BoW) is a popular approach which is fast and easy to implement, but suffers from perceptual aliasing, primarily due to vector quantization. We propose to overcome this limitation by incorporating the spatial co-occurrence information directly into the dictionary itself. This is done by creating an additional dictionary comprising word pairs, which are formed by using a spatial neighborhood defined based on the scale size of each point feature. Since the word pairs are defined relative to the spatial location of each point feature, they exhibit a directional attribute, which is a new finding made in this paper. The proposed approach, called bag of word pairs (BoWP), uses the relative spatial co-occurrence of words to overcome the limitations of conventional BoW methods. Unlike previous methods that use spatial arrangement only as a verification step, the proposed method incorporates spatial information directly at the detection level and thus influences all stages of decision making. The proposed BoWP method is implemented in an on-line fashion by incorporating several popular concepts, such as a K-D tree for storing and searching features, a Bayesian probabilistic framework for making decisions on loop closures, incremental creation of the dictionary, and RANSAC for confirming loop closure for the top candidate. Unlike previous methods, an incremental K-D tree implementation is used, which prevents rebuilding the tree for every incoming image and thereby reduces the per-image computation time considerably. Through experiments on standard datasets it is shown that the proposed method provides better recall performance than most existing methods. This improvement is achieved without making use of any geometric information obtained from range sensors or robot odometry. The computational requirements of the algorithm are comparable to those of BoW methods and are shown to be lower than those of the latest state-of-the-art method in this category.
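As a rough sketch of the pairing idea (not the authors' implementation), word pairs can be formed by taking, for each feature, the words of neighbours falling within a radius proportional to that feature's scale; keeping the pairs ordered preserves the directional attribute the paper highlights. All names and the `scale_factor` value below are hypothetical.

```python
import numpy as np

def word_pairs(keypoints, words, scale_factor=2.0):
    """Form ordered word pairs from spatially neighbouring features.

    keypoints: (N, 3) array of (x, y, scale) per point feature.
    words: length-N array of quantized visual-word ids.
    A pair (w_i, w_j) is emitted when feature j lies within a
    neighbourhood whose radius is proportional to feature i's scale.
    """
    pairs = []
    xy = keypoints[:, :2]
    for i in range(len(keypoints)):
        radius = scale_factor * keypoints[i, 2]
        dists = np.linalg.norm(xy - xy[i], axis=1)
        for j in np.where((dists > 0) & (dists <= radius))[0]:
            pairs.append((int(words[i]), int(words[j])))  # ordered: i -> j
    return pairs
```

These pairs would then be quantized into an additional dictionary alongside the single-word one.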

Implementation of spatial relations in graph-SLAM through dual quaternions instead of homogeneous transformation matrices

Jiantong Cheng, Jonghyuk Kim, Zhenyu Jiang, Wanfang Che, Dual quaternion-based graphical SLAM, Robotics and Autonomous Systems, Volume 77, March 2016, Pages 15-24, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.12.001.

This paper presents a new parameterization approach for the graph-based SLAM problem and reveals the differences between two popular over-parameterized representations in the optimization procedure. In the SLAM problem, constraints or relative transformations between any two poses are generally separated into translations plus 3D rotations, which are then described in a homogeneous transformation matrix (HTM) to simplify computational operations. This, however, introduces added complexity in the frequent conversions between the HTM and state variables, due to their different representations. The new approach, based on the unit dual quaternion (UDQ), describes a spatial transformation as a screw with only 8 elements. We show that state variables can be directly represented by UDQs, and how their relative transformations can be written with the UDQ product, without the trivial computations of the HTM. Then, we explore the performance of the unit quaternion and the axis–angle representations in the graph-based SLAM problem, which have been successfully applied to over-parameterize perturbations under the assumption of small errors. Based on public synthetic and real-world datasets in 2D and 3D environments, experimental results show that the proposed approach greatly reduces the computational complexity while obtaining the same optimization accuracy as the HTM-based algorithm, and that the axis–angle representation is superior to the quaternion in the case of poor initial estimations.
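A minimal sketch of composing rigid transformations with unit dual quaternions, 8 numbers per pose and no 4×4 matrices, assuming the Hamilton quaternion convention; the helper names are mine, not the paper's.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def udq(q, t):
    """Unit dual quaternion (real, dual) for unit rotation q and translation t."""
    qt = np.array([0.0, *t])          # translation as a pure quaternion
    return q, 0.5 * qmul(qt, q)

def udq_mul(a, b):
    """Compose two rigid transforms directly on the 8 UDQ elements."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def udq_translation(d):
    """Recover the translation vector: 2 * dual * conj(real)."""
    qr, qd = d
    qr_conj = qr * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(qd, qr_conj)[1:]
```

Composing two translation-only poses this way yields the summed translation, with no HTM conversion in between.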

Interesting approach to PF-based localization and active localization when the map contains semantic information

Nikolay Atanasov, Menglong Zhu, Kostas Daniilidis, and George J. Pappas, Localization from semantic observations via the matrix permanent, The International Journal of Robotics Research January–March 2016 35: 73-99, first published on October 6, 2015, DOI: 10.1177/0278364915596589.

Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model, which encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization.
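Since the set-valued observation likelihood reduces to a matrix permanent, a small exact reference implementation via Ryser's inclusion-exclusion formula (O(2^n n), so tractable only for small n; the paper itself relies on a polynomial-time approximation) might look like:

```python
from itertools import combinations

def permanent(M):
    """Matrix permanent of a square matrix via Ryser's formula.

    perm(M) = sum over non-empty column subsets S of
              (-1)^(n-|S|) * prod_i (sum of row i over columns in S).
    Brute-force inclusion-exclusion: only usable for small n.
    """
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total
```

For a 2×2 matrix this reduces to ad + bc, i.e. the determinant with the minus sign flipped.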

Using the Bingham probability distribution, which is defined on the d-dimensional hypersphere and is antipodally symmetric, to address the problem of angle periodicity in [0, 2π) when estimating orientation in a recursive filter

Gilitschenski, I.; Kurz, G.; Julier, S.J.; Hanebeck, U.D., Unscented Orientation Estimation Based on the Bingham Distribution, in Automatic Control, IEEE Transactions on , vol.61, no.1, pp.172-177, Jan. 2016, DOI: 10.1109/TAC.2015.2423831.

In this work, we develop a recursive filter to estimate orientation in 3D, represented by quaternions, using directional distributions. Many closed-form orientation estimation algorithms are based on traditional nonlinear filtering techniques, such as the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). These approaches assume the uncertainties in the system state and measurements to be Gaussian-distributed. However, Gaussians cannot account for the periodic nature of the manifold of orientations and thus small angular errors have to be assumed and ad hoc fixes must be used. In this work, we develop computationally efficient recursive estimators that use the Bingham distribution. This distribution is defined on the hypersphere and is inherently more suitable for periodic problems. As a result, these algorithms are able to consistently estimate orientation even in the presence of large angular errors. Furthermore, handling of nontrivial system functions is performed using an entirely deterministic method which avoids any random sampling. A scheme reminiscent of the UKF is proposed for the nonlinear manifold of orientations. It is the first deterministic sampling scheme that truly reflects the nonlinear manifold of orientations.
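For intuition, the (unnormalized) Bingham density p(x) ∝ exp(xᵀ M Z Mᵀ x) is antipodally symmetric, p(x) = p(−x), which matches unit quaternions exactly since q and −q encode the same rotation. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def bingham_unnormalized(x, M, Z):
    """Unnormalized Bingham density on the unit hypersphere.

    M: orthogonal matrix of principal directions; Z: concentration
    values (conventionally non-positive, last entry zero). Because the
    exponent is quadratic in x, p(x) = p(-x) holds by construction.
    """
    x = np.asarray(x, dtype=float)
    return float(np.exp(x @ M @ np.diag(Z) @ M.T @ x))
```

This symmetry is what lets a Bingham filter treat q and −q as the same state, instead of the ad hoc sign fixes needed with a Gaussian.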

Robotic probabilistic SLAM in continuous time

Furgale P., Tong C.-H., Barfoot T.-D., Sibley G., Continuous-time batch trajectory estimation using temporal basis functions, The International Journal of Robotics Research, December 2015, vol. 34, no. 14, pp. 1688-1710, DOI: 10.1177/0278364915585860.

Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable; however, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units, rolling-shutter cameras, or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full Maximum-a-Posteriori estimation problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian and show how the coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the simultaneous localization and mapping problem as our motivation throughout the paper, although the approach is not specific to this application. Results from two experiments are provided to validate the approach: (i) self-calibration involving a camera and a high-rate inertial measurement unit, and (ii) perspective localization with a rolling-shutter camera.
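The core idea, that a continuous-time trajectory can be represented by a handful of basis-function coefficients rather than one pose variable per measurement time, can be sketched with a toy 1-D least-squares fit. The Gaussian bases and all parameter values below are my illustrative choices, not the paper's.

```python
import numpy as np

def basis(t, centers, width):
    """Evaluate Gaussian temporal basis functions at times t: (len(t), K)."""
    t = np.atleast_1d(t)
    return np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)

# 200 high-rate measurements of a 1-D "pose", but the estimated state
# is only the 8 basis coefficients.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
z = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(200)

centers = np.linspace(0.0, 1.0, 8)
Phi = basis(t, centers, width=0.15)              # (200, 8) design matrix
coeffs, *_ = np.linalg.lstsq(Phi, z, rcond=None)

# The continuous-time estimate can now be queried at any time:
x_hat = (basis(0.25, centers, 0.15) @ coeffs)[0]
```

The state size is fixed by the number of basis functions, not by the sensor rate, which is precisely what makes high-rate IMU or rolling-shutter data tractable.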

Model checking to verify the correct functioning, in the presence of sensor failures, of a network of behaviours included in a robotic architecture

Lisa Kiekbusch, Christopher Armbrust, Karsten Berns, Formal verification of behaviour networks including sensor failures, Robotics and Autonomous Systems, Volume 74, Part B, December 2015, Pages 331-339, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.08.002.

The paper deals with the problem of verifying behaviour-based control systems. Although failures in sensor hardware and software can have strong influences on the robot’s operation, they are often neglected in the verification process. Instead, perfect sensing is assumed. Therefore, this paper provides an approach for modelling the sensor chain in a formal way and connecting it to the formal model of the control system. The resulting model can be verified using model checking techniques, which is shown on the examples of the control systems of an autonomous indoor robot and an autonomous off-road robot.

Study of how a complex motion planning problem solved through RRT can benefit from parallelization

Brian W. Satzinger, Chelsea Lau, Marten Byl, Katie Byl, Tractable locomotion planning for RoboSimian, The International Journal of Robotics Research November 2015 vol. 34 no. 13 1541-1558, DOI: 10.1177/0278364915584947.

This paper investigates practical solutions for low-bandwidth, teleoperated mobility for RoboSimian in complex environments. Locomotion planning for this robot is challenging due to kinematic redundancy. We present an end-to-end planning method that exploits a reduced-dimension rapidly-exploring random tree search, constraining a subset of limbs to an inverse kinematics table. Then, we evaluate the performance of this approach through simulations in randomized environments and on Defense Advanced Research Projects Agency (DARPA) Robotics Challenge style terrain, both in simulation and with hardware.
We also illustrate the importance of allowing for significant body motion during swing leg motions on extreme terrain and quantify the trade-offs between computation time and execution time, subject to velocity and acceleration limits of the joints. These results lead us to hypothesize that appropriate statistical “investment” of parallel computing resources between competing formulations or flavors of random planning algorithms can improve motion planning performance significantly. Motivated by the need to improve the speed of limbed mobility for the Defense Advanced Research Projects Agency Robotics Challenge, we introduce one formulation of this resource allocation problem as a toy example and discuss advantages and implications of such trajectory planning for tractable locomotion on complex terrain.
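For readers unfamiliar with the underlying planner, a minimal 2-D RRT sketch (deliberately not the paper's reduced-dimension, IK-table-constrained variant) is:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, seed=1):
    """Minimal 2-D rapidly-exploring random tree.

    Grows a tree from `start` by steering toward random samples in a
    [0, 10]^2 workspace; `is_free(p)` is a user-supplied collision check.
    Returns a start-to-goal path as a list of points, or None.
    """
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Goal-biased sampling: 10% of the time, aim straight at the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:       # walk back to recover the path
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

The "investment" question the authors raise is essentially how to split a fixed compute budget across many such randomized runs with different parameters.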

Comparison of EKF and UKF for robot localization, and a method for selecting a subset of the available sonar sensors

Luigi D’Alfonso, Walter Lucia, Pietro Muraca, Paolo Pugliese, Mobile robot localization via EKF and UKF: A comparison based on real data, Robotics and Autonomous Systems, Volume 74, Part A, December 2015, Pages 122-127, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.07.007.

In this work we compare the performance of two well-known filters for nonlinear models, the Extended Kalman Filter and the Unscented Kalman Filter, in estimating the position and orientation of a mobile robot. The two filters fuse the measurements taken by ultrasonic sensors located onboard the robot. The experimental results on real data show a substantial equivalence of the two filters, although in principle the approximating properties of the UKF are much better. A switching sensor activation policy is also devised, which makes it possible to obtain an accurate estimate of the robot state using only a fraction of the available sensors, with a significant saving of battery power.
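The key difference from the EKF is that the UKF propagates a small deterministic set of sigma points through the full nonlinear model instead of linearizing it. A minimal sketch of sigma-point generation with standard unscented-transform weights (for a mobile robot, `mean` would be (x, y, θ); the function name is mine):

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate the 2n+1 sigma points and weights of the unscented transform.

    Points are placed at the mean and at +/- the scaled Cholesky columns
    of the covariance; the weighted point set reproduces mean and cov
    exactly, so propagating the points through a nonlinearity captures
    its effect without Jacobians.
    """
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    w = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    return np.array(pts), np.array(w)
```

Each point would then be passed through the robot's motion or sonar measurement model, and the weighted statistics of the results used in the filter update.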