Category Archives: Computer Vision

Deep reinforcement learning applied to learning both attention and classification in a vehicle classification task

D. Zhao, Y. Chen and L. Lv, Deep Reinforcement Learning With Visual Attention for Vehicle Classification, IEEE Transactions on Cognitive and Developmental Systems, vol. 9, no. 4, pp. 356-367, DOI: 10.1109/TCDS.2016.2614675.

Automatic vehicle classification is crucial to intelligent transportation systems, especially for vehicle tracking by police. Due to the complex lighting and image capture conditions, image-based vehicle classification in real-world environments is still a challenging task and the performance is far from being satisfactory. However, owing to the mechanism of visual attention, the human vision system shows remarkable capability compared with the computer vision system, especially in distinguishing nuances. Inspired by this mechanism, we propose a convolutional neural network (CNN) model of visual attention for image classification. A visual attention-based image processing module is used to highlight one part of an image and weaken the others, generating a focused image. Then the focused image is input into the CNN to be classified. According to the classification probability distribution, we compute the information entropy to guide a reinforcement learning agent to achieve a better policy for image classification by selecting the key parts of an image. Systematic experiments on a surveillance-nature dataset, which contains front-view images captured by surveillance cameras, demonstrate that the proposed model is more competitive than the large-scale CNN in vehicle classification tasks.
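The entropy-based reward is the crux of the training loop: the agent is rewarded when its chosen focus region makes the CNN confident. A minimal sketch of that computation (my illustration, not the authors' code):

```python
import numpy as np

def entropy_reward(class_probs):
    """Shannon entropy of the classifier's output distribution.

    Low entropy (a confident prediction) suggests the attention
    module picked an informative region; high entropy suggests it
    did not. The RL agent can be rewarded for reducing it.
    """
    p = np.clip(np.asarray(class_probs, dtype=float), 1e-12, 1.0)
    return -np.sum(p * np.log(p))

print(entropy_reward([0.95, 0.03, 0.02]))  # ~0.23 nats: confident, good focus
print(entropy_reward([0.34, 0.33, 0.33]))  # ~1.10 nats: uninformative focus
```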

Improving the search for matching image features using the coherence usually present in true matches

W. Y. Lin et al., CODE: Coherence Based Decision Boundaries for Feature Correspondence, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 1, pp. 34-47, DOI: 10.1109/TPAMI.2017.2652468.

A key challenge in feature correspondence is the difficulty in differentiating true and false matches at a local descriptor level. This forces adoption of strict similarity thresholds that discard many true matches. However, if analyzed at a global level, false matches are usually randomly scattered while true matches tend to be coherent (clustered around a few dominant motions), thus creating a coherence based separability constraint. This paper proposes a non-linear regression technique that can discover such a coherence based separability constraint from highly noisy matches and embed it into a correspondence likelihood model. Once computed, the model can filter the entire set of nearest neighbor matches (which typically contains over 90 percent false matches) for true matches. We integrate our technique into a full feature correspondence system which reliably generates large numbers of good quality correspondences over wide baselines where previous techniques provide few or no matches.
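To see the separability idea in miniature, the hedged sketch below filters putative matches by agreement with the dominant (median) motion. This is a crude stand-in for the paper's learned non-linear decision boundary, which copes with far higher outlier rates than a simple median can:

```python
import numpy as np

def coherence_filter(pts_a, pts_b, tol=15.0):
    """Keep putative matches whose motion vector lies close to the
    dominant (median) motion. True matches cluster around dominant
    motions; false matches scatter randomly."""
    motion = pts_b - pts_a                    # per-match displacement
    dominant = np.median(motion, axis=0)      # robust dominant motion
    return np.linalg.norm(motion - dominant, axis=1) < tol

rng = np.random.default_rng(0)
a = rng.uniform(0, 500, (100, 2))
b = np.empty_like(a)
b[:60] = a[:60] + [30.0, 5.0] + rng.normal(0, 1.0, (60, 2))  # coherent true matches
b[60:] = rng.uniform(0, 500, (40, 2))                        # random false matches
keep = coherence_filter(a, b)
print(keep[:60].mean(), keep[60:].mean())  # ~1.0 vs ~0.0
```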

Interesting survey of keyframe-based (non-filter) visual SLAM and its future lines of research

Georges Younes, Daniel Asmar, Elie Shammas, John Zelek, Keyframe-based monocular SLAM: design, survey, and future directions, Robotics and Autonomous Systems, Volume 98, 2017, Pages 67-88, DOI: 10.1016/j.robot.2017.09.010.

Extensive research in the field of monocular SLAM for the past fifteen years has yielded workable systems that found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common at some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific design strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM; namely, the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.

First end-to-end implementation of (monocular) visual odometry with deep neural networks, including uncertainty estimates in its output

Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni, End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks, The International Journal of Robotics Research, vol. 37, no. 4-5, pp. 513-542, DOI: 10.1177/0278364917734298.

This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. An end-to-end, sequence-to-sequence probabilistic visual odometry (ESP-VO) framework is proposed for the monocular VO based on deep recurrent convolutional neural networks. It is trained and deployed in an end-to-end manner, that is, directly inferring poses and uncertainties from a sequence of raw images (video) without adopting any modules from the conventional VO pipeline. It can not only automatically learn effective feature representation encapsulating geometric information through convolutional neural networks, but also implicitly model sequential dynamics and relation for VO using deep recurrent neural networks. Uncertainty is also derived along with the VO estimation without introducing much extra computation. Extensive experiments on several datasets representing driving, flying and walking scenarios show competitive performance of the proposed ESP-VO to the state-of-the-art methods, demonstrating a promising potential of the deep learning technique for VO and verifying that it can be a viable complement to current VO systems.
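For a rough picture of what such a network can look like, here is a small PyTorch sketch (an assumed architecture in the spirit of ESP-VO, not the authors' model): a CNN encodes each stacked image pair, an LSTM captures sequential dynamics, and a second linear head emits a log-variance so uncertainty comes almost for free:

```python
import torch
import torch.nn as nn

class VOSketch(nn.Module):
    """Toy recurrent-convolutional VO network: CNN per frame pair,
    LSTM over the sequence, heads for 6-DoF pose and log-variance.
    The two heads would be trained jointly with a Gaussian negative
    log-likelihood so the predicted variance stays calibrated."""
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(           # toy encoder; real models are deeper
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.pose = nn.Linear(hidden, 6)    # translation + rotation
        self.logvar = nn.Linear(hidden, 6)  # per-pose uncertainty

    def forward(self, frames):              # frames: (B, T, 6, H, W) stacked pairs
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(feats)
        return self.pose(h), self.logvar(h)

poses, logvars = VOSketch()(torch.randn(2, 5, 6, 64, 64))
print(poses.shape, logvars.shape)           # torch.Size([2, 5, 6]) for both
```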

Automatic hierarchization of spatial scales for the recognition of places in images

Chen Fan, Zetao Chen, Adam Jacobson, Xiaoping Hu, Michael Milford, Biologically-inspired visual place recognition with adaptive multiple scales, Robotics and Autonomous Systems, Volume 96, 2017, Pages 224-237, DOI: 10.1016/j.robot.2017.07.015.

In this paper we present a novel adaptive multi-scale system for performing visual place recognition. Unlike recent multi-scale place recognition systems that use manually pre-fixed scales, we present a system that adaptively selects the spatial scales. This approach differs from previous multi-scale methods, where place recognition is performed through a non-optimized distance metric in a fixed and pre-determined scale space. Instead, we learn an optimized distance metric which creates a new recognition space for clustering images with similar features while separating those with different features. Consequently, the method exploits the natural spatial scales present in the operating environment. With these adaptive scales, a hierarchical recognition mechanism with multiple parallel channels is then proposed. Each channel performs place recognition from a coarse match to a fine match. We present specific techniques for training each channel to recognize places at varying spatial scales and for combining the place recognition hypotheses from these parallel channels. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of using different numbers of combined recognition channels. The results demonstrate that the adaptive multi-scale approach outperforms the previous fixed multi-scale approach and is capable of producing better-than-state-of-the-art performance compared to existing robotic navigation algorithms. The system complexity is linear in the number of places in the static reference map, and the system can perform online place recognition in mobile robotics on typical dataset sizes. We analyze the results and provide a theoretical analysis of the performance improvements. Finally, we discuss interesting insights gained with respect to future work in robotics and neuroscience in this area.
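One recognition channel's coarse-to-fine pass can be sketched as follows (a hypothetical simplification: the paper additionally learns the distance metric and the scales themselves, and fuses hypotheses from several parallel channels):

```python
import numpy as np

def coarse_to_fine_match(query, database, shortlist=5):
    """One channel of coarse-to-fine place recognition: a pooled
    (coarse) descriptor cheaply shortlists candidate places, then the
    full-resolution descriptor picks the best match among them."""
    coarse_db = database.reshape(len(database), -1, 4).mean(axis=2)
    coarse_q = query.reshape(-1, 4).mean(axis=1)
    cand = np.argsort(np.linalg.norm(coarse_db - coarse_q, axis=1))[:shortlist]
    fine = np.linalg.norm(database[cand] - query, axis=1)
    return cand[np.argmin(fine)]

rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 128))        # 1000 places, 128-D descriptors
q = db[42] + rng.normal(0, 0.05, 128)    # noisy revisit of place 42
print(coarse_to_fine_match(q, db))       # -> 42
```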

An open-source implementation of visual SLAM with a very nice related-work section

R. Mur-Artal and J. D. Tardós, ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras, IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, DOI: 10.1109/TRO.2017.2705103.

We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments, to cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.
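The stereo observations that pin down metric scale enter the bundle adjustment as (u_left, v, u_right) residuals. A sketch of that standard residual (my illustration of the textbook formulation, not code from the paper):

```python
import numpy as np

def stereo_reprojection_error(point_w, pose_cw, obs, fx, fy, cx, cy, baseline):
    """Residual for one stereo observation in a bundle-adjustment
    back-end. obs = (u_left, v, u_right); the stereo term
    u_right = u_left - fx * baseline / depth is what provides
    metric scale."""
    R, t = pose_cw                 # world-to-camera rotation and translation
    x, y, z = R @ point_w + t      # point in the left camera frame
    u = fx * x / z + cx
    v = fy * y / z + cy
    u_r = u - fx * baseline / z    # projection into the right camera
    return np.array([u, v, u_r]) - obs

# Toy check: a point 2 m straight ahead projects to the principal point,
# with a 40-pixel disparity for fx=400 and a 0.2 m baseline.
err = stereo_reprojection_error(
    np.array([0.0, 0.0, 2.0]), (np.eye(3), np.zeros(3)),
    np.array([320.0, 240.0, 280.0]), 400, 400, 320, 240, 0.2)
print(err)                         # ~[0, 0, 0]
```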

Survey on visual attention in 3D for robotics

Ekaterina Potapova, Michael Zillich, and Markus Vincze, Survey of recent advances in 3D visual attention for robotics, The International Journal of Robotics Research, Vol 36, Issue 11, pp. 1159 – 1176, DOI: 10.1177/0278364917726587.

3D visual attention plays an important role in both human and robot perception but has yet to be explored in full detail. The majority of computer vision and robotics methods are concerned only with 2D visual attention. This survey presents findings and approaches that cover 3D visual attention in both human and robot vision, summarizing the last 30 years of research and also looking beyond computational methods. First, we present work in such fields as biological vision and neurophysiology, studying 3D attention in human observers. This provides a view of the role attention plays at the system level for biological vision. Then, we cover computer and robot vision approaches that take 3D visual attention into account. We compare approaches with respect to different categories, such as feature-based, data-based, or depth-based visual attention, and draw conclusions on what advances will help robotics to cope better with complex real-world settings and tasks.

Interesting implementation of visual graph SLAM in C++ for educational purposes

Dominik Schlegel, Mirco Colosi, Giorgio Grisetti, ProSLAM: Graph SLAM from a Programmer’s Perspective, arXiv:1709.04377.

In this paper we present ProSLAM, a lightweight stereo visual SLAM system designed with simplicity in mind. Our work stems from the experience gathered by the authors while teaching SLAM to students and aims at providing a highly modular system that can be easily implemented and understood. Rather than focusing on the well-known mathematical aspects of stereo visual SLAM, in this work we highlight the data structures and the algorithmic aspects that one needs to tackle during the design of such a system. We implemented ProSLAM using the C++ programming language in combination with a minimal set of well-known external libraries. In addition to an open-source implementation, we provide several code snippets that address the core aspects of our approach directly in this paper. The results of a thorough validation performed on standard benchmark datasets show that our approach achieves accuracy comparable to state-of-the-art methods, while requiring substantially less computational resources.
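In the same expository spirit, a minimal sketch of the core pose-graph data structures such a system maintains (hypothetical Python with made-up names, rather than the paper's C++):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PoseNode:
    """A keyframe pose in the graph (4x4 homogeneous transform)."""
    pose: np.ndarray

@dataclass
class PoseEdge:
    """A relative-motion constraint between keyframes i and j,
    obtained from tracking or from a loop closure."""
    i: int
    j: int
    measurement: np.ndarray        # measured T_i^-1 @ T_j

@dataclass
class PoseGraph:
    """Skeleton of the structures an optimizer adjusts to minimize
    the residuals below."""
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_keyframe(self, pose):
        self.nodes.append(PoseNode(pose))
        return len(self.nodes) - 1

    def residual(self, e):
        """Mismatch between measured and current relative transform."""
        predicted = np.linalg.inv(self.nodes[e.i].pose) @ self.nodes[e.j].pose
        return np.linalg.inv(e.measurement) @ predicted  # identity if consistent

g = PoseGraph()
i = g.add_keyframe(np.eye(4))
T = np.eye(4); T[0, 3] = 1.0                     # one metre forward
j = g.add_keyframe(T)
g.edges.append(PoseEdge(i, j, T))
print(np.allclose(g.residual(g.edges[0]), np.eye(4)))  # True
```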

Interleaving segmentation (semantics) and dense 3D reconstruction (metrics)

C. Häne, C. Zach, A. Cohen and M. Pollefeys, Dense Semantic 3D Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1730-1743, DOI: 10.1109/TPAMI.2016.2613051.

Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. These priors generally yield overly smooth reconstructions and/or segmentations in certain regions while they fail to constrain the solution sufficiently in other areas. In this paper, we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other’s task. As a consequence, we propose a mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. On the one hand, knowing about the semantic class of the geometry provides information about the likelihood of the surface direction. On the other hand, the surface direction provides information about the likelihood of the semantic class. Experimental results on several data sets highlight the advantages of our joint formulation. We show how weakly observed surfaces are reconstructed more faithfully compared to a geometry-only reconstruction. Thanks to the volumetric nature of our formulation, we also infer surfaces which cannot be directly observed, for example the surface between the ground and a building. Finally, our method returns a semantic segmentation which is consistent across the whole dataset.
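The two-way coupling can be made concrete with a toy class-dependent orientation prior (class names and cost values made up for illustration; the paper embeds such priors in a volumetric convex optimization):

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])

def orientation_cost(label, normal):
    """Cost of pairing a semantic label with a surface normal
    (lower = more compatible). Hypothetical prior: ground should
    face up, building facades should be vertical surfaces."""
    n = normal / np.linalg.norm(normal)
    up = abs(float(n @ UP))        # 1.0 when the surface faces straight up
    if label == "ground":
        return 1.0 - up
    if label == "building":
        return up
    return 0.5                     # uninformative prior for other classes

print(orientation_cost("ground", np.array([0.0, 0.0, 1.0])))    # 0.0: compatible
print(orientation_cost("building", np.array([0.0, 0.0, 1.0])))  # 1.0: incompatible
print(orientation_cost("building", np.array([1.0, 0.0, 0.0])))  # 0.0: compatible
```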

A new feature descriptor for 3D point clouds that is more efficient than the state-of-the-art SHOT

Sai Manoj Prakhya, Bingbing Liu, Weisi Lin, Vinit Jakhetiya and Sharath Chandra Guntuku, B-SHOT: a binary 3D feature descriptor for fast keypoint matching on 3D point clouds, Autonomous Robots, vol. 41, pp. 1501-1520, 2017, DOI: 10.1007/s10514-016-9612-y.

We present the first attempt at creating a binary 3D feature descriptor for fast and efficient keypoint matching on 3D point clouds. Specifically, we propose a binarization technique and apply it to the state-of-the-art 3D feature descriptor, SHOT, to create the first binary 3D feature descriptor, which we call B-SHOT. B-SHOT requires 32 times less memory for its representation while being six times faster in feature descriptor matching when compared to the SHOT feature descriptor. Next, we propose a robust evaluation metric, specifically for 3D feature descriptors. A comprehensive evaluation on standard benchmarks reveals that B-SHOT offers comparable keypoint matching performance to that of the state-of-the-art real-valued 3D feature descriptors, albeit at dramatically lower computational and memory costs.
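The memory and matching trade-off is easy to verify. The sketch below binarizes with a simple mean threshold, which is not B-SHOT's actual case-based encoding of groups of four values, but it shows the 32x memory saving and the cheap Hamming matching:

```python
import numpy as np

def binarize(desc):
    """Threshold each value against the descriptor mean and pack the
    resulting booleans into bits (one bit per dimension)."""
    return np.packbits(desc > desc.mean())

def hamming(a, b):
    """Hamming distance between two packed binary descriptors."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

rng = np.random.default_rng(2)
shot = rng.random(352).astype(np.float32)   # SHOT is a 352-D real descriptor
bits = binarize(shot)
print(shot.nbytes, bits.nbytes)             # 1408 vs 44 bytes: the 32x saving
noisy = shot + rng.normal(0.0, 0.02, 352)   # a slightly perturbed observation
print(hamming(bits, binarize(noisy)))       # small distance for a similar descriptor
```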