Category Archives: Robot Sensors

Adapting the resolution of depth sensors and the location of the high-resolution area (fovea) as a possible attention mechanism in robots

Tasneem Z, Adhivarahan C, Wang D, Xie H, Dantu K, Koppal SJ, Adaptive fovea for scanning depth sensors, The International Journal of Robotics Research, 2020;39(7):837-855, DOI: 10.1177/0278364920920931.

Depth sensors have been used extensively for perception in robotics. Typically these sensors have a fixed angular resolution and field of view (FOV). This is in contrast to human perception, which involves foveating: scanning with the eyes’ highest angular resolution over regions of interest (ROIs). We build a scanning depth sensor that can control its angular resolution over the FOV. This opens up new directions for robotics research, because many algorithms in localization, mapping, exploration, and manipulation make implicit assumptions about the fixed resolution of a depth sensor, impacting latency, energy efficiency, and accuracy. Our algorithms increase resolution in ROIs either through deconvolutions or through intelligent sample distribution across the FOV. The areas of high resolution in the sensor FOV act as artificial foveae, and we adaptively vary the fovea locations to maximize a well-known information-theoretic measure. We demonstrate novel applications such as adaptive time-of-flight (TOF) sensing, LiDAR zoom, gradient-based LiDAR sensing, and energy-efficient LiDAR scanning. As a proof of concept, we mount the sensor on a ground robot platform, showing how to reduce robot motion to obtain a desired scanning resolution. We also present a ROS wrapper for active simulation of our novel sensor in Gazebo. Finally, we provide extensive empirical analysis of all our algorithms, demonstrating trade-offs between time, resolution, and stand-off distance.
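As a rough illustration of the fovea-placement idea (a sketch, not the authors' implementation), one could score candidate ROIs by the Shannon entropy of their depth histograms and steer the high-resolution region toward the most uncertain area. The ROI format, the histogram binning, and the choice of entropy as the information-theoretic measure are all assumptions here:

```python
import numpy as np

def roi_entropy(depth, roi, bins=32):
    """Shannon entropy of the depth histogram inside a candidate ROI."""
    x, y, w, h = roi
    patch = depth[y:y + h, x:x + w].ravel()
    patch = patch[np.isfinite(patch)]          # drop invalid returns
    if patch.size == 0:
        return 0.0
    hist, _ = np.histogram(patch, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def place_fovea(depth, candidate_rois):
    """Move the fovea to the ROI whose depth distribution is most uncertain,
    i.e., where extra angular resolution buys the most information."""
    scores = [roi_entropy(depth, roi) for roi in candidate_rois]
    return candidate_rois[int(np.argmax(scores))]
```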

Adapting perception to environmental changes explicitly

Sriram Siva, Hao Zhang, Robot perceptual adaptation to environment changes for long-term human teammate following, The International Journal of Robotics Research, January 2020, DOI: 10.1177/0278364919896625.

Perception is one of several fundamental abilities required by robots, and it also poses significant challenges, especially in real-world field applications. Long-term autonomy introduces additional difficulties for robot perception, including short- and long-term changes of the robot’s operating environment (e.g., lighting changes). In this article, we propose an innovative human-inspired approach named robot perceptual adaptation (ROPA) that calibrates perception according to the environment context, enabling perceptual adaptation in response to environmental variations. ROPA jointly performs feature learning, sensor fusion, and perception calibration under a unified regularized optimization framework. We also implement a new algorithm to solve the formulated optimization problem, with a theoretical guarantee of convergence to the optimal solution. In addition, we collect a large-scale dataset from physical robots in the field, called perceptual adaptation to environment changes (PEAC), with the aim of benchmarking methods for robot adaptation to short-term and long-term, fast and gradual lighting changes in human detection, based upon different feature modalities extracted from color and depth sensors. Using the PEAC dataset, we conduct extensive experiments on human recognition and following in various scenarios to evaluate ROPA. Experimental results validate that ROPA obtains promising performance in terms of accuracy and efficiency, and effectively adapts robot perception to short-term and long-term lighting changes in human detection and following applications.
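The abstract does not spell out the ROPA optimization, so the sketch below is only a loose stand-in: it re-weights color and depth detector scores online with a multiplicative-weights heuristic, so the fused detector leans on whichever modality has recently been reliable under the current lighting. The class, the learning rate, and the assumption that scores are confidences in [0, 1] are all illustrative:

```python
import numpy as np

class AdaptiveFusion:
    """Toy stand-in for perceptual adaptation: re-weight per-modality
    detector scores (e.g., color and depth) as lighting changes degrade
    one of them. This heuristic is NOT the ROPA regularized optimization."""

    def __init__(self, n_modalities, lr=0.1):
        self.w = np.ones(n_modalities) / n_modalities
        self.lr = lr

    def fuse(self, scores):
        # scores: np.ndarray of per-modality confidences in [0, 1]
        return float(self.w @ scores)

    def update(self, scores, hit):
        # Reward modalities that agreed with the verified outcome (hit=True
        # when the detection was later confirmed, e.g., by tracking).
        agreement = scores if hit else 1.0 - scores
        self.w *= np.exp(self.lr * agreement)
        self.w /= self.w.sum()
```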

Reinforcement learning for improving autonomy of mobile robots in calibrating visual sensors

Fernando Nobre, Christoffer Heckman, Learning to calibrate: Reinforcement learning for guided calibration of visual–inertial rigs, The International Journal of Robotics Research, 38(12–13), 1352–1374, DOI: 10.1177/0278364919844824.

We present a new approach to assisted intrinsic and extrinsic calibration: an observability-aware visual–inertial calibration system that guides the user through the calibration procedure by suggesting easy-to-perform motions that render the calibration parameters observable. This is done by identifying which subset of the parameter space is rendered observable through a rank-revealing decomposition of the Fisher information matrix, modeling calibration as a Markov decision process, and using reinforcement learning to establish which discrete sequence of motions optimizes the regression of the desired parameters. The goal is to address an assumption common to most calibration solutions: that sufficiently informative motions are provided by the operator. We do not make use of a process model, and instead leverage an experience-based approach that is broadly applicable to any platform in the context of simultaneous localization and mapping. This is a step toward long-term autonomy and “power-on-and-go” robotic systems, making repeatable and reliable calibration accessible to the non-expert operator.
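The observability analysis lends itself to a short sketch: stack the measurement Jacobian for a candidate motion, form the Fisher information matrix, and use a rank-revealing decomposition (SVD here) to separate observable parameter combinations from unobservable ones. The shapes, the Gaussian-noise model, and the tolerance are placeholders:

```python
import numpy as np

def observable_subspace(J, sigma=1.0, tol=1e-6):
    """Given a stacked measurement Jacobian J (rows: residuals, columns:
    calibration parameters), return the numerical rank of the Fisher
    information matrix and a basis for its unobservable directions.
    For Gaussian noise the information matrix is J^T J / sigma^2."""
    fim = J.T @ J / sigma**2
    U, s, _ = np.linalg.svd(fim)
    rank = int(np.sum(s > tol * s[0]))
    return rank, U[:, rank:]   # columns span unobservable combinations
```

A candidate motion sequence could then be scored by how much it raises this rank (or shrinks the near-null space), which is roughly the kind of reward signal a reinforcement learner over motions would need.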

Robots with extended sensorization of their physical building materials

Dana Hughes, Christoffer Heckman, Nikolaus Correll, Materials that make robots smart, The International Journal of Robotics Research, 38(12–13), 1338–1351, DOI: 10.1177/0278364919856099.

We posit that embodied artificial intelligence is not only a computational but also a materials problem. While the importance of material and structural properties in the control loop is well understood, materials can take an active role during control through tight integration of sensors, actuators, computation, and communication. We envision such materials abstracting functionality, thereby making the construction of intelligent robots more straightforward and robust. For example, robots could be made of bones that measure load, muscles that move, skin that provides the robot with information about the kind and location of tactile sensations ranging from pressure to texture and damage, eyes that extract high-level information, and brain material that provides computation in a scalable manner. Such materials will not resemble any existing engineered materials, but rather the heterogeneous components out of which their natural counterparts are made. We describe the state of the art in so-called “robotic materials” and their opportunities for revolutionizing applications ranging from manipulation to autonomous driving, describe two recent robotic materials, a smart skin and a smart tire, in more depth, and conclude with open challenges that the robotics community needs to address in collaboration with allies such as wireless sensor network researchers and polymer scientists.

Survey on visual attention in 3D for robotics

Ekaterina Potapova, Michael Zillich, and Markus Vincze, Survey of recent advances in 3D visual attention for robotics, The International Journal of Robotics Research, Vol. 36, Issue 11, pp. 1159–1176, DOI: 10.1177/0278364917726587.

3D visual attention plays an important role in both human and robot perception, yet it has not been explored in full detail: the majority of computer vision and robotics methods are concerned only with 2D visual attention. This survey presents findings and approaches that cover 3D visual attention in both human and robot vision, summarizing the last 30 years of research and also looking beyond computational methods. First, we present work in fields such as biological vision and neurophysiology, studying 3D attention in human observers. This provides a view of the role attention plays at the system level for biological vision. Then, we cover computer and robot vision approaches that take 3D visual attention into account. We compare approaches with respect to different categories, such as feature-based, data-based, or depth-based visual attention, and draw conclusions on what advances will help robotics cope better with complex real-world settings and tasks.

Improving orientation estimation in a mobile robot for better odometry

M.T. Sabet, H.R. Mohammadi Daniali, A.R. Fathi, E. Alizadeh, Experimental analysis of a low-cost dead reckoning navigation system for a land vehicle using a robust AHRS, Robotics and Autonomous Systems, Volume 95, 2017, Pages 37-51, DOI: 10.1016/j.robot.2017.05.010.

In the navigation and motion control of an autonomous vehicle, estimation of attitude and heading is an important issue, especially when localization sensors such as GPS are not available and the vehicle is navigated by dead reckoning (DR) strategies. In this paper, based on a new modeling framework, an extended Kalman filter (EKF) is used to estimate attitude, heading, and gyroscope sensor bias using a low-cost MEMS inertial sensor. The algorithm is developed for accurate estimation of attitude and heading in the presence of external disturbances, including external body accelerations and magnetic disturbances. Using the proposed attitude and heading reference system (AHRS) and an odometer sensor, a low-cost aided DR navigation system has been designed. The proposed algorithm is evaluated by experimental tests on a land vehicle under different acceleration bounds and in the presence of external magnetic disturbances. The results indicate that roll, pitch, and heading are estimated with mean errors of about 0.83%, 0.68%, and 1.13%, respectively. Moreover, they indicate that a relative navigation error of about 3% of the traveled distance can be achieved with the developed approach during GPS outages.
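A minimal sketch of the filter structure the abstract describes, assuming a small-angle Euler-angle state augmented with gyro biases: gyros drive the prediction, and accelerometer-derived roll/pitch plus magnetometer-derived heading correct it. The paper's filter uses the full attitude kinematics and explicit disturbance handling; all noise values below are placeholders:

```python
import numpy as np

class AhrsEkf:
    """Simplified AHRS EKF: state x = [roll, pitch, yaw, bx, by, bz]."""

    def __init__(self):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 0.1
        self.Q = np.diag([1e-4] * 3 + [1e-8] * 3)   # assumed process noise
        self.R = np.eye(3) * 1e-2                    # assumed meas. noise

    def predict(self, gyro, dt):
        # Bias-corrected gyro integration (small-angle approximation).
        self.x[:3] += (gyro - self.x[3:]) * dt
        F = np.eye(6)
        F[:3, 3:] = -np.eye(3) * dt                 # d(angles)/d(bias)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, rpy_meas):
        # rpy_meas: roll/pitch from accelerometer, yaw from magnetometer.
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        y = rpy_meas - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```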

Testbed for comparisons of different UWB sensors applied to localization

A. R. Jiménez Ruiz and F. Seco Granja, “Comparing Ubisense, BeSpoon, and DecaWave UWB Location Systems: Indoor Performance Analysis,” in IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 8, pp. 2106-2117, Aug. 2017, DOI: 10.1109/TIM.2017.2681398.

Most ultrawideband (UWB) location systems proposed for position estimation have only been evaluated individually in particular scenarios. For a fair performance comparison among different solutions, a common evaluation scenario is desirable. In this paper, we compare three commercially available UWB systems (Ubisense, BeSpoon, and DecaWave) under the same experimental conditions in order to perform a critical performance analysis. We include a characterization of the quality of the estimated tag-to-sensor distances in an indoor industrial environment. This testing space includes areas under line-of-sight (LOS) and diverse non-LOS conditions caused by reflection, propagation, and diffraction of the UWB radio signals across different obstacles. The study also includes an analysis of the estimated azimuth and elevation angles for the Ubisense system, the only one that incorporates this feature using an array antenna at each sensor. Finally, we analyze the 3-D positioning estimation performance of the three UWB systems using a Bayesian filter, implemented as a particle filter with a measurement model that takes bad range measurements and outliers into account. A final conclusion is drawn about which system performs better under these industrial conditions.
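The outlier-aware measurement model can be sketched as a particle-filter weight update whose range likelihood mixes a Gaussian around the predicted tag-to-anchor distance with a uniform floor, so a single bad (e.g., non-LOS) range cannot zero out the weights. This is a generic robust model, not necessarily the paper's exact one; sigma and the mixture weight are assumptions:

```python
import numpy as np

def pf_range_update(particles, weights, anchors, ranges,
                    sigma=0.3, outlier_p=0.05):
    """particles: (N, 3) candidate tag positions; anchors: list of 3-vectors;
    ranges: one measured distance per anchor. Returns updated weights."""
    for anchor, r in zip(anchors, ranges):
        pred = np.linalg.norm(particles - anchor, axis=1)
        gauss = np.exp(-0.5 * ((r - pred) / sigma) ** 2) \
                / (sigma * np.sqrt(2.0 * np.pi))
        # Mixing in a uniform floor keeps weights alive under outliers.
        weights = weights * ((1.0 - outlier_p) * gauss + outlier_p * 0.1)
    return weights / weights.sum()
```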

Efficient detection of glass obstacles when using a laser rangefinder

Xun Wang, JianGuo Wang, Detecting glass in Simultaneous Localisation and Mapping, Robotics and Autonomous Systems, Volume 88, February 2017, Pages 97-103, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.11.003.

Simultaneous Localisation and Mapping (SLAM) has become one of the key technologies used in advanced robot platforms. Current state-of-the-art indoor SLAM with laser scanning rangefinders can provide accurate real-time localisation and mapping to mobile robot platforms such as the PR2. In recent years, many modern building designs have featured large glass panels as key interior fitting elements, e.g., large glass walls. Due to the transparent nature of glass panels, laser rangefinders are unable to produce accurate readings, which causes SLAM to function incorrectly in these environments. In this paper, we propose a simple and effective solution for identifying glass panels based on the specular reflection of laser beams from the glass. Specifically, we use a simple technique to detect the reflected light intensity profile around the normal angle of incidence to the glass panel. Integrating this glass detection method with an existing SLAM algorithm, our SLAM system is able to detect and localise glass obstacles in real time. Furthermore, tests conducted in two office buildings with a PR2 robot show the proposed method can detect ∼95% of all glass panels with no false positive detections. The source code of the modified SLAM with glass detection is released as an open-source ROS package along with this paper.
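The detection idea boils down to finding a narrow intensity spike in the scan where the beam meets the glass near the normal angle of incidence, since specular reflection is only strong there. A toy version over one scan, with the window size and peak ratio as illustrative thresholds, might look like this:

```python
import numpy as np

def glass_candidates(intensities, window=5, peak_ratio=2.0):
    """Return scan indices whose return intensity spikes well above the
    local neighbourhood, a signature of near-normal specular reflection
    off glass. Thresholds are illustrative, not the paper's values."""
    intensities = np.asarray(intensities, dtype=float)
    hits = []
    for i in range(window, len(intensities) - window):
        neigh = np.r_[intensities[i - window:i],
                      intensities[i + 1:i + 1 + window]]
        if neigh.mean() > 0 and intensities[i] > peak_ratio * neigh.mean():
            hits.append(i)
    return hits
```

Flagged indices would then be combined with the corresponding range and bearing readings to insert glass segments into the map.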

Improving sensory information, diagnosis, and fault tolerance by using multiple sensors and sensor fusion, with a good related-work section (2.3) on fault tolerance in data fusion

Kaci Bader, Benjamin Lussier, Walter Schön, A fault tolerant architecture for data fusion: A real application of Kalman filters for mobile robot localization, Robotics and Autonomous Systems, Volume 88, February 2017, Pages 11-23, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.11.015.

Multisensor perception plays an important role in robotics and autonomous systems, providing inputs for critical functions including obstacle detection and localization. It is starting to appear in critical applications such as drones and ADASs (advanced driver assistance systems). However, this kind of complex system is difficult to validate comprehensively. In this paper we look at multisensor perception systems in relation to an alternative dependability method, namely fault tolerance. We propose an approach for tolerating faults in multisensor data fusion that is based on the more traditional method of duplication–comparison, and that offers detection and recovery services. We detail an example implementation using Kalman filter data fusion for mobile robot localization, and demonstrate its effectiveness in this case study using real data and fault injection.
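The duplication–comparison principle can be sketched in a few lines: run two independently fed Kalman filters over diversified sensor subsets and flag a fault when their state estimates disagree by more than a Mahalanobis threshold. This is only the detection half, schematically; the paper's architecture also covers recovery:

```python
import numpy as np

def fusion_fault(x1, P1, x2, P2, threshold=9.0):
    """Compare two redundant fused estimates (means x1, x2 with covariances
    P1, P2, assumed independent). A squared Mahalanobis distance above a
    chi-square-style threshold (value assumed) flags a faulty branch."""
    d = x1 - x2
    S = P1 + P2                       # covariance of the difference
    m2 = float(d @ np.linalg.solve(S, d))
    return m2 > threshold
```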

The problem of monitoring events that can only be predicted stochastically, applied to a mobile sensor

J. Yu, S. Karaman, and D. Rus, “Persistent Monitoring of Events With Stochastic Arrivals at Multiple Stations,” IEEE Transactions on Robotics, vol. 31, no. 3, pp. 521-535, June 2015, DOI: 10.1109/TRO.2015.2409453.

This paper introduces a new mobile sensor scheduling problem involving a single robot tasked with monitoring several events of interest occurring at different locations (stations). Of particular interest is the monitoring of transient events of a stochastic nature, with applications ranging from natural phenomena (e.g., monitoring abnormal seismic activity around a volcano using a ground robot) to urban activities (e.g., monitoring early formation of traffic congestion using an aerial robot). Motivated by examples like these, this paper focuses on problems in which the precise occurrence times of the events are unknown a priori, but statistics for their interarrival times are available. In monitoring such events, the robot seeks to: (1) maximize the number of events observed and (2) minimize the delay between two consecutive observations of events occurring at the same location. This paper considers the case in which the robot is tasked with optimizing the event observations in a balanced manner, following a cyclic patrolling route. To tackle this problem, first, assuming that the cyclic ordering of stations is known, we prove the existence and uniqueness of the optimal solution and show that the solution has a desirable convergence rate and robustness. Our constructive proof also yields an efficient algorithm for computing the unique optimal solution in O(n) time, where n is the number of stations, with O(log n) time for incrementally adding or removing stations. Except for the algorithm, our analysis remains valid when the cyclic order is unknown. We then provide a polynomial-time approximation scheme that computes, for any ε > 0, a (1 + ε)-optimal solution for this more general, NP-hard problem.
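For intuition only (this is not the paper's O(n) algorithm), a candidate cyclic schedule can be scored directly against the two objectives: the expected rate of Poisson-arriving events caught while dwelling at each station, and the worst revisit gap. The station rates, dwell times, and travel times below are the sketch's assumed inputs:

```python
def schedule_stats(rates, dwell, travel):
    """rates[i]: event arrival rate at station i; dwell[i]: time spent there
    per cycle; travel[j]: leg travel times. Returns (expected events observed
    per unit time, worst gap between consecutive visits to a station).
    A toy evaluator of one fixed schedule, not an optimizer."""
    cycle = sum(dwell) + sum(travel)
    seen = sum(r * d / cycle for r, d in zip(rates, dwell))
    worst_gap = max(cycle - d for d in dwell)
    return seen, worst_gap
```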