Category Archives: Robotics

Checking the behavior of robotic software (i.e., verification), and of embedded software in general, with a good related-work section on the issue

Lyons, D.M.; Arkin, R.C.; Shu Jiang; Tsung-Ming Liu; Nirmal, P., Performance Verification for Behavior-Based Robot Missions, IEEE Transactions on Robotics, vol. 31, no. 3, pp. 619-636, June 2015, DOI: 10.1109/TRO.2015.2418592.

Certain robot missions need to perform predictably in a physical environment that may have significant uncertainty. One approach is to leverage automatic software verification techniques to establish a performance guarantee. The addition of an environment model and uncertainty in both program and environment, however, means that the state space of a model-checking solution to the problem can be prohibitively large. An approach based on behavior-based controllers in a process-algebra framework that avoids state-space combinatorics is presented here. In this approach, verification of the robot program in the uncertain environment is reduced to a filtering problem for a Bayesian network. Validation results are presented for the verification of a multiple-waypoint and an autonomous exploration robot mission.
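The key idea — propagating a probability distribution over program/environment states instead of enumerating the joint state space — can be illustrated with a toy forward-filtering sketch. The states, transition probabilities, and single-waypoint mission below are illustrative assumptions, not the paper's process-algebra framework or Bayesian network.

```python
# Toy sketch of performance verification by Bayesian filtering:
# propagate a belief vector through a stochastic transition model and
# read off the probability of mission success within a time bound.
# All states and probabilities here are made up for illustration.

def forward_filter(belief, transition, steps):
    """Propagate a belief vector through a stochastic transition matrix."""
    n = len(belief)
    for _ in range(steps):
        belief = [sum(belief[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return belief

# States: 0 = en route, 1 = waypoint reached (absorbing), 2 = stuck (absorbing).
# Each step the robot reaches the waypoint with p=0.30 and gets stuck with p=0.05.
T = [[0.65, 0.30, 0.05],
     [0.0,  1.0,  0.0],
     [0.0,  0.0,  1.0]]

belief = forward_filter([1.0, 0.0, 0.0], T, steps=10)
print(f"P(waypoint reached within 10 steps) = {belief[1]:.3f}")  # ≈ 0.846
```

The printed value is the kind of quantitative performance guarantee the paper aims to establish: a probability, under the environment model, that the mission property holds.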

The problem of monitoring events that can only be predicted stochastically, applied to a mobile sensor patrolling multiple stations

Jingjin Yu; Karaman, S.; Rus, D., Persistent Monitoring of Events With Stochastic Arrivals at Multiple Stations, IEEE Transactions on Robotics, vol. 31, no. 3, pp. 521-535, June 2015, DOI: 10.1109/TRO.2015.2409453.

This paper introduces a new mobile sensor scheduling problem involving a single robot tasked to monitor several events of interest that are occurring at different locations (stations). Of particular interest is the monitoring of transient events of a stochastic nature, with applications ranging from natural phenomena (e.g., monitoring abnormal seismic activity around a volcano using a ground robot) to urban activities (e.g., monitoring early formations of traffic congestion using an aerial robot). Motivated by examples like these, this paper focuses on problems in which the precise occurrence times of the events are unknown a priori, but statistics for their interarrival times are available. In monitoring such events, the robot seeks to: (1) maximize the number of events observed and (2) minimize the delay between two consecutive observations of events occurring at the same location. This paper considers the case when a robot is tasked with optimizing the event observations in a balanced manner, following a cyclic patrolling route. To tackle this problem, first, assuming that the cyclic ordering of stations is known, we prove the existence and uniqueness of the optimal solution and show that the solution has desirable convergence rate and robustness. Our constructive proof also yields an efficient algorithm for computing the unique optimal solution with O(n) time complexity, in which n is the number of stations, with O(log n) time complexity for incrementally adding or removing stations. Except for the algorithm, our analysis remains valid when the cyclic order is unknown. We then provide a polynomial-time approximation scheme that computes for any ε > 0 a (1 + ε)-optimal solution for this more general, NP-hard problem.
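The setting is easy to make concrete: on a cyclic route, the cycle period is the sum of travel and dwell times, and with Poisson arrivals the expected number of events caught at a station grows with the time spent there. The sketch below uses made-up station data and a fixed dwell schedule; it illustrates the trade-off the paper optimizes, not the paper's optimal algorithm.

```python
# Toy sketch of the cyclic patrolling setting: a robot visits stations
# in a fixed cyclic order, dwelling d_i at station i each cycle. With
# Poisson arrivals of rate lam_i and observation only while present,
# the expected observations per cycle at station i is lam_i * d_i,
# while a longer cycle worsens the revisit delay at every station.
# Travel times, dwell times and rates are illustrative assumptions.

def cycle_stats(travel, dwell, rates):
    period = sum(travel) + sum(dwell)
    observed = [lam * d for lam, d in zip(rates, dwell)]
    # worst-case gap between consecutive visits to the same station
    max_gap = [period - d for d in dwell]
    return period, observed, max_gap

period, obs, gaps = cycle_stats(travel=[2.0, 3.0, 1.0],
                                dwell=[4.0, 2.0, 3.0],
                                rates=[0.5, 1.0, 0.2])
print(f"cycle period = {period}")                      # 15.0
print(f"expected observations per cycle = {obs}")
print(f"worst-case revisit gaps = {gaps}")
```

Increasing any dwell time raises that station's observation count but lengthens the period, worsening every station's revisit gap — the tension the paper's unique optimal solution balances.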

Deducing the concept of space from the sensorimotor behaviour of a robot, with an interesting related-work discussion of uninterpreted sensors and actuators in developmental robotics that deserves a deeper look

Alban Laflaquière, J. Kevin O’Regan, Sylvain Argentieri, Bruno Gas, Alexander V. Terekhov, Learning agent’s spatial configuration from sensorimotor invariants, Robotics and Autonomous Systems, Volume 71, September 2015, Pages 49-59, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.01.003.

The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot’s sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent’s exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina.
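The core invariant is that two motor configurations correspond to the same external (spatial) state exactly when they yield identical exteroceptive readings in every environment. A toy check of that idea, using a planar 2-link arm whose two elbow postures place the sensor at the same point — the arm, sensor model, and random environments below are illustrative assumptions, not the paper's retina simulation:

```python
# Toy illustration of the sensorimotor-invariant idea: motor configs
# that give identical readings across all environments share a spatial
# configuration. Arm geometry and the "light field" sensor are made up.
import math
import random

def end_point(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def reading(q, env):
    """Exteroception depends only on where the sensor is, not on the joints."""
    x, y = end_point(*q)
    return sum(a * x + b * y for a, b in env)  # toy linear light field

random.seed(0)
envs = [[(random.random(), random.random()) for _ in range(3)]
        for _ in range(20)]

qa = (0.3, 0.8)            # one arm posture
qb = (0.3 + 0.8, -0.8)     # the mirrored elbow posture: same end-point
same = all(abs(reading(qa, e) - reading(qb, e)) < 1e-9 for e in envs)
print("same spatial configuration:", same)
```

Since qa and qb agree in every environment, a learner with no prior interpretation of its sensors can merge them into one point of the sensor's configuration space — the manifold the paper recovers for its simulated retina.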

A new approach to solving POMDP-like problems through gradient descent and optimal control

Vadim Indelman, Luca Carlone, Frank Dellaert, Planning in the continuous domain: A generalized belief space approach for autonomous navigation in unknown environments, The International Journal of Robotics Research, vol. 34, no. 7, pp. 849-882, DOI: 10.1177/0278364914561102.

We investigate the problem of planning under uncertainty, with application to mobile robotics. We propose a probabilistic framework in which the robot bases its decisions on the generalized belief, which is a probabilistic description of its own state and of external variables of interest. The approach naturally leads to a dual-layer architecture: an inner estimation layer, which performs inference to predict the outcome of possible decisions; and an outer decisional layer, which is in charge of deciding the best action to undertake. Decision making is entrusted to a model predictive control (MPC) scheme. The formulation is valid for general cost functions and does not discretize the state or control space, enabling planning in the continuous domain. Moreover, it relaxes the assumption of maximum-likelihood observations: predicted measurements are treated as random variables, and binary random variables are used to model the event that a measurement is actually taken by the robot. We successfully apply our approach to the problem of uncertainty-constrained exploration, in which the robot has to perform tasks in an unknown environment while maintaining localization uncertainty within given bounds. We present an extensive numerical analysis of the proposed approach and compare it against related work. In practice, our planning approach produces smooth and natural trajectories and is able to impose soft upper bounds on the uncertainty. Finally, we exploit the results of this analysis to identify current limitations and show that the proposed framework can accommodate several desirable extensions.
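The dual-layer architecture can be sketched with a 1D toy: an inner layer predicts the Gaussian belief after a candidate control (motion inflates variance, a predicted measurement near a beacon shrinks it), and an outer layer picks the control minimizing goal error plus uncertainty. Note this sketch searches a discrete candidate set for brevity, whereas the paper optimizes in the continuous control space; the dynamics, noise levels, and beacon model are illustrative assumptions.

```python
# Toy dual-layer belief-space MPC: inner estimation layer (predict) and
# outer decisional layer (plan_step). All models are illustrative.

BEACON, GOAL = 2.0, 5.0

def predict(mean, var, u):
    """Inner layer: propagate the Gaussian belief through noisy dynamics."""
    mean, var = mean + u, var + 0.1 * abs(u)   # motion inflates variance
    if abs(mean - BEACON) < 0.5:               # predicted beacon measurement
        var *= 0.2                             # ...shrinks it
    return mean, var

def plan_step(mean, var, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Outer layer: pick the control minimizing goal error + uncertainty."""
    def cost(u):
        m, v = predict(mean, var, u)
        return (m - GOAL) ** 2 + 2.0 * v       # soft penalty on uncertainty
    return min(candidates, key=cost)

mean, var = 0.0, 1.0
for _ in range(8):
    u = plan_step(mean, var)
    mean, var = predict(mean, var, u)
print(f"final mean = {mean:.2f}, var = {var:.3f}")
```

Because the cost penalizes uncertainty, the planned path passes near the beacon on the way to the goal — the same mechanism that lets the paper's planner keep localization uncertainty within bounds.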

A framework to manage and switch between several sensor modalities in tele-operation

Andrea Cherubini, Robin Passama, Philippe Fraisse, André Crosnier, A unified multimodal control framework for human–robot interaction, Robotics and Autonomous Systems, Volume 70, August 2015, Pages 106-115, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.03.002.

In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes, and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller, enabling a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism, and in contrast with classical hybrid vision–force–position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously, for human–robot collaboration.
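The "smooth transitions and weighted combinations" idea can be shown in a few lines: the commanded velocity is a weighted sum of per-task velocities, with weights that ramp smoothly instead of switching discretely. The cosine ramp and the two task velocities below are illustrative choices, not the paper's task formalism.

```python
# Toy sketch of smooth blending between sensor-based tasks: the command
# is a weighted sum of per-task velocities, and the weight ramps
# smoothly (cosine ramp here, an illustrative choice) over a handover
# window rather than switching abruptly.
import math

def smooth_weight(t, t0, t1):
    """0 before t0, 1 after t1, cosine-smooth in between."""
    if t <= t0:
        return 0.0
    if t >= t1:
        return 1.0
    s = (t - t0) / (t1 - t0)
    return 0.5 * (1.0 - math.cos(math.pi * s))

def blended_command(t, vel_pose, vel_force):
    """Hand over from a pose task to a force task during t in [1, 2]."""
    w = smooth_weight(t, 1.0, 2.0)
    return [(1 - w) * p + w * f for p, f in zip(vel_pose, vel_force)]

v_pose, v_force = [0.2, 0.0], [0.0, -0.1]
print(blended_command(0.5, v_pose, v_force))  # pure pose task
print(blended_command(1.5, v_pose, v_force))  # 50/50 blend
print(blended_command(2.5, v_pose, v_force))  # pure force task
```

Because the weight and its derivative stay continuous at the window boundaries, the blended command produces no velocity jump at the mode switch — the practical benefit over classical hybrid switching.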

Robot kidnapping detection based on support vector machines

Dylan Campbell, Mark Whitty, Metric-based detection of robot kidnapping with an SVM classifier, Robotics and Autonomous Systems, Volume 69, July 2015, Pages 40-51, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.08.004.

Kidnapping occurs when a robot is unaware that it has not correctly ascertained its position, potentially causing severe map deformation and reducing the robot’s functionality. This paper presents metric-based techniques for real-time kidnap detection, utilising either linear or SVM classifiers to identify all kidnapping events during the autonomous operation of a mobile robot. In contrast, existing techniques either solve specific cases of kidnapping, such as elevator motion, without addressing the general case or remove dependence on local pose estimation entirely, an inefficient and computationally expensive approach. Three metrics that measured the quality of a pose estimate were evaluated and a joint classifier was constructed by combining the most discriminative quality metric with a fourth metric that measured the discrepancy between two independent pose estimates. A multi-class Support Vector Machine classifier was also trained using all four metrics and produced better classification results than the simpler joint classifier, at the cost of requiring a larger training dataset. While metrics specific to 3D point clouds were used, the approach can be generalised to other forms of data, including visual, provided that two independent ways of estimating pose are available.
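The detection recipe — compute quality metrics for each pose estimate, then classify them as nominal vs. kidnapped — can be sketched with two synthetic metrics and a simple linear classifier (the paper evaluates both linear and SVM classifiers; the perceptron training, metric definitions, and synthetic data below are illustrative assumptions, not the paper's pipeline or dataset).

```python
# Toy sketch of metric-based kidnap detection: each pose update yields
# quality metrics (here a match-quality score and the discrepancy
# between two independent pose estimates), and a linear classifier
# flags kidnapping. Data and training scheme are made up.
import random

random.seed(1)
# (match_quality, estimate_discrepancy) -> 0 = nominal, 1 = kidnapped
nominal   = [(random.gauss(0.9, 0.05), random.gauss(0.1, 0.05)) for _ in range(50)]
kidnapped = [(random.gauss(0.4, 0.10), random.gauss(0.8, 0.10)) for _ in range(50)]
data = [(f, 0) for f in nominal] + [(f, 1) for f in kidnapped]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                          # perceptron epochs
    for f, label in data:
        pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
        err = label - pred
        w = [w[0] + 0.1 * err * f[0], w[1] + 0.1 * err * f[1]]
        b += 0.1 * err

correct = sum(1 for f, label in data
              if (1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0) == label)
print(f"training accuracy = {correct / len(data):.2f}")
```

Swapping the perceptron for an SVM (as the paper does) buys a maximum-margin boundary and multi-class output, at the cost of a larger training set — the trade-off the abstract notes.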

A nice SLAM approach based on hybrid Normal Distribution Transform (NDT) + occupancy grid maps intended for long term operation in dynamic environments

Erik Einhorn, Horst-Michael Gross, Generic NDT mapping in dynamic environments and its application for lifelong SLAM, Robotics and Autonomous Systems, Volume 69, July 2015, Pages 28-39, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.08.008.

In this paper, we present a new, generic approach for Simultaneous Localization and Mapping (SLAM). First, we propose an abstraction of the underlying sensor data using Normal Distribution Transform (NDT) maps, which makes our approach independent of the sensor used and of the dimension of the generated maps. We present several modifications of the original NDT mapping to handle free-space measurements explicitly. We additionally describe a method to detect and handle dynamic objects such as moving persons, which enables the use of the proposed approach in highly dynamic environments. In the second part of this paper we describe our graph-based SLAM approach, which is designed for lifelong usage; memory and computational complexity are therefore limited by pruning the pose graph in an appropriate way.
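The NDT representation itself is simple to sketch: points are binned into grid cells, and each cell stores the Gaussian (mean and covariance) of its points, giving a compact map that is independent of the sensor that produced the points. The point set and cell size below are illustrative; the paper's free-space handling and dynamic-object detection are not shown.

```python
# Minimal sketch of building a 2D NDT map: bin points into grid cells
# and summarize each cell by the mean and covariance of its points.
from collections import defaultdict

def build_ndt(points, cell_size=1.0):
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    ndt = {}
    for key, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # 2x2 covariance of the points in this cell
        cxx = sum((p[0] - mx) ** 2 for p in pts) / n
        cyy = sum((p[1] - my) ** 2 for p in pts) / n
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        ndt[key] = ((mx, my), ((cxx, cxy), (cxy, cyy)))
    return ndt

scan = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (1.6, 1.5), (1.4, 1.7)]
ndt = build_ndt(scan)
for cell, (mean, cov) in sorted(ndt.items()):
    print(cell, "mean =", tuple(round(m, 2) for m in mean))
```

Because each cell keeps only a mean and a covariance regardless of how many points fell into it, the map's size is bounded by the explored area rather than the number of measurements — one reason NDT suits lifelong operation.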

Interesting paper on fault tolerance applied to robotics, with good survey of the subject

D. Crestani, K. Godary-Dejean, L. Lapierre, Enhancing fault tolerance of autonomous mobile robots, Robotics and Autonomous Systems, Volume 68, June 2015, Pages 140-155, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.12.015.

Experience demonstrates that autonomous mobile robots running in the field in a dynamic environment often break down. Generally, mobile robots are not designed to efficiently manage faulty or unforeseen situations. Although some research studies exist, a global approach that truly integrates dependability, and particularly fault tolerance, into mobile robot design is still lacking.
This paper presents an approach that aims to integrate fault tolerance principles into the design of a robot real-time control architecture. A failure mode analysis is firstly conducted to identify and characterize the most relevant faults. Then the fault detection and diagnosis mechanisms are explained. Fault detection is based on dedicated software components scanning faulty behaviors. Diagnosis is based on the residual principle and signature analysis to identify faulty software or hardware components and faulty behaviors. Finally, the recovery mechanism, based on the modality principle, proposes to adapt the robot’s control loop according to the context and current operational functions of the robot.
This approach has been applied and implemented in the control architecture of a Pioneer 3DX mobile robot.
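The detection-and-diagnosis chain described above — residuals comparing expected and observed behaviour, thresholds turning them into a boolean signature, and a signature table pointing at the suspected component — can be sketched in a few lines. The residual names, thresholds, and signature table below are illustrative assumptions, not the paper's failure mode analysis.

```python
# Toy sketch of residual-based diagnosis: threshold each residual into
# a boolean, then look up the resulting signature in a fault table.
# Residuals, thresholds and the table are made up for illustration.

THRESHOLDS = {"odometry": 0.05, "laser": 0.1, "velocity": 0.2}

SIGNATURES = {
    (True, False, True):  "wheel encoder fault",
    (False, True, False): "laser sensor fault",
    (True, True, True):   "control software fault",
}

def diagnose(residuals):
    """Map a residual vector to a suspected faulty component."""
    signature = tuple(abs(residuals[k]) > THRESHOLDS[k]
                      for k in ("odometry", "laser", "velocity"))
    return SIGNATURES.get(signature, "no known fault")

print(diagnose({"odometry": 0.12, "laser": 0.02, "velocity": 0.35}))
```

In the paper's architecture, the diagnosis result then drives recovery: the control loop is reconfigured to a modality that does not depend on the suspected component.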

Novelty detection as a way for enhancing learning capabilities of a robot, and a brief but interesting survey of motivational theories and their difference with attention

Y. Gatsoulis, T.M. McGinnity, Intrinsically motivated learning systems based on biologically-inspired novelty detection, Robotics and Autonomous Systems, Volume 68, June 2015, Pages 12-20, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.02.006.

Intrinsic motivations play an important role in human learning, particularly in the early stages of childhood development, and ideas from this research field have influenced robotic learning and adaptability. In this paper we investigate one specific type of intrinsic motivation, that of novelty detection, and we discuss the reasons that make it a powerful facility for continuous learning. We formulate and present an original biologically inspired novelty detection architecture and implement it on a robotic system engaged in a perceptual classification task. The results of the real-world robot experiments we conducted show how this architecture conforms to behavioural observations and demonstrate its effectiveness in focusing the system's attention on areas with potential for effective learning.
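The biological mechanism behind such architectures is habituation: a stimulus far from anything learned provokes a strong (novel) response and is stored, while repeated exposure to a familiar stimulus progressively damps the response. A minimal sketch of that loop — the distance threshold and decay rate are illustrative assumptions, not the paper's architecture:

```python
# Toy habituation-based novelty detector: stimuli are compared to
# learned prototypes; distant stimuli are novel (and learned), repeated
# stimuli habituate. Threshold and decay are illustrative choices.

class NoveltyDetector:
    def __init__(self, threshold=0.5, decay=0.5):
        self.prototypes = []          # list of [feature_vector, habituation]
        self.threshold = threshold
        self.decay = decay

    def respond(self, stimulus):
        """Return the novelty response in [0, 1]; learn or habituate as a side effect."""
        best, dist = None, float("inf")
        for proto in self.prototypes:
            d = sum((a - b) ** 2 for a, b in zip(proto[0], stimulus)) ** 0.5
            if d < dist:
                best, dist = proto, d
        if best is None or dist > self.threshold:
            self.prototypes.append([list(stimulus), 1.0])
            return 1.0                # fully novel: learn it
        best[1] *= self.decay         # habituate with each re-exposure
        return best[1]

det = NoveltyDetector()
print(det.respond((0.0, 0.0)))   # 1.0   (never seen)
print(det.respond((0.1, 0.0)))   # 0.5   (familiar, habituating)
print(det.respond((0.1, 0.1)))   # 0.25
print(det.respond((5.0, 5.0)))   # 1.0   (new stimulus)
```

Used as an intrinsic motivation signal, the response steers the learner's attention toward unfamiliar regions of its experience — exactly where effective learning is still possible.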

On the history of IEEE Transactions on Robotics and Automation, ICRA, and others

Sabanovic, S.; Milojevic, S.; Asaro, P.; Francisco, M., Robotics Narratives and Networks [History], IEEE Robotics & Automation Magazine, vol. 22, no. 1, pp. 137-146, March 2015, DOI: 10.1109/MRA.2014.2385564.

Somewhere around 1983, maybe late 1982, there was talk beginning about doing something more formal within IEEE that dealt with robotics and automation. Informally, activity was getting started through the Control Society,…also Systems, Man and Cybernetics, which obviously makes a lot of sense with the telerobotics things and a few others. But we wanted to build a more permanent home for it, so there was one of the first meetings. George Saridis chaired the meeting. I know George Bekey was there, Tony Bejczy, Lou Paul, probably another half dozen people.