Tag Archives: Survey

A survey on the concept of Entropy as a measure of the intelligence and autonomy of a system, modeled hierarchically

Valavanis, K.P., The Entropy Based Approach to Modeling and Evaluating Autonomy and Intelligence of Robotic Systems, J Intell Robot Syst (2018) 91: 7 DOI: 10.1007/s10846-018-0905-6.

This review paper presents the Entropy approach to modeling and performance evaluation of Intelligent Machines (IMs), which are modeled as hierarchical, multi-level structures. It provides a chronological summary of developments related to intelligent control, from its origins to current advances. It discusses fundamentals of the concept of Entropy as a measure of uncertainty and as a control function, which may be used to control, evaluate and, through adaptation and learning, improve the performance of engineering systems. It describes a multi-level, hierarchical architecture used to model such systems, and it defines autonomy and machine intelligence for engineering systems, with the aim of setting the foundations necessary to tackle related challenges. The modeling philosophy for the systems under consideration follows the mathematically proven principle of Increasing Precision with Decreasing Intelligence (IPDI). Entropy is also used in the context of N-Dimensional Information Theory to model the flow of information throughout such systems, and it contributes to quantitatively evaluating uncertainty, and thus autonomy and intelligence. It is explained how Entropy qualifies as a unique, single measure to evaluate autonomy, intelligence and precision of task execution. The main contribution of this review paper is that it brings together under one forum research findings from the 1970s and 1980s, and that it supports the argument that even today, given the unprecedented available computational power and advances in Artificial Intelligence, Deep Learning and Control Theory, the same foundational framework may be followed to study large-scale, distributed Cyber Physical Systems (CPSs), including distributed intelligence and multi-agent systems, with direct applications to the SmartGrid, transportation systems and multi-robot teams, to mention but a few.
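To make the uncertainty measure concrete, here is a minimal sketch of Shannon entropy in Python; the hierarchical interpretation in the comments is my illustration of the IPDI idea, not the paper's actual model, and the probability values are invented:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits, of a discrete
    outcome distribution. Zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative IPDI reading: a high-level planner commits to a confident,
# coarse decision (low entropy), while a low-level execution layer must
# resolve many fine-grained outcomes (higher entropy, higher precision).
planner_outcomes = [0.7, 0.2, 0.1]        # confident, low-entropy decision
execution_outcomes = [0.125] * 8          # uniform over 8 fine outcomes
print(shannon_entropy(planner_outcomes))  # about 1.16 bits
print(shannon_entropy(execution_outcomes))
```

A fully certain outcome (probability 1) gives 0 bits, and the uniform distribution over 8 outcomes gives the maximum of 3 bits, which is the sense in which entropy quantifies uncertainty across levels.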

Survey on the concept of affordance and its use in robotics (the rest of this issue of the journal also deals with affordances in robotics)

L. Jamone et al, Affordances in Psychology, Neuroscience, and Robotics: A Survey, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 1, pp. 4-25, March 2018, DOI: 10.1109/TCDS.2016.2594134.

The concept of affordances appeared in psychology during the late 60s as an alternative perspective on the visual perception of the environment. It was revolutionary in the intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Then, across the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of the affordance theory, then we report the most significant evidence in psychology and neuroscience that support it, and finally we review the most relevant applications of this concept in robotics.

Survey of the modelling of agents (intentions, goals, etc.)

Stefano V. Albrecht, Peter Stone, Autonomous agents modelling other agents: A comprehensive survey and open problems, Artificial Intelligence, Volume 258, 2018, Pages 66-95, DOI: 10.1016/j.artint.2018.01.002.

Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.
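As a toy illustration of one family of methods such surveys cover (reconstructing another agent's policy from its observed behaviour, in the spirit of fictitious play), here is a minimal frequency-based model; the action names are invented for the example:

```python
from collections import Counter

def predict_action(observed_actions):
    """Fictitious-play-style agent model: estimate the other agent's
    policy as the empirical frequency of its past actions, then
    predict the most frequent action as its next move."""
    counts = Counter(observed_actions)
    total = sum(counts.values())
    policy = {a: c / total for a, c in counts.items()}
    return max(policy, key=policy.get), policy

action, policy = predict_action(["rock", "paper", "rock", "rock", "scissors"])
print(action, policy)  # predicts "rock", the modal past action
```

Real methods in this literature go far beyond frequencies (types, beliefs, recursive reasoning), but the core loop is the same: observe behaviour, fit a model, predict a property of interest.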

A survey on interactive perception in robots: interacting with the environment to improve perception, also making use of internal models and prediction

J. Bohg et al, Interactive Perception: Leveraging Action in Perception and Perception in Action, IEEE Transactions on Robotics, vol. 33, no. 6, pp. 1273-1291, Dec. 2017, DOI: 10.1109/TRO.2017.2721939.

Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.

Experimental comparison of methods for merging line segments in line-segment-based maps for mobile robots

Francesco Amigoni, Alberto Quattrini Li, Comparing methods for merging redundant line segments in maps, Robotics and Autonomous Systems, Volume 99, 2018, Pages 135-147, DOI: 10.1016/j.robot.2017.10.016.

Map building of indoor environments is considered a basic building block for autonomous mobile robots, enabling, among others, self-localization and efficient path planning. While the mainstream approach stores maps as occupancy grids of regular cells, some works have advocated for the use of maps composed of line segments to represent the boundary of obstacles, leveraging their more compact size. In order to limit both the growth of the corresponding data structures and the effort in processing these maps, a number of methods have been proposed for merging together redundant line segments that represent the same portion of the environment. In this paper, we experimentally compare some of the most significant methods for merging line segments in maps by applying them to publicly available data sets. Finally, we propose some guidelines to choose the appropriate method.
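The paper compares several distinct merging criteria; as a generic illustration of the idea (not any specific method from the comparison), here is one simple collinearity-based merge test, with assumed angle and distance tolerances:

```python
import math

def try_merge(s1, s2, ang_tol=0.1, dist_tol=0.2):
    """Merge two 2D segments ((x1,y1),(x2,y2)) if they are nearly
    collinear and close; return the merged segment or None.
    Tolerances (radians, map units) are illustrative."""
    def direction(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1)

    d1, d2 = direction(s1), direction(s2)
    dang = abs(d1 - d2) % math.pi          # segments are undirected
    if min(dang, math.pi - dang) > ang_tol:
        return None
    ox, oy = s1[0]
    ux, uy = math.cos(d1), math.sin(d1)
    # reject if s2's endpoints are too far from s1's supporting line
    for px, py in s2:
        if abs((px - ox) * uy - (py - oy) * ux) > dist_tol:
            return None
    # merged segment spans the extreme projections of all 4 endpoints
    pts = [s1[0], s1[1], s2[0], s2[1]]
    ts = [(px - ox) * ux + (py - oy) * uy for px, py in pts]
    tmin, tmax = min(ts), max(ts)
    return ((ox + tmin * ux, oy + tmin * uy),
            (ox + tmax * ux, oy + tmax * uy))

# Two overlapping fragments of the same wall collapse into one segment.
print(try_merge(((0, 0), (1, 0)), ((0.5, 0.01), (2, 0.01))))
```

Two nearly collinear fragments merge into a single segment spanning both, while perpendicular segments are left alone; methods in the literature differ mainly in how these closeness tests and the merged-segment fit are defined.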

Interesting survey on keyframe-based (non-filtering) Visual SLAM and its future lines of research

Georges Younes, Daniel Asmar, Elie Shammas, John Zelek, Keyframe-based monocular SLAM: design, survey, and future directions, Robotics and Autonomous Systems, Volume 98, 2017, Pages 67-88, DOI: 10.1016/j.robot.2017.09.010.

Extensive research in the field of monocular SLAM for the past fifteen years has yielded workable systems that found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common at some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation, and critically assessing the specific strategies made in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM; namely, in the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
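Not taken from any specific system in the survey, but a sketch of the kind of keyframe-insertion heuristic keyframe-based pipelines rely on; the threshold values are assumptions for illustration:

```python
def should_insert_keyframe(tracked_ratio, baseline, frames_since_kf,
                           min_ratio=0.7, min_baseline=0.1, min_gap=5):
    """Illustrative keyframe-insertion test: once enough frames have
    passed since the last keyframe, add a new one if feature overlap
    with the last keyframe has dropped (tracked_ratio) or the camera
    has translated enough (baseline) for a well-conditioned triangulation."""
    if frames_since_kf < min_gap:
        return False
    return tracked_ratio < min_ratio or baseline > min_baseline
```

Systems differ in the exact signals used (tracked-point ratio, parallax, covisibility), but most trade off map growth against tracking robustness through rules of this shape.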

A very good survey of visual saliency methods, with a list of robotic tasks that have benefit from attention

Ali Borji, Dicky N. Sihite, and Laurent Itti, Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study, IEEE Transactions on Image Processing, vol. 22, no. 1, 2013, DOI: 10.1109/TIP.2012.2210727.

Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as “visual saliency.” Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state-of-the-art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.
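One score that is common in this literature is the Normalized Scanpath Saliency (NSS): z-score the saliency map, then average its values at the human fixation locations (whether NSS is among the three scores this particular study uses is not stated here). A toy sketch, with a made-up 2×2 map:

```python
import statistics

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: standardize the saliency map to
    zero mean and unit (population) std, then average the standardized
    values at the fixated (row, col) locations. Higher is better."""
    vals = [v for row in saliency for v in row]
    mu, sigma = statistics.mean(vals), statistics.pstdev(vals)
    return sum((saliency[r][c] - mu) / sigma
               for r, c in fixations) / len(fixations)

sal = [[0.0, 0.1], [0.1, 0.9]]   # model predicts bottom-right is salient
print(nss(sal, [(1, 1)]))        # high: the fixation hits the predicted peak
print(nss(sal, [(0, 0)]))        # negative: the fixation misses it
```

The center-bias problem the authors highlight shows up directly in scores like this: a model that simply predicts a central blob scores well on center-biased fixation datasets, which is why shuffled variants of these metrics exist.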

An excellent survey of metrical SLAM (and of map representations and other issues related to SLAM) as of 2016

C. Cadena et al., “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,” in IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec. 2016. DOI: 10.1109/TRO.2016.2624754.

Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
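The de-facto standard formulation the survey presents is factor-graph (maximum a posteriori) optimization. As a deliberately tiny analogue, here is a 1D pose graph with made-up measurements, showing how a loop-closure constraint redistributes accumulated odometry error; real systems work on SE(2)/SE(3) poses with sparse Gauss-Newton solvers rather than plain gradient descent:

```python
def optimize_1d_pose_graph(edges, n_poses, iters=500, lr=0.1):
    """Toy 1D pose-graph SLAM: poses are scalars, edges are
    (i, j, measured_offset) constraints (odometry and loop closures).
    Minimize sum of squared residuals (x_j - x_i - z_ij)^2 by gradient
    descent, anchoring x[0] = 0 to fix the gauge freedom."""
    x = [0.0] * n_poses
    for _ in range(iters):
        grad = [0.0] * n_poses
        for i, j, z in edges:
            r = x[j] - x[i] - z
            grad[j] += 2 * r
            grad[i] -= 2 * r
        for k in range(1, n_poses):   # keep x[0] anchored
            x[k] -= lr * grad[k]
    return x

# Odometry claims each step moves +1.0, but a loop closure measures only
# 2.7 between pose 0 and pose 3; the optimizer spreads the 0.3 error
# evenly along the chain instead of leaving it all at the end.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)]
print(optimize_1d_pose_graph(edges, 4))  # ~[0.0, 0.925, 1.85, 2.775]
```

This is the essential structure behind modern back-ends: every sensor constraint becomes a residual term, and the map/trajectory estimate is the minimizer of their weighted sum.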

Survey and taxonomy of path planning algorithms

Thi Thoa Mac, Cosmin Copot, Duc Trung Tran, Robin De Keyser, Heuristic approaches in robot path planning: A survey, Robotics and Autonomous Systems, Volume 86, 2016, Pages 13-28, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.08.001.

Autonomous navigation of a robot is a promising research domain due to its extensive applications. Navigation rests on four essential requirements, namely perception, localization, cognition and path planning, and motion control, of which path planning is the most important and interesting part. Path planning techniques are classified into two main categories: classical methods and heuristic methods. The classical methods consist of cell decomposition, the potential field method, subgoal networks and road maps. These approaches are simple; however, they are often computationally expensive and may fail when the robot confronts uncertainty. This survey concentrates on heuristic-based algorithms in robot path planning, comprising neural networks, fuzzy logic, nature-inspired algorithms and hybrid algorithms. In addition, the potential field method is also considered due to its good results. The strengths and drawbacks of each algorithm are discussed and an outline of future work is provided.
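Since the potential field method spans both categories, a minimal sketch may help; the gains, influence radius, step size and obstacle layout below are all illustrative choices, not from the paper:

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.0, step=0.05):
    """One gradient step on an artificial potential field: a linear
    attractive force toward the goal, plus repulsive forces from any
    obstacle within the influence radius."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return (pos[0] + step * fx, pos[1] + step * fy)

# The robot is pulled toward the goal while being deflected around the
# obstacle; iterating the step traces out the planned path.
pos, goal = (0.0, 0.0), (2.0, 0.0)
for _ in range(500):
    pos = potential_field_step(pos, goal, obstacles=[(1.0, 0.5)])
print(pos)  # ends close to the goal
```

The method's classic drawback, which the survey's heuristic approaches try to overcome, is visible in this formulation: nothing prevents the summed forces from cancelling in a local minimum short of the goal.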

Survey of Cognitive Offloading

Evan F. Risko, Sam J. Gilbert, Cognitive Offloading, Trends in Cognitive Sciences, Volume 20, Issue 9, 2016, Pages 676-688, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.07.002.

If you have ever tilted your head to perceive a rotated image, or programmed a smartphone to remind you of an upcoming appointment, you have engaged in cognitive offloading: the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand. Despite the ubiquity of this type of behavior, it has only recently become the target of systematic investigation in and of itself. We review research from several domains that focuses on two main questions: (i) what mechanisms trigger cognitive offloading, and (ii) what are the cognitive consequences of this behavior? We offer a novel metacognitive framework that integrates results from diverse domains and suggests avenues for future research.