Cognitive control: a nice bunch of definitions and the state of the art

S. Haykin, M. Fatemi, P. Setoodeh and Y. Xue, Cognitive Control, in Proceedings of the IEEE, vol. 100, no. 12, pp. 3156-3169, Dec. 2012, DOI: 10.1109/JPROC.2012.2215773.

This paper is inspired by how cognitive control manifests itself in the human brain, and does so in a remarkable way. It addresses the many facets involved in the control of directed information flow in a dynamic system, culminating in the notion of the information gap, defined as the difference between relevant information (the useful part of what is extracted from the incoming measurements) and sufficient information, representing the information needed for achieving minimal risk. The notion of the information gap leads naturally to how cognitive control can itself be defined. Then, another important idea is described, namely the two-state model, in which one is the system's state and the other is the entropic state, which provides an essential metric for quantifying the information gap. The entropic state is computed in the perceptual part (i.e., perceptor) of the dynamic system and sent to the controller directly as feedback information. This feedback information provides the cognitive controller with the information needed about the environment and the system to bring reinforcement learning into play; reinforcement learning (RL), incorporating planning as an integral part, is at the very heart of cognitive control. The stage is now set for a computational experiment involving cognitive radar, wherein the cognitive controller is enabled to control the receiver via the environment. The experiment demonstrates how RL provides the mechanism for improved utilization of computational resources, and yet is able to deliver good performance through the use of planning. The paper finishes with concluding remarks.
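
As a rough, hedged sketch of the two-state idea, the snippet below treats the entropic state as the differential entropy of a Gaussian state posterior and lets a bandit-style RL rule pick the sensing action that most reduces it. The action names, the covariance-shrink action model and the value update are my own illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def entropic_state(P):
    """Differential entropy (nats) of a Gaussian posterior with covariance P."""
    n = P.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** n) * np.linalg.det(P))

# Hypothetical sensing actions, each assumed to shrink the posterior
# covariance by a fixed factor (a stand-in for the perceptor's feedback).
actions = {"wide_beam": 0.9, "narrow_beam": 0.5}
q = {a: 0.0 for a in actions}          # action-value estimates
P, alpha, eps = np.eye(2), 0.3, 0.2

for step in range(20):
    a = (max(q, key=q.get) if np.random.rand() > eps
         else np.random.choice(list(actions)))
    P_next = actions[a] * P            # assumed effect of the chosen action
    reward = entropic_state(P) - entropic_state(P_next)  # gap closed
    q[a] += alpha * (reward - q[a])    # simple bandit-style value update
    P = P_next
```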

Implementation of PF SLAM on FPGAs and a good review of the state of the art on the topic

B.G. Sileshi, J. Oliver, R. Toledo, J. Gonçalves, P. Costa, On the behaviour of low cost laser scanners in HW/SW particle filter SLAM applications, Robotics and Autonomous Systems, Volume 80, June 2016, Pages 11-23, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.03.002.

Particle filters (PFs) are computationally intensive sequential Monte Carlo estimation methods with applications in the field of mobile robotics for performing tasks such as tracking, simultaneous localization and mapping (SLAM) and navigation, by dealing with the uncertainties and/or noise generated by the sensors as well as with the intrinsic uncertainties of the environment. However, PFs with a large number of particles have traditionally been difficult to implement in real-time applications due to the huge number of operations they require. This work presents a hardware implementation on an FPGA (field-programmable gate array) of a PF applied to SLAM, which aims to accelerate the execution time of the PF algorithm with moderate resource usage. The presented system is evaluated for different sensors, including a low-cost Neato XV-11 laser scanner. First, the system is validated by post-processing data provided by a realistic simulation of a differential-drive robot, equipped with a hacked Neato XV-11 laser scanner, that navigates in the Robot@Factory competition maze. The robot was simulated using SimTwo, a realistic simulation package that supports several types of robots. The simulator provides the robot ground truth, odometry and the laser scanner data. Then, the proposed solution is further validated on standard laser scanner sensors in complex environments. The results of this study confirm that low-cost laser scanners can be used in different robotics applications, benefiting both from their low cost and from the increased speed of the SLAM algorithm running on the FPGA.
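
To make concrete the sample-weight-resample loop that the FPGA pipeline accelerates, here is a minimal bootstrap particle filter for a 1D localization toy problem; the motion and measurement models are illustrative assumptions, not the paper's:

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.2):
    # 1. Sampling: propagate each particle through the motion model
    particles = particles + control + np.random.normal(0.0, motion_noise,
                                                       particles.shape)
    # 2. Importance weighting: likelihood of the measurement per particle
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights = weights + 1e-300          # guard against all-zero weights
    weights = weights / weights.sum()
    # 3. Systematic resampling when the effective sample size collapses
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.rand()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Usage: 500 particles tracking a 1D pose
particles = np.random.uniform(-1.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights,
                                          control=0.1, measurement=0.12)
```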

Interesting approach to the design of complex systems based on analogies with simpler ones

Victor Ragusila, M. Reza Emami, Mechatronics by analogy and application to legged locomotion, Mechatronics, Volume 35, May 2016, Pages 173-191, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2016.02.007.

A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced. It argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behavior by making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also presented. A series of simulations show that the dynamic behavior of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently.
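
As a quick illustration of one of the simpler "analogy" models mentioned in the abstract, the following sketch integrates the stance-phase dynamics of a spring-loaded inverted pendulum (SLIP) in polar coordinates; the parameter values and initial conditions are generic assumptions, not taken from the Linkage Leg design:

```python
import numpy as np
from scipy.integrate import solve_ivp

def slip_stance(t, y, k=2000.0, l0=1.0, m=80.0, g=9.81):
    """Stance dynamics of a SLIP; y = [r, r_dot, theta, theta_dot],
    with r the leg length and theta the leg angle from vertical."""
    r, rd, th, thd = y
    rdd = r * thd ** 2 - g * np.cos(th) + (k / m) * (l0 - r)   # radial
    thdd = (g * np.sin(th) - 2.0 * rd * thd) / r               # angular
    return [rd, rdd, th, thdd]

# Integrate one stance phase from touchdown
sol = solve_ivp(slip_stance, (0.0, 0.3), [0.97, -0.5, 0.2, -1.5], max_step=1e-3)
print("leg length at end of stance:", sol.y[0, -1])
```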

Real-time trajectory generation for omnidirectional robots, and a good set of basic bibliographical references

Tamás Kalmár-Nagy, Real-time trajectory generation for omni-directional vehicles by constrained dynamic inversion, Mechatronics, Volume 35, May 2016, Pages 44-53, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.12.004.

This paper presents a computationally efficient algorithm for real-time trajectory generation for omni-directional vehicles. The algorithm uses a dynamic inversion based approach that incorporates vehicle dynamics, actuator saturation and bounded acceleration. The algorithm is compared with other trajectory generation algorithms for omni-directional vehicles. The method yields good quality trajectories and is implementable in real-time. Numerical and hardware tests are presented.
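
A minimal sketch of the underlying idea, under the assumption of a point-mass double-integrator vehicle model: invert the tracking-error dynamics to get a commanded acceleration, then clamp it to the actuator bound. The gains and bounds below are illustrative, not the paper's:

```python
import numpy as np

def inversion_control(p, v, p_ref, v_ref, a_ref, kp=4.0, kd=4.0, a_max=2.0):
    """Invert double-integrator dynamics to track (p_ref, v_ref, a_ref),
    then clamp the commanded acceleration to the actuator bound a_max."""
    a_cmd = a_ref + kd * (v_ref - v) + kp * (p_ref - p)
    norm = np.linalg.norm(a_cmd)
    if norm > a_max:                   # bounded acceleration constraint
        a_cmd = a_cmd * (a_max / norm)
    return a_cmd

# One Euler step of the closed loop toward a fixed setpoint
dt = 0.01
p, v = np.zeros(2), np.zeros(2)
a = inversion_control(p, v, p_ref=np.array([1.0, 0.5]),
                      v_ref=np.zeros(2), a_ref=np.zeros(2))
v = v + a * dt
p = p + v * dt
```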

Improvements to the ICP algorithm for point cloud registration from a low-precision RGB-D sensor

Rogério Yugo Takimoto, Marcos de Sales Guerra Tsuzuki, Renato Vogelaar, Thiago de Castro Martins, André Kubagawa Sato, Yuma Iwao, Toshiyuki Gotoh, Seiichiro Kagei, 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor, Mechatronics, Volume 35, May 2016, Pages 11-22, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.10.014.

A 3D reconstruction method using feature points is presented and the parameters used to improve the reconstruction are discussed. The precision of the 3D reconstruction is improved by combining point clouds obtained from different viewpoints using structured light. A well-known algorithm for point cloud registration is ICP (Iterative Closest Point), which determines the rotation and translation that, when applied to one of the point clouds, optimally aligns the two. The ICP algorithm iteratively executes two main steps: point correspondence determination and registration. If the point correspondence determination step is not properly executed, it can make the ICP converge to a local minimum. To overcome this drawback, two techniques were used: a meaningful set of 3D points was obtained using SIFT (scale-invariant feature transform), and an ICP variant was implemented that uses statistics to generate dynamic distance and color thresholds on the distance allowed between closest points. The reconstruction precision improvement was implemented using meaningful point clouds and the ICP to increase the number of points in the 3D space. The surface reconstruction is performed using marching cubes, together with filters to remove noise and to smooth the surface. The factors that influence the 3D reconstruction precision are discussed and analyzed, and a detailed discussion of the number of frames used by the ICP and of the ICP parameters is presented.
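
The following sketch shows one point-to-point ICP iteration with a statistics-driven rejection gate, in the spirit of the dynamic distance threshold described above; the mean + 2*std gate and the SVD (Kabsch) alignment are standard choices used here as assumptions, not the paper's exact code:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, k_sigma=2.0):
    """One ICP iteration: pair points, reject outliers with a dynamic
    threshold, then solve the optimal rigid alignment via SVD (Kabsch)."""
    d, idx = cKDTree(target).query(source)          # closest-point pairing
    keep = d < d.mean() + k_sigma * d.std()         # dynamic distance gate
    src, tgt = source[keep], target[idx[keep]]
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # avoid reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t                   # aligned source cloud
```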

Statically calculating probability distributions of the execution times of sequential software

Laurent David, Isabelle Puaut, Static Determination of Probabilistic Execution Times, Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS'04).

Most previous research on probabilistic schedulability analysis assumes a known distribution of execution times for each task of a real-time application. Such a distribution is, however, not trivial to determine with a high level of confidence. Methods based on measurements are often biased, since they are in general not exhaustive over all the possible execution paths, whereas methods based on static analysis are mostly Worst-Case Execution Time (WCET) oriented. Using static analysis, this work proposes a method to obtain probabilistic distributions of execution times. It assumes that the given real-time application is divided into multiple tasks, whose source code is known. Ignoring hardware considerations and based only on the source code of the tasks, the proposed technique allows designers to associate with any execution path an execution time and a probability of going through this path. A source code example is presented to illustrate the method.
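
A toy illustration of the core idea: annotate each feasible path with an execution time and a probability, then enumerate paths to build a discrete execution-time distribution. The branch times and probabilities below are made up; the paper derives them statically from the source code:

```python
from collections import defaultdict
from itertools import product

# Each program section as a list of (time, probability) alternatives
branches = [
    [(5, 1.0)],                  # straight-line prologue: 5 time units
    [(3, 0.7), (8, 0.3)],        # if/else: fast branch taken 70% of the time
    [(10, 0.9), (25, 0.1)],      # loop body: short vs. long iteration count
]

dist = defaultdict(float)
for path in product(*branches):  # enumerate every execution path
    time = sum(t for t, _ in path)
    prob = 1.0
    for _, p in path:
        prob *= p
    dist[time] += prob

for t in sorted(dist):           # the resulting execution-time distribution
    print(f"P(exec time = {t:2d}) = {dist[t]:.2f}")
```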

PDF (probability density function) form of the WCET of code execution

S. Edgar and A. Burns, Statistical analysis of WCET for scheduling, in Proceedings of the 22nd IEEE Real-Time Systems Symposium (RTSS 2001), 2001, pp. 215-224, DOI: 10.1109/REAL.2001.990614.

To perform a schedulability test, scheduling analysis relies on a known worst-case execution time (WCET). This value may be difficult to compute and may be overly pessimistic. This paper offers an alternative analysis based on estimating a WCET from test data to within a specific level of probabilistic confidence. A method is presented for calculating an estimate given statistical assumptions. The implications of the level of confidence on the likelihood of schedulability are also presented.
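
A minimal sketch of the general technique, assuming a Gumbel (extreme value) fit to block maxima of measured execution times and reading the WCET estimate off a high quantile; scipy's generic fitter stands in here for the paper's own estimator:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# Fake measured execution times standing in for instrumented test runs
samples = rng.gamma(shape=4.0, scale=10.0, size=5000)
block_maxima = samples.reshape(100, 50).max(axis=1)   # maxima per test batch

loc, scale = gumbel_r.fit(block_maxima)               # extreme value fit
wcet_est = gumbel_r.ppf(0.99999, loc=loc, scale=scale)
print(f"WCET estimate at 99.999% confidence: {wcet_est:.1f} time units")
```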

Dealing with multiple hypotheses in Graph-SLAM through multigraphs (as in multi-hierarchical graphs)

Max Pfingsthorn and Andreas Birk, Generalized graph SLAM: Solving local and global ambiguities through multimodal and hyperedge constraints, The International Journal of Robotics Research, vol. 35, pp. 601-630, May 2016, DOI: 10.1177/0278364915585395.

Research in Graph-based Simultaneous Localization and Mapping has experienced a recent trend towards robust methods. These methods take the combinatorial aspect of data association into account by allowing decisions of the graph topology to be made during optimization. The Generalized Graph Simultaneous Localization and Mapping framework presented in this work can represent ambiguous data on both local and global scales, i.e., it can handle multiple mutually exclusive choices in registration results and potentially erroneous loop closures. This is achieved by augmenting previous work on multimodal distributions with an extended graph structure using hyperedges to encode ambiguous loop closures. The novel representation combines both hyperedges and multimodal Mixture of Gaussian constraints to represent all sources of ambiguity in Simultaneous Localization and Mapping. Furthermore, a discrete optimization stage is introduced between the Simultaneous Localization and Mapping frontend and backend to handle these ambiguities in a unified way utilizing the novel representation of Generalized Graph Simultaneous Localization and Mapping, providing a general approach to handle all forms of outliers. The novel Generalized Prefilter method optimizes among all local and global choices and generates a traditional unimodal unambiguous pose graph for subsequent continuous optimization in the backend. Systematic experiments on synthetic datasets show that the novel representation of the Generalized Graph Simultaneous Localization and Mapping framework with the Generalized Prefilter method is significantly more robust and faster than other robust state-of-the-art methods. In addition, two experiments with real data are presented to corroborate the results observed with synthetic data. Different general strategies to construct problems from real data, utilizing the full representational power of the Generalized Graph Simultaneous Localization and Mapping framework, are also illustrated in these experiments.
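
As a data-structure sketch of the two ambiguity mechanisms (multimodal Mixture of Gaussian edges for local registration ambiguity, hyperedges for mutually exclusive loop-closure candidates), here is a hypothetical representation; the field names are my own, not the paper's API:

```python
from dataclasses import dataclass, field

@dataclass
class GaussianMode:
    mean: tuple          # relative pose (x, y, theta)
    information: tuple   # flattened 3x3 information matrix
    weight: float        # mixture weight

@dataclass
class MultimodalEdge:    # local ambiguity: several registration hypotheses
    i: int
    j: int
    modes: list = field(default_factory=list)

@dataclass
class LoopClosureHyperedge:   # global ambiguity: candidates, at most one true
    source: int
    candidates: list = field(default_factory=list)   # [(target, GaussianMode)]

# A discrete prefilter stage would pick one mode per edge and at most one
# candidate per hyperedge, yielding a plain unimodal pose graph for the backend.
graph = {
    "edges": [MultimodalEdge(0, 1, [GaussianMode((1.0, 0.0, 0.0), (1.0,) * 9, 0.8),
                                    GaussianMode((1.0, 0.0, 3.1), (1.0,) * 9, 0.2)])],
    "hyperedges": [LoopClosureHyperedge(5, [(0, GaussianMode((0.0, 0.0, 0.0), (1.0,) * 9, 1.0)),
                                            (2, GaussianMode((0.0, 1.0, 0.0), (1.0,) * 9, 1.0))])],
}
```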

Interesting survey of relevant long-term applications of service robots in real environments

Roberto Pinillos, Samuel Marcos, Raul Feliz, Eduardo Zalama, Jaime Gómez-García-Bermejo, Long-term assessment of a service robot in a hotel environment, Robotics and Autonomous Systems, Volume 79, May 2016, Pages 40-57, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.01.014.

The long-term evaluation of the Sacarino robot is presented in this paper. The study is aimed at improving the robot's capabilities as a bellboy in a hotel: walking alongside the guests, providing information about the city and the hotel, and providing hotel-related services. The paper establishes a three-stage assessment methodology based on the continuous measurement of a set of metrics regarding navigation and interaction with guests. Sacarino has been automatically collecting information in a real hotel environment for long periods of time. The acquired information has been analyzed and used to improve the robot's operation in the hotel through successive refinements. Some interesting considerations and useful hints for researchers of service robots have been extracted from the analysis of the results.

Theoretical models for explaining the human (quick) decision-making process

Roger Ratcliff, Philip L. Smith, Scott D. Brown, Gail McKoon, Diffusion Decision Model: Current Issues and History, Trends in Cognitive Sciences, Volume 20, Issue 4, April 2016, Pages 260-281, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.01.007.

There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this review, we relate the models to both earlier and more recent research in psychology.
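
A minimal simulation of the standard diffusion model described above: evidence accumulates with constant drift plus Gaussian noise until it hits one of two decision boundaries. The parameter values are illustrative:

```python
import numpy as np

def simulate_ddm(drift=0.3, boundary=1.0, noise=1.0, dt=0.001, max_t=2.0,
                 rng=np.random.default_rng(1)):
    """Accumulate noisy evidence until an absorbing boundary is hit.
    Returns (choice, reaction_time); +1 = upper boundary, -1 = lower."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else -1), t

trials = [simulate_ddm() for _ in range(1000)]
choices, rts = zip(*trials)
print(f"P(upper) = {choices.count(1) / 1000:.2f}, mean RT = {np.mean(rts):.3f} s")
```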