Author Archives: Juan-Antonio Fernández-Madrigal

A robot architecture composed of reinforcement learners for predicting and developing behaviors

Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup (2011), Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction, Proc. of 10th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2011), Tumer, Yolum, Sonenberg and Stone (eds.), May 2–6, 2011, Taipei, Taiwan, pp. 761-768.

Maintaining accurate world knowledge in a complex and changing environment is a perennial problem for robots and other artificial intelligence systems. Our architecture for addressing this problem, called Horde, consists of a large number of independent reinforcement learning sub-agents, or demons. Each demon is responsible for answering a single predictive or goal-oriented question about the world, thereby contributing in a factored, modular way to the system’s overall knowledge. The questions are in the form of a value function, but each demon has its own policy, reward function, termination function, and terminal-reward function unrelated to those of the base problem. Learning proceeds in parallel by all demons simultaneously so as to extract the maximal training information from whatever actions are taken by the system as a whole. Gradient-based temporal-difference learning methods are used to learn efficiently and reliably with function approximation in this off-policy setting. Horde runs in constant time and memory per time step, and is thus suitable for learning online in realtime applications such as robotics. We present results using Horde on a multi-sensored mobile robot to successfully learn goal-oriented behaviors and long-term predictions from off-policy experience. Horde is a significant incremental step towards a real-time architecture for efficient learning of general knowledge from unsupervised sensorimotor interaction.
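
To make the idea of a demon more concrete, here is a minimal sketch of one such sub-agent: a general value function with its own cumulant (pseudo-reward), termination (discount) and target policy, learned off-policy with a linear gradient-TD update. The class name, step sizes and exact trace handling are illustrative assumptions, not necessarily the paper's GQ(λ)-based learners.

```python
import numpy as np

class Demon:
    """One Horde demon: a general value function learned off-policy.

    A hedged sketch using linear function approximation and a GTD(lambda)-style
    two-timescale update with importance-sampling correction; not the paper's code.
    """

    def __init__(self, n_features, alpha=0.1, beta=0.01, lam=0.9):
        self.theta = np.zeros(n_features)  # primary weights (the prediction)
        self.w = np.zeros(n_features)      # secondary weights (gradient correction)
        self.e = np.zeros(n_features)      # eligibility trace
        self.alpha, self.beta, self.lam = alpha, beta, lam

    def update(self, phi, phi_next, cumulant, gamma, rho):
        """phi, phi_next: feature vectors of the current and next state.
        cumulant: this demon's reward-like signal; gamma: its termination signal.
        rho: importance ratio pi(a|s)/b(a|s) between target and behaviour policies."""
        delta = cumulant + gamma * self.theta @ phi_next - self.theta @ phi
        self.e = rho * (self.lam * gamma * self.e + phi)
        self.theta += self.alpha * (delta * self.e
                                    - gamma * (1 - self.lam) * (self.w @ self.e) * phi_next)
        self.w += self.beta * (delta * self.e - (self.w @ phi) * phi)
        return self.theta @ phi  # this demon's current prediction
```

Horde is then essentially a large array of such objects updated in parallel from the same sensorimotor stream, each asking its own question through its cumulant, termination, and policy.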

“Nexting” (predicting events that occur next, possibly at different time scales) implemented on a robot through temporal-difference learning with a large number of learners

Joseph Modayil, Adam White, Richard S. Sutton (2011), Multi-timescale Nexting in a Reinforcement Learning Robot, arXiv:1112.1133 [cs.LG] (this version to appear in the Proceedings of the Conference on the Simulation of Adaptive Behavior, 2012).

The term “nexting” has been used by psychologists to refer to the propensity of people and many other animals to continually predict what will happen next in an immediate, local, and personal sense. The ability to “next” constitutes a basic kind of awareness and knowledge of one’s environment. In this paper we present results with a robot that learns to next in real time, predicting thousands of features of the world’s state, including all sensory inputs, at timescales from 0.1 to 8 seconds. This was achieved by treating each state feature as a reward-like target and applying temporal-difference methods to learn a corresponding value function with a discount rate corresponding to the timescale. We show that two thousand predictions, each dependent on six thousand state features, can be learned and updated online at better than 10Hz on a laptop computer, using the standard TD(lambda) algorithm with linear function approximation. We show that this approach is efficient enough to be practical, with most of the learning complete within 30 minutes. We also show that a single tile-coded feature representation suffices to accurately predict many different signals at a significant range of timescales. Finally, we show that the accuracy of our learned predictions compares favorably with the optimal off-line solution.
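
The core trick is that each timescale is simply a discount rate, and each sensor signal is simply a reward-like target for an ordinary TD(lambda) learner over a shared feature vector. Here is a small sketch of that mapping, assuming the 10 Hz update rate reported in the paper; the class and variable names are mine.

```python
import numpy as np

UPDATE_HZ = 10.0  # assumed update rate, as reported in the paper

def timescale_to_gamma(seconds):
    """Discount whose time constant 1/(1 - gamma), in time steps, matches the timescale."""
    return 1.0 - 1.0 / (seconds * UPDATE_HZ)

class Nexter:
    """One nexting prediction: linear TD(lambda) with a sensor value as its target."""

    def __init__(self, n_features, gamma, alpha=0.1, lam=0.9):
        self.w = np.zeros(n_features)
        self.e = np.zeros(n_features)
        self.gamma, self.alpha, self.lam = gamma, alpha, lam

    def update(self, phi, phi_next, target):
        """target: the sensor reading (treated as a reward) observed on this step."""
        delta = target + self.gamma * self.w @ phi_next - self.w @ phi
        self.e = self.gamma * self.lam * self.e + phi
        self.w += self.alpha * delta * self.e
        return self.w @ phi  # current prediction for this signal and timescale

# e.g. one predictor per signal and per timescale, all sharing the same tile-coded phi:
# predictors = [Nexter(6000, timescale_to_gamma(t)) for t in (0.1, 1.0, 8.0)]
```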

Application of deep learning and reinforcement learning to an industrial process, with a gentle introduction to both and a clear explanation of the process and of the decisions made in building the whole control system

Johannes Günther, Patrick M. Pilarski, Gerhard Helfrich, Hao Shen, Klaus Diepold, Intelligent laser welding through representation, prediction, and control learning: An architecture with deep neural networks and reinforcement learning, Mechatronics, Volume 34, March 2016, Pages 1-11, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.09.004.

Laser welding is a widely used but complex industrial process. In this work, we propose the use of an integrated machine intelligence architecture to help address the significant control difficulties that prevent laser welding from seeing its full potential in process engineering and production. This architecture combines three contemporary machine learning techniques to allow a laser welding controller to learn and improve in a self-directed manner. As a first contribution of this work, we show how a deep, auto-encoding neural network is capable of extracting salient, low-dimensional features from real high-dimensional laser welding data. As a second contribution and novel integration step, these features are then used as input to a temporal-difference learning algorithm (in this case a general-value-function learner) to acquire important real-time information about the process of laser welding; temporally extended predictions are used in combination with deep learning to directly map sensor data to the final quality of a welding seam. As a third contribution and final part of our proposed architecture, we suggest that deep learning features and general-value-function predictions can be beneficially combined with actor–critic reinforcement learning to learn context-appropriate control policies to govern welding power in real time. Preliminary control results are demonstrated using multiple runs with a laser-welding simulator. The proposed intelligent laser-welding architecture combines representation, prediction, and control learning: three of the main hallmarks of an intelligent system. As such, we suggest that an integration approach like the one described in this work has the capacity to improve laser welding performance without ongoing and time-intensive human assistance. Our architecture therefore promises to address several key requirements of modern industry. To our knowledge, this architecture is the first demonstrated combination of deep learning and general value functions. It also represents the first use of deep learning for laser welding specifically and production engineering in general. We believe that it would be straightforward to adapt our architecture for use in other industrial and production engineering settings.
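
Roughly, the architecture chains three learners: an auto-encoder that compresses raw welding sensor frames into a few features, a general value function that predicts seam quality from those features, and an actor-critic that chooses the welding power. The sketch below only shows how such pieces could plug together; the linear stand-in for the deep auto-encoder, the update rules, dimensions and names are all my assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame, W):
    """Stand-in for the deep auto-encoder: a single linear encoding layer."""
    return np.tanh(W @ frame)

class QualityGVF:
    """TD-learned general value function predicting final seam quality."""

    def __init__(self, n, alpha=0.05, gamma=0.95):
        self.w = np.zeros(n); self.alpha = alpha; self.gamma = gamma

    def update(self, x, x_next, quality_signal):
        delta = quality_signal + self.gamma * self.w @ x_next - self.w @ x
        self.w += self.alpha * delta * x
        return self.w @ x  # current quality prediction

class ActorCritic:
    """Continuous-action actor-critic choosing the welding power."""

    def __init__(self, n, alpha_v=0.05, alpha_pi=0.01, gamma=0.95, sigma=0.1):
        self.v = np.zeros(n); self.theta = np.zeros(n)
        self.alpha_v, self.alpha_pi, self.gamma, self.sigma = alpha_v, alpha_pi, gamma, sigma

    def act(self, x):
        return self.theta @ x + self.sigma * rng.normal()  # Gaussian exploration

    def update(self, x, x_next, action, reward):
        delta = reward + self.gamma * self.v @ x_next - self.v @ x
        self.v += self.alpha_v * delta * x
        self.theta += self.alpha_pi * delta * (action - self.theta @ x) * x / self.sigma ** 2
        return delta
```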

Cognitive control: a nice set of definitions and a review of the state of the art

S. Haykin, M. Fatemi, P. Setoodeh and Y. Xue, Cognitive Control, in Proceedings of the IEEE, vol. 100, no. 12, pp. 3156-3169, Dec. 2012, DOI: 10.1109/JPROC.2012.2215773.

This paper is inspired by how cognitive control manifests itself in the human brain and does so in a remarkable way. It addresses the many facets involved in the control of directed information flow in a dynamic system, culminating in the notion of information gap, defined as the difference between relevant information (useful part of what is extracted from the incoming measurements) and sufficient information representing the information needed for achieving minimal risk. The notion of information gap leads naturally to how cognitive control can itself be defined. Then, another important idea is described, namely the two-state model, in which one is the system’s state and the other is the entropic state that provides an essential metric for quantifying the information gap. The entropic state is computed in the perceptual part (i.e., perceptor) of the dynamic system and sent to the controller directly as feedback information. This feedback information provides the cognitive controller the information needed about the environment and the system to bring reinforcement learning into play; reinforcement learning (RL), incorporating planning as an integral part, is at the very heart of cognitive control. The stage is now set for a computational experiment, involving cognitive radar wherein the cognitive controller is enabled to control the receiver via the environment. The experiment demonstrates how RL provides the mechanism for improved utilization of computational resources, and yet is able to deliver good performance through the use of planning. The paper finishes with concluding remarks.

Implementation of particle filter SLAM on FPGAs, and a good review of the state of the art on the topic

B.G. Sileshi, J. Oliver, R. Toledo, J. Gonçalves, P. Costa, On the behaviour of low cost laser scanners in HW/SW particle filter SLAM applications, Robotics and Autonomous Systems, Volume 80, June 2016, Pages 11-23, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.03.002.

Particle filters (PFs) are computationally intensive sequential Monte Carlo estimation methods with applications in the field of mobile robotics for performing tasks such as tracking, simultaneous localization and mapping (SLAM) and navigation, by dealing with the uncertainties and/or noise generated by the sensors as well as with the intrinsic uncertainties of the environment. However, the application of PFs with a large number of particles has traditionally been difficult to implement in real-time applications due to the huge number of operations they require. This work presents a hardware implementation on FPGA (field programmable gate arrays) of a PF applied to SLAM which aims to accelerate the execution time of the PF algorithm with moderate resource usage. The presented system is evaluated for different sensors including a low-cost Neato XV-11 laser scanner sensor. First, the system is validated by post-processing data provided by a realistic simulation of a differential robot, equipped with a hacked Neato XV-11 laser scanner, that navigates in the Robot@Factory competition maze. The robot was simulated using SimTwo, which is realistic simulation software that can support several types of robots. The simulator provides the robot ground truth, odometry and the laser scanner data. Then the proposed solution is further validated on standard laser scanner sensors in complex environments. The results achieved from this study confirmed the possibility of using low-cost laser scanners for different robotics applications, which is beneficial in several respects due to their cost and the increased speed provided by the SLAM algorithm running on the FPGA.
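
The computational core that the FPGA accelerates is the usual predict / weight / resample loop of a particle filter. Below is a bare-bones, localization-only sketch of one such step; the motion and measurement models are left as placeholders, and this is not the hardware partitioning used by the authors.

```python
import numpy as np

def pf_step(particles, weights, control, scan, motion_model, likelihood, rng):
    """One particle-filter update. particles: (N, 3) array of (x, y, theta) poses;
    weights: (N,) normalized weights; motion_model and likelihood are placeholders."""
    # 1. Predict: propagate each particle through the (noisy) motion model.
    particles = motion_model(particles, control, rng)
    # 2. Weight: score each particle by the likelihood of the laser scan.
    weights = weights * likelihood(particles, scan)
    weights /= weights.sum()
    # 3. Resample when the effective sample size collapses.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```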

Interesting approach to the design of complex systems based on analogies with simpler ones

Victor Ragusila, M. Reza Emami, Mechatronics by analogy and application to legged locomotion, Mechatronics, Volume 35, May 2016, Pages 173-191, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2016.02.007.

A new design methodology for mechatronic systems, dubbed as Mechatronics by Analogy (MbA), is introduced. It argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behavior through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also presented. A series of simulations show that the dynamic behavior of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently.
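
One of the "simpler models" named above, the spring-loaded inverted pendulum (SLIP), is easy to write down; the stance-phase dynamics below are the textbook point-mass-on-a-spring-leg model with illustrative parameters, included only to show the kind of tractable dynamics the Linkage Leg is compared against.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: body mass, leg stiffness, leg rest length, gravity.
M, K, L0, G = 80.0, 2.0e4, 1.0, 9.81

def slip_stance(t, state, foot=(0.0, 0.0)):
    """Stance-phase SLIP dynamics: a point mass on a massless spring leg pinned at 'foot'."""
    x, y, vx, vy = state
    dx, dy = x - foot[0], y - foot[1]
    l = np.hypot(dx, dy)
    f = K * (L0 - l)                 # spring force along the leg
    ax = f * dx / (l * M)
    ay = f * dy / (l * M) - G
    return [vx, vy, ax, ay]

# Integrate a short stance phase from a compressed-leg initial condition.
sol = solve_ivp(slip_stance, (0.0, 0.3), [0.0, 0.95, 1.0, -0.5], max_step=1e-3)
```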

Real-time trajectory generation for omnidirectional robots, and a good set of basic bibliographical references

Tamás Kalmár-Nagy, Real-time trajectory generation for omni-directional vehicles by constrained dynamic inversion, Mechatronics, Volume 35, May 2016, Pages 44-53, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.12.004.

This paper presents a computationally efficient algorithm for real-time trajectory generation for omni-directional vehicles. The algorithm uses a dynamic inversion based approach that incorporates vehicle dynamics, actuator saturation and bounded acceleration. The algorithm is compared with other trajectory generation algorithms for omni-directional vehicles. The method yields good quality trajectories and is implementable in real-time. Numerical and hardware tests are presented.
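
As a flavour of what "dynamic inversion with bounded acceleration" means, here is a toy per-axis controller for a holonomic platform modelled as a double integrator, with the commanded acceleration saturated while preserving its direction. The gains, limits and saturation scheme are illustrative assumptions, not the constrained-inversion algorithm of the paper.

```python
import numpy as np

A_MAX = 1.0   # m/s^2, actuator-limited acceleration (assumed)
DT = 0.02     # s, control period (assumed)

def step(pos, vel, target, kp=4.0, kd=4.0):
    """One control step: invert the double-integrator dynamics to drive 'pos' toward
    'target', then clip the commanded acceleration to respect the actuator bound."""
    acc_cmd = kp * (target - pos) + kd * (0.0 - vel)   # desired acceleration
    norm = np.linalg.norm(acc_cmd)
    if norm > A_MAX:                                   # saturate, keeping direction
        acc_cmd *= A_MAX / norm
    vel = vel + acc_cmd * DT
    pos = pos + vel * DT
    return pos, vel
```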

Improvements on the ICP algorithm for point cloud registration from a low-precision RGB-D sensor

Rogério Yugo Takimoto, Marcos de Sales Guerra Tsuzuki, Renato Vogelaar, Thiago de Castro Martins, André Kubagawa Sato, Yuma Iwao, Toshiyuki Gotoh, Seiichiro Kagei, 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor, Mechatronics, Volume 35, May 2016, Pages 11-22, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.10.014.

A 3D reconstruction method using feature points is presented and the parameters used to improve the reconstruction are discussed. The precision of the 3D reconstruction is improved by combining point clouds obtained from different viewpoints using structured light. A well-known algorithm for point cloud registration is ICP (Iterative Closest Point), which determines the rotation and translation that, when applied to one of the point clouds, optimally aligns both point clouds. The ICP algorithm iteratively executes two main steps: point correspondence determination and rigid registration. The point correspondence determination is a step that, if not properly executed, can make ICP converge to a local minimum. To overcome this drawback, two techniques were used: a meaningful set of 3D points was obtained using SIFT (Scale-Invariant Feature Transform), and an ICP variant was implemented that uses statistics to generate a dynamic distance and color threshold on the distance allowed between closest points. The reconstruction precision was further improved by using the meaningful point clouds and ICP to increase the number of points in 3D space. The surface reconstruction is performed using marching cubes and filters to remove the noise and to smooth the surface. The factors that influence the 3D reconstruction precision are discussed and analyzed, including a detailed discussion of the number of frames used by ICP and of the ICP parameters.
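
For reference, here is a bare-bones point-to-point ICP iteration showing the two steps named above (correspondence search, then closed-form rigid registration), with a simple mean-plus-k-sigma rejection standing in for the authors' statistical distance and color gating.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(source, target, k_sigma=2.0):
    """One ICP iteration. source: (N, 3), target: (M, 3) point clouds.
    Returns the rotation R, translation t, and the aligned source cloud."""
    # 1. Correspondences: closest target point for each source point,
    #    gated by a simple dynamic distance threshold.
    tree = cKDTree(target)
    dist, idx = tree.query(source)
    keep = dist < dist.mean() + k_sigma * dist.std()
    src, dst = source[keep], target[idx[keep]]
    # 2. Registration: closed-form rigid alignment (Kabsch / SVD).
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t, source @ R.T + t
```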

Calculating (experimental) probability distributions of the execution time of sequential software

Laurent David, Isabelle Puaut, Static Determination of Probabilistic Execution Times, Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS’04).

Most previous research done in probabilistic schedulability analysis assumes a known distribution of execution times for each task of a real-time application. This distribution is, however, not trivial to determine with a high level of confidence. Methods based on measurements are often biased, since they are in general not exhaustive over all possible execution paths, whereas methods based on static analysis are mostly Worst-Case Execution Time (WCET) oriented. Using static analysis, this work proposes a method to obtain probabilistic distributions of execution times. It assumes that the given real-time application is divided into multiple tasks, whose source code is known. Ignoring hardware considerations and relying only on the source code of the tasks, the proposed technique allows designers to associate with any execution path an execution time and a probability of following that path. A source code example is presented to illustrate the method.
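
The idea can be illustrated with a toy enumeration (nothing like the paper's static-analysis machinery): each branch contributes a probability and a cost per outcome, a path is one outcome per branch, and its probability and execution time are the product and sum over those branches. All numbers below are made up.

```python
from itertools import product

base_cost = 50                       # cycles of straight-line code (assumed)
branches = [                         # (prob_true, cost_true, cost_false) per branch
    (0.9, 10, 40),
    (0.5, 25, 25),
    (0.2, 100, 5),
]

distribution = {}                    # execution time -> probability
for outcome in product([True, False], repeat=len(branches)):
    p, t = 1.0, base_cost
    for taken, (p_true, c_true, c_false) in zip(outcome, branches):
        p *= p_true if taken else (1.0 - p_true)
        t += c_true if taken else c_false
    distribution[t] = distribution.get(t, 0.0) + p

for t in sorted(distribution):
    print(f"{t} cycles with probability {distribution[t]:.3f}")
```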

Probability distribution (pdf) of the WCET of code execution

S. Edgar and A. Burns, Statistical analysis of WCET for scheduling, Proceedings of the 22nd IEEE Real-Time Systems Symposium (RTSS 2001), 2001, pp. 215-224. DOI: 10.1109/REAL.2001.990614.

To perform a schedulability test, scheduling analysis relies on a known worst-case execution time (WCET). This value may be difficult to compute and may be overly pessimistic. This paper offers an alternative analysis based on estimating a WCET from test data to within a specific level of probabilistic confidence. A method is presented for calculating an estimate given statistical assumptions. The implications of the level of confidence on the likelihood of schedulability are also presented.
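
In spirit, the approach estimates a WCET by fitting an extreme-value model to measured execution times and reading off a bound at a chosen confidence level. Below is a hedged sketch of that general idea using a Gumbel fit, not the paper's exact estimator; the data file name and confidence levels are hypothetical.

```python
import numpy as np
from scipy import stats

# Measured execution times (e.g. in microseconds) from test runs; hypothetical file.
samples = np.loadtxt("execution_times_us.txt")

# Fit a Gumbel (type-I extreme value) distribution to the measurements.
loc, scale = stats.gumbel_r.fit(samples)

# WCET estimate at several confidence levels: the quantile of the fitted distribution.
for confidence in (0.99, 0.999, 0.9999):
    wcet_estimate = stats.gumbel_r.ppf(confidence, loc=loc, scale=scale)
    print(f"P(execution time <= {wcet_estimate:.1f} us) >= {confidence}")
```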