Interesting and gentle introduction to WCET analysis and synchronous design for hard real-time systems

Pascal Raymond, Claire Maiza, Catherine Parent-Vigouroux, Fabienne Carrier, Mihail Asavoae, 2015, Timing analysis enhancement for synchronous program, Real-Time Systems, Volume 51, Issue 2, pp 192-220, DOI: 10.1007/s11241-015-9219-y.

Real-time critical systems can be considered as correct if they compute both right and fast enough. Functionality aspects (computing right) can be addressed using high level design methods, such as the synchronous approach that provides languages, compilers and verification tools. Real-time aspects (computing fast enough) can be addressed with static timing analysis, that aims at discovering safe bounds on the worst-case execution time (WCET) of the binary code. In this paper, we aim at improving the estimated WCET in the case where the binary code comes from a high-level synchronous design. The key idea is that some high-level functional properties may imply that some execution paths of the binary code are actually infeasible, and thus, can be removed from the worst-case candidates. In order to automatize the method, we show (1) how to trace semantic information between the high-level design and the executable code, (2) how to use a model-checker to prove infeasibility of some execution paths, and (3) how to integrate such infeasibility information into an existing timing analysis framework. Based on a realistic example, we show that there is a large possible improvement for a reasonable computation time overhead.
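
To make the infeasible-path idea concrete, here is a minimal, self-contained Python sketch (not the authors' tool chain; the block timings and the mode-exclusion property are invented): a toy control-flow graph where a high-level property rules out the longest syntactic path, so the WCET bound tightens once that path is discarded.

    # Toy illustration: excluding an infeasible path tightens the WCET bound.
    from itertools import product

    BLOCK_WCET = {"A": 10, "B1": 50, "B2": 5, "C1": 40, "C2": 5, "D": 10}

    def path_wcet(path):
        return sum(BLOCK_WCET[b] for b in path)

    # Two successive branches: take B1 or B2, then C1 or C2.
    all_paths = [("A", b, c, "D") for b, c in product(["B1", "B2"], ["C1", "C2"])]

    # Naive bound: maximum over every syntactic path.
    naive = max(path_wcet(p) for p in all_paths)

    # Suppose the synchronous design guarantees that B1 and C1 are exclusive
    # (e.g. they belong to different modes of the same automaton).
    def feasible(path):
        return not ("B1" in path and "C1" in path)

    refined = max(path_wcet(p) for p in all_paths if feasible(p))

    print(naive, refined)   # 110 vs 75: the infeasible path no longer dominates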

A survey of semantic mapping for mobile robots

Ioannis Kostavelis, Antonios Gasteratos, 2015, Semantic mapping for mobile robotics tasks: A survey, Robotics and Autonomous Systems, Volume 66, April 2015, Pages 86-103, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.12.006.

The evolution of contemporary mobile robotics has given thrust to a series of conjunct technologies. Among these is semantic mapping, which provides an abstraction of space and a means for human–robot communication. The recent introduction and evolution of semantic mapping motivated this survey, in which an explicit analysis of the existing methods is sought. The surveyed algorithms are categorized according to their primary characteristics, namely scalability, inference model, temporal coherence and topological map usage. The applications involving semantic maps are also outlined in the work at hand, with emphasis on human interaction, knowledge representation and planning. The existence of publicly available validation datasets and benchmarks suitable for the evaluation of semantic mapping techniques is also discussed in detail. Last, an attempt to address open issues and questions is made.

Novel recursive Bayesian estimator based on approximating pdfs with polynomials and keeping a hypothesis for each of their modes

Huang, G.; Zhou, K.; Trawny, N.; Roumeliotis, S.I., (2015), A Bank of Maximum A Posteriori (MAP) Estimators for Target Tracking, IEEE Transactions on Robotics, vol. 31, no. 1, pp. 85-103. DOI: 10.1109/TRO.2014.2378432.

Nonlinear estimation problems, such as range-only and bearing-only target tracking, are often addressed using linearized estimators, e.g., the extended Kalman filter (EKF). These estimators generally suffer from linearization errors as well as the inability to track multimodal probability density functions. In this paper, we propose a bank of batch maximum a posteriori (MAP) estimators as a general estimation framework that provides relinearization of the entire state trajectory, multihypothesis tracking, and an efficient hypothesis generation scheme. Each estimator in the bank is initialized using a locally optimal state estimate for the current time step. Every time a new measurement becomes available, we relax the original batch-MAP problem and solve it incrementally. More specifically, we convert the relaxed one-step-ahead cost function into polynomial or rational form and compute all the local minima analytically. These local minima generate highly probable hypotheses for the target’s trajectory and hence greatly improve the quality of the overall MAP estimate. Additionally, pruning of least probable hypotheses and marginalization of old states are employed to control the computational cost. Monte Carlo simulation and real-world experimental results show that the proposed approach significantly outperforms the standard EKF, the batch-MAP estimator, and the particle filter.
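
The hypothesis-generation step relies on the fact that the local minima of a polynomial cost can be enumerated analytically from the roots of its derivative. Below is a minimal numpy sketch of that idea on an invented univariate cost; the paper's relaxed one-step batch-MAP objective is problem-specific and higher-dimensional.

    import numpy as np

    # Invented one-step cost c(x), given by its polynomial coefficients
    # (highest degree first): x^4 - 4x^2 + x + 3.
    c = np.poly1d([1.0, 0.0, -4.0, 1.0, 3.0])
    dc = c.deriv()      # stationary points are roots of the derivative
    d2c = dc.deriv()

    roots = dc.roots
    real = roots[np.abs(roots.imag) < 1e-9].real
    local_minima = [x for x in real if d2c(x) > 0]   # keep positive curvature only

    # Each local minimum would seed one MAP estimator in the bank,
    # i.e. one hypothesis about the target's state.
    print(sorted(local_minima, key=c))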

Novel algorithm for inexact matching of moderate-size graphs based on Gaussian process regression

Serradell, E.; Pinheiro, M.A.; Sznitman, R.; Kybic, J.; Moreno-Noguer, F.; Fua, P., (2015), Non-Rigid Graph Registration Using Active Testing Search, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 625-638. DOI: 10.1109/TPAMI.2014.2343235.

We present a new approach for matching sets of branching curvilinear structures that form graphs embedded in R^2 or R^3 and may be subject to deformations. Unlike earlier methods, ours does not rely on local appearance similarity nor does it require a good initial alignment. Furthermore, it can cope with non-linear deformations, topological differences, and partial graphs. To handle arbitrary non-linear deformations, we use Gaussian process regressions to represent the geometrical mapping relating the two graphs. In the absence of appearance information, we iteratively establish correspondences between points, update the mapping accordingly, and use it to estimate where to find the most likely correspondences that will be used in the next step. To make the computation tractable for large graphs, the set of new potential matches considered at each iteration is not selected at random as with many RANSAC-based algorithms. Instead, we introduce a so-called Active Testing Search strategy that performs a priority search to favor the most likely matches and speed up the process. We demonstrate the effectiveness of our approach first on synthetic cases and then on angiography data, retinal fundus images, and microscopy image stacks acquired at very different resolutions.
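
As a rough picture of the geometric core, the sketch below fits a GP posterior mean to a few tentative node correspondences and uses it to predict where an unmatched node of one graph should land in the other, which is what guides the search for the next matches. The kernel, length-scale, noise level and coordinates are all illustrative, not taken from the paper.

    import numpy as np

    def rbf(A, B, ell=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    def gp_map(X_src, X_dst, Q, noise=1e-3):
        """Posterior mean of a GP mapping R^2 -> R^2, fitted on the current
        correspondences X_src -> X_dst and evaluated at query nodes Q."""
        K = rbf(X_src, X_src) + noise * np.eye(len(X_src))
        return rbf(Q, X_src) @ np.linalg.solve(K, X_dst)

    # Three tentative correspondences between the two graphs (made up).
    X_src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    X_dst = np.array([[0.1, 0.0], [1.2, 0.1], [0.0, 1.1]])

    # Predict where an unmatched source node should appear in the target,
    # then look for target nodes near that prediction as candidate matches.
    print(gp_map(X_src, X_dst, np.array([[0.5, 0.5]])))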

Demonstration that students benefit from the use of colors in teaching electrical circuit analysis

Reisslein, J.; Johnson, A.M.; Reisslein, M., (2015), Color Coding of Circuit Quantities in Introductory Circuit Analysis Instruction, IEEE Transactions on Education, vol. 58, no. 1, pp. 7-14, DOI: 10.1109/TE.2014.2312674

Learning the analysis of electrical circuits represented by circuit diagrams is often challenging for novice students. An open research question in electrical circuit analysis instruction is whether color coding of the mathematical symbols (variables) that denote electrical quantities can improve circuit analysis learning. The present study compared two groups of high school students undergoing their first introductory learning of electrical circuit analysis. One group learned with circuit variables in black font. The other group learned with colored circuit variables, with blue font indicating variables related to voltage, red font indicating those related to current, and black font indicating those related to resistance. The color group achieved significantly higher post-test scores, gave higher ratings for liking the instruction and finding it helpful, and had lower ratings of cognitive load than the black-font group. These results indicate that color coding of the notations for quantities in electrical circuit diagrams aids the circuit analysis learning of novice students.

Mental imagery for a mobile robot that learns obstacle avoidance

Wilmer Gaona, Esaú Escobar, Jorge Hermosillo, Bruno Lara (2015), Anticipation by multi-modal association through an artificial mental imagery process, Connection Science, 27:1, 68-88, DOI: 10.1080/09540091.2014.95628

Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce an anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and carried out two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, thus opening interesting pathways to the construction of upper-level grounded cognitive abilities.
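
The association step can be pictured as an outer-product Hebbian update between a visual and a tactile feature vector, applied to pairs re-enacted by the internal simulation rather than gathered by physical contact. The following numpy sketch is only a caricature of that mechanism; the dimensions, the coupling and the learning rate are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visual, n_tactile = 8, 4

    # Stand-in for the coupling the robot experiences: some visual patterns
    # (an obstacle looming) co-occur with tactile activation (bumper contact).
    P = rng.random((n_tactile, n_visual))

    W = np.zeros((n_tactile, n_visual))   # learned visual -> tactile association
    eta = 0.01                            # learning rate (illustrative)

    # Pairs produced by an internal sensorimotor simulation ("mental imagery")
    # instead of by physically bumping into obstacles.
    for _ in range(1000):
        v = rng.random(n_visual)
        t = np.tanh(P @ v)
        W += eta * np.outer(t, v)         # Hebbian rule: co-active units bind

    # A visual pattern alone now recalls an anticipated tactile response,
    # which can trigger avoidance before any real contact occurs.
    print(W @ rng.random(n_visual))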

Active exploration strategy for RL in robots, and approximation of value function by Gaussian processes

Jen Jen Chung, Nicholas R.J. Lawrance, Salah Sukkarieh (2015), Learning to soar: Resource-constrained exploration in reinforcement learning, The International Journal of Robotics Research vol. 34, pp. 158-172. DOI: 10.1177/0278364914553683

This paper examines temporal difference reinforcement learning with adaptive and directed exploration for resource-limited missions. The scenario considered is that of an unpowered aerial glider learning to perform energy-gaining flight trajectories in a thermal updraft. The presented algorithm, eGP-SARSA(λ), uses a Gaussian process regression model to estimate the value function in a reinforcement learning framework. The Gaussian process also provides a variance on these estimates that is used to measure the contribution of future observations to the Gaussian process value function model in terms of information gain. To avoid myopic exploration we developed a resource-weighted objective function that combines an estimate of the future information gain using an action rollout with the estimated value function to generate directed explorative action sequences. A number of modifications and computational speed-ups to the algorithm are presented along with a standard GP-SARSA(λ) implementation with ε-greedy exploration to compare the respective learning performances. The results show that under this objective function, the learning agent is able to continue exploring for better state-action trajectories when platform energy is high and follow conservative energy-gaining trajectories when platform energy is low.
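
The action-selection rule can be summarised as scoring each candidate action by its estimated value plus an information-gain bonus whose weight depends on the remaining energy. The numpy sketch below is a schematic reading of that objective with invented names and numbers, not the authors' implementation (which also rolls the objective out over action sequences).

    import numpy as np

    def info_gain(sigma2, noise=0.1):
        # Entropy reduction of a GP observation with predictive variance sigma2.
        return 0.5 * np.log(1.0 + sigma2 / noise)

    def select_action(q_mean, q_var, energy, e_max):
        """Resource-weighted objective: explore while energy is high,
        fall back to value-greedy choices when it is low."""
        w = np.clip(energy / e_max, 0.0, 1.0)   # illustrative weighting
        return int(np.argmax(q_mean + w * info_gain(q_var)))

    # Three candidate actions; the third has low value but high uncertainty.
    q_mean = np.array([1.0, 0.8, 0.6])
    q_var = np.array([0.01, 0.05, 2.0])

    print(select_action(q_mean, q_var, energy=9.0, e_max=10.0))  # explores: 2
    print(select_action(q_mean, q_var, energy=1.0, e_max=10.0))  # exploits: 0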

Abstracting and representing tasks performed under Learning from Demonstration, using Bayesian non-parametric time-series analysis (good review of both LfD and HMMs for time-series)

Scott Niekum, Sarah Osentoski, George Konidaris, Sachin Chitta, Bhaskara Marthi, Andrew G. Barto (2015), Learning grounded finite-state representations from unstructured demonstrations, The International Journal of Robotics Research, vol. 34, pp. 131-157. DOI: 10.1177/0278364914554471

Robots exhibit flexible behavior largely in proportion to their degree of knowledge about the world. Such knowledge is often meticulously hand-coded for a narrow class of tasks, limiting the scope of possible robot competencies. Thus, the primary limiting factor of robot capabilities is often not the physical attributes of the robot, but the limited time and skill of expert programmers. One way to deal with the vast number of situations and environments that robots face outside the laboratory is to provide users with simple methods for programming robots that do not require the skill of an expert. For this reason, learning from demonstration (LfD) has become a popular alternative to traditional robot programming methods, aiming to provide a natural mechanism for quickly teaching robots. By simply showing a robot how to perform a task, users can easily demonstrate new tasks as needed, without any special knowledge about the robot. Unfortunately, LfD often yields little knowledge about the world, and thus lacks robust generalization capabilities, especially for complex, multi-step tasks. We present a series of algorithms that draw from recent advances in Bayesian non-parametric statistics and control theory to automatically detect and leverage repeated structure at multiple levels of abstraction in demonstration data. The discovery of repeated structure provides critical insights into task invariants, features of importance, high-level task structure, and appropriate skills for the task. This culminates in the discovery of a finite-state representation of the task, composed of grounded skills that are flexible and reusable, providing robust generalization and transfer in complex, multi-step robotic tasks. These algorithms are tested and evaluated using a PR2 mobile manipulator, showing success on several complex real-world tasks, such as furniture assembly.
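
Once the demonstrations have been segmented into repeated skills (the Bayesian non-parametric step), the finite-state task representation amounts to a transition structure over skill labels. The short Python sketch below shows only that last step, with hand-made label sequences standing in for the output of the segmentation; the skill names are invented.

    from collections import defaultdict

    # Skill-label sequences, as if produced by segmenting several
    # furniture-assembly demonstrations (labels are invented).
    demos = [
        ["grasp_leg", "align_leg", "screw", "grasp_leg", "align_leg", "screw"],
        ["grasp_leg", "align_leg", "screw", "flip_table"],
        ["grasp_leg", "screw", "flip_table"],
    ]

    # Count observed skill-to-skill transitions across all demonstrations.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in demos:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1

    # Normalise into a finite-state representation: for each skill, the
    # possible successors and their empirical probabilities.
    fsm = {s: {n: c / sum(succ.values()) for n, c in succ.items()}
           for s, succ in counts.items()}

    for skill, succ in fsm.items():
        print(skill, "->", succ)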

Scientific limitations of the non-scientific idea that a superintelligence will come (and exterminate humans)

Ernest Davis, Ethical guidelines for a superintelligence, Artificial Intelligence, Volume 220, March 2015, Pages 121-124, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.12.003.

Nick Bostrom, in his new book Superintelligence, argues that the creation of an artificial intelligence with human-level intelligence will be followed fairly soon by the existence of an almost omnipotent superintelligence, with consequences that may well be disastrous for humanity. He considers that it is therefore a top priority for mankind to figure out how to imbue such a superintelligence with a sense of morality; however, he considers that this task is very difficult. I discuss a number of flaws in his analysis, particularly the viewpoint that implementing ethical behavior is an especially difficult problem in AI research.

Solving the problem of the slow learning rate of reinforcement learning by learning the transition model from data

Deisenroth, M.P.; Fox, D.; Rasmussen, C.E., (2015), Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 2, pp. 408-423, DOI: 10.1109/TPAMI.2013.218

Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning makes it possible to reduce the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
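
The data efficiency comes from fitting a probabilistic model of the transition dynamics and planning through it. The numpy sketch below shows only the model-learning half on an invented 1-D system, fitting a GP to (state, action) -> next-state data; the kernel, noise level and dynamics are illustrative, and the paper additionally propagates the model uncertainty through multi-step rollouts for policy search.

    import numpy as np

    rng = np.random.default_rng(1)

    def rbf(A, B, ell=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    # A handful of interactions with a toy 1-D system: inputs are
    # (state, action) pairs, targets are the observed next state.
    X = rng.uniform(-1, 1, size=(30, 2))
    y = 0.9 * X[:, 0] + 0.3 * np.sin(2.0 * X[:, 1]) + 0.01 * rng.standard_normal(30)

    noise = 1e-4
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)

    def predict(x_star):
        """GP posterior mean and variance of the next state at (state, action)."""
        k = rbf(x_star[None, :], X)[0]
        return k @ alpha, 1.0 - k @ np.linalg.solve(K, k)

    mean, var = predict(np.array([0.2, -0.5]))
    print(mean, var)   # the variance is what keeps long-term planning cautious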