Soft real-time scheduling in distributed systems based on the accrued utility of distributable threads, including in the presence of node failures

Ravindran B., Anderson J. S., Jensen E. D., On Distributed Real-Time Scheduling in Networked Embedded Systems in the Presence of Crash Failures, Lecture Notes in Computer Science, vol. 4761, pp. 67-81, DOI: 10.1007/978-3-540-75664-4_8.

We consider the problem of scheduling distributable real-time threads in networked embedded systems that operate under run-time uncertainties, including those on thread execution times, thread arrivals, and node failure occurrences. We present a distributed scheduling algorithm called CUA. We show that CUA satisfies thread time constraints in the presence of crash failures, is early-deciding, has an efficient message complexity of O(fn) (where f is the number of crashes that actually occur and n is the number of nodes), and is time-optimal with a time lower bound of O(D + fd + nk) (where D is the message delay upper bound, d is the failure detection bound, and k is the maximum number of threads). In crash-free runs, the algorithm constructs schedules within O(D + nk), and yields optimal total utility if nodes are also not overloaded. The algorithm is also “best-effort” in that a high-importance thread that may arrive at any time has a very high likelihood of feasible completion (in contrast to classical admission control algorithms, which favor feasible completion of admitted threads over admitting new ones, irrespective of thread importance).
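
To make the utility-accrual idea concrete, here is a minimal Python sketch of the scheduling principle such algorithms build on (ordering work by potential utility density and keeping only the items that remain feasible). It is an illustration of the concept under simplifying assumptions, not the authors' CUA algorithm, and all names in it are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Section:
        name: str
        exec_time: float   # remaining execution time of this thread section
        deadline: float    # absolute termination time of the thread
        utility: float     # utility accrued if it completes in time

    def build_schedule(sections, now=0.0):
        # Order by potential utility density (utility per unit of work),
        # then keep only the sections that remain feasible in that order.
        ordered = sorted(sections, key=lambda s: s.utility / s.exec_time,
                         reverse=True)
        schedule, t = [], now
        for s in ordered:
            if t + s.exec_time <= s.deadline:   # feasibility test
                schedule.append(s.name)
                t += s.exec_time
        return schedule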

A framework to manage and switch between several sensor modalities in tele-operation

Andrea Cherubini, Robin Passama, Philippe Fraisse, André Crosnier, A unified multimodal control framework for human–robot interaction, Robotics and Autonomous Systems, Volume 70, August 2015, Pages 106-115, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.03.002.

In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller, enabling a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism, and in contrast with classical hybrid vision–force–position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously, for human–robot collaboration.
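
The weighted combination of sensor tasks can be pictured with a short Python sketch. This is my reading of the general idea (blending per-task commands with smoothly varying weights), with illustrative names and gains; it is not the authors' controller.

    import numpy as np

    def smooth_weight(t, t0, t1):
        # Smoothstep from 0 to 1 between t0 and t1, so activating or
        # deactivating a task does not cause command discontinuities.
        x = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
        return 3 * x**2 - 2 * x**3

    def blended_command(jacobians, errors, gains, weights):
        # Each task i contributes a joint-velocity command through the
        # pseudo-inverse of its task Jacobian; weights blend the tasks.
        dq = np.zeros(jacobians[0].shape[1])
        for J, e, lam, h in zip(jacobians, errors, gains, weights):
            dq += h * lam * (np.linalg.pinv(J) @ e)
        return dq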

Reinforcement learning when a human provides the rewards to the algorithm

W. Bradley Knox, Peter Stone, Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance, Artificial Intelligence, Volume 225, August 2015, Pages 24-50, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.03.009.

Several studies have demonstrated that reward from a human trainer can be a powerful feedback signal for control-learning algorithms. However, the space of algorithms for learning from such human reward has hitherto not been explored systematically. Using model-based reinforcement learning from human reward, this article investigates the problem of learning from human reward through six experiments, focusing on the relationships between reward positivity, which is how generally positive a trainer’s reward values are; temporal discounting, the extent to which future reward is discounted in value; episodicity, whether task learning occurs in discrete learning episodes instead of one continuing session; and task performance, the agent’s performance on the task the trainer intends to teach. This investigation is motivated by the observation that an agent can pursue different learning objectives, leading to different resulting behaviors. We search for learning objectives that lead the agent to behave as the trainer intends.
We identify and empirically support a “positive circuits” problem with low discounting (i.e., high discount factors) for episodic, goal-based tasks that arises from an observed bias among humans towards giving positive reward, resulting in an endorsement of myopic learning for such domains. We then show that converting simple episodic tasks to be non-episodic (i.e., continuing) reduces and in some cases resolves issues present in episodic tasks with generally positive reward and—relatedly—enables highly successful learning with non-myopic valuation in multiple user studies. The primary learning algorithm introduced in this article, which we call “vi-tamer”, is the first algorithm to successfully learn non-myopically from reward generated by a human trainer; we also empirically show that such non-myopic valuation facilitates higher-level understanding of the task. Anticipating the complexity of real-world problems, we perform further studies—one with a failure state added—that compare (1) learning when states are updated asynchronously with local bias—i.e., states quickly reachable from the agent’s current state are updated more often than other states—to (2) learning with the fully synchronous sweeps across each state in the vi-tamer algorithm. With these locally biased updates, we find that the general positivity of human reward creates problems even for continuing tasks, revealing a distinct research challenge for future work.
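
A toy Python sketch of the contrast between myopic and non-myopic valuation of human reward, in the spirit of value iteration over a learned reward model. The function names, deterministic dynamics, and parameters are assumptions for illustration, not the paper's vi-tamer implementation.

    import numpy as np

    def value_iteration(P, R_hat, gamma, iters=200):
        # P[s, a] -> next state (deterministic toy dynamics);
        # R_hat[s, a] -> reward predicted by the learned human-reward model.
        V = np.zeros(R_hat.shape[0])
        for _ in range(iters):
            Q = R_hat + gamma * V[P]   # indexing V by P looks up next states
            V = Q.max(axis=1)
        return V

    # gamma = 0 recovers the myopic objective (act on immediate human
    # reward); gamma near 1 values long reward sequences, which with
    # mostly-positive human reward can create the "positive circuits"
    # problem described above.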

Real-time heuristic search with a weighted heuristic function

Nicolás Rivera, Jorge A. Baier, Carlos Hernández, Incorporating weights into real-time heuristic search, Artificial Intelligence, Volume 225, August 2015, Pages 1-23, ISSN 0004-3702, DOI: 10.1016/j.artint.2015.03.008.

Multiplying the heuristic function by a weight greater than one is a well-known technique in heuristic search. When this technique is applied to A* with an admissible heuristic it yields substantial runtime savings, at the expense of sacrificing solution optimality. Its applicability to real-time heuristic search, a search approach that builds upon heuristic search, however, has only been explored by a few studies. In this article we present two new approaches to using weights in real-time heuristic search, applicable to a wide range of algorithms. The first one, weighted lookahead, is a variant of an existing approach by Shimbo and Ishida, and utilizes the weight while the algorithm performs lookahead search. The second one, weighted update, incorporates the weight into the edges of the search graph during the learning phase. We implemented both techniques within LSS-LRTA* and evaluated them in path-planning benchmarks. We show that weighted lookahead outperforms an existing approach by Shimbo and Ishida but that it does not improve over existing approaches that do not use weights. Weighted update, on the other hand, yields performance improvements of up to one order of magnitude both in solution cost and total search time. To illustrate further the generality of weighted update, we incorporate the technique in two other well-known real-time heuristic search algorithms: LRTA*-LS and daLSS-LRTA*, and we empirically show significant improvements for LRTA*-LS and modest but still important improvements for daLSS-LRTA*. We analyze the properties of weighted update in depth, showing, among other things, that it guarantees termination. Convergence behavior of LSS-LRTA*, modified to use weighted update, is also analyzed. In such a setting, we prove solutions are w-optimal, and provide additional bounds on solution quality that in practice are tighter than w-optimality.
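
The weighted-update rule can be sketched in a few lines of Python. This is an illustration of the idea of inflating edge costs by w while heuristic values are propagated; the dict-based graph representation and names are mine, and the real LSS-LRTA* restricts updates to the local search space rather than sweeping all states.

    def weighted_update(h, succ, cost, states, w):
        # h: heuristic values (dict); goal states are excluded from
        # `states` so their h stays fixed at 0. succ[s]: successors of s;
        # cost[(s, t)]: edge cost, inflated by the weight w.
        changed = True
        while changed:
            changed = False
            for s in states:
                best = min(w * cost[(s, t)] + h[t] for t in succ[s])
                if best > h[s]:
                    h[s] = best
                    changed = True
        return h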

Reinforcement learning applied to decide where a Neural Turing Machine accesses its memory, while backpropagation learns what to write to it

Wojciech Zaremba, Ilya Sutskever, Reinforcement Learning Neural Turing Machines, arXiv.org, arXiv:1505.00521.

The expressive power of a machine learning model is closely related to the number of sequential computational steps it can learn. For example, Deep Neural Networks have been more successful than shallow networks because they can perform a greater number of sequential computational steps (each highly parallel). The Neural Turing Machine (NTM) is a model that can compactly express an even greater number of sequential computational steps, so it is even more powerful than a DNN. Its memory addressing operations are designed to be differentiable; thus the NTM can be trained with backpropagation.
While differentiable memory is relatively easy to implement and train, it necessitates accessing the entire memory content at each computational step. This makes it difficult to implement a fast NTM. In this work, we use the Reinforce algorithm to learn where to access the memory, while using backpropagation to learn what to write to the memory. We call this model the RL-NTM. Reinforce allows our model to access a constant number of memory cells at each computational step, so its implementation can be faster. The RL-NTM is the first model that can, in principle, learn programs of unbounded running time. We successfully trained the RL-NTM to solve a number of algorithmic tasks that are simpler than the ones solvable by the fully differentiable NTM.
As the RL-NTM is a fairly intricate model, we needed a method for verifying the correctness of our implementation. To do so, we developed a simple technique for numerically checking arbitrary implementations of models that use Reinforce, which may be of independent interest.
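
A minimal Python sketch of the REINFORCE estimator for a discrete memory-address choice, the mechanism the RL-NTM relies on. This toy stand-in assumes a softmax address distribution and is not the authors' implementation.

    import numpy as np

    def sample_address(logits, rng):
        # Softmax over memory addresses, then a hard (sampled) choice.
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(len(p), p=p)
        return a, p

    def reinforce_grad(a, p, reward, baseline):
        # For a softmax, d log p(a) / d logits = one_hot(a) - p; scaling by
        # (reward - baseline) gives the REINFORCE gradient estimate.
        one_hot = np.zeros_like(p)
        one_hot[a] = 1.0
        return (reward - baseline) * (one_hot - p)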

A Bayesian framework to explain magnitude estimation in the human mind

Frederike H. Petzschner, Stefan Glasauer, Klaas E. Stephan, A Bayesian perspective on magnitude estimation, Trends in Cognitive Sciences, Volume 19, Issue 5, May 2015, Pages 285-293, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.03.002.

Our representation of the physical world requires judgments of magnitudes, such as loudness, distance, or time. Interestingly, magnitude estimates are often not veridical but subject to characteristic biases. These biases are strikingly similar across different sensory modalities, suggesting common processing mechanisms that are shared by different sensory systems. However, the search for universal neurobiological principles of magnitude judgments requires guidance by formal theories. Here, we discuss a unifying Bayesian framework for understanding biases in magnitude estimation. This Bayesian perspective enables a re-interpretation of a range of established psychophysical findings, reconciles seemingly incompatible classical views on magnitude estimation, and can guide future investigations of magnitude estimation and its neurobiological mechanisms in health and in psychiatric diseases, such as schizophrenia.
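
A common instantiation of this framework models both the prior over magnitudes and the sensory likelihood as Gaussians, in which case the estimate is a precision-weighted average that reproduces the classic central-tendency bias. The following Python sketch, with illustrative parameters, shows small magnitudes being overestimated and large ones underestimated.

    def bayes_estimate(x, sigma_l, mu_p, sigma_p):
        # Posterior mean: precision-weighted average of the noisy
        # measurement x and the prior mean mu_p.
        w = sigma_p**2 / (sigma_p**2 + sigma_l**2)
        return w * x + (1 - w) * mu_p

    # Small magnitudes are overestimated, large ones underestimated:
    for x in (2.0, 5.0, 8.0):
        print(x, '->', bayes_estimate(x, sigma_l=1.0, mu_p=5.0, sigma_p=1.0))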

Reinforcement learning to explain emotions

Joost Broekens, Elmer Jacobs & Catholijn M. Jonker, A reinforcement learning model of joy, distress, hope and fear, Connection Science, DOI: 10.1080/09540091.2015.1031081.

In this paper we computationally study the relation between adaptive behaviour and emotion. Using the reinforcement learning framework, we propose that learned state utility, V(s), models fear (negative) and hope (positive) based on the fact that both signals are about anticipation of loss or gain. Further, we propose that joy/distress is a signal similar to the error signal. We present agent-based simulation experiments that show that this model replicates psychological and behavioural dynamics of emotion. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework – coupling it to the literature on habituation, development, extinction and hope theory. Our results support the idea that the function of emotion is to provide a complex feedback signal for an organism to adapt its behaviour. Our work is relevant for understanding the relation between emotion and adaptation in animals, as well as for human–robot interaction, in particular how emotional signals can be used to communicate between adaptive agents and humans.
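
A few lines of Python can sketch the proposed mapping as I read it (the variable names and exact sign conventions are mine): hope/fear follows the learned state value, joy/distress follows the temporal-difference error.

    def td_step(V, s, r, s_next, alpha=0.1, gamma=0.9):
        # delta: how much better or worse the transition was than expected.
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        joy_distress = delta        # positive: joy, negative: distress
        hope_fear = V[s_next]       # positive: hope, negative: fear
        return joy_distress, hope_fear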

Analysis of how several Kalman filter approximations deteriorate with increasing uncertainty in the observations when the observation model is non-linear

Mark R. Morelande and Ángel F. García-Fernández, Analysis of Kalman Filter Approximations for Nonlinear Measurements, IEEE Transactions on Signal Processing, vol. 61, no. 22, 2013, DOI: 10.1109/TSP.2013.2279367.

A theoretical analysis is presented of the correction step of the Kalman filter (KF) and its various approximations for the case of a nonlinear measurement equation with additive Gaussian noise. The KF is based on a Gaussian approximation to the joint density of the state and the measurement. The analysis metric is the Kullback-Leibler divergence of this approximation from the true joint density. The purpose of the analysis is to provide a quantitative tool for understanding and assessing the performance of the KF and its variants in nonlinear scenarios. This is illustrated using a numerical example.
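
For reference, the Kullback-Leibler divergence between two Gaussians, the kind of quantity such an analysis manipulates, has a closed form. A short Python sketch follows; note the paper measures divergence from the true joint density, so the Gaussian-to-Gaussian case here is only the tractable special case.

    import numpy as np

    def gauss_kl(m0, P0, m1, P1):
        # KL( N(m0, P0) || N(m1, P1) ) for k-dimensional Gaussians.
        k = len(m0)
        P1_inv = np.linalg.inv(P1)
        dm = m1 - m0
        return 0.5 * (np.trace(P1_inv @ P0) + dm @ P1_inv @ dm - k
                      + np.log(np.linalg.det(P1) / np.linalg.det(P0)))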

Application of reinforcement learning to the defense against attacks on communication networks

Kleanthis Malialis, Sam Devlin & Daniel Kudenko, Distributed reinforcement learning for adaptive and robust network intrusion response, Connection Science, DOI: 10.1080/09540091.2015.1031082.

Distributed denial of service (DDoS) attacks constitute a rapidly evolving threat in the current Internet. Multiagent Router Throttling is a novel approach to defend against DDoS attacks where multiple reinforcement learning agents are installed on a set of routers and learn to rate-limit or throttle traffic towards a victim server. The focus of this paper is on online learning and scalability. We propose an approach that incorporates task decomposition, team rewards and a form of reward shaping called difference rewards. One of the novel characteristics of the proposed system is that it provides a decentralised coordinated response to the DDoS problem, thus being resilient to DDoS attacks themselves. The proposed system learns remarkably fast, thus being suitable for online learning. Furthermore, its scalability is successfully demonstrated in experiments involving 1000 learning agents. We compare our approach against a baseline and a popular state-of-the-art throttling technique from the network security literature and show that the proposed approach is more effective, adaptive to sophisticated attack rate dynamics and robust to agent failures.
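
Difference rewards have a simple generic form, D_i = G(z) - G(z_-i): credit agent i with the change in global utility when its action is replaced by a default. A toy Python sketch is below; the placeholder utility and all names are mine, not the paper's.

    def difference_reward(evaluate, joint_action, i, default_action=0.0):
        # Credit agent i with G(z) minus G(z with agent i's action
        # replaced by a default, here "no throttling").
        g = evaluate(joint_action)
        counterfactual = list(joint_action)
        counterfactual[i] = default_action
        return g - evaluate(counterfactual)

    # Toy global utility: keep admitted traffic near the victim's capacity.
    def evaluate(throttle_rates, load=10.0, capacity=6.0):
        admitted = load * (1.0 - sum(throttle_rates) / len(throttle_rates))
        return -abs(capacity - admitted)

    print(difference_reward(evaluate, [0.5, 0.3, 0.4], i=0))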

An example of bottom-up and top-down processes integrated in a single solution for the recognition of shapes

Ching L. Teo, Cornelia Fermüller, and Yiannis Aloimonos, A Gestaltist approach to contour-based object recognition: Combining bottom-up and top-down cues, The International Journal of Robotics Research, Volume 34, April 2015, Pages 627-652, DOI: 10.1177/0278364914558493.

This paper proposes a method for detecting generic classes of objects from their representative contours that can be used by a robot with vision to find objects in cluttered environments. The approach uses a mid-level image operator to group edges into contours which likely correspond to object boundaries. This mid-level operator is used in two ways, bottom-up on simple edges and top-down incorporating object shape information, thus acting as the intermediary between low-level and high-level information. First, the mid-level operator, called the image torque, is applied to simple edges to extract likely fixation locations of objects. Using the operator’s output, a novel contour-based descriptor is created that extends the shape context descriptor to include boundary ownership information and accounts for rotation. This descriptor is then used in a multi-scale matching approach to modulate the torque operator towards the target, so it indicates its location and size. Unlike other approaches that use edges directly to guide the independent edge grouping and matching processes for recognition, both of these steps are effectively combined using the proposed method. We evaluate the performance of our approach using four diverse datasets containing a variety of object categories in clutter, occlusion and viewpoint changes. Compared with current state-of-the-art approaches, our approach is able to detect the target with fewer false alarms in most object categories. The performance is further improved when we exploit depth information available from the Kinect RGB-Depth sensor by imposing depth consistency when applying the image torque.
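
As a rough illustration of a torque-like mid-level score (my reading of the operator's definition, not the authors' implementation): edge tangents that circulate consistently around a point raise the score, so closed contours produce strong extrema. A Python sketch, assuming unit-length tangents:

    import numpy as np

    def torque_score(p, edge_points, edge_tangents, radius):
        # Sum the cross products of the radius vector and the (unit) edge
        # tangent within a disc of the given radius; consistent circulation
        # around p (a closed contour) makes the score large.
        score = 0.0
        for q, t in zip(edge_points, edge_tangents):
            d = q - p
            if 0 < np.linalg.norm(d) <= radius:
                score += d[0] * t[1] - d[1] * t[0]
        return score / (np.pi * radius**2)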