A very interesting seminal work on the analysis and synthesis of embodied agents, modeling agent and environment together as one coupled dynamical system

Randall D. Beer, A dynamical systems perspective on agent-environment interaction, Artificial Intelligence 72 (1995) 173-215 DOI: 10.1016/0004-3702(94)00005-L.

Using the language of dynamical systems theory, a general theoretical framework for the synthesis and analysis of autonomous agents is sketched. In this framework, an agent and its environment are modeled as two coupled dynamical systems whose mutual interaction is in general jointly responsible for the agent’s behavior. In addition, the adaptive fit between an agent and its environment is characterized in terms of the satisfaction of a given constraint on the trajectories of the coupled agent-environment system. The utility of this framework is demonstrated by using it to first synthesize and then analyze a walking behavior for a legged agent.
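To make the framing concrete, here is a minimal sketch, with toy dynamics and coupling maps invented for illustration (nothing here is taken from the paper's model), of an agent and an environment as two dynamical systems, each closing the other's loop:

```python
import numpy as np

# Agent and environment as two coupled dynamical systems: the agent senses
# a function of the environment state, and the environment is driven by a
# function of the agent state. Dynamics are toy choices, Euler-integrated.

def agent_dot(xa, sensed):
    return -xa + np.tanh(sensed)      # toy agent dynamics A(x_a; S(x_e))

def env_dot(xe, motor):
    return -0.5 * xe + motor          # toy environment dynamics E(x_e; M(x_a))

xa, xe, dt = 0.0, 1.0, 0.01
trajectory = []
for _ in range(1000):
    sensed = xe                       # sensory map S: environment -> agent
    motor = 0.8 * xa                  # motor map M: agent -> environment
    xa = xa + dt * agent_dot(xa, sensed)
    xe = xe + dt * env_dot(xe, motor)
    trajectory.append((xa, xe))

# The "behavior" lives in the trajectory of the coupled system, not in
# either subsystem alone, which is the paper's central point.
print(trajectory[-1])
```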

An interesting survey, written before the “generative AI” boom, of the integration of sub-symbolic systems (for learning) with symbolic systems (for reasoning)

Artur d’Avila Garcez, Luis C. Lamb, Neurosymbolic AI: The 3rd Wave, arXiv:2012.05876 [cs.AI] https://arxiv.org/abs/2012.05876v2.

Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, concerns about trust, safety, interpretability and accountability of AI were raised by influential thinkers. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning and for sound explainability. Neural-symbolic computing has been an active area of research for many years seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. In this paper, we relate recent and early research results in neurosymbolic AI with the objective of identifying the key ingredients of the next wave of AI systems. We focus on research that integrates in a principled way neural network-based learning with symbolic knowledge representation and logical reasoning. The insights provided by 20 years of neural-symbolic computing are shown to shed new light onto the increasingly prominent role of trust, safety, interpretability and accountability of AI. We also identify promising directions and challenges for the next decade of AI research from the perspective of neural-symbolic systems.
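As a toy illustration of the integration the survey discusses (the "network" below is a hard-coded stub and the rule base is invented, so this sketches only the architecture, not any system from the paper), a sub-symbolic module can emit soft facts that a symbolic layer then reasons over:

```python
# Neural-symbolic split: a learned model scores perceptions (sub-symbolic),
# and a propositional rule base draws conclusions from them (symbolic).

def neural_perception(image):
    # Stand-in for a trained classifier; scores are hard-coded here.
    return {"red_light": 0.92, "pedestrian": 0.10}

RULES = [
    (["red_light"], "must_stop"),     # toy rules: premises -> conclusion
    (["pedestrian"], "must_stop"),
]

def symbolic_reasoner(beliefs, threshold=0.5):
    # Threshold soft scores into facts, then apply rules to a fixed point.
    derived = {p for p, s in beliefs.items() if s >= threshold}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(symbolic_reasoner(neural_perception(None)))  # {'red_light', 'must_stop'}
```

The symbolic side stays inspectable (you can read off which rules fired), which is exactly the trust and explainability benefit the authors argue for.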

Improving the explainability of deep RL in robotics

Mehran Taghian, Shotaro Miwa, Yoshihiro Mitsuka, Johannes Günther, Shadan Golestan, Osmar Zaiane, Explainability of deep reinforcement learning algorithms in robotic domains by using Layer-wise Relevance Propagation, Engineering Applications of Artificial Intelligence, Volume 137, Part A, 2024 DOI: 10.1016/j.engappai.2024.109131.

A key component to the recent success of reinforcement learning is the introduction of neural networks for representation learning. Doing so allows for solving challenging problems in several domains, one of which is robotics. However, a major criticism of deep reinforcement learning (DRL) algorithms is their lack of explainability and interpretability. This problem is even exacerbated in robotics as they oftentimes cohabitate space with humans, making it imperative to be able to reason about their behavior. In this paper, we propose to analyze the learned representation in a robotic setting by utilizing Graph Networks (GNs). Using the GN and Layer-wise Relevance Propagation (LRP), we represent the observations as an entity-relationship to allow us to interpret the learned policy. We evaluate our approach in two environments in MuJoCo. These two environments were delicately designed to effectively measure the value of knowledge gained by our approach to analyzing learned representations. This approach allows us to analyze not only how different parts of the observation space contribute to the decision-making process but also differentiate between policies and their differences in performance. This difference in performance also allows for reasoning about the agent’s recovery from faults. These insights are key contributions to explainable deep reinforcement learning in robotic settings.
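To give a flavor of the core tool, here is a minimal sketch of LRP's epsilon rule on a toy two-layer ReLU network (the paper additionally routes relevance through a Graph Network over an entity-relationship view of the observation, which this sketch omits; all shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # toy policy/value network
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=4)                   # observation (e.g., joint states)
a1 = np.maximum(x @ W1 + b1, 0.0)        # hidden ReLU activations
y = a1 @ W2 + b2                         # output (e.g., a Q-value)

def lrp_epsilon(W, b, a, R, eps=1e-6):
    """One LRP step (epsilon rule): redistribute output relevance R back
    onto the layer's input activations a, proportionally to each input's
    contribution to the pre-activations."""
    z = a @ W + b
    z = z + eps * np.sign(z)             # stabilizer avoids division by ~0
    return a * ((R / z) @ W.T)

R_hidden = lrp_epsilon(W2, b2, a1, y)        # relevance of hidden units
R_input = lrp_epsilon(W1, b1, x, R_hidden)   # relevance per input feature
print(R_input, R_input.sum(), y)  # relevance sum approximately conserves y
```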

A relatively simple way of reducing the sampling cost of DQN

Hossein Hassani, Soodeh Nikan, Abdallah Shami, Traffic navigation via reinforcement learning with episodic-guided prioritized experience replay, Engineering Applications of Artificial Intelligence, Volume 137, Part A, 2024, DOI: 10.1016/j.engappai.2024.109147.

Deep Reinforcement Learning (DRL) models play a fundamental role in autonomous driving applications; however, they typically suffer from sample inefficiency because they often require many interactions with the environment to learn effective policies. This makes the training process time-consuming. To address this shortcoming, Prioritized Experience Replay (PER) has proven to be effective by prioritizing samples with high Temporal-Difference (TD) error for learning. In this context, this study contributes to artificial intelligence by proposing a sample-efficient DRL algorithm called Episodic-Guided Prioritized Experience Replay (EPER). The core innovation of EPER lies in the utilization of an episodic memory, dedicated to storing successful training episodes. Within this memory, expected returns for each state–action pair are extracted. These returns, combined with TD error-based prioritization, form a novel objective function for deep Q-network training. To prevent excessive determinism, EPER introduces exploration into the learning process by incorporating a regularization term into the objective function that allows exploration of state-space regions with diverse Q-values. The proposed EPER algorithm is suitable to train a DRL agent for handling episodic tasks, and it can be integrated into off-policy DRL models. EPER is employed for traffic navigation through scenarios such as highway driving, merging, roundabout, and intersection to showcase its application in engineering. The attained results denote that, compared with the PER and an additional state-of-the-art training technique, EPER is superior in expediting the training of the agent and learning a more optimal policy that leads to lower collision rates within the constructed navigation scenarios.
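For reference, this is roughly what the underlying proportional PER machinery looks like (a toy buffer in the style of Schaul et al.'s PER; the episodic-return term that EPER blends into the priority, and its exploration regularizer, are deliberately left out):

```python
import numpy as np

class PrioritizedReplay:
    """Toy proportional prioritized replay: transitions with larger TD
    error are sampled more often; importance weights undo the bias."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:        # drop the oldest entry
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prio)
        p = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-beta)   # importance weights
        return [self.data[i] for i in idx], idx, w / w.max()
```

EPER's contribution, in these terms, is to combine this TD-error priority with expected returns read from an episodic memory of successful episodes.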

A good survey and taxonomy for DRL in robotics

Chen Tang, Ben Abbatematteo, Jiaheng Hu, Rohan Chandra, Roberto Martín-Martín, Peter Stone, Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes, arXiv:2408.03539 [cs.RO] https://www.arxiv.org/abs/2408.03539.

Reinforcement learning (RL), particularly its combination with deep neural networks referred to as deep RL (DRL), has shown tremendous promise across a wide range of applications, suggesting its potential for enabling the development of sophisticated robotic behaviors. Robotics problems, however, pose fundamental difficulties for the application of RL, stemming from the complexity and cost of interacting with the physical world. This article provides a modern survey of DRL for robotics, with a particular focus on evaluating the real-world successes achieved with DRL in realizing several key robotic competencies. Our analysis aims to identify the key factors underlying those exciting successes, reveal underexplored areas, and provide an overall characterization of the status of DRL in robotics. We highlight several important avenues for future work, emphasizing the need for stable and sample-efficient real-world RL paradigms, holistic approaches for discovering and integrating various competencies to tackle complex long-horizon, open-world tasks, and principled development and evaluation procedures. This survey is designed to offer insights for both RL practitioners and roboticists toward harnessing RL’s power to create generally capable real-world robotic systems.

Integrating the physical model of a Model Predictive Controller into an Actor-Critic RL framework to improve safety and flexibility at the same time

Angel Romero, Yunlong Song, Davide Scaramuzza, Actor-Critic Model Predictive Control, IEEE International Conference on Robotics and Automation, Yokohama, 2024 arXiv:2306.09852 [cs.RO].

An open research question in robotics is how to combine the benefits of model-free reinforcement learning (RL)—known for its strong task performance and flexibility in optimizing general reward formulations—with the robustness and online replanning capabilities of model predictive control (MPC). This paper provides an answer by introducing a new framework called Actor-Critic Model Predictive Control. The key idea is to embed a differentiable MPC within an actor-critic RL framework. The proposed approach leverages the short-term predictive optimization capabilities of MPC with the exploratory and end-to-end training properties of RL. The resulting policy effectively manages both short-term decisions through the MPC-based actor and long-term prediction via the critic network, unifying the benefits of both model-based control and end-to-end learning. We validate our method in both simulation and the real world with a quadcopter platform across various high-level tasks. We show that the proposed architecture can achieve real-time control performance, learn complex behaviors via trial and error, and retain the predictive properties of the MPC to better handle out-of-distribution behaviour.
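A schematic of the actor-critic-MPC split (everything below is invented for illustration: a horizon-1 quadratic problem on a single integrator stands in for the paper's differentiable MPC, and the learned head is a three-parameter stub):

```python
import numpy as np

def policy_head(obs, theta):
    """Stand-in for the learned part of the actor: maps an observation
    to MPC cost parameters (a goal point and a control penalty)."""
    goal = np.tanh(theta[:2] * obs)    # toy parameterization
    lam = np.exp(theta[2])             # positive control weight
    return goal, lam

def mpc_actor(obs, theta):
    """Horizon-1 'MPC' on x' = x + u: the closed-form minimizer of
    ||x + u - goal||^2 + lam * ||u||^2 plays the role of the planned
    first action. Because it is differentiable in (goal, lam),
    gradients can flow through it into theta during training."""
    goal, lam = policy_head(obs, theta)
    return (goal - obs) / (1.0 + lam)

# A separate critic network (omitted here) would supply the long-horizon
# value estimates; we just evaluate the actor once.
obs = np.array([0.5, -0.3])
theta = np.array([1.0, 1.0, -1.0])
print(mpc_actor(obs, theta))
```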

A review of robotic simulators

J. Collins, S. Chand, A. Vanderkop and D. Howard, A Review of Physics Simulators for Robotic Applications, IEEE Access, vol. 9, pp. 51416-51431, 2021, DOI: 10.1109/ACCESS.2021.3068769.

The use of simulators in robotics research is widespread, underpinning the majority of recent advances in the field. There are now more options available to researchers than ever before; however, navigating through the plethora of choices in search of the right simulator is often non-trivial. Depending on the field of research and the scenario to be simulated there will often be a range of suitable physics simulators from which it is difficult to ascertain the most relevant one. We have compiled a broad review of physics simulators for use within the major fields of robotics research. More specifically, we navigate through key sub-domains and discuss the features, benefits, applications and use-cases of the different simulators categorised by the respective research communities. Our review provides an extensive index of the leading physics simulators applicable to robotics researchers and aims to assist them in choosing the best simulator for their use case.

Fitting any dataset with a function that has only one parameter

Laurent Boué, Real numbers, data science and chaos: How to fit any dataset with a single parameter, arXiv:1904.12320v1 [cs.LG] https://arxiv.org/abs/1904.12320.

We show how any dataset of any modality (time-series, images, sound…) can be approximated by a well-behaved (continuous, differentiable…) scalar function with a single real-valued parameter. Building upon elementary concepts from chaos theory, we adopt a pedagogical approach demonstrating how to adjust this parameter in order to achieve an arbitrary-precision fit to all samples of the data. Targeting an audience of data scientists with a taste for the curious and unusual, the results presented here expand on previous similar observations [1] regarding the expressive power and generalization of machine learning models.
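The trick underneath is that a single real number carries unbounded information in its binary expansion. A minimal, integer-exact sketch of the encode/decode idea (quantization depth and names are illustrative; the paper wraps the decoding in a smooth, differentiable function built from the logistic/dyadic map rather than the bit arithmetic used here):

```python
def encode(samples, bits=8):
    """Pack samples in [0, 1) into one number: quantize each to `bits`
    bits and concatenate the codes. alpha = code / 2**total_bits is the
    'single parameter'."""
    code = 0
    for s in samples:
        code = (code << bits) | int(s * (1 << bits))
    return code, bits * len(samples)

def decode(code, total_bits, index, bits=8):
    """Recover sample #index: shift its bits to the bottom and mask them.
    (Equivalent to iterating the dyadic map x -> 2x mod 1 on alpha.)"""
    shift = total_bits - bits * (index + 1)
    q = (code >> shift) & ((1 << bits) - 1)
    return q / (1 << bits)

code, n = encode([0.25, 0.5, 0.875])
print([decode(code, n, i) for i in range(3)])   # [0.25, 0.5, 0.875]
```

Arbitrary precision simply means spending more bits per sample; the "single parameter" hides an unbounded description length, which is the paper's point about expressive power versus generalization.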

Equivalence between Transformers and SVMs

Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak, Transformers as Support Vector Machines, arXiv:2308.16898 [cs.LG], https://arxiv.org/abs/2308.16898.

Since its inception in “Attention Is All You Need”, transformer architecture has led to revolutionary advancements in NLP. The attention layer within the transformer admits a sequence of input tokens X and makes them interact through pairwise similarities computed as softmax(XQK⊤X⊤), where (K,Q) are the trainable key-query parameters. In this work, we establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem that separates optimal input tokens from non-optimal tokens using linear constraints on the outer-products of token pairs. This formalism allows us to characterize the implicit bias of 1-layer transformers optimized with gradient descent: (1) Optimizing the attention layer with vanishing regularization, parameterized by (K,Q), converges in direction to an SVM solution minimizing the nuclear norm of the combined parameter W=KQ⊤. Instead, directly parameterizing by W minimizes a Frobenius norm objective. We characterize this convergence, highlighting that it can occur toward locally-optimal directions rather than global ones. (2) Complementing this, we prove the local/global directional convergence of gradient descent under suitable geometric conditions. Importantly, we show that over-parameterization catalyzes global convergence by ensuring the feasibility of the SVM problem and by guaranteeing a benign optimization landscape devoid of stationary points. (3) While our theory applies primarily to linear prediction heads, we propose a more general SVM equivalence that predicts the implicit bias with nonlinear heads. Our findings are applicable to arbitrary datasets and their validity is verified via experiments. We also introduce several open problems and research directions. We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
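In the abstract's notation, the token-selection problem can be rendered schematically as follows (a paraphrase with indices simplified: W = KQ⊤ is the combined attention parameter, z the query-side token, and x_opt the optimal token to be separated from the rest):

```latex
% Schematic hard-margin token-selection problem (notation simplified)
\min_{W}\ \|W\|_{\star}
\quad \text{s.t.} \quad
(x_{\mathrm{opt}} - x_{t})^{\top} W z \;\ge\; 1
\quad \text{for all } t \neq \mathrm{opt}
```

The nuclear norm ‖W‖⋆ corresponds to the (K,Q) parameterization; swapping it for the Frobenius norm gives the direct-W parameterization mentioned above.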

An interesting tutorial on floating-point arithmetic in computers

David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic, ACM Computing Surveys, March 1991, https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html.

Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising because floating-point is ubiquitous in computer systems. Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with numerous examples of how computer builders can better support floating-point.
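Two of the paper's recurring themes, representation error and catastrophic cancellation, are easy to demonstrate (Python is used here for convenience; the effects are properties of IEEE 754 doubles, not of the language):

```python
import math

# Representation error: 0.1 and 0.2 have no exact binary representation,
# so their rounded sum is not the double nearest to 0.3.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

# Catastrophic cancellation: subtracting nearly equal numbers wipes out
# significant digits. (1 - cos x)/x^2 -> 0.5 as x -> 0, but naively:
x = 1e-8
print((1 - math.cos(x)) / x**2)               # 0.0 (cos x rounded to 1.0)
print(0.5 * (math.sin(x / 2) / (x / 2))**2)   # ~0.5 (algebraically equal form)
```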