Analysis of using RL as a PID tuning method

Ufuk Demircioğlu, Halit Bakır, Reinforcement learning–driven proportional–integral–derivative controller tuning for mass–spring systems: Stability, performance, and hyperparameter analysis, Engineering Applications of Artificial Intelligence, Volume 162, Part D, 2025, 10.1016/j.engappai.2025.112692.

Artificial intelligence (AI) methods—particularly reinforcement learning (RL)—are used to tune Proportional–Integral–Derivative (PID) controller parameters for a mass–spring–damper system. Learning is performed with the Twin Delayed Deep Deterministic Policy Gradient (TD3) actor–critic algorithm, implemented in MATLAB (Matrix Laboratory) and Simulink (a simulation environment by MathWorks). The objective is to examine the effect of critical RL hyperparameters—including experience buffer size, mini-batch size, and target policy smoothing noise—on the quality of learned PID gains and control performance. The proposed method eliminates the need for manual gain tuning by enabling the RL agent to autonomously learn optimal control strategies through continuous interaction with the Simulink-modeled mass–spring–damper system, where the agent observes responses and applies control actions to optimize the PID gains. Results show that small buffer sizes and suboptimal batch configurations cause unstable behavior, while buffer sizes of 10⁵ or larger and mini-batch sizes between 64 and 128 yield robust tracking. A target policy smoothing noise of 0.01 produced the best performance, while values between 0.05 and 0.1 also provided stable results. Comparative analysis with the classical Simulink PID tuner indicated that, for this linear system, the conventional tuner achieved slightly better transient performance, particularly in overshoot and settling time. Although the RL-based method showed adaptability and generated valid PID gains, it did not surpass the classical approach in this structured system. These findings highlight the promise of AI- and RL-driven control in uncertain, nonlinear, or variable dynamics, while underscoring the importance of hyperparameter optimization in realizing the potential of RL-based Proportional–Integral–Derivative tuning.
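
To make the setup concrete, here is a minimal Python sketch (not the paper's MATLAB/Simulink code) of the kind of episode return a TD3 agent would maximize when its action is a candidate (Kp, Ki, Kd) gain vector for a mass–spring–damper plant; all plant constants, gains, and the cost shape below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): evaluating a candidate PID gain
# vector on a mass-spring-damper step response, the kind of episode return
# a TD3 agent could maximize when it outputs (Kp, Ki, Kd) as its action.
# All plant and gain values below are illustrative assumptions.
import numpy as np

def episode_return(Kp, Ki, Kd, m=1.0, c=0.5, k=2.0, ref=1.0,
                   dt=0.001, T=5.0):
    """Simulate m*x'' + c*x' + k*x = u with PID control of x toward ref.
    Returns a reward: negative integral of squared tracking error."""
    x, v, integ, prev_err = 0.0, 0.0, 0.0, ref  # initial state and error
    reward = 0.0
    for _ in range(int(T / dt)):
        err = ref - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = Kp * err + Ki * integ + Kd * deriv        # PID control law
        a = (u - c * v - k * x) / m                   # plant acceleration
        v += a * dt
        x += v * dt
        prev_err = err
        reward -= (err ** 2) * dt                     # tracking cost
    return reward

# Example: compare two candidate gain sets an agent might propose.
print(episode_return(20.0, 5.0, 2.0))   # tighter tracking -> higher reward
print(episode_return(1.0, 0.1, 0.0))    # sluggish gains -> lower reward
```

In the paper the plant is a Simulink model and TD3 supplies the gains; this sketch only shows the shape of the reward signal such an agent would see.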

RL with both discrete and continuous actions

Chengcheng Yan, Shujie Chen, Jiawei Xu, Xuejie Wang, Zheng Peng, Hybrid Reinforcement Learning in parameterized action space via fluctuates constraint, Engineering Applications of Artificial Intelligence, Volume 162, Part C, 2025, 10.1016/j.engappai.2025.112499.

Parameterized actions in Reinforcement Learning (RL) are composed of discrete-continuous hybrid action parameters, which are widely employed in game scenarios. However, previous works have often concentrated on the network structure of RL algorithms to handle hybrid actions, neglecting the impact of fluctuations in action parameters on the agent's movement trajectory. Due to the coupling between discrete and continuous actions, instability in discrete actions influences the selection of the corresponding continuous parameters, causing the agent to deviate from the optimal path. In this paper, we propose a parameterized RL approach, called CP-DQN, based on parameter fluctuation restriction (PFR) to address this problem. Our method effectively mitigates value fluctuations in action parameters by constraining the change in the action parameter between adjacent time steps. Additionally, we incorporate a supervision module to optimize the entire training process. To quantify the superiority of our approach in minimizing trajectory deviations for agents, we propose an indicator that measures the influence of parameter fluctuations on performance in hybrid action spaces. Our method is evaluated in three environments with hybrid action spaces, and the experiments demonstrate its superiority over existing approaches.
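
A minimal sketch of the core idea as I read it from the abstract, assuming a quadratic penalty on how far the selected continuous parameter moves between adjacent time steps; the coefficient and penalty form are my own illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code): a parameterized action is a
# discrete choice plus a continuous parameter, and the training objective
# adds a penalty on how much that parameter moves between adjacent time
# steps. `lam` and the squared form are assumptions for illustration.
import numpy as np

def fluctuation_penalty(param_t, param_tm1, lam=0.1):
    """Penalize the change of the continuous parameter from step t-1 to t."""
    return lam * float(np.sum((param_t - param_tm1) ** 2))

# Toy trajectory: the discrete action stays fixed, the continuous parameter jitters.
params = [np.array([0.50]), np.array([0.52]), np.array([0.90]), np.array([0.55])]
penalties = [fluctuation_penalty(params[t], params[t - 1]) for t in range(1, len(params))]
print(penalties)  # the 0.52 -> 0.90 jump dominates the penalty
```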

A variant of RL aimed at reducing the estimation bias of conventional Q-learning

Fanghui Huang, Wenqi Han, Xiang Li, Xinyang Deng, Wen Jiang, Reducing the estimation bias and variance in reinforcement learning via Maxmean and Aitken value iteration, Engineering Applications of Artificial Intelligence, Volume 162, Part C, 2025, 10.1016/j.engappai.2025.112502.

Value-based reinforcement learning methods suffer from overestimation bias because of the max operator, resulting in suboptimal policies. Meanwhile, variance in value estimation causes instability in the networks. Many algorithms have been proposed to address these issues, but they lack theoretical analysis of the degree of estimation bias and of the trade-off between estimation bias and variance. Motivated by the above, in this paper we propose a novel method based on Maxmean and Aitken value iteration, named MMAVI. The Maxmean operation allows the average of multiple state–action values (Q values) to be used as the estimated target value to mitigate the bias and variance. The Aitken value iteration is used to update Q values and improve the convergence rate. Based on the proposed method, combined with Q-learning and deep Q-network, we design two novel algorithms to adapt to different environments. To understand the effect of MMAVI, we analyze it both theoretically and empirically. In theory, we derive closed-form expressions for the reduction in bias and variance, and prove that the convergence rate of our proposed method is faster than that of traditional methods based on the Bellman equation. In addition, the convergence of our algorithms is proved in a tabular setting. Finally, we demonstrate that our proposed algorithms outperform state-of-the-art algorithms in several environments.
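
A minimal sketch of a Maxmean-style target, under the assumption that the operator averages an ensemble of Q estimates per action before taking the max; the ensemble size and noise model are illustrative, and the Aitken value-iteration update is not shown.

```python
# Minimal sketch of a Maxmean-style target (my reading of the operator,
# not the authors' implementation): average an ensemble of Q estimates
# per action, then take the max of the averages, instead of taking the
# max over a single noisy estimate as in standard Q-learning.
import numpy as np

rng = np.random.default_rng(0)
n_ensemble, n_actions = 4, 3
# Ensemble of noisy Q estimates for one next state: shape (ensemble, actions).
q_ensemble = rng.normal(loc=[1.0, 1.5, 0.5], scale=0.5, size=(n_ensemble, n_actions))

max_of_single = q_ensemble[0].max()       # standard target: biased upward by noise
maxmean = q_ensemble.mean(axis=0).max()   # Maxmean target: averaging damps the noise

reward, gamma = 0.0, 0.99
td_target = reward + gamma * maxmean
print(max_of_single, maxmean, td_target)
```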

A quantitative demonstration, based on MDPs, that the need for a world model (learned or given) grows as the complexity of the task and the performance of the agent increase

Jonathan Richens, David Abel, Alexis Bellot, Tom Everitt, General agents contain world models, arXiv cs:AI, Sep. 2025, arXiv:2506.01622.

Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent’s policy, and that increasing the agent’s performance or the complexity of the goals it can achieve requires learning increasingly accurate world models. This has a number of consequences: from developing safe and general agents, to bounding agent capabilities in complex environments, and providing new algorithms for eliciting world models from agents.

Inclusion of LLMs in multi-task learning for generating rewards

Z. Lin, Y. Chen and Z. Liu, AutoSkill: Hierarchical Open-Ended Skill Acquisition for Long-Horizon Manipulation Tasks via Language-Modulated Rewards, IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 5, pp. 1141-1152, Oct. 2025, 10.1109/TCDS.2025.3551298.

A desirable property of generalist robots is the ability to both bootstrap diverse skills and solve new long-horizon tasks in open-ended environments without human intervention. Recent advancements have shown that large language models (LLMs) encapsulate vast-scale semantic knowledge about the world to enable long-horizon robot planning. However, they are typically restricted to reasoning over high-level instructions and lack world grounding, which makes it difficult for them to coordinately bootstrap and acquire new skills in unstructured environments. To this end, we propose AutoSkill, a hierarchical system that empowers a physical robot to automatically learn to cope with new long-horizon tasks by growing an open-ended skill library without hand-crafted rewards. AutoSkill consists of two key components: 1) in-context skill chain generation and new skill bootstrapping guided by LLMs, which provide the robot with discrete and interpretable skill instructions for skill retrieval and augmentation within the skill library; and 2) a zero-shot language-modulated reward scheme that, in conjunction with a meta prompter, facilitates online acquisition of new skills via expert-free supervision aligned with the proposed skill directives. Extensive experiments conducted in both simulated and realistic environments demonstrate AutoSkill’s superiority over other LLM-based planners as well as hierarchical methods in expediting online learning for novel manipulation tasks.
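
A heavily hedged sketch of the high-level control flow suggested by the abstract: an LLM proposes a skill chain, skills already in the library are retrieved, and missing skills trigger acquisition against a language-modulated reward. The function names and stubbed LLM/reward calls are hypothetical placeholders, not AutoSkill's interface.

```python
# Hypothetical sketch of the hierarchical loop described in the abstract.
# `propose_skill_chain` and `train_skill_with_language_reward` are stand-ins
# for the LLM planner and the language-modulated reward learner; neither is
# the authors' API.
skill_library = {"pick(cube)": "policy_pick", "place(cube, tray)": "policy_place"}

def propose_skill_chain(task):
    # Placeholder for the in-context LLM call that decomposes a task.
    return ["pick(cube)", "open(drawer)", "place(cube, drawer)"]

def train_skill_with_language_reward(skill):
    # Placeholder: learn a new policy online, rewarded by how well the
    # observed behaviour matches the language directive (no hand-crafted reward).
    return f"policy_{skill}"

def solve(task):
    executed = []
    for skill in propose_skill_chain(task):
        if skill not in skill_library:                 # bootstrap a missing skill
            skill_library[skill] = train_skill_with_language_reward(skill)
        executed.append(skill_library[skill])          # retrieve and run the policy
    return executed

print(solve("put the cube in the drawer"))
```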

A cognitive map implemented according to the latest biological knowledge and aimed at robotic navigation

M. A. Hicks, T. Lei, C. Luo, D. W. Carruth and Z. Bi, A Bio-Inspired Goal-Directed Cognitive Map Model for Robot Navigation and Exploration, IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 5, pp. 1125-1140, Oct. 2025, 10.1109/TCDS.2025.3552085.

The concept of a cognitive map (CM), or spatial map, was originally proposed to explain how mammals learn and navigate their environments. Over time, extensive research in neuroscience and psychology has established the CM as a widely accepted model. In this work, we introduce a new goal-directed cognitive map (GDCM) model that takes a nontraditional approach to spatial mapping for robot navigation and path planning. Unlike conventional models, GDCM does not require complete environmental exploration to construct a graph for navigation purposes. Inspired by biological navigation strategies, such as the use of landmarks, Euclidean distance, random motion, and reward-driven behavior, GDCM can navigate complex, static environments efficiently without needing to explore the entire workspace. The model utilizes known cell types (head direction, speed, border, grid, and place cells) that constitute the CM, arranged in a unique configuration. Each cell model is designed to emulate its biological counterpart in a simple, computationally efficient way. Through simulation-based comparisons, this innovative CM graph-building approach demonstrates more efficient navigation than traditional models that require full exploration. Furthermore, GDCM consistently outperforms several established path planning and navigation algorithms by finding better paths.

On the model that humans use for predicting the movements of targets in order to reach them, and some evidence of biological Kalman-filter-like processing

John F. Soechting, John Z. Juveli, and Hrishikesh M. Rao, Models for the Extrapolation of Target Motion for Manual Interception, J Neurophysiol 102: 1491–1502, 2009, 10.1152/jn.00398.2009.

Intercepting a moving target requires a prediction of the target’s future motion. This extrapolation could be achieved using sensed parameters of the target motion, e.g., its position and velocity. However, the accuracy of the prediction would be improved if subjects were also able to incorporate the statistical properties of the target’s motion, accumulated as they watched the target move. The present experiments were designed to test for this possibility. Subjects intercepted a target moving on the screen of a computer monitor by sliding their extended finger along the monitor’s surface. Along any of the six possible target paths, target speed could be governed by one of three possible rules: constant speed, a power law relation between speed and curvature, or the trajectory resulting from a sum of sinusoids. A go signal was given to initiate interception and was always presented when the target had the same speed, irrespective of the law of motion. The dependence of the initial direction of finger motion on the target’s law of motion was examined. This direction did not depend on the speed profile of the target, contrary to the hypothesis. However, finger direction could be well predicted by assuming that target location was extrapolated using target velocity and that the amount of extrapolation depended on the distance from the finger to the target. Subsequent analysis showed that the same model of target motion was also used for on-line, visually mediated corrections of finger movement when the motion was initially misdirected.
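
The best-fitting model reported in the abstract lends itself to a one-line formula: the aim point is the current target position plus velocity times an extrapolation interval that grows with the finger-to-target distance. A small Python sketch, with an assumed linear gain rather than any fitted value:

```python
# Minimal sketch of the extrapolation model described in the abstract:
# predicted target position = current position + velocity * extrapolation
# time, with the extrapolation interval growing with the finger-to-target
# distance. The linear gain `k` is an assumed placeholder, not a fitted value.
import numpy as np

def predicted_target(target_pos, target_vel, finger_pos, k=0.5):
    """First-order (velocity-based) extrapolation of target motion."""
    distance = np.linalg.norm(target_pos - finger_pos)
    tau = k * distance                     # extrapolation time scales with distance
    return target_pos + target_vel * tau   # no use of acceleration or a speed law

target_pos = np.array([0.10, 0.05])   # meters, on the monitor surface
target_vel = np.array([0.20, 0.00])   # m/s
finger_pos = np.array([0.00, -0.10])
aim_point = predicted_target(target_pos, target_vel, finger_pos)
print(aim_point)  # the initial finger direction would point here
```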

Improvements in offline RL (learning from previously acquired datasets)

Lan Wu, Quan Liu, Renyang You, State slow feature softmax Q-value regularization for offline reinforcement learning, Engineering Applications of Artificial Intelligence, Volume 160, Part A, 2025, 10.1016/j.engappai.2025.111828.

Offline reinforcement learning is constrained by its reliance on pre-collected datasets, without the opportunity for further interaction with the environment. This restriction often results in distribution shifts, which can exacerbate Q-value overestimation and degrade policy performance. To address these issues, we propose a method called state slow feature softmax Q-value regularization (SQR), which enhances the stability and accuracy of Q-value estimation in offline settings. SQR employs slow feature representation learning to extract dynamic information from state trajectories, promoting the stability and robustness of the state representations. Additionally, a softmax operator is incorporated into the Q-value update process to smooth Q-value estimation, reducing overestimation and improving policy optimization. Finally, we apply our approach to locomotion and navigation tasks and establish a comprehensive experimental analysis framework. Empirical results demonstrate that SQR outperforms state-of-the-art offline RL baselines, achieving performance improvements ranging from 2.5% to 44.6% on locomotion tasks and 2.0% to 71.1% on navigation tasks. Moreover, it achieves the highest score on 7 out of 15 locomotion datasets and 4 out of 6 navigation datasets. Detailed experimental results confirm the stabilizing effect of slow feature learning and the effectiveness of the softmax regularization in mitigating Q-value overestimation, demonstrating the superiority of SQR in addressing key challenges in offline reinforcement learning.
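
A minimal sketch of softmax-smoothed bootstrapping, assuming the softmax operator is applied as a Boltzmann-weighted average over next-state Q values in place of the hard max; the temperature is an arbitrary illustrative choice, and the slow-feature state encoder is not shown.

```python
# Minimal sketch of softmax-smoothed Q-value bootstrapping (my reading of
# the regularization idea, not the authors' code): replace the hard max in
# the target with a temperature-weighted softmax average, which tempers
# overestimation from noisy Q estimates. `beta` is an assumed temperature.
import numpy as np

def softmax_value(q_next, beta=5.0):
    """Boltzmann-weighted average of next-state Q values (hard max as beta -> inf)."""
    z = q_next - q_next.max()                 # stabilize the exponentials
    w = np.exp(beta * z)
    return float(np.sum(w * q_next) / np.sum(w))

q_next = np.array([1.0, 1.2, 0.9, 1.1])       # noisy Q estimates at the next state
reward, gamma = 0.5, 0.99
hard_target = reward + gamma * q_next.max()
soft_target = reward + gamma * softmax_value(q_next)
print(hard_target, soft_target)               # the soft target sits below the hard max
```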

Clustering of state transitions in RL

Yasaman Saffari, Javad Salimi Sartakhti, A Graph-based State Representation Learning for episodic reinforcement learning in task-oriented dialogue systems, Engineering Applications of Artificial Intelligence, Volume 160, Part A, 2025, 10.1016/j.engappai.2025.111793.

Recent research in dialogue state tracking has made significant progress in tracking user goals using pretrained language models and context-driven approaches. However, existing work has primarily focused on contextual representations, often overlooking the structural complexity and topological properties of state transitions in episodic reinforcement learning tasks. In this study, we introduce a cutting-edge, dual-perspective state representation approach that provides a dynamic and inductive method for topological state representation learning in episodic reinforcement learning within task-oriented dialogue systems. The proposed model extracts inherent topological information from state transitions in the Markov Decision Process graph by employing a modified clustering technique to address the limitations of transductive graph representation learning. It inductively captures structural relationships and enables generalization to unseen states. Another key innovation of this approach is the incorporation of dynamic graph representation learning with task-specific rewards using Temporal Difference error. This captures topological features of state transitions, allowing the system to adapt to evolving goals and enhance decision-making in task-oriented dialogue systems. Experiments, including ablation studies, comparisons with existing approaches, and interpretability analysis, reveal that the proposed model significantly outperforms traditional contextual state representations, improving task success rates by 9%–13% across multiple domains. It also surpasses state-of-the-art Q-network-based methods, enhancing adaptability and decision-making in domains such as movie-ticket booking, restaurant reservations, and taxi ordering.
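
A minimal sketch, under my own simplifying assumptions, of the general recipe the abstract points at: build a transition graph from logged episodes, describe each state by its outgoing-transition profile, and cluster those profiles so structurally similar (including unseen) states can share a representation. The toy dialogue states and the plain k-means step are illustrative, not the paper's modified clustering technique.

```python
# Minimal sketch (not the paper's model): build a state-transition graph
# from logged dialogue episodes, turn each state into a structural feature
# (its row-normalized outgoing-transition profile), and cluster the profiles.
import numpy as np
from sklearn.cluster import KMeans

transitions = [  # (state, next_state) pairs collected from episodes
    ("greet", "ask_slot"), ("greet", "ask_slot"), ("ask_slot", "confirm"),
    ("ask_slot", "confirm"), ("confirm", "book"), ("chitchat", "ask_slot"),
]
states = sorted({s for pair in transitions for s in pair})
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for s, s_next in transitions:
    counts[idx[s], idx[s_next]] += 1.0
profiles = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)  # row-normalize

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(dict(zip(states, labels)))  # states with similar transition structure share a cluster
```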

On the abstraction of actions

Bita Banihashemi, Giuseppe De Giacomo, Yves Lespérance, Abstracting situation calculus action theories, Artificial Intelligence, Volume 348, 2025, 10.1016/j.artint.2025.104407.

We develop a general framework for agent abstraction based on the situation calculus and the ConGolog agent programming language. We assume that we have a high-level specification and a low-level specification of the agent, both represented as basic action theories. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level fluent can be translated into a low-level formula. We define a notion of sound abstraction between such action theories in terms of the existence of a suitable bisimulation between their respective models. Sound abstractions have many useful properties that ensure that we can reason about the agent’s actions (e.g., executability, projection, and planning) at the abstract level, and refine and concretely execute them at the low level. We also characterize the notion of complete abstraction where all actions (including exogenous ones) that the high level thinks can happen can in fact occur at the low level. To facilitate verifying that one has a sound/complete abstraction relative to a mapping, we provide a set of necessary and sufficient conditions. Finally, we identify a set of basic action theory constraints that ensure that for any low-level action sequence, there is a unique high-level action sequence that it refines. This allows us to track/monitor what the low-level agent is doing and describe it in abstract terms (i.e., provide high-level explanations, for instance, to a client or manager).