Author Archives: Juan-Antonio Fernández-Madrigal

Enhancing RRT with a more intelligent sampling of movements

Asmaa Loulou, Mustafa Unel, Hybrid attention-guided RRT*: Learning spatial sampling priors for accelerated path planning, Robotics and Autonomous Systems, Volume 198, 2026, 10.1016/j.robot.2026.105338.

Sampling-based planners such as RRT* are widely used for motion planning in high-dimensional and complex environments. However, their reliance on uniform sampling often leads to slow convergence and inefficiency, especially in scenarios with narrow passages or long-range dependencies. To address this, we propose HAGRRT*, a Hybrid Attention-Guided RRT* algorithm that learns to generate spatially informed sampling priors. Our method introduces a new neural architecture that fuses multi-scale convolutional features with a lightweight cross-attention mechanism, explicitly conditioned on the start and goal positions. These features are decoded via a DPT-inspired module to produce 2D probability maps that guide the sampling process. Additionally, we propose an obstacle-aware loss function that penalizes disconnected and infeasible predictions, which further encourages the network to focus on traversable, goal-directed regions. Extensive experiments on both structured (maze) and unstructured (forest) environments show that HAGRRT* achieves significantly faster convergence and improved path quality compared to both classical RRT* and recent deep-learning-guided variants. Our method consistently requires fewer iterations and samples and generalizes across varying dataset types. On structured scenarios, our method achieves an average reduction of 39.6% in the number of samples and of 24.4% in planning time compared to recent deep learning methods. On unstructured forest maps, our method reduces the number of samples by 71.5% and planning time by 81.7% compared to recent deep learning methods, and improves the success rate from 67% to 93%. These results highlight the robustness, efficiency, and generalization ability of our approach across a wide range of planning environments.
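As a rough illustration of the kind of non-uniform sampling such priors enable, the sketch below biases an RRT*-style sampler with a 2D probability map while keeping a small uniform component; the map here is a random placeholder standing in for the network's prediction, and the function names are illustrative, not the paper's API.

```python
# Minimal sketch (not the paper's code): biasing RRT*-style sampling with a
# learned 2D prior map. `prior_map` stands in for the network's predicted
# probability map; here it is just a random placeholder.
import numpy as np

rng = np.random.default_rng(0)

def sample_free_point(prior_map, epsilon=0.2):
    """Draw a sample: with prob. epsilon uniformly, otherwise from the prior map."""
    h, w = prior_map.shape
    if rng.random() < epsilon:
        # A uniform component keeps the planner probabilistically complete.
        return rng.integers(0, h), rng.integers(0, w)
    # Sample a cell index proportionally to the predicted probability mass.
    flat = prior_map.ravel()
    idx = rng.choice(flat.size, p=flat / flat.sum())
    return np.unravel_index(idx, prior_map.shape)

# Placeholder prior: in the paper this would come from the attention-guided
# network, conditioned on the start/goal positions and the occupancy map.
prior = rng.random((64, 64))
print(sample_free_point(prior))
```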

Uncovering time variations in decision making of agents that do not always respond with the same policy

Anne E. Urai, Structure uncovered: understanding temporal variability in perceptual decision-making, Trends in Cognitive Sciences, Volume 30, Issue 1, 2026, Pages 54-65, 10.1016/j.tics.2025.06.003.

Studies of perceptual decision-making typically present the same stimulus repeatedly over the course of an experimental session but ignore the order of these observations, assuming unrealistic stability of decision strategies over trials. However, even ‘stable,’ ‘steady-state,’ or ‘expert’ decision-making behavior features significant trial-to-trial variability that is richly structured in time. Structured trial-to-trial variability of various forms can be uncovered using latent variable models such as hidden Markov models and autoregressive models, revealing how unobservable internal states change over time. Capturing such temporal structure can avoid confounds in cognitive models, provide insights into inter- and intraindividual variability, and bridge the gap between neural and cognitive mechanisms of variability in perceptual decision-making.
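As a concrete (and deliberately minimal) example of the latent-variable modelling the review discusses, the sketch below scores a toy sequence of binary choices under a two-state Bernoulli hidden Markov model using the forward algorithm; the states, transition matrix, and emission probabilities are illustrative values, not fitted parameters from any study.

```python
# Minimal sketch of a 2-state hidden Markov model over binary choices,
# illustrating how latent "strategy" states can be scored against trial data.
import numpy as np

choices = np.array([1, 1, 0, 1, 0, 0, 0, 1, 1, 1])   # toy sequence of binary choices
pi = np.array([0.5, 0.5])                            # initial state probabilities
A = np.array([[0.95, 0.05],                          # sticky transitions between
              [0.05, 0.95]])                         # an "engaged" and a "biased" state
p_choice1 = np.array([0.8, 0.3])                     # P(choice = 1 | state)

def hmm_log_likelihood(obs, pi, A, p1):
    """Forward algorithm with per-step normalization (scaling)."""
    emit = lambda c: p1 if c == 1 else 1.0 - p1      # emission probs. for this choice
    alpha = pi * emit(obs[0])
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for c in obs[1:]:
        alpha = emit(c) * (alpha @ A)                # predict through transitions, then weight
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

print(hmm_log_likelihood(choices, pi, A, p_choice1))
```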

See also: the not-so-strong influence of time on some cognitive processes, such as speech processing (https://doi.org/10.1016/j.tics.2025.05.017)

Evidence in the natural world of the benefits of communication errors among collaborative agents

Bradley D. Ohlinger, Takao Sasaki, How miscommunication can improve collective performance in social insects, Trends in Cognitive Sciences, Volume 30, Issue 1, 2026, Pages 10-12, 10.1016/j.tics.2025.10.005.

Communication errors are typically viewed as detrimental, yet they can benefit collective foraging in social insects. Temnothorax ants provide a powerful model for studying how such errors arise during tandem running and how they might improve group performance under certain environmental conditions.

Deterministic guarantees (aka certification) for POMDPs

Moran Barenboim, Vadim Indelman, Online POMDP planning with anytime deterministic optimality guarantees, Artificial Intelligence, Volume 350, 2026, 10.1016/j.artint.2025.104442.

Decision-making under uncertainty is a critical aspect of many practical autonomous systems due to incomplete information. Partially Observable Markov Decision Processes (POMDPs) offer a mathematically principled framework for formulating decision-making problems under such conditions. However, finding an optimal solution for a POMDP is generally intractable. In recent years, there has been significant progress in scaling approximate solvers from small to moderately sized problems using online tree-search solvers. Often, such approximate solvers are limited to probabilistic or asymptotic guarantees towards the optimal solution. In this paper, we derive a deterministic relationship for discrete POMDPs between an approximated and the optimal solution. We show that, at any time, we can derive bounds that relate the existing solution to the optimal one. We show that our derivations provide an avenue for a new set of algorithms and can be attached to existing algorithms that have a certain structure to provide them with deterministic guarantees with marginal computational overhead. In return, not only do we certify the solution quality, but we demonstrate that making a decision based on the deterministic guarantee may result in superior performance compared to the original algorithm without the deterministic certification.
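For readers less familiar with the setting, the snippet below shows the standard discrete-POMDP belief update that online tree-search solvers apply along every branch; it is background machinery only, not the paper's deterministic bound derivation, and the transition/observation tables are toy values.

```python
# Standard discrete-POMDP belief update (Bayes filter), the basic operation that
# online tree-search solvers repeat along each branch. Background machinery only;
# T, Z and the belief below are toy values.
import numpy as np

T = np.array([[[0.9, 0.1],      # T[a, s, s'] = P(s' | s, a): one 2x2 matrix per action
               [0.2, 0.8]]])
Z = np.array([[0.7, 0.3],       # Z[s', o] = P(o | s'): 2 states x 2 observations
              [0.1, 0.9]])

def belief_update(b, a, o, T, Z):
    """Posterior over states after taking action a and observing o."""
    predicted = b @ T[a]                 # predict: sum_s b(s) * P(s' | s, a)
    unnormalized = predicted * Z[:, o]   # correct: weight by observation likelihood
    return unnormalized / unnormalized.sum()

b0 = np.array([0.5, 0.5])
print(belief_update(b0, a=0, o=1, T=T, Z=Z))
```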

A novel stochastic gradient optimization method that improves over common ones

Mengxiang Zhang, Shengjie Li, Inertial proximal stochastic gradient method with adaptive sampling for non-convex and non-smooth problems, Engineering Applications of Artificial Intelligence, Volume 163, Part 3, 2026, 10.1016/j.engappai.2025.113087.

Stochastic gradient methods with inertia have proven effective in convex optimization, yet most real-world tasks involve non-convex objectives. With the growing scale and dimensionality of modern datasets, non-convex and non-smooth regularization has become essential for improving generalization, controlling complexity, and mitigating overfitting. While widely applied in logistic regression, sparse recovery, medical imaging, and sparse neural networks, such formulations remain challenging due to the high cost of exact gradients, the sensitivity of stochastic gradients to sample size, and convergence difficulties caused by noise and non-smooth non-convexity. We propose a stochastic algorithm that addresses these issues by introducing an adaptive sampling strategy to balance stochastic gradient noise and efficiency, incorporating inertia for acceleration, and coupling the step-size update rule with both the sample size and the inertia. We avoid the need for exact function-value computations required by traditional inertial methods in non-convex and non-smooth problems, as well as the costly full-gradient evaluations or substantial memory usage typically associated with variance-reduction techniques. To our knowledge, this is the first stochastic method with adaptive sampling and inertia that guarantees convergence in non-convex and non-smooth settings, attaining O(1/K) rates to critical points under mild variance conditions, while achieving accelerated O(1/K²) convergence in convex optimization. Experiments on logistic regression and neural networks validate its efficiency and provide practical guidance for selecting sample sizes and step sizes.
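A generic sketch of the ingredients named in the abstract, under the assumption of an L1-regularized least-squares objective: a stochastic gradient step with heavy-ball inertia, a soft-thresholding proximal step, and a mini-batch that grows with the iteration counter as a crude stand-in for adaptive sampling. The specific schedules are illustrative choices, not the authors' update rules.

```python
# Generic sketch of an inertial proximal stochastic gradient step with a growing
# mini-batch. The L1 prox (soft-thresholding), the inertia coefficient, and the
# batch-growth rule are standard illustrative choices, not the paper's exact rules.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)   # toy least-squares data
lam, step, beta = 0.05, 1e-3, 0.5                       # L1 weight, step size, inertia

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
x_prev = x.copy()
for k in range(1, 201):
    batch = rng.choice(n, size=min(n, 32 + 4 * k), replace=False)  # batch grows with k
    grad = A[batch].T @ (A[batch] @ x - y[batch]) / batch.size     # stochastic gradient
    z = x + beta * (x - x_prev)                                    # inertial extrapolation
    x_prev = x
    x = soft_threshold(z - step * grad, step * lam)                # proximal (L1) step
print(np.count_nonzero(x), "nonzero coefficients")
```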

Analysis of using RL as a PID tuning method

Ufuk Demircioğlu, Halit Bakır, Reinforcement learning–driven proportional–integral–derivative controller tuning for mass–spring systems: Stability, performance, and hyperparameter analysis, Engineering Applications of Artificial Intelligence, Volume 162, Part D, 2025, 10.1016/j.engappai.2025.112692.

Artificial intelligence (AI) methods—particularly reinforcement learning (RL)—are used to tune Proportional–Integral–Derivative (PID) controller parameters for a mass–spring–damper system. Learning is performed with the Twin Delayed Deep Deterministic Policy Gradient (TD3) actor–critic algorithm, implemented in MATLAB (Matrix Laboratory) and Simulink (a simulation environment by MathWorks). The objective is to examine the effect of critical RL hyperparameters—including experience buffer size, mini-batch size, and target policy smoothing noise—on the quality of learned PID gains and control performance. The proposed method eliminates the need for manual gain tuning by enabling the RL agent to autonomously learn optimal control strategies through continuous interaction with the Simulink-modeled mass–spring–damper system, where the agent observes responses and applies control actions to optimize the PID gains. Results show that small buffer sizes and suboptimal batch configurations cause unstable behavior, while buffer sizes of 10^5 or larger and mini-batch sizes between 64 and 128 yield robust tracking. A target policy smoothing noise of 0.01 produced the best performance, while values between 0.05 and 0.1 also provided stable results. Comparative analysis with the classical Simulink PID tuner indicated that, for this linear system, the conventional tuner achieved slightly better transient performance, particularly in overshoot and settling time. Although the RL-based method showed adaptability and generated valid PID gains, it did not surpass the classical approach in this structured system. These findings highlight the promise of AI- and RL-driven control in uncertain, nonlinear, or variable dynamics, while underscoring the importance of hyperparameter optimization in realizing the potential of RL-based Proportional–Integral–Derivative tuning.
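The paper works in MATLAB/Simulink with TD3; the Python sketch below only illustrates the shape of the tuning loop: candidate PID gains are rolled out on a simulated mass-spring-damper and scored by negative integrated tracking error, with random search standing in for the RL agent so the example stays self-contained.

```python
# Illustrative sketch of the tuning loop described above, in Python rather than
# MATLAB/Simulink + TD3. Random search stands in for the RL agent purely to keep
# the example self-contained; the plant parameters are arbitrary.
import numpy as np

def simulate_pid(kp, ki, kd, m=1.0, c=0.5, k=2.0, setpoint=1.0, dt=1e-3, T=5.0):
    """Integrated absolute tracking error of a PID-controlled mass-spring-damper."""
    x = v = integ = 0.0
    prev_err = setpoint - x
    iae = 0.0
    for _ in range(int(T / dt)):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv        # PID control force
        a = (u - c * v - k * x) / m                   # m x'' + c x' + k x = u
        v += a * dt
        x += v * dt
        prev_err = err
        iae += abs(err) * dt
    return iae

rng = np.random.default_rng(0)
best = None
for _ in range(100):                                  # stand-in for the agent's exploration
    gains = rng.uniform([0, 0, 0], [50, 20, 10])      # candidate (kp, ki, kd)
    reward = -simulate_pid(*gains)                    # reward = negative tracking error
    if best is None or reward > best[0]:
        best = (reward, gains)
print("best gains (kp, ki, kd):", best[1])
```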

RL with both discrete and continuous actions

Chengcheng Yan, Shujie Chen, Jiawei Xu, Xuejie Wang, Zheng Peng, Hybrid Reinforcement Learning in parameterized action space via fluctuates constraint, Engineering Applications of Artificial Intelligence, Volume 162, Part C, 2025, 10.1016/j.engappai.2025.112499.

Parameterized actions in Reinforcement Learning (RL) are composed of discrete-continuous hybrid action parameters, which are widely employed in game scenarios. However, previous works have often concentrated on the network structure of RL algorithms to solve hybrid actions, neglecting the impact of fluctuations in action parameters on the agent's movement trajectory. Due to the coupling between discrete and continuous actions, instability in discrete actions influences the selection of the corresponding continuous parameters, causing the agent to deviate from the optimal path. In this paper, we propose CP-DQN, a parameterized RL approach based on parameter fluctuation restriction (PFR), to address this problem. Our method effectively mitigates value fluctuation in action parameters by constraining the action parameters between adjacent time steps. Additionally, we incorporate a supervision module to optimize the entire training process. To quantify the superiority of our approach in minimizing trajectory deviations for agents, we propose an indicator that measures the influence of parameter fluctuations on performance in hybrid action spaces. Our method is evaluated in three environments with hybrid action spaces, and the experiments demonstrate its superiority compared to existing approaches.
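One generic way to read the fluctuation constraint (a sketch of the idea, not CP-DQN itself) is to penalize the distance between the continuous parameter chosen now and the one used at the previous step when scoring the discrete options:

```python
# Rough sketch of a hybrid (discrete action, continuous parameter) choice whose
# continuous part is regularized against jumping between adjacent time steps.
# This is a generic interpretation of the fluctuation constraint, not CP-DQN.
import numpy as np

rng = np.random.default_rng(0)
num_discrete = 3

def select_hybrid_action(q_values, params, prev_params, fluct_weight=0.5):
    """Pick the discrete action whose score (Q minus fluctuation penalty) is highest."""
    penalty = fluct_weight * (params - prev_params) ** 2   # discourage parameter jumps
    scores = q_values - penalty
    k = int(np.argmax(scores))
    return k, params[k]

q_values = rng.normal(size=num_discrete)        # per-discrete-action values (toy)
params = rng.uniform(-1, 1, size=num_discrete)  # per-action continuous parameter proposals
prev = np.zeros(num_discrete)                   # parameters used at the previous step
action, param = select_hybrid_action(q_values, params, prev)
print("discrete action:", action, "continuous parameter:", round(float(param), 3))
```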

A variant of RL aimed at reducing bias of conventional Q-learning

Fanghui Huang, Wenqi Han, Xiang Li, Xinyang Deng, Wen Jiang, Reducing the estimation bias and variance in reinforcement learning via Maxmean and Aitken value iteration, Engineering Applications of Artificial Intelligence, Volume 162, Part C, 2025, 10.1016/j.engappai.2025.112502.

Value-based reinforcement learning methods suffer from overestimation bias, because of the max operator, resulting in suboptimal policies. Meanwhile, variance in value estimation causes instability of the networks. Many algorithms have been presented to address these issues, but they lack a theoretical analysis of the degree of estimation bias and of the trade-off between estimation bias and variance. Motivated by the above, in this paper we propose a novel method based on Maxmean and Aitken value iteration, named MMAVI. The Maxmean operation allows the average of multiple state–action values (Q values) to be used as the estimated target value to mitigate the bias and variance. The Aitken value iteration is used to update Q values and improve the convergence rate. Based on the proposed method, combined with Q-learning and deep Q-networks, we design two novel algorithms to adapt to different environments. To understand the effect of MMAVI, we analyze it both theoretically and empirically. In theory, we derive closed-form expressions for the reduction in bias and variance, and prove that the convergence rate of our proposed method is faster than that of traditional methods based on the Bellman equation. In addition, the convergence of our algorithms is proved in a tabular setting. Finally, we demonstrate that our proposed algorithms outperform state-of-the-art algorithms in several environments.
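The Maxmean target can be illustrated in a tabular setting: bootstrap from the maximum over actions of the mean of an ensemble of Q-tables, rather than the max of a single noisy table. The sketch below shows only that update; the Aitken value-iteration acceleration is not reproduced here, and all numbers are toy values.

```python
# Sketch of a "Maxmean"-style target in a tabular setting: the bootstrap target
# uses max over actions of the MEAN of an ensemble of Q-tables, which tempers the
# overestimation produced by taking the max of a single noisy Q-table.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_ensemble = 5, 3, 4
Q = rng.normal(scale=0.1, size=(n_ensemble, n_states, n_actions))  # ensemble of Q-tables
alpha, gamma = 0.1, 0.95

def maxmean_update(Q, s, a, r, s_next):
    """Update one randomly chosen ensemble member toward the Maxmean target."""
    target = r + gamma * np.max(Q.mean(axis=0)[s_next])   # max over actions of ensemble mean
    i = rng.integers(Q.shape[0])
    Q[i, s, a] += alpha * (target - Q[i, s, a])

maxmean_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[:, 0, 1])
```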

A quantitative demonstration, based on MDPs, of the increasing need for a world model (learnt or given) as the complexity of the task and the performance of the agent increase

Jonathan Richens, David Abel, Alexis Bellot, Tom Everitt, General agents contain world models, arXiv cs.AI, Sep. 2025, arXiv:2506.01622.

Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent’s policy, and that increasing the agent’s performance or the complexity of the goals it can achieve requires learning increasingly accurate world models. This has a number of consequences: from developing safe and general agents, to bounding agent capabilities in complex environments, and providing new algorithms for eliciting world models from agents.

Inclusion of LLMs in multi-task learning for generating rewards

Z. Lin, Y. Chen and Z. Liu, AutoSkill: Hierarchical Open-Ended Skill Acquisition for Long-Horizon Manipulation Tasks via Language-Modulated Rewards, IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 5, pp. 1141-1152, Oct. 2025, 10.1109/TCDS.2025.3551298.

A desirable property of generalist robots is the ability to both bootstrap diverse skills and solve new long-horizon tasks in open-ended environments without human intervention. Recent advancements have shown that large language models (LLMs) encapsulate vast-scale semantic knowledge about the world to enable long-horizon robot planning. However, they are typically restricted to reasoning over high-level instructions and lack world grounding, which makes it difficult for them to coordinately bootstrap and acquire new skills in unstructured environments. To this end, we propose AutoSkill, a hierarchical system that empowers a physical robot to automatically learn to cope with new long-horizon tasks by growing an open-ended skill library without hand-crafted rewards. AutoSkill consists of two key components: 1) in-context skill chain generation and new-skill bootstrapping guided by LLMs, which inform the robot of discrete and interpretable skill instructions for skill retrieval and augmentation within the skill library; and 2) a zero-shot language-modulated reward scheme that, in conjunction with a meta prompter, facilitates online acquisition of new skills via expert-free supervision aligned with the proposed skill directives. Extensive experiments conducted in both simulated and realistic environments demonstrate AutoSkill’s superiority over other LLM-based planners as well as hierarchical methods in expediting online learning for novel manipulation tasks.
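Purely as a flavour of what a "language-modulated" reward can mean in general (this is not AutoSkill's actual scheme), the sketch below scores a textual description of the current state against the skill instruction by embedding similarity; the embed_text function is a hashing placeholder so the snippet runs stand-alone, whereas a real system would use a pretrained (vision-)language encoder.

```python
# Purely illustrative sketch of a language-modulated reward: score how well a
# textual description of the current state matches the skill instruction via
# embedding similarity. `embed_text` is a hashing placeholder, not a real encoder;
# this is NOT AutoSkill's reward scheme, just the general flavour of the idea.
import numpy as np

def embed_text(text, dim=64):
    """Toy embedding: hash character trigrams into a fixed-size, unit-norm vector."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def language_reward(state_description, instruction):
    """Cosine similarity between state description and instruction embeddings."""
    return float(embed_text(state_description) @ embed_text(instruction))

print(language_reward("gripper holding red block above blue block",
                      "stack the red block on the blue block"))
```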