Author Archives: Juan-Antonio Fernández-Madrigal

Hierarchical optimization based on learning from mistakes

L. Zhang, B. Garg, P. Sridhara, R. Hosseini and P. Xie, Learning From Mistakes: A Multilevel Optimization Framework, IEEE Transactions on Artificial Intelligence, vol. 6, no. 6, pp. 1651-1663, June 2025, 10.1109/TAI.2025.3534151.

Bi-level optimization methods in machine learning have proven effective in subdomains such as neural architecture search and data reweighting. However, most of these methods do not factor in variations in learning difficulty, which limits their performance in real-world applications. To address this problem, we propose a framework that imitates the learning process of humans. In human learning, learners usually focus more on the topics where mistakes have been made in the past to deepen their understanding and master the knowledge. Inspired by this effective human learning technique, we propose a multilevel optimization framework, learning from mistakes (LFM), for machine learning. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner relearns based on the mistakes made before; and 3) the learner validates its learning. We develop an efficient algorithm to solve the optimization problem. We further apply our method to differentiable neural architecture search and data reweighting. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and other related datasets demonstrate the effectiveness of our approach. The code of LFM is available at: https://github.com/importZL/LFM.
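As a rough illustration of the three-stage formulation, here is a toy sketch in Python. The model (plain least-squares regression), the residual-based reweighting rule, and all names are illustrative assumptions of mine, not the authors' algorithm:

```python
import numpy as np

def lfm_sketch(X, y, X_val, y_val, lr=0.1, steps=200):
    """Toy three-stage 'learning from mistakes' loop on linear regression."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])

    # Stage 1: the learner learns on uniformly weighted data.
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)

    # Stage 2: the learner relearns, focusing on the examples where it
    # made the largest mistakes (residual-weighted objective).
    mistakes = (X @ w - y) ** 2 + 1e-12
    weights = mistakes / mistakes.sum()
    for _ in range(steps):
        w -= 0.5 * lr * X.T @ (weights * (X @ w - y))  # smaller step for stability

    # Stage 3: the learner validates on held-out data.
    val_mse = float(np.mean((X_val @ w - y_val) ** 2))
    return w, val_mse
```

In the actual framework the three stages are nested levels of one optimization problem solved jointly; the sequential loop above only conveys the roles of the stages.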

The brain is organized to minimize energy consumption while maximizing computation

Sharna D. Jamadar, Anna Behler, Hamish Deery, Michael Breakspear, The metabolic costs of cognition, Trends in Cognitive Sciences, Volume 29, Issue 6, 2025, Pages 541-555, 10.1016/j.tics.2024.11.010.

Cognition and behavior are emergent properties of brain systems that seek to maximize complex and adaptive behaviors while minimizing energy utilization. Different species reconcile this trade-off in different ways, but in humans the outcome is biased towards complex behaviors and hence relatively high energy use. However, even in energy-intensive brains, numerous parsimonious processes operate to optimize energy use. We review how this balance manifests in both homeostatic processes and task-associated cognition. We also consider the perturbations and disruptions of metabolism in neurocognitive diseases.

A possible explanation of the origin of the concept of number and some arithmetical operations based on language concepts

Stanislas Dehaene, Mathias Sablé-Meyer, Lorenzo Ciccione, Origins of numbers: a shared language-of-thought for arithmetic and geometry?, Trends in Cognitive Sciences, Volume 29, Issue 6, 2025, Pages 526-540, 10.1016/j.tics.2025.03.001.

Concepts of exact number are often thought to originate from counting and the successor function, or from a refinement of the approximate number system (ANS). We argue here for a third origin: a shared language-of-thought (LoT) for geometry and arithmetic that involves primitives of repetition, concatenation, and recursive embedding. Applied to sets, those primitives engender concepts of exact integers through recursive applications of additions and multiplications. Links between geometry and arithmetic also explain the emergence of higher-level notions (squares, primes, etc.). Under our hypothesis, understanding a number means having one or several mental expressions for it, and their minimal description length (MDL) determines how easily they can be mentally manipulated. Several historical, developmental, linguistic, and brain imaging phenomena provide preliminary support for our proposal.
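The claim that a number's ease of mental manipulation tracks the minimal description length of its expressions can be made concrete with a toy coding scheme. The primitives below (the unit 1, addition, and multiplication, each costing one symbol) are an assumed stand-in for the paper's language-of-thought, not its actual formalism:

```python
def mdl(n):
    """Minimal description length of integer n when expressions are built
    from the unit 1 (cost 1) by addition and multiplication (each combination
    costing 1 plus the costs of its parts). Fixed-point iteration over all
    reachable values up to n."""
    best = {1: 1}
    changed = True
    while changed:
        changed = False
        for a, ca in list(best.items()):
            for b, cb in list(best.items()):
                for v in (a + b, a * b):
                    cost = ca + cb + 1
                    if v <= n and cost < best.get(v, float("inf")):
                        best[v] = cost
                        changed = True
    return best.get(n)
```

Under this scheme 6 is cheaper as (1+1)*(1+1+1) than as six repeated additions, mirroring the paper's point that multiplicative structure compresses description length.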

Detecting novelties in the model of the world within MCTS

Bryan Loyall, Avi Pfeffer, James Niehaus, Michael Harradon, Paola Rizzo, Alex Gee, Joe Campolongo, Tyler Mayer, John Steigerwald, Coltrane: A domain-independent system for characterizing and planning in novel situations, Artificial Intelligence, Volume 345, 2025, 10.1016/j.artint.2025.104336.

AI systems operating in open-world environments must be able to adapt to impactful changes in the world, immediately when they occur, and be able to do this across the many types of changes that can occur. We are seeking to create methods to extend traditional AI systems so that they can (1) immediately recognize changes in how the world works that are impactful to task accomplishment; (2) rapidly characterize the nature of the change using the limited observations that are available when the change is first detected; (3) adapt to the change as well as feasible to accomplish the system’s tasks given the available observations; and (4) continue to improve the characterization and adaptation as additional observations are available. In this paper, we describe Coltrane, a domain-independent system for characterizing and planning in novel situations that uses only natural domain descriptions to generate its novelty-handling behavior, without any domain-specific anticipation of the novelty. Coltrane’s characterization method is based on probabilistic program synthesis of perturbations to programs expressed in a traditional programming language describing domain transition models. Its planning method is based on incorporating novel domain models in an MCTS search algorithm and on automatically adapting the heuristics used. Both a formal external evaluation and our own demonstrations show that Coltrane is capable of accurately characterizing interesting forms of novelty and of adapting its behavior to restore its performance to pre-novelty levels and even beyond.

Adding time series forecasting to the model of the system in decision making

Francesco Zito, Vincenzo Cutello, Mario Pavone, Data-driven forecasting and its role in enhanced decision-making, Engineering Applications of Artificial Intelligence, Volume 154, 2025, 10.1016/j.engappai.2025.110934.

Decision-making is a crucial process for any organization, since it involves the selection of the most effective action from a variety of options. In this context, data plays an important role in driving decisions. Analyzing data allows us to extract patterns that enable better decision-making for achieving specific goals. However, to make the right decisions to control the behavior of a system, it is necessary to take into account different factors, which can be challenging. Indeed, in dynamic systems, numerous variables change over time, and understanding the future state of these systems can be crucial for controlling the system. Predicting future states based on historical data is known as time series forecasting, which can be divided into univariate and multivariate forecasting, with the latter being particularly relevant due to its consideration of multiple variables. Deep Learning methods enhance decision-making by identifying patterns in complex datasets. As data complexity grows, techniques like Automated Machine Learning optimize model performance. The present study introduces a novel methodology that integrates multivariate time series forecasting into decision-making frameworks. We used Automated Machine Learning to develop a predictive model for forecasting future system states, aiding optimal decision-making. The study compares machine learning models based on performance metrics and computational cost across various domains, including weather monitoring, power consumption, hospital electricity monitoring, and exchange rates. We also analyzed the importance of hyperparameters in identifying key factors affecting model performance. The obtained results show that the Neural Architecture Search method can improve state predictor design by reducing computational resources and enhancing performance.
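Where a multivariate forecast plugs into such a pipeline can be sketched with a first-order vector autoregression fit by least squares. This is a deliberately simple stand-in for the AutoML-selected predictors in the study, and the function names are my own:

```python
import numpy as np

def fit_var1(series):
    """Fit a first-order vector autoregression x_{t+1} ~ A x_t + b by least
    squares over a (T, d) array of observed system states."""
    X, Y = series[:-1], series[1:]
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    coef, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return coef[:-1].T, coef[-1]                # A is (d, d), b is (d,)

def forecast(A, b, x):
    """One-step-ahead prediction of the next system state."""
    return A @ x + b
```

A decision-making layer would then score candidate actions against `forecast(...)` of the state they lead to, rather than against the current state alone.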

A possible explanation for the formation of concepts in the human brain

Luca D. Kolibius, Sheena A. Josselyn, Simon Hanslmayr, On the origin of memory neurons in the human hippocampus, Trends in Cognitive Sciences, Volume 29, Issue 5, 2025, Pages 421-433, 10.1016/j.tics.2025.01.013.

The hippocampus is essential for episodic memory, yet its coding mechanism remains debated. In humans, two main theories have been proposed: one suggests that concept neurons represent specific elements of an episode, while another posits a conjunctive code, where index neurons code the entire episode. Here, we integrate new findings of index neurons in humans and other animals with the concept-specific memory framework, proposing that concept neurons evolve from index neurons through overlapping memories. This process is supported by engram literature, which posits that neurons are allocated to a memory trace based on excitability and that reactivation induces excitability. By integrating these insights, we connect two historically disparate fields of neuroscience: engram research and human single neuron episodic memory research.

On the problem of choice overload for human cognition

Jessie C. Tanner, Claire T. Hemingway, Choice overload and its consequences for animal decision-making, Trends in Cognitive Sciences, Volume 29, Issue 5, 2025, Pages 403-406, 10.1016/j.tics.2025.01.003.

Animals routinely make decisions with important consequences for their survival and reproduction, but they frequently make suboptimal decisions. Here, we explore choice overload as one reason why animals may make suboptimal decisions, arguing that choice overload may have important ecological and evolutionary consequences, and propose future directions.

Improving the adaptation of RL to robots with different parameters through fuzzy ensembles

A. G. Haddad, M. B. Mohiuddin, I. Boiko and Y. Zweiri, Fuzzy Ensembles of Reinforcement Learning Policies for Systems With Variable Parameters, IEEE Robotics and Automation Letters, vol. 10, no. 6, pp. 5361-5368, June 2025, 10.1109/LRA.2025.3559833.

This paper presents a novel approach to improving the generalization capabilities of reinforcement learning (RL) agents for robotic systems with varying physical parameters. We propose the Fuzzy Ensemble of RL policies (FERL), which enhances performance in environments where system parameters differ from those encountered during training. The FERL method selectively fuses aligned policies, determining their collective decision based on fuzzy memberships tailored to the current parameters of the system. Unlike traditional centralized training approaches that rely on shared experiences for policy updates, FERL allows for independent agent training, facilitating efficient parallelization. The effectiveness of FERL is demonstrated through extensive experiments, including a real-world trajectory tracking application in a quadrotor slung-load system. Our method improves the success rates by up to 15.6% across various simulated systems with variable parameters compared to the existing benchmarks of domain randomization and robust adaptive ensemble adversary RL. In the real-world experiments, our method achieves a 30% reduction in 3D position RMSE compared to individual RL policies. The results underscore FERL's robustness and applicability to real robotic systems.
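The fusion step — weighting independently trained policies by fuzzy memberships over the current system parameter — might look roughly like this. The triangular membership shape, the width, and the nearest-policy fallback are illustrative assumptions of mine, not the FERL design:

```python
import numpy as np

def triangular(p, center, width):
    """Fuzzy membership of parameter p in a triangle centered on the value
    a member policy was trained for."""
    return max(0.0, 1.0 - abs(p - center) / width)

def fused_action(policies, centers, p, state, width=1.0):
    """Blend the actions of independently trained policies according to
    their fuzzy membership for the current system parameter p."""
    w = np.array([triangular(p, c, width) for c in centers])
    if w.sum() == 0.0:                       # p outside all supports:
        w[np.argmin([abs(p - c) for c in centers])] = 1.0   # use nearest policy
    w = w / w.sum()
    actions = np.array([pi(state) for pi in policies])
    return w @ actions
```

Because each member is trained independently at its own parameter value, training parallelizes trivially, which matches the paper's contrast with centralized ensemble training.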

Improving reward shaping in Deep RL to avoid expert-induced biases and boost learning efficiency

Jiawei Lin, Xuekai Wei, Weizhi Xian, Jielu Yan, Leong Hou U, Yong Feng, Zhaowei Shang, Mingliang Zhou, Continuous reinforcement learning via advantage value difference reward shaping: A proximal policy optimization perspective, Engineering Applications of Artificial Intelligence, Volume 151, 2025, 10.1016/j.engappai.2025.110676.

Deep reinforcement learning has shown great promise in industrial applications. However, these algorithms suffer from low learning efficiency because of sparse reward signals in continuous control tasks. Reward shaping addresses this issue by transforming sparse rewards into more informative signals, but some designs that rely on domain experts or heuristic rules can introduce cognitive biases, leading to suboptimal solutions. To overcome this challenge, this paper proposes the advantage value difference (AVD), a generalized potential-based end-to-end exploration reward function. The main contribution of this paper is to improve the agent’s exploration efficiency, accelerate the learning process, and prevent premature convergence to local optima. The method leverages the temporal difference error to estimate the potential of states and uses the advantage function to guide the learning process toward more effective strategies. In the context of engineering applications, this paper proves the superiority of AVD in continuous control tasks within the multi-joint dynamics with contact (MuJoCo) environment. Specifically, the proposed method achieves an average increase of 23.5% in episode rewards for the Hopper, Swimmer, and Humanoid tasks compared with the state-of-the-art approaches. The results demonstrate the significant improvement in learning efficiency achieved by AVD for industrial robotic systems.
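AVD is described as a generalized potential-based scheme, and the classic potential-based shaping rule it builds on is easy to state: add F = γΦ(s′) − Φ(s) to the sparse reward, which telescopes over an episode and so leaves the optimal policy unchanged. A minimal sketch, with a user-supplied potential `phi` standing in for the learned, advantage-based potential of the paper:

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Potential-based reward shaping: augment a sparse reward r with
    F = gamma * Phi(s') - Phi(s). Terminal states take potential 0 so the
    shaping terms telescope and preserve policy ordering."""
    phi_next = 0.0 if done else phi(s_next)
    return r + gamma * phi_next - phi(s)
```

With gamma = 1 the shaping terms along any trajectory sum to −Φ(s0), a constant per start state, which is why the dense signal cannot change which policy is optimal.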

Using Deep RL to model transitions and observations in EKF localization

Islem Kobbi, Abdelhak Benamirouche, Mohamed Tadjine, Enhancing pose estimation for mobile robots: A comparative analysis of deep reinforcement learning algorithms for adaptive Extended Kalman Filter-based estimation, Engineering Applications of Artificial Intelligence, Volume 150, 2025, 10.1016/j.engappai.2025.110548.

The Extended Kalman Filter (EKF) is a widely used algorithm for state estimation in control systems. However, its lack of adaptability limits its performance in dynamic and uncertain environments. To address this limitation, we used an approach that leverages Deep Reinforcement Learning (DRL) to achieve adaptive state estimation in the EKF. By integrating DRL techniques, we enable the state estimator to autonomously learn and update the values of the system dynamics and measurement noise covariance matrices, Q and R, based on observed data, which encode environmental changes or system failures. In this research, we compare the performance of four DRL algorithms, namely Deep Deterministic Policy Gradient (DDPG), Twin Delayed Deep Deterministic Policy Gradient (TD3), Soft Actor-Critic (SAC), and Proximal Policy Optimization (PPO), in optimizing the EKF’s adaptability. The experiments are conducted in both simulated and real-world settings using the Gazebo simulation environment and the Robot Operating System (ROS). The results demonstrate that the DRL-based adaptive state estimator outperforms traditional methods in terms of estimation accuracy and robustness. The comparative analysis provides insights into the strengths and limitations of different DRL agents, showing that the TD3 and the DDPG are the most effective algorithms, with TD3 achieving superior performance, resulting in a 91% improvement over the classic EKF, due to its delayed update mechanism that reduces training noise. This research highlights the potential of DRL to advance state estimation algorithms, offering valuable insights for future work in adaptive estimation techniques.
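The core loop — a Kalman filter whose noise covariances Q and R are supplied externally at each step (by the DRL agent, in the paper) — can be sketched for a scalar state, where the EKF reduces to the linear filter. The scalar model is an illustrative simplification of the robot pose filter:

```python
import numpy as np

def kf_step(x, P, z, Q, R, A=1.0, H=1.0):
    """One predict/update cycle of a scalar Kalman filter. Q and R are
    passed in each call, so an external policy can retune them online
    from observed innovations instead of keeping them fixed."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

A DRL agent in this scheme would observe innovation statistics and output the (Q, R) arguments for the next call; the filter itself is unchanged, which is what makes the adaptation mechanism modular.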