Monthly Archives: May 2025


A possible explanation for the formation of concepts in the human brain

Luca D. Kolibius, Sheena A. Josselyn, Simon Hanslmayr, On the origin of memory neurons in the human hippocampus, Trends in Cognitive Sciences, Volume 29, Issue 5, 2025, Pages 421-433, doi:10.1016/j.tics.2025.01.013.

The hippocampus is essential for episodic memory, yet its coding mechanism remains debated. In humans, two main theories have been proposed: one suggests that concept neurons represent specific elements of an episode, while another posits a conjunctive code, where index neurons code the entire episode. Here, we integrate new findings of index neurons in humans and other animals with the concept-specific memory framework, proposing that concept neurons evolve from index neurons through overlapping memories. This process is supported by engram literature, which posits that neurons are allocated to a memory trace based on excitability and that reactivation induces excitability. By integrating these insights, we connect two historically disparate fields of neuroscience: engram research and human single neuron episodic memory research.

On the problem of choice overload for human cognition

Jessie C. Tanner, Claire T. Hemingway, Choice overload and its consequences for animal decision-making, Trends in Cognitive Sciences, Volume 29, Issue 5, 2025, Pages 403-406, doi:10.1016/j.tics.2025.01.003.

Animals routinely make decisions with important consequences for their survival and reproduction, yet they frequently make suboptimal ones. Here, we explore choice overload as one reason why animals may make suboptimal decisions, argue that choice overload may have important ecological and evolutionary consequences, and propose future directions.

Improving the adaptation of RL to robots with different parameters through fuzzy ensembles

A. G. Haddad, M. B. Mohiuddin, I. Boiko and Y. Zweiri, Fuzzy Ensembles of Reinforcement Learning Policies for Systems With Variable Parameters, IEEE Robotics and Automation Letters, vol. 10, no. 6, pp. 5361-5368, June 2025, doi:10.1109/LRA.2025.3559833.

This paper presents a novel approach to improving the generalization capabilities of reinforcement learning (RL) agents for robotic systems with varying physical parameters. We propose the Fuzzy Ensemble of RL policies (FERL), which enhances performance in environments where system parameters differ from those encountered during training. The FERL method selectively fuses aligned policies, determining their collective decision from fuzzy memberships tailored to the current parameters of the system. Unlike traditional centralized training approaches that rely on shared experiences for policy updates, FERL allows for independent agent training, facilitating efficient parallelization. The effectiveness of FERL is demonstrated through extensive experiments, including a real-world trajectory tracking application in a quadrotor slung-load system. Our method improves success rates by up to 15.6% across various simulated systems with variable parameters compared to the existing benchmarks of domain randomization and robust adaptive ensemble adversary RL. In the real-world experiments, our method achieves a 30% reduction in 3D position RMSE compared to individual RL policies. The results underscore FERL's robustness and applicability to real robotic systems.
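The core fusion idea can be illustrated with a minimal sketch. This is not the paper's implementation: the membership shape (triangular), its width, and the fallback rule are all assumptions made here for illustration. Each policy is trained independently at a nominal parameter value, and at deployment its action is weighted by how strongly the current system parameter belongs to that policy's fuzzy region of competence:

```python
import numpy as np

def triangular_membership(x, center, width):
    """Degree (0..1) to which parameter x belongs to the fuzzy set at `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_ensemble_action(policies, centers, param, obs, width=1.0):
    """Fuse actions from independently trained policies.

    `policies[i]` is a callable obs -> action trained at nominal parameter
    `centers[i]`; `param` is the current (estimated) system parameter.
    The fused action is the membership-weighted average of the policies'
    actions (a simple weighted-average defuzzification).
    """
    weights = np.array([triangular_membership(param, c, width) for c in centers])
    if weights.sum() == 0.0:
        # Parameter falls outside every membership support: fall back
        # to the policy with the nearest training parameter.
        weights[int(np.argmin([abs(param - c) for c in centers]))] = 1.0
    weights = weights / weights.sum()
    actions = np.array([pi(obs) for pi in policies])
    return weights @ actions
```

With policies trained at parameter values 0, 1, and 2, a system at parameter 0.5 receives a blend of the first two policies' actions, which is the interpolation behavior the ensemble is after; because each policy trains alone, the approach parallelizes trivially.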