Monthly Archives: April 2025


When to rely on memories versus sampling sensory information anew to guide behavior

Levi Kumle, Anna C. Nobre, Dejan Draschkow, Sensorimnemonic decisions: choosing memories versus sensory information, Trends in Cognitive Sciences, Volume 29, Issue 4, 2025, Pages 311-313, 10.1016/j.tics.2024.12.010.

We highlight a fundamental psychological function that is central to many of our interactions with the environment: when to rely on memories versus sampling sensory information anew to guide behavior. By operationalizing sensorimnemonic decisions, we aim to encourage and advance research into this pivotal process for understanding how memories serve adaptive cognition.

On the explainability of Deep RL and its improvement through the integration of human preferences

Georgios Angelopoulos, Luigi Mangiacapra, Alessandra Rossi, Claudia Di Napoli, Silvia Rossi, What is behind the curtain? Increasing transparency in reinforcement learning with human preferences and explanations, Engineering Applications of Artificial Intelligence, Volume 149, 2025, 10.1016/j.engappai.2025.110520.

In this work, we investigate whether the transparency of a robot’s behaviour is improved when human preferences on the actions the robot performs are taken into account during the learning process. For this purpose, a shielding mechanism called Preference Shielding is proposed and included in a reinforcement learning algorithm to account for human preferences. We also use the shielding to decide when to provide explanations of the robot’s actions. We carried out a within-subjects study involving 26 participants to evaluate the robot’s transparency. Results indicate that considering human preferences during learning improves legibility compared with providing only explanations. In addition, combining human preferences and explanations further amplifies transparency. Results also confirm that increased transparency leads to an increase in people’s perception of the robot’s safety, comfort, and reliability. These findings show the importance of transparency during learning and suggest a paradigm for robotic applications when a robot has to learn a task in the presence of or in collaboration with a human.
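The paper does not reproduce its algorithm here, but the core idea of a preference shield can be sketched: a filter that sits between an RL agent's policy and the environment, blocking actions a human has marked as dispreferred, and flagging the cases where the shield overrides the agent's greedy choice as natural moments to offer an explanation. The sketch below is illustrative only; all names (`PreferenceShield`, `select_action`, the table-based Q-values) are assumptions, not the authors' actual Preference Shielding implementation.

```python
import random

class PreferenceShield:
    """Blocks actions a human has marked as dispreferred in a given state.

    `dispreferred` maps state -> set of blocked actions (hypothetical format).
    """

    def __init__(self, dispreferred):
        self.dispreferred = dispreferred

    def filter(self, state, actions):
        allowed = [a for a in actions if a not in self.dispreferred.get(state, set())]
        # Never leave the agent with an empty action set.
        return allowed or list(actions)

def select_action(q, state, actions, shield, epsilon=0.1, rng=random):
    """Epsilon-greedy action selection restricted to shield-approved actions.

    Returns (action, overridden): `overridden` is True when the shield blocked
    the unshielded greedy action -- one plausible trigger for an explanation.
    """
    allowed = shield.filter(state, actions)
    greedy = max(actions, key=lambda a: q.get((state, a), 0.0))
    if rng.random() < epsilon:
        return rng.choice(allowed), greedy not in allowed
    best = max(allowed, key=lambda a: q.get((state, a), 0.0))
    return best, greedy not in allowed
```

Used this way, the `overridden` flag separates routine behavior from preference-driven deviations, so explanations can be reserved for the moments when the robot's action would otherwise be hard to read.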