D. Hu, L. Mo, J. Wu and C. Huang, "'Feariosity'-Guided Reinforcement Learning for Safe and Efficient Autonomous End-to-End Navigation," IEEE Robotics and Automation Letters, vol. 10, no. 8, pp. 7723–7730, Aug. 2025, doi: 10.1109/LRA.2025.3577523.
End-to-end navigation strategies using reinforcement learning (RL) can improve the adaptability and autonomy of autonomous ground vehicles (AGVs) in complex environments. However, RL still faces challenges in data efficiency and safety. Neuroscientific and psychological research shows that during exploration, the brain balances fear and curiosity, a process critical for survival and adaptation in dangerous environments. Inspired by this insight, we propose the "Feariosity" model, which integrates fear and curiosity models to simulate the complex psychological dynamics organisms experience during exploration. Based on this model, we develop a policy constraint method that evaluates potential hazards and applies the necessary safety constraints while encouraging exploration of unknown areas. Additionally, we design a new experience replay mechanism that quantifies the threat and novelty of data and adjusts their usage probability accordingly. Extensive experiments in both simulation and real-world scenarios demonstrate that the proposed method significantly improves data efficiency and asymptotic performance during training. Furthermore, it achieves higher success rates, driving efficiency, and robustness in deployment. These results also highlight the value of mimicking biological neural and psychological mechanisms for improving the safety and efficiency of RL.
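The abstract's replay mechanism weights each stored transition by its quantified threat and novelty. A minimal sketch of that idea is below; the class name, the additive priority, and the exponent `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import random

class FeariosityReplayBuffer:
    """Hypothetical sketch: transitions carry a threat score ("fear") and a
    novelty score ("curiosity"), which together bias sampling probability.
    The scoring and weighting here are assumed for illustration."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha     # sharpness of the priority weighting (assumed)
        self.buffer = []       # list of (transition, priority) pairs

    def add(self, transition, threat, novelty):
        # Priority rises with how dangerous and how unfamiliar the
        # experience was; either signal alone keeps it in rotation.
        priority = (threat + novelty) ** self.alpha
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)  # evict the oldest transition
        self.buffer.append((transition, priority))

    def sample(self, batch_size):
        # Usage probability proportional to priority, per the abstract.
        transitions, priorities = zip(*self.buffer)
        return random.choices(transitions, weights=priorities,
                              k=min(batch_size, len(self.buffer)))
```

A real implementation would normalize priorities, use a sum-tree for O(log n) sampling, and decay novelty as states are revisited; those details are beyond what the abstract specifies.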