Tag Archives: Offline RL

Improvements in offline RL (learning from previously acquired datasets)

Lan Wu, Quan Liu, Renyang You, State slow feature softmax Q-value regularization for offline reinforcement learning, Engineering Applications of Artificial Intelligence, Volume 160, Part A, 2025, DOI: 10.1016/j.engappai.2025.111828.

Offline reinforcement learning is constrained by its reliance on pre-collected datasets, without the opportunity for further interaction with the environment. This restriction often results in distribution shifts, which can exacerbate Q-value overestimation and degrade policy performance. To address these issues, we propose a method called state slow feature softmax Q-value regularization (SQR), which enhances the stability and accuracy of Q-value estimation in offline settings. SQR employs slow feature representation learning to extract dynamic information from state trajectories, promoting the stability and robustness of the state representations. Additionally, a softmax operator is incorporated into the Q-value update process to smooth Q-value estimation, reducing overestimation and improving policy optimization. Finally, we apply our approach to locomotion and navigation tasks and establish a comprehensive experimental analysis framework. Empirical results demonstrate that SQR outperforms state-of-the-art offline RL baselines, achieving performance improvements ranging from 2.5% to 44.6% on locomotion tasks and 2.0% to 71.1% on navigation tasks. Moreover, it achieves the highest score on 7 out of 15 locomotion datasets and 4 out of 6 navigation datasets. Detailed experimental results confirm the stabilizing effect of slow feature learning and the effectiveness of the softmax regularization in mitigating Q-value overestimation, demonstrating the superiority of SQR in addressing key challenges in offline reinforcement learning.
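
The core of the softmax regularization is replacing the hard max in the Q-value backup with a smoothed (Boltzmann-weighted) expectation over next-state Q-values, which dampens the influence of spuriously large estimates. The snippet below is a minimal NumPy sketch of that idea only; the function name, the temperature value, and the exact backup form are illustrative assumptions, not the authors' SQR implementation (which also combines this with slow-feature state encoding, omitted here).

```python
import numpy as np

def softmax_backup(q_next, reward, gamma=0.99, beta=5.0):
    """Boltzmann-softmax Q-value target: a smoothed alternative to the hard max.

    q_next : array of Q(s', a') estimates over actions at the next state.
    beta   : inverse temperature; beta -> infinity recovers the usual max backup.
    """
    # Numerically stable softmax weights over next-state Q-values.
    z = beta * (q_next - q_next.max())
    w = np.exp(z) / np.exp(z).sum()
    # Expected Q under the softmax weighting instead of max_a' Q(s', a').
    soft_q = float(np.dot(w, q_next))
    return reward + gamma * soft_q

# Example: with a spuriously large Q-value (common under distribution shift),
# the softmax target is less aggressive than the hard-max target.
q_next = np.array([1.0, 1.2, 5.0])          # 5.0 is an overestimated outlier
print(softmax_backup(q_next, reward=0.5))   # smoothed target
print(0.5 + 0.99 * q_next.max())            # hard-max target
```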

Offline RL in robotics

L. Yao, B. Zhao, X. Xu, Z. Wang, P. K. Wong and Y. Hu, Efficient Incremental Offline Reinforcement Learning With Sparse Broad Critic Approximation, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 156-169, Jan. 2024, DOI: 10.1109/TSMC.2023.3305498.

Offline reinforcement learning (ORL) has been getting increasing attention in robot learning, benefiting from its ability to avoid hazardous exploration and learn policies directly from precollected samples. Approximate policy iteration (API) is one of the most commonly investigated ORL approaches in robotics, due to its linear representation of policies, which makes it fairly transparent in both theoretical and engineering analysis. One open problem of API is how to design efficient and effective basis functions. The broad learning system (BLS) has been extensively studied in supervised and unsupervised learning in various applications. However, few investigations have been conducted on ORL. In this article, a novel incremental ORL approach with sparse broad critic approximation (BORL) is proposed with the advantages of BLS, which approximates the critic function in a linear manner with randomly projected sparse and compact features and dynamically expands its broad structure. The BORL is the first extension of API with BLS in the field of robotics and ORL. The approximation ability and convergence performance of BORL are also analyzed. Comprehensive simulation studies are then conducted on two benchmarks, and the results demonstrate that the proposed BORL can obtain comparable or better performance than conventional API methods without laborious hyperparameter fine-tuning work. To further demonstrate the effectiveness of BORL in practical robotic applications, a variable force tracking problem in robotic ultrasound scanning (RUSS) is investigated, and a learning-based adaptive impedance control (LAIC) algorithm is proposed based on BORL. The experimental results demonstrate the advantages of LAIC compared with conventional force tracking methods.
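
To make the "linear critic over randomly projected sparse broad features" idea concrete, here is a rough, self-contained sketch: a fixed sparse random feature layer over (state, action) pairs, with the critic weights fit by ridge-regularized fitted Q evaluation on a pre-collected batch. All names, dimensions, and the sparsity level are illustrative assumptions; the actual BORL algorithm additionally expands its broad structure incrementally and wraps this in approximate policy iteration, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def broad_features(sa, W, b):
    """Fixed random 'broad' feature map over concatenated (state, action) inputs.

    The projection W is sparse and never trained, so the critic remains linear
    in its parameters.
    """
    return np.tanh(sa @ W + b)

def fit_linear_critic(states, actions, rewards, next_states, next_actions,
                      W, b, gamma=0.99, ridge=1e-3, iters=30):
    """Fitted Q evaluation with a linear critic over broad features.

    Iteratively regresses Q(s, a) onto r + gamma * Q(s', a') via ridge least
    squares, using only the offline batch (no environment interaction).
    """
    phi = broad_features(np.hstack([states, actions]), W, b)
    phi_next = broad_features(np.hstack([next_states, next_actions]), W, b)
    theta = np.zeros(phi.shape[1])
    A = phi.T @ phi + ridge * np.eye(phi.shape[1])
    for _ in range(iters):
        targets = rewards + gamma * (phi_next @ theta)
        theta = np.linalg.solve(A, phi.T @ targets)
    return theta

# Toy offline batch with 2-D states and 1-D actions (illustrative only).
n, sd, ad, feat = 500, 2, 1, 64
states = rng.normal(size=(n, sd)); actions = rng.normal(size=(n, ad))
rewards = rng.normal(size=n)
next_states = rng.normal(size=(n, sd)); next_actions = rng.normal(size=(n, ad))
# Sparse random projection: most weights zeroed, mimicking a sparse broad expansion.
W = rng.normal(size=(sd + ad, feat)) * (rng.random((sd + ad, feat)) < 0.3)
b = rng.normal(size=feat)

theta = fit_linear_critic(states, actions, rewards, next_states, next_actions, W, b)
q_value = broad_features(np.hstack([states[:1], actions[:1]]), W, b) @ theta
print(q_value)  # critic estimate for the first (state, action) pair
```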

See also: X. Wang, D. Hou, L. Huang and Y. Cheng, "Offline–Online Actor–Critic," IEEE Transactions on Artificial Intelligence, vol. 5, no. 1, pp. 61-69, Jan. 2024, DOI: 10.1109/TAI.2022.3225251.