Monthly Archives: October 2023

Hybridizing model-free and model-based learning in continuous RL, and a nice review of current research and benchmarks in robotics

Pinosky A, Abraham I, Broad A, Argall B, Murphey TD. Hybrid control for combining model-based and model-free reinforcement learning. The International Journal of Robotics Research. 2023;42(6):337-355. DOI: 10.1177/02783649221083331.

We develop an approach to improve the learning capabilities of robotic systems by combining learned predictive models with experience-based state-action policy mappings. Predictive models provide an understanding of the task and the dynamics, while experience-based (model-free) policy mappings encode favorable actions that override planned actions. We refer to our approach of systematically combining model-based and model-free learning methods as hybrid learning. Our approach efficiently learns motor skills and improves the performance of predictive models and experience-based policies. Moreover, our approach enables policies (both model-based and model-free) to be updated using any off-policy reinforcement learning method. We derive a deterministic method of hybrid learning by optimally switching between learning modalities. We adapt our method to a stochastic variation that relaxes some of the key assumptions in the original derivation. Our deterministic and stochastic variations are tested on a variety of robot control benchmark tasks in simulation as well as a hardware manipulation task. We extend our approach for use with imitation learning methods, where experience is provided through demonstrations, and we test the expanded capability with a real-world pick-and-place task. The results show that our method is capable of improving the performance and sample efficiency of learning motor skills in a variety of experimental domains.
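
As a rough illustration of the switching idea, here is a minimal Python sketch of choosing between a planned (model-based) action and an experience-based (model-free) action. The function names and the value-based switching criterion are assumptions for illustration, not the paper's actual derivation, which switches optimally between learning modalities.

```python
def hybrid_action(state, model_policy, free_policy, score, threshold=0.0):
    """Choose between a planned (model-based) action and a learned
    (model-free) action for the current state.

    score(state, action) is a learned value estimate used as the switching
    criterion -- a hypothetical stand-in for the paper's optimal switching
    rule between learning modalities.
    """
    planned = model_policy(state)   # e.g., from MPC over a learned model
    learned = free_policy(state)    # e.g., from an off-policy RL actor
    # Override the planned action only when experience predicts a clear gain.
    if score(state, learned) - score(state, planned) > threshold:
        return learned
    return planned

# Toy usage on a scalar regulation task: the model-free override kicks in
# whenever the crude one-step value proxy favors the learned action.
act = hybrid_action(
    state=0.1,
    model_policy=lambda s: -s,               # planned proportional correction
    free_policy=lambda s: -2.0 * s,          # learned, more aggressive action
    score=lambda s, a: -(s + 0.5 * a) ** 2,  # crude one-step value proxy
)
print(act)
```

A stochastic variation, in the spirit of the paper's relaxed derivation, could instead sample between the two actions with probabilities derived from the same score.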

How plans influence sensors

McFassel G, Shell DA. Reactivity and statefulness: Action-based sensors, plans, and necessary state. The International Journal of Robotics Research. 2023;42(6):385-411 DOI: 10.1177/02783649221078874.

Typically to a roboticist, a plan is the outcome of other work, a synthesized object that realizes ends defined by some problem; plans qua plans are seldom treated as first-class objects of study. Plans designate functionality: a plan can be viewed as defining a robot's behavior throughout its execution. This informs and reveals many other aspects of the robot's design, including: necessary sensors and action choices, history, state, task structure, and how to define progress. Interrogating sets of plans helps in comprehending the ways in which differing executions influence the interrelationships between these various aspects. Revisiting Erdmann's theory of action-based sensors, a classical approach for characterizing fundamental information requirements, we show how plans (in their role of designating behavior) influence sensing requirements. Using an algorithm for enumerating plans, we examine how some plans for which no action-based sensor exists can be transformed into sets of sensors through the identification and handling of features that preclude the existence of action-based sensors. We are not aware of those obstructing features having been previously identified. Action-based sensors may be treated as standalone reactive plans; we relate them to the set of all possible plans through a lattice structure. This lattice reveals a boundary between plans with action-based sensors and those without. Some plans, specifically those that are not reactive plans and require some notion of internal state, can never have associated action-based sensors. Even so, action-based sensors can serve as a framework to explore and interpret how such plans make use of state.
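
To make the reactive-versus-stateful distinction concrete, here is a toy Python sketch (my example, not the paper's): a reactive plan maps each observation directly to an action, which is the kind of behavior an action-based sensor can realize, while a stateful plan needs internal memory, so no single observation-to-action mapping suffices.

```python
# A reactive plan: the action depends only on the current observation,
# so it can be realized directly as an action-based sensor.
reactive_plan = {
    "wall_ahead": "turn_left",
    "clear":      "forward",
    "at_goal":    "stop",
}

def reactive_step(observation):
    return reactive_plan[observation]

# A stateful plan: the same observation can demand different actions
# depending on history, so no observation-to-action mapping alone
# (and hence no action-based sensor by itself) suffices.
def stateful_step(observation, landmark_count):
    if observation == "landmark":
        landmark_count += 1
    # e.g., stop only after passing the landmark twice
    action = "stop" if landmark_count >= 2 else "forward"
    return action, landmark_count

count = 0
for obs in ["clear", "landmark", "clear", "landmark"]:
    action, count = stateful_step(obs, count)
print(action)  # "stop": reaching it required counting, i.e., internal state
```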

POMDPs in robotics: a plug-in forward propagation module for networks such as QMDP-Net, which tackles Partially Observable Markov Decision Processes (POMDPs) whose transition, observation, and reward functions are initially unknown

Collins N, Kurniawati H. Locally connected interrelated network: A forward propagation primitive. The International Journal of Robotics Research. 2023;42(6):371-384. DOI: 10.1177/02783649221093092.

End-to-end learning for planning is a promising approach for finding good robot strategies in situations where the state transition, observation, and reward functions are initially unknown. Many neural network architectures for this approach have shown positive results. Across these networks, seemingly small components have been used repeatedly in different architectures, which means improving the efficiency of these components has great potential to improve the overall performance of the network. This paper aims to improve one such component: the forward propagation module. In particular, we propose the Locally Connected Interrelated Network (LCI-Net) – a novel type of locally connected layer with unshared but interrelated weights – to improve the efficiency of learning stochastic transition models for planning and of propagating information via the learned transition models. LCI-Net is a small differentiable neural network module that can be plugged into various existing architectures. For evaluation purposes, we apply LCI-Net to VIN and QMDP-Net. VIN is an end-to-end neural network for solving Markov Decision Processes (MDPs) whose transition and reward functions are initially unknown, while QMDP-Net is its counterpart for the Partially Observable Markov Decision Process (POMDP) whose transition, observation, and reward functions are initially unknown. Simulation tests on benchmark problems involving 2D and 3D navigation and grasping indicate promising results: replacing only the forward propagation module with LCI-Net improves VIN's and QMDP-Net's generalisation capability by more than 3× and 10×, respectively.
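
The key ingredient is a locally connected layer whose kernels are not shared across locations. Here is a hedged NumPy sketch of one such forward-propagation step over a 2D value map; it shows the unshared-weights idea only, and omits the interrelations among weights that LCI-Net actually learns.

```python
import numpy as np

def locally_connected_propagate(value, weights):
    """One forward-propagation step over a 2D value map.

    value:   (H, W) current value estimates.
    weights: (H, W, 3, 3) one transition kernel per cell. Unlike a standard
             convolution, the kernel is NOT shared across locations; LCI-Net
             additionally learns interrelations among these unshared weights,
             which this plain sketch omits.
    """
    H, W = value.shape
    padded = np.pad(value, 1, mode="edge")
    out = np.empty_like(value)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(weights[i, j] * padded[i:i + 3, j:j + 3])
    return out

# Toy usage: random per-cell kernels, normalized like transition probabilities.
rng = np.random.default_rng(0)
w = rng.random((5, 5, 3, 3))
w /= w.sum(axis=(2, 3), keepdims=True)
v = np.zeros((5, 5)); v[2, 2] = 1.0       # a single "rewarded" cell
print(locally_connected_propagate(v, w))  # value mass spreads to neighbors
```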

RL in manufacturing control

Vladimir Samsonov, Karim Ben Hicham, Tobias Meisen, Reinforcement Learning in Manufacturing Control: Baselines, challenges and ways forward, Engineering Applications of Artificial Intelligence, Volume 112, 2022. DOI: 10.1016/j.engappai.2022.104868.

The field of Neural Combinatorial Optimization (NCO) offers multiple learning-based approaches to solve well-known combinatorial optimization tasks, such as the Traveling Salesman and Knapsack problems, that are capable of competing with classical optimization approaches in terms of both solution quality and speed. This has brought the attention of the research community to Manufacturing Control (MC) tasks of a combinatorial nature. In this paper we outline the main components of MC tasks, select the most promising application fields, and analyze dedicated learning-based solutions available in the literature. We draw multiple parallels to the current state of the art in the NCO field and identify the main research gaps and directions on the perception, cognition, and interaction levels. Using a set of practical examples, we implement and benchmark common design patterns for single-agent Reinforcement Learning (RL) solutions. Along with testing existing solutions, we build on the ranked reward idea (Laterre et al., 2018) and offer a novel Multi-Instance Ranked Reward (m-R2) approach tailored to MC optimization tasks. It minimizes the reward shaping effort and defines a suitable training curriculum for more stable learning by separately tracking the agent's performance on every scheduling task and rewarding only policies contributing towards better scheduling solutions. We implement all solution design patterns as a set of interchangeable modules with a shared API, unified in a benchmarking framework focused on standardization of training and evaluation processes, reproducibility, and simplified experiment lifecycle management. In addition to the framework, we make available our discrete-event simulation of a job shop production.
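
The ranked-reward idea is easy to sketch: compare each episode's score against a percentile of recent scores and emit a binary reward. Below is a minimal Python sketch of the multi-instance variant, with one moving baseline per scheduling instance; the class name, parameters, and exact update are illustrative assumptions, not the paper's m-R2 formulation.

```python
from collections import defaultdict, deque
import numpy as np

class MultiInstanceRankedReward:
    """Sketch of a multi-instance ranked-reward signal.

    Keeps a separate buffer of recent episode scores per scheduling
    instance and rewards the agent only when it beats that instance's
    recent alpha-percentile -- so every instance defines its own moving
    performance bar, which doubles as a training curriculum.
    """

    def __init__(self, alpha=75, buffer_size=100):
        self.alpha = alpha
        self.buffers = defaultdict(lambda: deque(maxlen=buffer_size))

    def reward(self, instance_id, score):
        buf = self.buffers[instance_id]
        baseline = np.percentile(buf, self.alpha) if buf else score
        buf.append(score)
        return 1.0 if score > baseline else -1.0

rr = MultiInstanceRankedReward(alpha=75)
print(rr.reward("job_shop_A", score=10.0))  # first episode only sets the bar
print(rr.reward("job_shop_A", score=12.0))  # beats the percentile: +1
```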

Also:

Zhihao Liu, Quan Liu, Wenjun Xu, Lihui Wang, Zude Zhou, Robot learning towards smart robotic manufacturing: A review, Robotics and Computer-Integrated Manufacturing, Volume 77, 2022, 102360. DOI: 10.1016/j.rcim.2022.102360.

Dealing with continuous spaces in Q-learning by maintaining several spaces, each one corresponding to a particular time-step

João Pedro Araújo, Mário A.T. Figueiredo, Miguel Ayala Botto, Control with adaptive Q-learning: A comparison for two classical control problems, Engineering Applications of Artificial Intelligence, Volume 112, 2022. DOI: 10.1016/j.engappai.2022.104797.

This paper evaluates adaptive Q-learning (AQL) and single-partition adaptive Q-learning (SPAQL), two algorithms for efficient model-free episodic reinforcement learning (RL), on two classical control problems (Pendulum and CartPole). AQL adaptively partitions the state–action space of a Markov decision process (MDP) while learning the control policy, i.e., the mapping from states to actions. The main difference between AQL and SPAQL is that the latter learns time-invariant policies, where the mapping from states to actions does not depend explicitly on the time step. This paper also proposes SPAQL with terminal state (SPAQL-TS), an improved version of SPAQL tailored to the design of regulators for control problems. The time-invariant policies are shown to result in better performance than the time-variant ones in both problems studied. These algorithms are particularly suited to RL problems where the action space is finite, as is the case in the CartPole problem. SPAQL-TS solves the OpenAI Gym CartPole problem while also displaying higher sample efficiency than trust region policy optimization (TRPO), a standard RL algorithm for solving control tasks. Moreover, the policies learned by SPAQL are interpretable, while TRPO policies are typically encoded as neural networks and are therefore hard to interpret. Yielding interpretable policies while remaining sample-efficient is the major advantage of SPAQL. The code for the experiments is available at https://github.com/jaraujo98/SinglePartitionAdaptiveQLearning.
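
To convey the flavor of adaptive partitioning, here is a simplified Python sketch: start with one coarse cell over the continuous space and split a cell once it has been visited often, so the Q-table refines only where the agent actually goes. The splitting and exploration rules of AQL/SPAQL (including AQL's per-time-step partitions) are more involved; everything below is an illustrative assumption.

```python
import numpy as np

class AdaptivePartitionQ:
    """Toy adaptive partitioning for Q-learning over a continuous box."""

    def __init__(self, low, high, split_after=50):
        self.low, self.high = np.asarray(low, float), np.asarray(high, float)
        self.cells = [(self.low, self.high)]  # one initial cell over the box
        self.q = [0.0]
        self.visits = [0]
        self.split_after = split_after

    def cell_of(self, x):
        for k, (lo, hi) in enumerate(self.cells):
            if np.all(x >= lo) and np.all(x <= hi):
                return k
        return 0

    def update(self, x, target, lr=0.1):
        k = self.cell_of(x)
        self.q[k] += lr * (target - self.q[k])
        self.visits[k] += 1
        if self.visits[k] >= self.split_after:
            self._split(k)  # refine frequently visited regions

    def _split(self, k):
        lo, hi = self.cells[k]
        mid = (lo + hi) / 2
        d = int(np.argmax(hi - lo))   # halve the cell along its longest axis
        lo2, hi1 = lo.copy(), hi.copy()
        hi1[d] = mid[d]
        lo2[d] = mid[d]
        self.cells[k] = (lo, hi1)
        self.cells.append((lo2, hi))
        self.q.append(self.q[k])      # child inherits the parent's estimate
        self.visits[k] = 0
        self.visits.append(0)

apq = AdaptivePartitionQ(low=[-1.0, -1.0], high=[1.0, 1.0], split_after=5)
for _ in range(12):
    apq.update(np.array([0.3, 0.3]), target=1.0)
print(len(apq.cells))  # the visited region has been refined
```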

Modifications of Q-learning for better learning of robot navigation

Ee Soong Low, Pauline Ong, Cheng Yee Low, Rosli Omar, Modified Q-learning with distance metric and virtual target on path planning of mobile robot, Expert Systems with Applications, Volume 199, 2022. DOI: 10.1016/j.eswa.2022.117191.

Path planning is an essential element in mobile robot navigation. One of the popular path planners is Q-learning – a type of reinforcement learning that learns with little or no prior knowledge of the environment. Despite the successful implementations of Q-learning reported in numerous studies, its slow convergence, associated with the curse of dimensionality, may limit performance in practice. To solve this problem, an Improved Q-learning (IQL) with three modifications is introduced in this study. First, a distance metric is added to Q-learning to guide the agent towards the target. Second, the Q function of Q-learning is modified to overcome dead-ends more effectively. Lastly, the virtual target concept is introduced in Q-learning to bypass dead-ends. Experimental results across twenty types of navigation maps show that the proposed strategies accelerate the learning speed of IQL in comparison with standard Q-learning. Furthermore, a performance comparison with seven well-known path planners indicates its efficiency in terms of path smoothness, time taken, shortest distance, and total distance travelled.
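
The first modification lends itself to a short sketch: add a distance-based shaping term to the Q-learning update so that transitions moving the agent closer to the target get a bonus. This Python snippet captures that intuition; the exact form the paper uses is not reproduced here, so treat the shaping term as an assumption.

```python
import numpy as np

def shaped_q_update(Q, s, a, r, s_next, goal, dist,
                    alpha=0.1, gamma=0.95, w=0.5):
    """Q-learning update with an added distance-metric shaping term.

    The bonus is positive when the transition moves the agent closer to
    the target, nudging early exploration goal-ward. `dist` is any metric
    on states (e.g., Euclidean or Manhattan distance on grid cells).
    """
    shaping = w * (dist(s, goal) - dist(s_next, goal))  # > 0 if moving closer
    td_target = r + shaping + gamma * np.max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])

# Toy usage on a 1D corridor of 5 states with 2 actions (left/right).
Q = np.zeros((5, 2))
manhattan = lambda s, g: abs(s - g)
shaped_q_update(Q, s=1, a=1, r=0.0, s_next=2, goal=4, dist=manhattan)
print(Q[1, 1])  # positive despite zero reward: the step got closer to the goal
```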

A nice summary of SLAM in robotics with LiDAR and cameras

Chghaf, M., Rodriguez, S. & Ouardi, A.E. Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: A Survey. J Intell Robot Syst 105, 2 (2022). DOI: 10.1007/s10846-022-01582-8.

Simultaneous Localization and Mapping (SLAM) has been widely studied in recent years for autonomous vehicles. SLAM achieves its purpose by constructing a map of the unknown environment while keeping track of the vehicle's location within it. A major challenge, which is paramount during the design of SLAM systems, lies in the efficient use of onboard sensors to perceive the environment. The most widely applied algorithms are camera-based SLAM and LiDAR-based SLAM. Recent research focuses on the fusion of camera-based and LiDAR-based frameworks, which shows promising results. In this paper, we present a study of commonly used sensors and the fundamental theories behind SLAM algorithms. We then present the hardware architectures used to run these algorithms and, where available, the performance obtained. We also highlight state-of-the-art methodologies in each modality and in the multi-modal framework, followed by a brief comparison and future challenges. Additionally, we provide insights into possible fusion approaches that can increase the robustness and accuracy of modern SLAM algorithms, thereby enabling hardware-software co-design of embedded systems that takes into account algorithmic complexity, embedded architectures, and real-time constraints.

A nice summary of RL applied to robot navigation

N. Khlif, N. Khraief and S. Belghith, Reinforcement Learning for Mobile Robot Navigation: An overview. IEEE Information Technologies & Smart Industrial Systems (ITSIS), Paris, France, 2022, pp. 1-7. DOI: 10.1109/ITSIS56166.2022.10118362.

Research over several years shows steadily growing interest in autonomous mobile robots, driven by progress in related fields such as autonomous driving and UAVs (drones). Integrating intelligence into robotic systems requires solving various research problems, among which navigation is one of the most important for mobile robots. Solving the navigation problem means answering three questions: Where is the robot? Where is it going? How can it get there? These questions correspond to the basic components of navigation: localization, mapping, and path planning. This paper presents an overview of research on autonomous mobile robot navigation. It first gives a quick introduction to the various aspects of navigation, then discusses machine learning and reinforcement learning in mobile robotics, as well as some path planning techniques. Some future directions are also suggested.

Mixing rule-based and reinforcement learning navigation for robots

Y. Zhu, Z. Wang, C. Chen and D. Dong, Rule-Based Reinforcement Learning for Efficient Robot Navigation With Space Reduction, IEEE/ASME Transactions on Mechatronics, vol. 27, no. 2, pp. 846-857, April 2022. DOI: 10.1109/TMECH.2021.3072675.

For real-world deployments, it is critical to allow robots to navigate complex environments autonomously. Traditional methods usually maintain an internal map of the environment and then design several simple rules, in conjunction with a localization and planning approach, to navigate through the internal map. These approaches often involve a variety of assumptions and prior knowledge. In contrast, recent reinforcement learning (RL) methods can provide a model-free, self-learning mechanism as the robot interacts with an initially unknown environment, but are expensive to deploy in real-world scenarios due to inefficient exploration. In this article, we focus on efficient navigation with the RL technique and combine the advantages of these two kinds of methods into a rule-based RL (RuRL) algorithm that reduces sample complexity and time cost. First, we use the rule of wall-following to generate a closed-loop trajectory. Second, we employ a reduction rule to shrink the trajectory, which in turn effectively reduces the redundant exploration space. Moreover, we give a detailed theoretical guarantee that the optimal navigation path is still in the reduced space. Third, in the reduced space, we utilize the Pledge rule to guide the exploration strategy, accelerating the RL process in its early stage. Experiments conducted on real robot navigation problems in hex-grid environments demonstrate that RuRL can achieve improved navigation performance.
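
The space-reduction step can be illustrated with a small sketch: once the wall-following rule has produced a closed-loop trajectory, restrict exploration to the cells enclosed by it. The Python snippet below does this on a square grid via a flood fill from a known interior cell; it is a simplified assumption-laden stand-in for RuRL's reduction rule (which also shrinks the trajectory and carries an optimality guarantee), and uses a square grid rather than the paper's hex grid.

```python
from collections import deque

def reduced_exploration_mask(grid, boundary_loop, interior_seed):
    """Keep only cells enclosed by the closed wall-following loop.

    grid:          2D list, 0 = free cell, 1 = obstacle.
    boundary_loop: set of (row, col) cells on the closed trajectory
                   produced by the wall-following rule.
    interior_seed: any (row, col) known to lie inside the loop.
    Flood-fills from the seed without crossing the loop, so everything
    outside the loop is pruned from the RL exploration space.
    """
    rows, cols = len(grid), len(grid[0])
    keep = [[False] * cols for _ in range(rows)]
    frontier = deque([interior_seed])
    while frontier:
        r, c = frontier.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if keep[r][c] or grid[r][c] == 1 or (r, c) in boundary_loop:
            continue
        keep[r][c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            frontier.append((r + dr, c + dc))
    return keep

# Toy usage: a 5x5 free grid whose border is the wall-following loop.
grid = [[0] * 5 for _ in range(5)]
loop = {(0, c) for c in range(5)} | {(4, c) for c in range(5)} \
     | {(r, 0) for r in range(5)} | {(r, 4) for r in range(5)}
mask = reduced_exploration_mask(grid, loop, interior_seed=(2, 2))
print(sum(map(sum, mask)))  # 9 interior cells remain for exploration
```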

Clustering time series through their time-varying moments using a fuzzy approach

Roy Cerqueti, Pierpaolo D'Urso, Livia De Giovanni, Massimiliano Giacalone, Raffaele Mattera, Weighted score-driven fuzzy clustering of time series with a financial application, Expert Systems with Applications, Volume 198, 2022. DOI: 10.1016/j.eswa.2022.116752.

Time series data are commonly clustered based on their distributional characteristics. Among such characteristics, the moments play a central role because of their rich informative content. This paper aims to develop a novel approach that addresses still-open issues in moment-based clustering. First, we deal with a very general framework of time-varying moments rather than static quantities. Second, we include high-order moments in the clustering model. Third, we avoid implicitly weighting the considered moments equally by developing a clustering procedure that objectively computes the optimal weight for each moment. As a result, following a fuzzy approach, two weighted clustering models based on both unconditional and conditional moments are proposed. Since the Dynamic Conditional Score model is used to estimate both conditional and unconditional moments, the resulting framework is called weighted score-driven clustering. We apply the proposed method to financial time series as an empirical experiment.
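
A stripped-down Python sketch of the idea: describe each series by moment features, then run a fuzzy c-means whose distance weights each moment differently. Two loud simplifications relative to the paper: the moments here are static rather than score-driven time-varying estimates, and the weights are fixed rather than optimized jointly with the partition.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def moment_features(series_list):
    """Represent each time series by its first four (static) moments."""
    return np.array([[np.mean(x), np.var(x), skew(x), kurtosis(x)]
                     for x in series_list])

def weighted_fuzzy_cmeans(X, k, w, m=2.0, iters=100):
    """Fuzzy c-means with per-feature weights w in the distance."""
    n = len(X)
    U = np.random.dirichlet(np.ones(k), size=n)  # fuzzy memberships
    for _ in range(iters):
        # Weighted-mean centers from fuzzified memberships.
        centers = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
        # Weighted Euclidean distances from every series to every center.
        d = np.sqrt((((X[:, None, :] - centers[None]) ** 2) * w).sum(-1)) + 1e-12
        # Standard fuzzy c-means membership update.
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy usage: Gaussian series vs. heavier-tailed Student-t series.
rng = np.random.default_rng(1)
series = [rng.normal(0, 1, 500) for _ in range(5)] + \
         [rng.standard_t(3, 500) for _ in range(5)]
X = moment_features(series)
w = np.array([0.1, 0.2, 0.3, 0.4])  # fixed weights favoring higher moments
U, centers = weighted_fuzzy_cmeans(X, k=2, w=w)
print(U.round(2))  # fuzzy memberships of each series in the two clusters
```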