Tag Archives: Markov Decision Processes

Extending legibility to MDPs: policies whose goal can be understood just by observing the robot's actions

Miguel Faria, Francisco S. Melo, Ana Paiva, “Guess what I’m doing”: Extending legibility to sequential decision tasks, Artificial Intelligence, Volume 330, 2024 DOI: 10.1016/j.artint.2024.104107.

In this paper we investigate the notion of legibility in sequential decision tasks under uncertainty. Previous works that extend legibility to scenarios beyond robot motion either focus on deterministic settings or are computationally too expensive. Our proposed approach, dubbed PoLMDP, is able to handle uncertainty while remaining computationally tractable. We establish the advantages of our approach over state-of-the-art approaches in several scenarios of varying complexity. We also showcase the use of our legible policies as demonstrations in machine teaching scenarios, establishing their superiority over the commonly used optimal-policy demonstrations when teaching new behaviours. Finally, we assess the legibility of our computed policies through a user study, where people are asked to infer the goal of a mobile robot following a legible policy by observing its actions.
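
As a rough illustration of the idea, and not the paper's exact PoLMDP construction (which folds the legibility score into the reward of a new MDP that is then solved to optimality), the sketch below greedily picks, in each state, the action that an observer would find most indicative of the true goal. The Boltzmann-rational observer model, the uniform goal prior, and the toy corridor are all assumptions of this example.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, iters=200):
    # P: (A, S, S) transition tensor; R: (S,) reward collected on arrival.
    A, S, _ = P.shape
    Q = np.zeros((A, S))
    for _ in range(iters):
        V = Q.max(axis=0)                   # greedy state values
        Q = P @ (R + gamma * V)             # Q[a, s] = E[R(s') + gamma V(s')]
    return Q

def boltzmann(Q, beta=5.0):
    # Observer model (assumption): actions chosen softmax-rationally per goal.
    z = beta * Q - (beta * Q).max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)  # pi[a, s]

def legible_actions(Qs, goal):
    # Per state, pick the action with the highest posterior P(goal | s, a),
    # assuming a uniform prior over the candidate goals.
    pis = [boltzmann(Q) for Q in Qs]
    posterior = pis[goal] / sum(pis)
    return posterior.argmax(axis=0)

# Tiny 1-D corridor: 5 cells, actions {0: left, 1: right}, goals at cells 0 and 4.
S, A = 5, 2
P = np.zeros((A, S, S))
for s in range(S):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, S - 1)] = 1.0
Qs = []
for g in (0, 4):
    R = np.zeros(S)
    R[g] = 1.0
    Qs.append(value_iteration(P, R))
print(legible_actions(Qs, goal=1))           # actions that signal "my goal is cell 4"
```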

Use of Markov Decision Processes to select tasks in mobile service robots

Lacerda, B., Faruq, F., Parker, D., & Hawes, N., Probabilistic planning with formal performance guarantees for mobile service robots, The International Journal of Robotics Research, DOI: 10.1177/0278364919856695.

We present a framework for mobile service robot task planning and execution, based on the use of probabilistic verification techniques for the generation of optimal policies with attached formal performance guarantees. Our approach is based on a Markov decision process model of the robot in its environment, encompassing a topological map where nodes represent relevant locations in the environment, and a range of tasks that can be executed in different locations. The navigation in the topological map is modeled stochastically for a specific time of day. This is done by using spatio-temporal models that provide, for a given time of day, the probability of successfully navigating between two topological nodes, and the expected time to do so. We then present a methodology to generate cost optimal policies for tasks specified in co-safe linear temporal logic. Our key contribution is to address scenarios in which the task may not be achievable with probability one. We introduce a task progression function and present an approach to generate policies that are formally guaranteed to, in decreasing order of priority: maximize the probability of finishing the task; maximize progress towards completion, if this is not possible; and minimize the expected time or cost required. We illustrate and evaluate our approach with a scalability evaluation in a simulated scenario, and report on its implementation in a robot performing service tasks in an office environment for long periods of time.
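
The core computational step, once the co-safe LTL formula has been compiled into an automaton and composed with the topological MDP, is a maximal-reachability computation on the product. Below is a minimal value-iteration sketch of just that first priority level (maximising the probability of completing the task); the product construction, the task progression function, and the expected time/cost tie-breaking from the paper are not reproduced, and the toy transition numbers are invented.

```python
import numpy as np

def max_reach_prob(P, accepting, iters=500):
    # P: (A, S, S) transitions of the product MDP (assumed already built
    # from the topological map and the co-safe LTL automaton).
    # accepting: boolean (S,) marking accepting product states.
    x = accepting.astype(float)       # reach probability is 1 inside the goal set
    for _ in range(iters):
        y = (P @ x).max(axis=0)       # best action per state
        x = np.where(accepting, 1.0, y)
    policy = (P @ x).argmax(axis=0)
    return x, policy

# Toy product MDP: 0 = start, 1 = waypoint, 2 = accepting, 3 = failure.
P = np.array([[[0.0, 1.0, 0.0, 0.0],   # action 0: safe detour via the waypoint
               [0.0, 0.0, 0.9, 0.1],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]],
              [[0.0, 0.0, 0.6, 0.4],   # action 1: direct but risky shortcut
               [0.0, 0.0, 0.6, 0.4],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]]])
probs, policy = max_reach_prob(P, np.array([False, False, True, False]))
print(probs, policy)  # detour wins: P(success) = 0.9 vs 0.6 for the shortcut
```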

Taking into account the influence of a recommender on the behaviour of the agent using it

Jonathan P. Epperlein, Sergiy Zhuk, Robert Shorten, Recovering Markov models from closed-loop data, Automatica, Volume 103, 2019, Pages 116-125, DOI: 10.1016/j.automatica.2019.01.022.

Situations in which recommender systems are used to augment decision making are becoming prevalent in many application domains. Almost always, these prediction tools (recommenders) are created with a view to affecting behavioural change. Clearly, successful applications actuating behavioural change affect the original model underpinning the predictor, leading to an inconsistency. This feedback loop is often not considered in standard prediction tools, which rely upon machine learning/statistical learning machinery. The objective of this paper is to develop tools that recover unbiased user models in the presence of recommenders. More specifically, we assume that we observe a time series which is a trajectory of a Markov chain R modulated by another Markov chain S, i.e. the transition matrix of R is unknown and depends on the current state of S. The transition matrix of the latter is also unknown. In other words, at each time instant, S selects a transition matrix for R within a given set which consists of known and unknown matrices. The state of S, in turn, depends on the current state of R, thus introducing a feedback loop. We propose an Expectation–Maximisation (EM) type algorithm, which estimates the transition matrices of S and R. Experimental results are given to demonstrate the efficacy of the approach.
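
To make the setup concrete, here is a rough EM sketch for a Markov-modulated Markov chain: the observed chain R switches its transition matrix according to a hidden chain S, and forward-backward over S supplies the soft counts for re-estimating both matrices. This simplifies the paper's model (no feedback from R to S, and no matrices assumed known a priori), so treat it only as a starting point, not the authors' algorithm.

```python
import numpy as np

def em_modulated_chain(r, n_r, n_s, iters=50, seed=0):
    # r: observed trajectory of the chain R (ints in [0, n_r)).
    # Hidden chain S (n_s states) selects which transition matrix R uses.
    # Simplification vs. the paper: S evolves on its own, without feedback from R.
    rng = np.random.default_rng(seed)
    T = len(r)
    A = rng.dirichlet(np.ones(n_r), size=(n_s, n_r))  # A[k, i, j] = P(r'=j | r=i, s=k)
    Q = rng.dirichlet(np.ones(n_s), size=n_s)         # Q[k, l]    = P(s'=l | s=k)
    pi = np.full(n_s, 1.0 / n_s)
    for _ in range(iters):
        B = A[:, r[:-1], r[1:]]                       # (n_s, T-1) "emission" likelihoods
        # E-step: forward-backward over the hidden modulating chain S.
        alpha = np.zeros((T - 1, n_s))
        beta = np.ones((T - 1, n_s))
        alpha[0] = pi * B[:, 0]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T - 1):
            alpha[t] = (alpha[t - 1] @ Q) * B[:, t]
            alpha[t] /= alpha[t].sum()
        for t in range(T - 3, -1, -1):
            beta[t] = Q @ (B[:, t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)     # P(s_t = k | data)
        xi = alpha[:-1, :, None] * Q[None] * (B.T[1:] * beta[1:])[:, None, :]
        xi /= xi.sum(axis=(1, 2), keepdims=True)      # P(s_t = k, s_{t+1} = l | data)
        # M-step: re-estimate both transition matrices from the soft counts.
        pi = gamma[0]
        Q = xi.sum(axis=0) + 1e-9
        Q /= Q.sum(axis=1, keepdims=True)
        A_new = np.full_like(A, 1e-9)
        for k in range(n_s):
            np.add.at(A_new[k], (r[:-1], r[1:]), gamma[:, k])
        A = A_new / A_new.sum(axis=2, keepdims=True)
    return A, Q

# Usage: given an observed trajectory r of a 3-state chain with 2 regimes,
# A_hat, Q_hat = em_modulated_chain(r, n_r=3, n_s=2)
```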

A survey on decision making for multiagent systems, including multirobot systems

Y. Rizk, M. Awad and E. W. Tunstel, Decision Making in Multiagent Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, DOI: 10.1109/TCDS.2018.2840971.

Intelligent transport systems, efficient electric grids, and sensor networks for data collection and analysis are some examples of the multiagent systems (MAS) that cooperate to achieve common goals. Decision making is an integral part of intelligent agents and MAS that will allow such systems to accomplish increasingly complex tasks. In this survey, we investigate state-of-the-art work within the past five years on cooperative MAS decision making models, including Markov decision processes, game theory, swarm intelligence, and graph theoretic models. We survey algorithms that result in optimal and suboptimal policies such as reinforcement learning, dynamic programming, evolutionary computing, and neural networks. We also discuss the application of these models to robotics, wireless sensor networks, cognitive radio networks, intelligent transport systems, and smart electric grids. In addition, we define key terms in the area and discuss remaining challenges that include incorporating big data advancements to decision making, developing autonomous, scalable and computationally efficient algorithms, tackling more complex tasks, and developing standardized evaluation metrics. While recent surveys have been published on this topic, we present a broader discussion of related models and applications.

Note to Practitioners: Future smart cities will rely on cooperative MAS that make decisions about what actions to perform that will lead to the completion of their tasks. Decision making models and algorithms have been developed and reported in the literature to generate such sequences of actions. These models are based on a wide variety of principles including human decision making and social animal behavior. In this paper, we survey existing decision making models and algorithms that generate optimal and suboptimal sequences of actions. We also discuss some of the remaining challenges faced by the research community before more effective MAS deployment can be achieved in this age of Internet of Things, robotics, and mobile devices. These challenges include developing more scalable and efficient algorithms, utilizing the abundant sensory data available, tackling more complex tasks, and developing evaluation standards for decision making.
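
As a flavour of the simplest end of the surveyed spectrum (reinforcement learning in a cooperative MAS), the sketch below runs two independent stateless Q-learners on a repeated cooperative matrix game. The payoff matrix, learning rate, and exploration schedule are invented for the illustration; nothing here is taken from the survey itself.

```python
import numpy as np

# Two independent epsilon-greedy Q-learners in a repeated cooperative game:
# both agents receive payoff[a1, a2], and only joint action (1, 1) pays off.
rng = np.random.default_rng(0)
payoff = np.array([[0.0, 0.0],
                   [0.0, 1.0]])
Q1, Q2 = np.zeros(2), np.zeros(2)
eps, lr = 0.2, 0.1
for _ in range(20_000):
    a1 = rng.integers(2) if rng.random() < eps else int(Q1.argmax())
    a2 = rng.integers(2) if rng.random() < eps else int(Q2.argmax())
    r = payoff[a1, a2]              # shared reward: fully cooperative game
    Q1[a1] += lr * (r - Q1[a1])     # stateless Q-learning updates
    Q2[a2] += lr * (r - Q2[a2])
print(Q1.argmax(), Q2.argmax())     # both typically coordinate on action 1
```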

Using regression to find optimal policies in MDPs, based on duality theory

H. Zhu, F. Ye and E. Zhou, Solving the Dual Problems of Dynamic Programs via Regression, IEEE Transactions on Automatic Control, vol. 63, no. 5, pp. 1340-1355, DOI: 10.1109/TAC.2017.2747405.

In recent years, information relaxation and duality in dynamic programs have been studied extensively, and the resulting primal-dual approach has become a powerful procedure for solving dynamic programs by providing lower-upper bounds on the optimal value function. Theoretically, with the so-called value-based optimal dual penalty, the optimal value function can be recovered exactly via strong duality. However, in practice, obtaining tight dual bounds usually requires good approximations of the optimal dual penalty, which can be time consuming if analytical computation is not possible and nested simulation has to be used to estimate the conditional expectations inside the dual penalty. In this paper, we develop a framework of a regression approach to approximating the optimal dual penalty in a non-nested manner, by exploring the structure of the function space consisting of all feasible dual penalties. The resulting approximations remain feasible dual penalties, and thus yield valid dual bounds on the optimal value function. We show that the proposed framework is computationally efficient, and the resulting dual penalties lead to numerically tractable dual problems. Finally, we apply the framework to a high-dimensional dynamic trading problem to demonstrate its effectiveness in solving the dual problems of complex dynamic programs.
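
For intuition about the primal-dual machinery, the toy below brackets the optimal value of a simple stopping problem: any feasible policy gives a lower bound, while the perfect-information relaxation with the (feasible but loose) zero dual penalty gives an upper bound. The paper's contribution, regressing an approximation of the value-based optimal penalty without nested simulation so as to tighten that upper bound, is not reproduced; the stopping problem and the threshold policy are invented for the example.

```python
import numpy as np

# Toy optimal stopping: stop a Gaussian random walk within T steps to
# collect its current value (stop at T at the latest).
rng = np.random.default_rng(1)
T, n_paths = 10, 100_000
paths = np.cumsum(rng.normal(size=(n_paths, T)), axis=1)

# Lower bound: a feasible heuristic policy, "stop when X_t >= 2, else at T".
hit = paths >= 2.0
first = np.where(hit.any(axis=1), hit.argmax(axis=1), T - 1)
lower = paths[np.arange(n_paths), first].mean()

# Upper bound: perfect-information relaxation with zero dual penalty,
# i.e. stop with full knowledge of the path (the pathwise maximum).
upper = paths.max(axis=1).mean()
print(f"{lower:.3f} <= V* <= {upper:.3f}")
```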