Tag Archives: Decision Making

Continuous POMDPs through belief state sparsification, applied to active SLAM

Elimelech K, Indelman V, Simplified decision making in the belief space using belief sparsification, The International Journal of Robotics Research, Volume 41, Issue 5, 2022, Pages 470-496, DOI: 10.1177/02783649221076381.

In this work, we introduce a new and efficient solution approach for the problem of decision making under uncertainty, which can be formulated as decision making in a belief space, over a possibly high-dimensional state space. Typically, to solve a decision problem, one should identify the optimal action from a set of candidates, according to some objective. We claim that one can often generate an analogous yet simplified decision problem that can be solved more efficiently. A wise simplification method can lead to the same action selection, or to one for which the maximal loss in optimality can be guaranteed. Furthermore, such simplification is separated from the state inference and does not compromise its accuracy, as the selected action is eventually applied to the original state. First, we present the concept for general decision problems and provide a theoretical framework for a coherent formulation of the approach. We then practically apply these ideas to decision problems in the belief space, which can be simplified by considering a sparse approximation of their initial belief. The scalable belief sparsification algorithm we provide yields solutions that are guaranteed to be consistent with the original problem. We demonstrate the benefits of the approach on a realistic active-SLAM problem, significantly reducing computation time with no loss in solution quality. This work is both fundamental and practical and holds numerous possible extensions.
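
To make the separation concrete, here is a minimal sketch (my own illustration, not the authors' algorithm) of scoring candidate actions on a sparsified Gaussian belief while the winning action is applied to the original, untouched belief. The thresholding rule, the log-determinant objective, and all function names are assumptions.

```python
import numpy as np

def sparsify(info_matrix, threshold=1e-3):
    # Zero out small off-diagonal entries of the information (precision) matrix.
    sparse = info_matrix.copy()
    mask = np.abs(sparse) < threshold * np.abs(np.diag(sparse)).max()
    np.fill_diagonal(mask, False)              # never touch the diagonal
    sparse[mask] = 0.0
    return sparse

def objective(info_matrix, action_jacobian):
    # Information-gain surrogate: log-determinant of the posterior information.
    posterior = info_matrix + action_jacobian.T @ action_jacobian
    return np.linalg.slogdet(posterior)[1]

def select_action(info_matrix, candidate_jacobians):
    simplified = sparsify(info_matrix)         # cheap surrogate decision problem
    scores = [objective(simplified, J) for J in candidate_jacobians]
    return int(np.argmax(scores))              # then act on the ORIGINAL belief
```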

Mixing logical planning with NNs for decision making

Zuo, G., Pan, T., Zhang, T. et al., SOAR Improved Artificial Neural Network for Multistep Decision-making Tasks, Cognitive Computation, Volume 13, 2021, Pages 612-625, DOI: 10.1007/s12559-020-09716-6.

Recently, artificial neural networks (ANNs) have been applied to various robot-related research areas due to their powerful spatial feature abstraction and temporal information prediction abilities. Decision making also plays a fundamental role in robotics, and how to endow ANNs with decision-making characteristics is a challenging research issue. ANNs are connectionist models, which means they are naturally weak in long-term planning, logical reasoning, and multistep decision-making. Considering that even a small refinement of the inner network structure of an ANN usually leads to exponentially growing data costs, an additional planning module seems necessary for the further improvement of ANNs, especially for learning from small data. In this paper, we propose a state, operator and result (SOAR) improved ANN (SANN) model, which takes advantage of both the long-term cognitive planning ability of SOAR and the powerful feature detection ability of ANNs. It mimics the cognitive mechanism of the human brain to improve the traditional ANN with an additional logical planning module. In addition, a data fusion module is constructed to convert the logical sequences produced by SOAR planning into a probability vector and combine it with the original data feature array. The proposed architecture is validated in two types of robot multistep decision-making experiments for a grasping task: a multiblock simulated experiment and a multicup experiment in a real scenario. The experimental results show the efficiency and high accuracy of our proposed architecture. The integration of SOAR and ANNs is a good compromise between logical planning with small data and probabilistic classification with big data. It also has strong potential for more complicated tasks that require robust classification, long-term planning, and fast learning. Potential applications include recognition of grasping order in multiobject environments and cooperative grasping by multiple agents.
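
The fusion step is easy to picture: the probability vector produced by the symbolic planner is concatenated with the network's feature array before the final classifier. A minimal sketch in PyTorch, with layer sizes and names that are hypothetical rather than taken from the paper:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, feature_dim, plan_dim, num_actions):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim + plan_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, features, plan_probs):
        # features: ANN feature array; plan_probs: probability vector derived
        # from the symbolic (SOAR-style) planner's logical action sequences.
        fused = torch.cat([features, plan_probs], dim=-1)
        return self.classifier(fused)
```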

Studying magician tricks to understand decision making and how to influence it

Alice Pailhès, Gustav Kuhn, Mind Control Tricks: Magicians’ Forcing and Free Will, Trends in Cognitive Sciences, Volume 25, Issue 5, 2021, Pages 338-341, DOI: 10.1016/j.tics.2021.02.001.

A new research program has recently emerged that investigates magicians’ mind control tricks, also called forces. This research highlights the psychological processes that underpin decision-making, illustrates the ease by which our decisions can be covertly influenced, and helps answer questions about our sense of free will and agency over choices.

Interesting alternative to the classical “maximize expected utility” rule for decision making

Etienne Koechlin, Human Decision-Making beyond the Rational Decision Theory, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.001.

Two recent studies (Farashahi et al. and Rouault et al.) provide compelling evidence refuting the Subjective Expected Utility (SEU) hypothesis as a ground model describing human decision-making. Together, these studies pave the way towards a new model that subsumes the notion of decision-making and adaptive behavior into a single account.
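
For reference, the classical rule under challenge is easy to state: pick the action that maximizes the probability-weighted sum of utilities. A toy example:

```python
# Subjective Expected Utility: choose argmax_a sum_i p_i * u(outcome_i).
def expected_utility(probs, utilities):
    return sum(p * u for p, u in zip(probs, utilities))

actions = {
    "safe":  ([1.0], [50.0]),            # certain payoff of 50
    "risky": ([0.5, 0.5], [120.0, 0.0])  # coin flip between 120 and 0
}
best = max(actions, key=lambda a: expected_utility(*actions[a]))
print(best)  # "risky": 60 > 50 under SEU, regardless of context or history
```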

On theories of human decision making and the role of affect

Ian D. Roberts, Cendri A. Hutcherson, Affect and Decision Making: Insights and Predictions from Computational Models, Trends in Cognitive Sciences, Volume 23, Issue 7, 2019, Pages 602-614, DOI: 10.1016/j.tics.2019.04.005.

In recent years interest in integrating the affective and decision sciences has skyrocketed. Immense progress has been made, but the complexities of each field, which can multiply when combined, present a significant obstacle. A carefully defined framework for integration is needed. The shift towards computational modeling in decision science provides a powerful basis and a path forward, but one whose synergistic potential will only be fully realized by drawing on the theoretical richness of the affective sciences. Reviewing research using a popular computational model of choice (the drift diffusion model), we discuss how mapping concepts to parameters reduces conceptual ambiguity and reveals novel hypotheses.
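
The drift diffusion model itself fits in a few lines: evidence accumulates noisily toward one of two bounds, and affective constructs can be mapped onto its parameters (drift rate, threshold, starting point). A minimal simulation sketch with arbitrarily chosen parameter values:

```python
import numpy as np

def simulate_ddm(v=0.3, a=1.0, z=0.5, dt=0.001, noise=1.0, rng=None):
    # v: drift rate, a: threshold separation, z: relative starting point.
    rng = np.random.default_rng() if rng is None else rng
    x, t = z * a, 0.0                      # start between the two bounds
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t         # (choice, reaction time)
```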

A new model of reinforcement learning based on the human brain that copes with continuous spaces through continuous rewards, with a short but nice review of the state of the art in RL applied to large, continuous spaces

Feifei Zhao, Yi Zeng, Guixiang Wang, Jun Bai, Bo Xu, A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations, Cognitive Computation, Volume 10, Issue 2, Pages 296-306, DOI: 10.1007/s12559-017-9511-3.

Decision making is a fundamental ability for intelligent agents (e.g., humanoid robots and unmanned aerial vehicles). During the decision making process, agents can improve their strategy for interacting with a dynamic environment through reinforcement learning. Many state-of-the-art reinforcement learning models, such as Q-learning and Actor-Critic algorithms, deal with a relatively small number of state-action pairs and prefer discrete states. In practice, however, states are often continuous and hard to discretize properly, so better autonomous decision making methods are needed. Inspired by the mechanism of decision making in the human brain, we propose a general computational model, named the prefrontal cortex-basal ganglia (PFC-BG) algorithm. The proposed model draws on the biological reinforcement learning pathway and its mechanisms from the following perspectives: (1) dopamine signals continuously update reward-relevant information for both the basal ganglia and working memory in the prefrontal cortex; and (2) contextual reward information maintained in working memory has a top-down biasing effect on reinforcement learning in the basal ganglia. The proposed model separates continuous states into smaller distinguishable states and introduces a continuous reward function for each state to obtain reward information at different times. To verify its performance, we apply the model to several UAV decision making experiments, such as avoiding obstacles and flying through windows and doors, and the experiments support its effectiveness. Compared with traditional Q-learning and Actor-Critic algorithms, the proposed model is more biologically inspired and makes decisions more accurately and quickly.
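
The two computational ingredients named in the abstract, separating continuous states into discrete ones and using a continuous reward function, can be illustrated with a toy tabular learner. The sketch below is a simplification of my own, not the published PFC-BG implementation; the working-memory bias term merely stands in for the top-down effect the paper describes.

```python
import numpy as np

N_BINS, N_ACTIONS = 20, 4
Q = np.zeros((N_BINS, N_BINS, N_ACTIONS))
wm_bias = np.zeros(N_ACTIONS)        # "working memory" of recent reward context

def discretize(state, low=-1.0, high=1.0):
    # Separate the continuous state into distinguishable grid cells.
    idx = ((np.asarray(state) - low) / (high - low) * (N_BINS - 1)).astype(int)
    return tuple(np.clip(idx, 0, N_BINS - 1))

def continuous_reward(state, goal):
    # Dense, continuous reward instead of a sparse 0/1 success signal.
    return -np.linalg.norm(np.asarray(state) - np.asarray(goal))

def step(state, action, next_state, goal, alpha=0.1, gamma=0.95):
    s, s2 = discretize(state), discretize(next_state)
    r = continuous_reward(next_state, goal)
    Q[s][action] += alpha * (r + gamma * Q[s2].max() - Q[s][action])
    wm_bias[action] += 0.01 * (r - wm_bias[action])   # slow contextual trace

def select_action(state, beta=1.0):
    prefs = Q[discretize(state)] + beta * wm_bias     # top-down biasing
    return int(np.argmax(prefs))
```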

Z-numbers: an extension of fuzzy variables for cognitive decision making, and the concept of cognitive information

Hong-gang Peng, Jian-qiang Wang, Outranking Decision-Making Method with Z-Number Cognitive Information, Cognitive Computation, Volume 10, Issue 5, Pages 752-768, DOI: 10.1007/s12559-018-9556-y.

The Z-number provides an adequate and reliable description of cognitive information. The nature of Z-numbers is complex, however, and important issues in Z-number computation remain to be addressed. This study focuses on developing a computationally simple method with Z-numbers to address multicriteria decision-making (MCDM) problems. Processing Z-numbers requires the direct computation of fuzzy and probabilistic uncertainties. We used an effective method to analyze the Z-number construct. Next, we proposed some outranking relations of Z-numbers and defined the dominance degree of discrete Z-numbers. Also, after analyzing the characteristics of elimination and choice translating reality III (ELECTRE III) and qualitative flexible multiple criteria method (QUALIFLEX), we developed an improved outranking method. To demonstrate this method, we provided an illustrative example concerning job-satisfaction evaluation. We further verified the validity of the method by a criteria test and comparative analysis. The results demonstrate that the method can be successfully applied to real-world decision-making problems, and it can identify more reasonable outcomes than previous methods. This study overcomes the high computational complexity in existing Z-number computation frameworks by exploring the pairwise comparison of Z-numbers. The method inherits the merits of the classical outranking method and considers the non-compensability of criteria. Therefore, it has remarkable potential to address practical decision-making problems involving Z-information.
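
A Z-number is a pair Z = (A, B), where the fuzzy number A restricts the value of a variable and B expresses the reliability of that restriction. The toy comparison below uses a crisp-expectation shortcut purely for illustration; the paper instead builds outranking relations and a dominance degree directly on discrete Z-numbers.

```python
def centroid(tri):
    a, b, c = tri                      # triangular fuzzy number (a, b, c)
    return (a + b + c) / 3.0

def z_score(z):
    A, B = z
    return centroid(A) * centroid(B)   # value weighted by its reliability

z1 = ((0.6, 0.7, 0.8), (0.7, 0.8, 0.9))    # "high, quite sure"
z2 = ((0.7, 0.8, 0.9), (0.3, 0.4, 0.5))    # "higher, but unreliable"
print(z_score(z1) > z_score(z2))           # True: reliability dominates here
```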

A survey on decision making for multiagent systems, including multirobot systems

Y. Rizk, M. Awad and E. W. Tunstel, Decision Making in Multiagent Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, Volume 10, Issue 3, Pages 514-529, DOI: 10.1109/TCDS.2018.2840971.

Intelligent transport systems, efficient electric grids, and sensor networks for data collection and analysis are some examples of the multiagent systems (MAS) that cooperate to achieve common goals. Decision making is an integral part of intelligent agents and MAS that will allow such systems to accomplish increasingly complex tasks. In this survey, we investigate state-of-the-art work within the past five years on cooperative MAS decision making models, including Markov decision processes, game theory, swarm intelligence, and graph theoretic models. We survey algorithms that result in optimal and suboptimal policies such as reinforcement learning, dynamic programming, evolutionary computing, and neural networks. We also discuss the application of these models to robotics, wireless sensor networks, cognitive radio networks, intelligent transport systems, and smart electric grids. In addition, we define key terms in the area and discuss remaining challenges that include incorporating big data advancements to decision making, developing autonomous, scalable and computationally efficient algorithms, tackling more complex tasks, and developing standardized evaluation metrics. While recent surveys have been published on this topic, we present a broader discussion of related models and applications.

Note to Practitioners: Future smart cities will rely on cooperative MAS that make decisions about what actions to perform that will lead to the completion of their tasks. Decision making models and algorithms have been developed and reported in the literature to generate such sequences of actions. These models are based on a wide variety of principles including human decision making and social animal behavior. In this paper, we survey existing decision making models and algorithms that generate optimal and suboptimal sequences of actions. We also discuss some of the remaining challenges faced by the research community before more effective MAS deployment can be achieved in this age of Internet of Things, robotics, and mobile devices. These challenges include developing more scalable and efficient algorithms, utilizing the abundant sensory data available, tackling more complex tasks, and developing evaluation standards for decision making.

A novel method of mathematically compressing the value function so that value iteration / policy iteration run in time polynomial in the state dimension

Alex Gorodetsky, Sertac Karaman, and Youssef Marzouk, High-dimensional stochastic optimal control using continuous tensor decompositions, The International Journal of Robotics Research, Volume 37, Issue 2-3, Pages 340-377, DOI: 10.1177/0278364917753994.

Motion planning and control problems are embedded and essential in almost all robotics applications. These problems are often formulated as stochastic optimal control problems and solved using dynamic programming algorithms. Unfortunately, most existing algorithms that guarantee convergence to optimal solutions suffer from the curse of dimensionality: the run time of the algorithm grows exponentially with the dimension of the state space of the system. We propose novel dynamic programming algorithms that alleviate the curse of dimensionality in problems that exhibit certain low-rank structure. The proposed algorithms are based on continuous tensor decompositions recently developed by the authors. Essentially, the algorithms represent high-dimensional functions (e.g. the value function) in a compressed format, and directly perform dynamic programming computations (e.g. value iteration, policy iteration) in this format. Under certain technical assumptions, the new algorithms guarantee convergence towards optimal solutions with arbitrary precision. Furthermore, the run times of the new algorithms scale polynomially with the state dimension and polynomially with the ranks of the value function. This approach realizes substantial computational savings in “compressible” problem instances, where value functions admit low-rank approximations. We demonstrate the new algorithms in a wide range of problems, including a simulated six-dimensional agile quadcopter maneuvering example and a seven-dimensional aircraft perching example. In some of these examples, we estimate computational savings of up to 10 orders of magnitude over standard value iteration algorithms. We further demonstrate the algorithms running in real time on board a quadcopter during a flight experiment under motion capture.
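
The compression idea can be illustrated in two dimensions with a truncated SVD standing in for the continuous tensor-train format the authors use: keep the value function in factored form and re-truncate after every Bellman backup. This is only an analogy to the paper's method; the transition callable and the fixed-rank truncation are assumptions.

```python
import numpy as np

def compress(V, rank):
    # Low-rank factorization of the value function on an n x n grid.
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]     # factors (n x r) and (r x n)

def decompress(L, R):
    return L @ R

def backup(L, R, reward, gamma, transition):
    # transition: callable mapping V to max over actions of E[V(next state)].
    V = decompress(L, R)                     # expand (cheap on a modest 2-D grid)
    V_new = reward + gamma * transition(V)   # pointwise Bellman backup
    return compress(V_new, L.shape[1])       # re-truncate to the target rank
```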

A model of how previous sensorimotor experiences shape subsequent decision making

Evelina Dineva & Gregor Schöner, How infants’ reaches reveal principles of sensorimotor decision making, Connection Science, Volume 30, Issue 1, Pages 53-80, DOI: 10.1080/09540091.2017.1405382.

In Piaget’s classical A-not-B task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision-making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision-making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional-probabilities analysis accounts quantitatively for how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among subsequent reaching decisions that the model explains: by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
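
Models of this kind are compact enough to sketch: a field of activation evolves under local excitation and global inhibition, and its suprathreshold output feeds a slow memory trace that biases later decisions, which is how one reach can influence the next. A minimal one-dimensional sketch in the spirit of Dynamic Field Theory, not the authors' fitted model:

```python
import numpy as np

N = 101                                    # field sites (e.g. reach directions)
x = np.linspace(-1.0, 1.0, N)
# interaction kernel: local excitation minus global inhibition
kernel = 2.0 * np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02) - 0.5

def simulate(inputs, dt=0.01, tau=1.0, tau_mem=50.0, h=-2.0):
    u, trace = np.full(N, h), np.zeros(N)  # activation field and memory trace
    for stim in inputs:                    # one input pattern per time step
        act = 1.0 / (1.0 + np.exp(-10.0 * u))      # sigmoidal output
        du = -u + h + stim + kernel @ act / N + trace
        u = u + dt * du / tau
        trace = trace + dt * (act - trace) / tau_mem  # decisions leave a trace
    return u, trace
```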