Tag Archives: Decision Making

Robot exploration through decision-making + Gaussian processes

Stephens, A., Budd, M., Staniaszek, M. et al. Planning under uncertainty for safe robot exploration using Gaussian process prediction, Auton Robot 48, 18 (2024) DOI: 10.1007/s10514-024-10172-6.

The exploration of new environments is a crucial challenge for mobile robots. This task becomes even more complex with the added requirement of ensuring safety. Here, safety refers to the robot staying in regions where the values of certain environmental conditions (such as terrain steepness or radiation levels) are within a predefined threshold. We consider two types of safe exploration problems. First, the robot has a map of its workspace, but the values of the environmental features relevant to safety are unknown beforehand and must be explored. Second, both the map and the environmental features are unknown, and the robot must build a map whilst remaining safe. Our proposed framework uses a Gaussian process to predict the value of the environmental features in unvisited regions. We then build a Markov decision process that integrates the Gaussian process predictions with the transition probabilities of the environmental model. The Markov decision process is then incorporated into an exploration algorithm that decides which new region of the environment to explore based on information value, predicted safety, and distance from the current position of the robot. We empirically evaluate the effectiveness of our framework through simulations and its application on a physical robot in an underground environment.
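
The core loop is easy to prototype. Below is a minimal sketch (not the authors' implementation) of the Gaussian-process half of the idea, assuming scikit-learn and SciPy are available; the function name choose_next_region and the score weights are made up for illustration. A GP trained on measurements at visited locations predicts the hazard value at candidate regions, and each candidate is scored by predictive uncertainty (information value), probability of satisfying the safety threshold, and distance. In the paper the safety estimate instead enters a Markov decision process; here a Gaussian CDF check stands in for that probabilistic step.

```python
# Hedged sketch of GP-based safe exploration scoring (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def choose_next_region(visited_xy, measured_hazard, candidates_xy, robot_xy,
                       hazard_limit=1.0, w_info=1.0, w_safe=2.0, w_dist=0.1):
    """Pick the candidate region with the best information/safety/distance trade-off."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(visited_xy, measured_hazard)

    mean, std = gp.predict(candidates_xy, return_std=True)
    p_safe = norm.cdf((hazard_limit - mean) / np.maximum(std, 1e-9))  # P(hazard < limit)
    info = std                                    # predictive uncertainty as information value
    dist = np.linalg.norm(candidates_xy - robot_xy, axis=1)

    score = w_info * info + w_safe * p_safe - w_dist * dist
    best = int(np.argmax(score))
    return candidates_xy[best], p_safe[best]
```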

Setting up goals, even unproductive or useless ones, can help in building cognition

Junyi Chu, Joshua B. Tenenbaum, Laura E. Schulz, In praise of folly: flexible goals and human cognition, Trends in Cognitive Sciences, Volume 28, Issue 7, 2024, Pages 628-642 DOI: 10.1016/j.tics.2024.03.006.

Humans often pursue idiosyncratic goals that appear remote from functional ends, including information gain. We suggest that this is valuable because goals (even prima facie foolish or unachievable ones) contain structured information that scaffolds thinking and planning. By evaluating hypotheses and plans with respect to their goals, humans can discover new ideas that go beyond prior knowledge and observable evidence. These hypotheses and plans can be transmitted independently of their original motivations, adapted across generations, and serve as an engine of cultural evolution. Here, we review recent empirical and computational research underlying goal generation and planning and discuss the ways that the flexibility of our motivational system supports cognitive gains for both individuals and societies.

Continuous POMDPs through belief state sparsification, applied to active SLAM

Elimelech K, Indelman V. Simplified decision making in the belief space using belief sparsification. The International Journal of Robotics Research. 2022;41(5):470-496 DOI: 10.1177/02783649221076381.

In this work, we introduce a new and efficient solution approach for the problem of decision making under uncertainty, which can be formulated as decision making in a belief space, over a possibly high-dimensional state space. Typically, to solve a decision problem, one should identify the optimal action from a set of candidates, according to some objective. We claim that one can often generate and solve an analogous yet simplified decision problem, which can be solved more efficiently. A wise simplification method can lead to the same action selection, or one for which the maximal loss in optimality can be guaranteed. Furthermore, such simplification is separated from the state inference and does not compromise its accuracy, as the selected action would finally be applied on the original state. First, we present the concept for general decision problems and provide a theoretical framework for a coherent formulation of the approach. We then practically apply these ideas to decision problems in the belief space, which can be simplified by considering a sparse approximation of their initial belief. The scalable belief sparsification algorithm we provide is able to yield solutions which are guaranteed to be consistent with the original problem. We demonstrate the benefits of the approach in the solution of a realistic active-SLAM problem and manage to significantly reduce computation time, with no loss in the quality of solution. This work is both fundamental and practical and holds numerous possible extensions.
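
To make the separation between simplified decision making and full-belief inference concrete, here is an illustrative sketch, not the paper's sparsification algorithm: candidate actions are ranked by expected information gain computed on a covariance restricted to a hand-picked subset of state variables, while the full covariance would still be used for state inference. The helper names (info_gain, select_action) and the log-determinant objective are assumptions for illustration.

```python
# Illustrative decision-making on a sparsified belief (assumed, simplified setup).
import numpy as np

def info_gain(cov, H, R):
    """Expected information gain of a linear-Gaussian measurement with Jacobian H, noise cov R."""
    S = H @ cov @ H.T + R
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_R = np.linalg.slogdet(R)
    return 0.5 * (logdet_S - logdet_R)

def select_action(cov_full, actions, keep_idx):
    """Rank candidate actions using only the kept (most relevant) variables of the belief."""
    cov_sparse = cov_full[np.ix_(keep_idx, keep_idx)]
    best, best_gain = None, -np.inf
    for name, H_full, R in actions:              # each action: (label, measurement Jacobian, noise cov)
        H_sparse = H_full[:, keep_idx]           # measurement model restricted to kept variables
        g = info_gain(cov_sparse, H_sparse, R)
        if g > best_gain:
            best, best_gain = name, g
    return best
```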

Mixing logical planning with NNs for decision making

Zuo, G., Pan, T., Zhang, T. et al., SOAR Improved Artificial Neural Network for Multistep Decision-making Tasks, Cogn Comput 13, 612–625 (2021) DOI: 10.1007/s12559-020-09716-6.

Recently, artificial neural networks (ANNs) have been applied to various robot-related research areas due to their powerful spatial feature abstraction and temporal information prediction abilities. Decision-making has also played a fundamental role in the research area of robotics. How to improve ANNs with the characteristics of decision-making is a challenging research issue. ANNs are connectionist models, which means they are naturally weak in long-term planning, logical reasoning, and multistep decision-making. Considering that a small refinement of the inner network structures of ANNs will usually lead to exponentially growing data costs, an additional planning module seems necessary for the further improvement of ANNs, especially for small data learning. In this paper, we propose a state operator and result (SOAR) improved ANN (SANN) model, which takes advantage of both the long-term cognitive planning ability of SOAR and the powerful feature detection ability of ANNs. It mimics the cognitive mechanism of the human brain to improve the traditional ANN with an additional logical planning module. In addition, a data fusion module is constructed to convert the logical sequences produced by SOAR planning into a probability vector and combine it with the original data feature array of the ANN. The proposed architecture is validated in two types of robot multistep decision-making experiments for a grasping task: a multiblock simulated experiment and a multicup experiment in a real scenario. The experimental results show the efficiency and high accuracy of our proposed architecture. The integration of SOAR and ANN is a good compromise between logical planning with small data and probabilistic classification with big data. It also has strong potential for more complicated tasks that require robust classification, long-term planning, and fast learning. Some potential applications include recognition of grasping order in a multiobject environment and cooperative grasping by multiple agents.
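
As a rough illustration of the fusion step only (the full SANN architecture and the SOAR interface are beyond a snippet), here is a hedged PyTorch sketch: a probability vector derived from the symbolic planner is concatenated with the network's feature vector before the final decision layer. The class name FusionHead and all dimensions and layer sizes are made up.

```python
# Hedged sketch of symbolic-planner / ANN feature fusion (illustrative only).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, feat_dim=128, plan_dim=6, n_actions=6):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + plan_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, features, plan_probs):
        # features:   (batch, feat_dim) from a perception backbone
        # plan_probs: (batch, plan_dim) probability vector derived from planner output
        fused = torch.cat([features, plan_probs], dim=-1)
        return self.fc(fused)            # action logits for the next decision step
```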

Studying magician tricks to understand decision making and how to influence it

Alice Pailhès, Gustav Kuhn, Mind Control Tricks: Magicians’ Forcing and Free Will, Trends in Cognitive Sciences, Volume 25, Issue 5, 2021, Pages 338-341 DOI: 10.1016/j.tics.2021.02.001.

A new research program has recently emerged that investigates magicians’ mind control tricks, also called forces. This research highlights the psychological processes that underpin decision-making, illustrates the ease by which our decisions can be covertly influenced, and helps answer questions about our sense of free will and agency over choices.

Interesting alternative to the classical “maximize expected utility” rule for decision making

Etienne Koechlin, Human Decision-Making beyond the Rational Decision Theory, Trends in Cognitive Sciences, Volume 24, Issue 1, January 2020, Pages 4-6, DOI: 10.1016/j.tics.2019.11.001.

Two recent studies (Farashahi et al. and Rouault et al.) provide compelling evidence refuting the Subjective Expected Utility (SEU) hypothesis as a ground model describing human decision-making. Together, these studies pave the way towards a new model that subsumes the notion of decision-making and adaptive behavior into a single account.

On theories of human decision making and the role of affect

Ian D. Roberts, Cendri A. Hutcherson, Affect and Decision Making: Insights and Predictions from Computational Models, Trends in Cognitive Sciences, Volume 23, Issue 7, 2019, Pages 602-614 DOI: 10.1016/j.tics.2019.04.005.

In recent years interest in integrating the affective and decision sciences has skyrocketed. Immense progress has been made, but the complexities of each field, which can multiply when combined, present a significant obstacle. A carefully defined framework for integration is needed. The shift towards computational modeling in decision science provides a powerful basis and a path forward, but one whose synergistic potential will only be fully realized by drawing on the theoretical richness of the affective sciences. Reviewing research using a popular computational model of choice (the drift diffusion model), we discuss how mapping concepts to parameters reduces conceptual ambiguity and reveals novel hypotheses.
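
Since the drift diffusion model is the review's central computational tool, a standard simulation of it helps ground the parameter mapping: drift rate, decision threshold, and starting bias are the knobs onto which affective constructs can be mapped (for example, urgency as a lowered threshold). The function name simulate_ddm and the parameter values below are purely illustrative.

```python
# Standard drift diffusion model simulation (illustrative parameters).
import numpy as np

def simulate_ddm(drift=0.3, threshold=1.0, bias=0.0, noise=1.0, dt=0.001,
                 max_t=5.0, rng=None):
    """Return (choice, reaction_time): +1/-1 for upper/lower boundary, None on timeout."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = bias, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return +1, t
        if x <= -threshold:
            return -1, t
    return None, t

# e.g. a state of urgency might be modelled as a lowered threshold:
# simulate_ddm(drift=0.3, threshold=0.6)
```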

A new model of reinforcement learning based on the human brain that copes with continuous spaces through continuous rewards, with a short but nice review of the state of the art in RL applied to large, continuous spaces

Feifei Zhao, Yi Zeng, Guixiang Wang, Jun Bai, Bo Xu, A Brain-Inspired Decision Making Model Based on Top-Down Biasing of Prefrontal Cortex to Basal Ganglia and Its Application in Autonomous UAV Explorations, Cognitive Computation, Volume 10, Issue 2, pp 296–306, DOI: 10.1007/s12559-017-9511-3.

Decision making is a fundamental ability for intelligent agents (e.g., humanoid robots and unmanned aerial vehicles). During the decision-making process, agents can improve their strategy for interacting with the dynamic environment through reinforcement learning. Many state-of-the-art reinforcement learning models, such as Q-learning and Actor-Critic algorithms, deal with a relatively small number of state-action pairs, and the states are preferably discrete. In practice, however, in many scenarios the states are continuous and hard to discretize properly. Better autonomous decision-making methods are needed to handle these problems. Inspired by the mechanism of decision making in the human brain, we propose a general computational model, named the prefrontal cortex-basal ganglia (PFC-BG) algorithm. The proposed model is inspired by the biological reinforcement learning pathway and its mechanisms from the following perspectives: (1) dopamine signals continuously update reward-relevant information for both the basal ganglia and working memory in the prefrontal cortex; (2) contextual reward information is maintained in working memory, which has a top-down biasing effect on reinforcement learning in the basal ganglia. The proposed model separates the continuous states into smaller distinguishable states and introduces a continuous reward function for each state to obtain reward information at different times. To verify the performance of our model, we apply it to several UAV decision-making experiments, such as avoiding obstacles and flying through windows and doors, and the experiments support the effectiveness of the model. Compared with traditional Q-learning and Actor-Critic algorithms, the proposed model is more biologically inspired, and it makes decisions more accurately and faster.
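
The two ingredients the abstract emphasises, separating continuous states into distinguishable discrete states and feeding back a continuous reward, can be sketched in a few lines. This is not the PFC-BG model, just a generic discretized Q-learner with a continuous (distance-based) reward; the class name DiscretizedQLearner and all bins, rates, and rewards are chosen for illustration.

```python
# Hedged sketch: discretized states + continuous reward signal (illustrative only).
import numpy as np

class DiscretizedQLearner:
    def __init__(self, bins_per_dim, n_actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.bins = bins_per_dim                      # list of bin-edge arrays, one per state dimension
        n_states = int(np.prod([len(b) + 1 for b in bins_per_dim]))
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state_index(self, s_continuous):
        idx = [np.digitize(v, b) for v, b in zip(s_continuous, self.bins)]
        return int(np.ravel_multi_index(idx, [len(b) + 1 for b in self.bins]))

    def act(self, s_continuous, rng):
        s = self.state_index(s_continuous)
        if rng.random() < self.eps:
            return int(rng.integers(self.Q.shape[1]))  # epsilon-greedy exploration
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, reward, s_next):
        i, j = self.state_index(s), self.state_index(s_next)
        td = reward + self.gamma * np.max(self.Q[j]) - self.Q[i, a]
        self.Q[i, a] += self.alpha * td

# continuous reward example for a UAV-style task:
# reward = -np.linalg.norm(uav_position - goal_position)
```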

Z-numbers: an extension of fuzzy variables for cognitive decision making, and the concept of cognitive information

Hong-gang Peng, Jian-qiang Wang, Outranking Decision-Making Method with Z-Number Cognitive Information, Cognitive Computation, Volume 10, Issue 5, pp 752–768, DOI: 10.1007/s12559-018-9556-y.

The Z-number provides an adequate and reliable description of cognitive information. The nature of Z-numbers is complex, however, and important issues in Z-number computation remain to be addressed. This study focuses on developing a computationally simple method with Z-numbers to address multicriteria decision-making (MCDM) problems. Processing Z-numbers requires the direct computation of fuzzy and probabilistic uncertainties. We used an effective method to analyze the Z-number construct. Next, we proposed some outranking relations of Z-numbers and defined the dominance degree of discrete Z-numbers. Also, after analyzing the characteristics of elimination and choice translating reality III (ELECTRE III) and qualitative flexible multiple criteria method (QUALIFLEX), we developed an improved outranking method. To demonstrate this method, we provided an illustrative example concerning job-satisfaction evaluation. We further verified the validity of the method by a criteria test and comparative analysis. The results demonstrate that the method can be successfully applied to real-world decision-making problems, and it can identify more reasonable outcomes than previous methods. This study overcomes the high computational complexity in existing Z-number computation frameworks by exploring the pairwise comparison of Z-numbers. The method inherits the merits of the classical outranking method and considers the non-compensability of criteria. Therefore, it has remarkable potential to address practical decision-making problems involving Z-information.
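
For readers unfamiliar with the construct, a Z-number is a pair (A, B): a fuzzy restriction A on a variable's value and a fuzzy measure B of how reliable that restriction is. The toy sketch below only makes the data structure concrete and compares two Z-numbers by a crude centroid-weighted score; it is not the outranking (ELECTRE III / QUALIFLEX style) method developed in the paper, and all class names and values are illustrative.

```python
# Toy Z-number representation and a crude pairwise comparison (illustrative only).
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    a: float  # lower bound
    b: float  # peak
    c: float  # upper bound

    def centroid(self) -> float:
        return (self.a + self.b + self.c) / 3.0

@dataclass
class ZNumber:
    A: TriangularFuzzy   # restriction on the variable's value
    B: TriangularFuzzy   # reliability of that restriction (on [0, 1])

    def crude_score(self) -> float:
        # weight the value centroid by the reliability centroid
        return self.A.centroid() * self.B.centroid()

# e.g. "satisfaction is high (around 8/10), with high confidence (around 0.9)"
z1 = ZNumber(TriangularFuzzy(7, 8, 9), TriangularFuzzy(0.8, 0.9, 1.0))
z2 = ZNumber(TriangularFuzzy(6, 7, 8), TriangularFuzzy(0.9, 1.0, 1.0))
print(z1.crude_score() > z2.crude_score())   # simple pairwise comparison
```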

A survey on decision making for multiagent systems, including multirobot systems

Y. Rizk, M. Awad and E. W. Tunstel, Decision Making in Multiagent Systems: A Survey, IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, DOI: 10.1109/TCDS.2018.2840971.

Intelligent transport systems, efficient electric grids, and sensor networks for data collection and analysis are some examples of the multiagent systems (MAS) that cooperate to achieve common goals. Decision making is an integral part of intelligent agents and MAS that will allow such systems to accomplish increasingly complex tasks. In this survey, we investigate state-of-the-art work within the past five years on cooperative MAS decision making models, including Markov decision processes, game theory, swarm intelligence, and graph theoretic models. We survey algorithms that result in optimal and suboptimal policies such as reinforcement learning, dynamic programming, evolutionary computing, and neural networks. We also discuss the application of these models to robotics, wireless sensor networks, cognitive radio networks, intelligent transport systems, and smart electric grids. In addition, we define key terms in the area and discuss remaining challenges that include incorporating big data advancements to decision making, developing autonomous, scalable and computationally efficient algorithms, tackling more complex tasks, and developing standardized evaluation metrics. While recent surveys have been published on this topic, we present a broader discussion of related models and applications. Note to Practitioners: Future smart cities will rely on cooperative MAS that make decisions about what actions to perform that will lead to the completion of their tasks. Decision making models and algorithms have been developed and reported in the literature to generate such sequences of actions. These models are based on a wide variety of principles including human decision making and social animal behavior. In this paper, we survey existing decision making models and algorithms that generate optimal and suboptimal sequences of actions. We also discuss some of the remaining challenges faced by the research community before more effective MAS deployment can be achieved in this age of Internet of Things, robotics, and mobile devices. These challenges include developing more scalable and efficient algorithms, utilizing the abundant sensory data available, tackling more complex tasks, and developing evaluation standards for decision making.
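
The first family of models the survey covers is the Markov decision process, so a minimal single-agent value iteration is a useful anchor for the terminology (states, actions, transition probabilities, rewards, policy). The function name and the toy transition and reward arrays below are invented for illustration; multiagent formulations extend this with joint actions and, in the game-theoretic case, per-agent rewards.

```python
# Minimal value iteration for a toy MDP (illustrative only).
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P: (A, S, S) transition probabilities, R: (S, A) rewards. Returns values and greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V).T        # Q[s, a] = R[s, a] + gamma * E[V(next state)]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# toy problem: two states, two actions; action 1 always moves to state 1,
# and taking action 1 in state 1 pays a reward of 1
P = np.array([[[1.0, 0.0], [1.0, 0.0]],     # action 0: always go to state 0
              [[0.0, 1.0], [0.0, 1.0]]])    # action 1: always go to state 1
R = np.array([[0.0, 0.0],
              [0.0, 1.0]])
V, policy = value_iteration(P, R)            # policy -> [1, 1]
```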