A new software design, verification and implementation method for robotics

Li, W., Ribeiro, P., Miyazawa, A. et al., Formal design, verification and implementation of robotic controller software via RoboChart and RoboTool, Auton Robot 48, 14 (2024) DOI: 10.1007/s10514-024-10163-7.

Current practice in the simulation and implementation of robot controllers is usually guided by high-level design diagrams and pseudocode, so no rigorous connection between the design and the development of a robot controller is established. This paper presents a framework for designing robotic controllers with support for automatic generation of executable code and automatic property checking. A state-machine-based notation, RoboChart, and a tool (RoboTool) that implements the automatic generation of code and mathematical models from the designed controllers are presented. We demonstrate the application of RoboChart and its related tool through a case study of a robot performing an exploration task. The automatically generated code is platform independent and is used both in simulation and on two different physical robotic platforms. Properties are formally checked against the mathematical models generated by RoboTool, and further validated in the actual simulations and physical experiments. The tool not only provides engineers with a way of designing robotic controllers formally but also paves the way for correct implementation of robotic systems.
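
To make the state-machine flavor of such designs concrete, here is a minimal, hypothetical sketch of an exploration controller written as a plain state machine. It only illustrates the modeling style, not RoboChart's graphical notation or RoboTool's generated code, and all state names, events, and actions are invented for the example.

```python
# Hypothetical state-machine exploration controller; illustrative only,
# not RoboChart notation or RoboTool output.
from enum import Enum, auto

class State(Enum):
    SEARCHING = auto()
    OBSTACLE_AVOIDANCE = auto()
    DONE = auto()

class ExplorationController:
    def __init__(self):
        self.state = State.SEARCHING

    def step(self, obstacle_detected: bool, area_covered: float) -> str:
        # Transitions are guarded by sensor events, as in a state-machine design.
        if self.state == State.SEARCHING:
            if area_covered >= 1.0:
                self.state = State.DONE
            elif obstacle_detected:
                self.state = State.OBSTACLE_AVOIDANCE
        elif self.state == State.OBSTACLE_AVOIDANCE:
            if not obstacle_detected:
                self.state = State.SEARCHING
        # Each state maps to a platform-independent action command.
        return {State.SEARCHING: "move_forward",
                State.OBSTACLE_AVOIDANCE: "turn",
                State.DONE: "stop"}[self.state]

ctrl = ExplorationController()
print(ctrl.step(obstacle_detected=True, area_covered=0.2))  # -> "turn"
```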

Setting goals, even unproductive or useless ones, can help build cognition

Junyi Chu, Joshua B. Tenenbaum, Laura E. Schulz, In praise of folly: flexible goals and human cognition, Trends in Cognitive Sciences, Volume 28, Issue 7, 2024, Pages 628-642 DOI: 10.1016/j.tics.2024.03.006.

Humans often pursue idiosyncratic goals that appear remote from functional ends, including information gain. We suggest that this is valuable because goals (even prima facie foolish or unachievable ones) contain structured information that scaffolds thinking and planning. By evaluating hypotheses and plans with respect to their goals, humans can discover new ideas that go beyond prior knowledge and observable evidence. These hypotheses and plans can be transmitted independently of their original motivations, adapted across generations, and serve as an engine of cultural evolution. Here, we review recent empirical and computational research on goal generation and planning and discuss the ways that the flexibility of our motivational system supports cognitive gains for both individuals and societies.

Review of the current methodologies for achieving continual learning, and its biological bases

Buddhi Wickramasinghe, Gobinda Saha, and Kaushik Roy, Continual Learning: A Review of Techniques, Challenges, and Future Directions, IEEE Transactions on Artificial Intelligence, vol. 5, no. 6, June 2024 DOI: 10.1109/TAI.2023.3339091.

Continual learning (CL), or the ability to acquire, process, and learn from new information without forgetting acquired knowledge, is a fundamental quality of an intelligent agent. The human brain has evolved to deal gracefully with ever-changing circumstances and to learn from experience with the help of complex neurophysiological mechanisms. Even though artificial intelligence takes after human intelligence, traditional neural networks do not possess the ability to adapt to dynamic environments. When presented with new information, an artificial neural network (ANN) often completely forgets its prior knowledge, a phenomenon called catastrophic forgetting or catastrophic interference. Incorporating CL capabilities into ANNs is an active field of research and is integral to achieving artificial general intelligence. In this review, we revisit CL approaches and critically examine their strengths and limitations. We conclude that CL approaches should look beyond mitigating catastrophic forgetting and strive for systems that can learn, store, recall, and transfer knowledge, much like the human brain. To this end, we highlight the importance of adopting alternative brain-inspired data representations and learning algorithms, and provide our perspective on promising new directions where CL could play an instrumental role.
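
As a concrete illustration of the problem setting, the toy sketch below trains a network on two tasks in sequence and mitigates forgetting with simple rehearsal (replaying a small buffer of old-task samples), one of the classic CL techniques such reviews cover. The data, model, and buffer policy are all illustrative stand-ins, not the paper's benchmark.

```python
# Toy rehearsal-based continual learning: replay a small exemplar buffer
# from past tasks while training on the current one. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def make_task(shift):
    x = torch.randn(200, 2) + shift          # each task shifts the input distribution
    y = (x.sum(dim=1) > 2 * shift).long()    # task-specific labels
    return x, y

replay_x, replay_y = [], []                  # rehearsal buffer across tasks
for shift in [0.0, 3.0]:                     # two sequential tasks
    x, y = make_task(shift)
    for _ in range(100):
        bx, by = x, y
        if replay_x:                         # mix in stored samples from old tasks
            bx = torch.cat([bx, torch.cat(replay_x)])
            by = torch.cat([by, torch.cat(replay_y)])
        opt.zero_grad()
        loss_fn(model(bx), by).backward()
        opt.step()
    replay_x.append(x[:20])                  # keep a small exemplar subset
    replay_y.append(y[:20])
```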

See also: DOI: 10.1109/TAI.2024.3355879

A clustering algorithm that claims to be simpler and faster than others

Yewang Chen, Yuanyuan Yang, Songwen Pei, Yi Chen, Jixiang Du, A simple rapid sample-based clustering for large-scale data, Engineering Applications of Artificial Intelligence, Volume 133, Part F, 2024 DOI: 10.1016/j.engappai.2024.108551.

Large-scale data clustering is a crucial task in addressing big data challenges. However, existing approaches often struggle to identify different types of big data efficiently and effectively, making this a significant challenge. In this paper, we propose a novel sample-based clustering algorithm, which is very simple but extremely efficient, and runs in about O(n×r) expected time, where n is the size of the dataset and r is the number of categories. The method is based on two key assumptions: (1) the data of each sufficient sample should have a data distribution, as well as a category distribution, similar to that of the entire dataset; and (2) the representatives of each category in all sufficient samples conform to a Gaussian distribution. It processes data in two stages: one classifies the data in each local sample independently, and the other classifies the data globally by assigning each point to the category of its nearest representative category center. Experimental results show that the proposed algorithm is effective and outperforms other current variants of clustering algorithms.
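
A rough sketch of the two-stage idea as the abstract describes it: classify a sufficient sample locally, then assign every point to its nearest representative center. Here k-means stands in for the local classification step, and a single sample is used instead of the paper's multiple samples with the Gaussian assumption on representatives, so this is only an approximation of the method.

```python
# Two-stage sample-based clustering sketch: local clustering of one sample,
# then global nearest-center assignment (the O(n * r) part).
import numpy as np
from sklearn.cluster import KMeans

def sample_based_clustering(X, r, sample_size=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: classify the data of a sufficient sample independently.
    idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
    centers = KMeans(n_clusters=r, n_init=10,
                     random_state=seed).fit(X[idx]).cluster_centers_
    # Stage 2: assign each point to its nearest representative category center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1)

X = np.vstack([np.random.randn(5000, 2) + c for c in ([0, 0], [6, 6], [0, 6])])
labels = sample_based_clustering(X, r=3)
```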

Profiling the energy consumption of AGVs

J. Leng, J. Peng, J. Liu, Y. Zhang, J. Ji and Y. Zhang, Profiling Power Consumption in Low-Speed Autonomous Guided Vehicles, IEEE Robotics and Automation Letters, vol. 9, no. 7, pp. 6027-6034, July 2024 DOI: 10.1109/LRA.2024.3396051.

The increasing demand for automation has led to a rise in the use of low-speed autonomous guided vehicles (AGVs). However, AGVs rely on batteries as their power source, which limits their operational time and affects their overall performance. To optimize their energy usage and enhance their battery life, it is crucial to understand the power consumption behavior of AGVs. This letter presents a comprehensive study on profiling power consumption in low-speed AGVs. Previous power consumption estimation models for AGVs were mostly based on physical formulas. We introduce a data-driven power consumption estimation model for each of the main components of the AGV, including the chassis, computing platform, sensors, and communication devices. Through three actual driving tests, we show that the mean absolute percentage error (MAPE) in estimating instantaneous power is 4.8%, a significant 8.1% improvement over a physical model. Moreover, the MAPE for energy consumption is only 1.5%, which is 6.6% better than the physical model. To demonstrate the utility of our power consumption estimation models, we conduct two case studies: energy-efficient path planning and energy-efficient perception task interval adjustment. This study demonstrates that integrating the power consumption estimation model into path planning reduces energy consumption by over 12%. Additionally, adjusting the detection interval lowers computational energy consumption by 10.1%.
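
A hedged sketch of what a data-driven, per-component power model could look like: one regressor per subsystem, with total power as the sum and MAPE as the error metric. The features, model class, and synthetic data below are assumptions for illustration, not the authors' setup.

```python
# Per-component power regression on synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features (e.g., speed/acceleration for the chassis,
# CPU/GPU utilization for the computing platform).
features = {
    "chassis":   rng.uniform(0, 2, (n, 2)),
    "computing": rng.uniform(0, 1, (n, 2)),
}
# Synthetic ground-truth power draw per component, in watts.
power = {k: 10 + 10 * f[:, 0] + 5 * f[:, 1] + rng.normal(0, 0.5, n)
         for k, f in features.items()}

# One data-driven regressor per component, trained on the first 1500 samples.
models = {k: GradientBoostingRegressor().fit(f[:1500], power[k][:1500])
          for k, f in features.items()}
pred_total = sum(models[k].predict(features[k][1500:]) for k in features)
true_total = sum(power[k][1500:] for k in features)

# Mean absolute percentage error of instantaneous total power.
mape = np.mean(np.abs((pred_total - true_total) / true_total)) * 100
print(f"MAPE: {mape:.2f}%")
```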

Thermodynamics as a way of identifying hierarchies

Morten L. Kringelbach, Yonatan Sanz Perl, Gustavo Deco, The Thermodynamics of Mind, Trends in Cognitive Sciences, Volume 28, Issue 6, 2024, Pages 568-581 DOI: 10.1016/j.tics.2024.03.009.

To not only survive, but also thrive, the brain must efficiently orchestrate distributed computation across space and time. This requires hierarchical organisation facilitating fast information transfer and processing at the lowest possible metabolic cost. Quantifying brain hierarchy is difficult but can be estimated from the asymmetry of information flow. Thermodynamics has successfully characterised hierarchy in many other complex systems. Here, we propose the ‘Thermodynamics of Mind’ framework as a natural way to quantify hierarchical brain orchestration and its underlying mechanisms. This has already provided novel insights into the orchestration of hierarchy in brain states including movie watching, where the hierarchy of the brain is flatter than during rest. Overall, this framework holds great promise for revealing the orchestration of cognition.
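
A minimal sketch of one way to quantify the asymmetry of information flow: compare the time-lagged correlations of a multivariate signal with those of its time-reversed version, so that a larger gap indicates less reversible, more hierarchical dynamics. This is only in the spirit of the framework, on toy data; the measure and pipeline here are simplified assumptions, not the paper's whole-brain analysis.

```python
# Toy irreversibility measure: forward vs. time-reversed lagged correlations.
import numpy as np

def irreversibility(ts, lag=1):
    """ts: (time, regions) array of activity time series."""
    r = ts.shape[1]
    fwd, fwd_shift = ts[:-lag], ts[lag:]
    rev, rev_shift = ts[::-1][:-lag], ts[::-1][lag:]
    # Lagged cross-correlation matrices, forward vs. time-reversed.
    c_fwd = np.corrcoef(fwd.T, fwd_shift.T)[:r, r:]
    c_rev = np.corrcoef(rev.T, rev_shift.T)[:r, r:]
    return np.mean((c_fwd - c_rev) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 4))
x[1:, 1] += 0.8 * x[:-1, 0]   # directed coupling: region 0 -> region 1
print(irreversibility(x))      # nonzero: the dynamics are not time-reversible
```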

Using fractal interpolation for time series prediction

Alexandra Băicoianu, Cristina Gabriela Gavrilă, Cristina Maria Păcurar, Victor Dan Păcurar, Fractal interpolation in the context of prediction accuracy optimization, Engineering Applications of Artificial Intelligence, Volume 133, Part D, 2024 DOI: 10.1016/j.engappai.2024.108380.

This paper focuses on the hypothesis of optimizing time series predictions using fractal interpolation techniques. In general, the accuracy of machine learning model predictions is closely related to the quality and quantity of the data used, following the principle of garbage-in, garbage-out. To augment datasets quantitatively and qualitatively, one of the most prevalent concerns of data scientists is the generation of synthetic data, which should follow as closely as possible the actual pattern of the original data. This study proposes three different data augmentation strategies based on fractal interpolation, namely the Closest Hurst Strategy, the Closest Values Strategy, and the Formula Strategy. To validate the strategies, we used four public datasets from the literature, as well as a private dataset obtained from meteorological records in the city of Braşov, Romania. The prediction results obtained with the LSTM model using the presented interpolation strategies showed a significant accuracy improvement compared to the raw datasets, thus providing a possible answer to practical problems in the field of remote sensing and sensor sensitivity. Moreover, our methodologies answer some optimization-related open questions for the fractal interpolation step using the Optuna framework.
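
For context, the sketch below implements classical affine fractal interpolation (a Barnsley-style iterated function system), the building block such strategies tune. The paper's specific rules for choosing the vertical scaling factor (e.g., from the Hurst exponent) are not reproduced; the constant d here is an arbitrary assumption.

```python
# Classical affine fractal interpolation through data points (x_i, y_i).
import numpy as np

def fractal_interpolate(x, y, d=0.3, iterations=4):
    """Generate fractal interpolation points; d is the vertical scaling (|d| < 1)."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    pts = np.column_stack([x, y])
    maps = []
    for i in range(1, len(x)):
        # Affine map w_i(x, y) = (a*x + e, c*x + d*y + f) sending the whole
        # curve onto the segment between points i-1 and i.
        a = (x[i] - x[i - 1]) / (xN - x0)
        e = (xN * x[i - 1] - x0 * x[i]) / (xN - x0)
        c = (y[i] - y[i - 1] - d * (yN - y0)) / (xN - x0)
        f = (xN * y[i - 1] - x0 * y[i] - d * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, f))
    for _ in range(iterations):   # iterate the IFS toward its attractor
        pts = np.vstack([
            np.column_stack([a * pts[:, 0] + e,
                             c * pts[:, 0] + d * pts[:, 1] + f])
            for a, e, c, f in maps
        ])
    return pts[pts[:, 0].argsort()]

aug = fractal_interpolate(np.array([0., 1., 2., 3.]), np.array([1., 3., 2., 4.]))
```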

Change point detection through self-supervised learning

Xiangyu Bao, Liang Chen, Jingshu Zhong, Dianliang Wu, Yu Zheng, A self-supervised contrastive change point detection method for industrial time series, Engineering Applications of Artificial Intelligence, Volume 133, Part B, 2024 DOI: 10.1016/j.engappai.2024.108217.

Manufacturing process monitoring is crucial to ensure production quality. This paper formulates the detection of abnormal changes in the manufacturing process as a change point detection (CPD) problem for industrial temporal data. The assumption of known data properties and sufficient data annotations in existing CPD methods limits their application to complex manufacturing processes. Therefore, a self-supervised and non-parametric CPD method based on temporal trend-seasonal feature decomposition and contrastive learning (CoCPD) is proposed. CoCPD aims to solve the CPD problem in an online manner. By bringing the representations of time series segments with similar properties closer in the feature space, our model can sensitively distinguish change points that conform to neither the historical data distribution nor temporal continuity. The proposed CoCPD is validated on a real-world body-in-white production case and compared with 10 state-of-the-art CPD methods. Overall, CoCPD achieves promising results, with a precision of 70.6%, a recall of 68.8%, and a mean absolute error (MAE) of 8.27. Able to rival the best offline baselines, CoCPD outperforms online baseline methods with improvements in precision, recall, and MAE of 14.90%, 11.93%, and 43.93%, respectively. Experimental results demonstrate that CoCPD can detect abnormal changes in a timely and accurate manner.
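
A simplified sketch of the detection skeleton the abstract describes: decompose each sliding window into trend and residual components, embed it as a feature vector, and flag a change point when consecutive embeddings drift apart. CoCPD learns this embedding with a contrastive loss; the hand-crafted features, window size, and threshold below are placeholder assumptions.

```python
# Online change point detection via distances between window embeddings.
import numpy as np

def window_embedding(w, period=10):
    # Crude trend/residual decomposition standing in for a learned encoder.
    trend = np.convolve(w, np.ones(period) / period, mode="valid")
    resid = w[period - 1:] - trend
    return np.array([trend.mean(), trend[-1] - trend[0], resid.std()])

def detect_change_points(ts, win=50, threshold=1.0):
    cps = []
    prev = window_embedding(ts[:win])
    for start in range(win, len(ts) - win, win):
        cur = window_embedding(ts[start:start + win])
        if np.linalg.norm(cur - prev) > threshold:  # embeddings no longer "close"
            cps.append(start)
        prev = cur
    return cps

ts = np.concatenate([np.random.randn(300), np.random.randn(300) + 4])
print(detect_change_points(ts))  # expect a detection near index 300
```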

See also: https://doi.org/10.1016/j.engappai.2024.108155

Reducing discovered skills in DRL to the essential ones, modelling skills with SMDP Q-learning

Shuai Qing, Fei Zhu, Refine to the essence: Less-redundant skill learning via diversity clustering, Engineering Applications of Artificial Intelligence, Volume 133, Part A, 2024 DOI: 10.1016/j.engappai.2024.107981.

In reinforcement learning, a skill is a potentially conditional policy that solves tasks in a hierarchically controlled manner. Progress on skill discovery helps agents learn a set of diverse and useful skills without external supervision to tackle complex tasks with sparse rewards. Although most studies have aimed to maximize the diversity of the skills discovered, the distinguishability between skills diminishes as their number increases, leading to a subset of similar and redundant skills. To tackle this problem, a method called Refine to the Essence of Skills (RE-Skill) is proposed, which aims at learning skills with less redundancy. RE-Skill integrates the concepts of cluster analysis and policy distillation: it clusters similar skills together based on their unique features, learns the best performance within each cluster, and filters out similar skills that involve excessive and intricate actions, thereby reducing redundancy among skills. By refining clusters of similar skills into less-redundant independent skills, RE-Skill demonstrates superior performance compared to other skill discovery algorithms and shows how these less-redundant skills effectively address downstream tasks, indicating that RE-Skill extends its efficacy to engineering applications in robot control and obstacle training tasks within complex environments.
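
A loose sketch of the clustering-then-selection idea in the spirit of RE-Skill: embed each discovered skill as a feature vector, cluster similar skills, and keep one representative per cluster. The feature vectors and returns are random placeholders, and the paper's policy distillation step is reduced to simple selection here.

```python
# Cluster skill embeddings and keep the best-performing skill per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_skills, n_clusters = 20, 5
skill_features = rng.standard_normal((n_skills, 8))  # e.g., state-visitation stats
skill_returns = rng.uniform(0, 1, n_skills)          # performance of each skill

labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(skill_features)
kept = [int(np.argmax(np.where(labels == c, skill_returns, -np.inf)))
        for c in range(n_clusters)]
print("representative skills:", kept)  # RE-Skill would distill, not just select
```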

A survey on neurosymbolic RL and planning

K. Acharya, W. Raza, C. Dourado, A. Velasquez and H. H. Song, Neurosymbolic Reinforcement Learning and Planning: A Survey, IEEE Transactions on Artificial Intelligence, vol. 5, no. 5, pp. 1939-1953, May 2024 DOI: 10.1109/TAI.2023.3311428.

The area of neurosymbolic artificial intelligence (Neurosymbolic AI) is rapidly developing and has become a popular research topic, encompassing subfields such as neurosymbolic deep learning and neurosymbolic reinforcement learning (Neurosymbolic RL). Compared with traditional learning methods, Neurosymbolic AI offers significant advantages by simplifying complexity and providing transparency and explainability. Reinforcement learning (RL), a long-standing artificial intelligence (AI) concept that mimics human behavior using rewards and punishment, is a fundamental component of Neurosymbolic RL, a recent integration of the two fields that has yielded promising results. The aim of this article is to contribute to the emerging field of Neurosymbolic RL by conducting a literature survey. Our evaluation focuses on the three components that constitute Neurosymbolic RL: neural, symbolic, and RL. We categorize works into three taxonomies based on the role played by the neural and symbolic parts in RL: learning for reasoning, reasoning for learning, and learning-reasoning. These categories are further divided into subcategories based on their applications. Furthermore, we analyze the RL components of each research work, including the state space, action space, policy module, and RL algorithm. In addition, we identify research opportunities and challenges in various applications within this dynamic field.