Category Archives: Probability And Statistics

A very good explanation of how to model data in order to later sample from it

Richard E. Turner, Cristiana-Diana Diaconu, Stratis Markou, Aliaksandra Shysheya, Andrew Y. K. Foong and Bruno Mlodozeniec, Denoising Diffusion Probabilistic Models in Six Simple Steps, arXiv:2402.04384 [cs.LG].

Denoising Diffusion Probabilistic Models (DDPMs) are a very popular class of deep generative model that have been successfully applied to a diverse range of problems including image and video generation, protein and material synthesis, weather forecasting, and neural surrogates of partial differential equations. Despite their ubiquity it is hard to find an introduction to DDPMs which is simple, comprehensive, clean and clear. The compact explanations necessary in research papers are not able to elucidate all of the different design steps taken to formulate the DDPM and the rationale of the steps that are presented is often omitted to save space. Moreover, the expositions are typically presented from the variational lower bound perspective which is unnecessary and arguably harmful as it obfuscates why the method is working and suggests generalisations that do not perform well in practice. On the other hand, perspectives that take the continuous time-limit are beautiful and general, but they have a high barrier-to-entry as they require background knowledge of stochastic differential equations and probability flow. In this note, we distill down the formulation of the DDPM into six simple steps each of which comes with a clear rationale. We assume that the reader is familiar with fundamental topics in machine learning including basic probabilistic modelling, Gaussian distributions, maximum likelihood estimation, and deep learning.
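
To make the core of the formulation concrete, here is a minimal, illustrative PyTorch sketch of the forward (noising) process and the noise-prediction training objective. The tiny MLP denoiser, the linear schedule and the toy data are my own stand-ins for illustration, not the paper's setup.

```python
# Minimal sketch of a DDPM noise-prediction training step (illustrative only).
import torch

T = 1000                                        # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (an assumption)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative products \bar{alpha}_t

eps_model = torch.nn.Sequential(                # stand-in denoiser eps_theta(x_t, t)
    torch.nn.Linear(2 + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
opt = torch.optim.Adam(eps_model.parameters(), lr=1e-3)

def training_step(x0):
    """One DDPM training step on a batch of 2-D data points x0."""
    t = torch.randint(0, T, (x0.shape[0],))               # random timestep per sample
    a_bar = alpha_bars[t].unsqueeze(-1)
    eps = torch.randn_like(x0)                             # target noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps     # forward (noising) process
    t_feat = (t.float() / T).unsqueeze(-1)                 # crude timestep embedding
    eps_hat = eps_model(torch.cat([x_t, t_feat], dim=-1))  # predict the added noise
    loss = torch.mean((eps_hat - eps) ** 2)                # simple regression loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy usage: fit the denoiser on samples from a 2-D two-mode distribution
for _ in range(100):
    x0 = torch.randn(128, 2) + torch.tensor([3.0, 0.0]) * torch.randint(0, 2, (128, 1))
    training_step(x0)
```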

Procrustes analysis as a method for finding the best consensus between two sets of signals

B.G.M. Vandeginste, J. Smeyers-Verbeke, Procrustes Analysis, 1998, https://www.sciencedirect.com/topics/computer-science/procrustes-analysis.

Procrustes analysis is a multivariate statistical method that relates two sets of multivariate observations by finding the transformation that best matches the configuration of points in one set to the corresponding points in the other set, while preserving the internal structure of the objects. It involves operations such as mean centering, reflection, rotation, and finding the best match by minimizing the sum of squared distances between the transformed objects and the target configuration.
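
A minimal numpy sketch of ordinary Procrustes alignment (mean centering, optimal rotation/reflection via SVD, and isotropic scaling); the function name and the toy data are mine.

```python
# Ordinary Procrustes alignment: transform X so it best matches Y in least squares.
import numpy as np

def procrustes_align(X, Y):
    """Return X transformed (centered, rotated/reflected, scaled) to best match Y."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)          # mean centering
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)            # SVD of the cross-covariance
    R = U @ Vt                                     # optimal rotation/reflection
    s = S.sum() / (Xc ** 2).sum()                  # optimal isotropic scaling
    return s * Xc @ R + Y.mean(0)                  # aligned configuration

# toy usage: recover a rotated, scaled, shifted copy of a point set
rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 2))
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
X = 2.0 * Y @ Rot + np.array([5.0, -1.0])
print(np.allclose(procrustes_align(X, Y), Y))      # True
```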

Interesting review of denoising methods (applied to vision and ML, but general enough for other applications)

Peyman Milanfar, Mauricio Delbracio, Denoising: A Powerful Building-Block for Imaging, Inverse Problems, and Machine Learning, arXiv:2409.06219 [cs.LG], DOI: 10.48550/arXiv.2409.06219.

Denoising, the process of reducing random fluctuations in a signal to emphasize essential patterns, has been a fundamental problem of interest since the dawn of modern scientific inquiry. Recent denoising techniques, particularly in imaging, have achieved remarkable success, nearing theoretical limits by some measures. Yet, despite tens of thousands of research papers, the wide-ranging applications of denoising beyond noise removal have not been fully recognized. This is partly due to the vast and diverse literature, making a clear overview challenging. This paper aims to address this gap. We present a comprehensive perspective on denoisers, their structure, and desired properties. We emphasize the increasing importance of denoising and showcase its evolution into an essential building block for complex tasks in imaging, inverse problems, and machine learning. Despite its long history, the community continues to uncover unexpected and groundbreaking uses for denoising, further solidifying its place as a cornerstone of scientific and engineering practice.

See also: https://en.wikipedia.org/wiki/Total_variation_denoising
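
As a toy illustration of the "See also" link, here is a rough numpy sketch of 1-D total variation denoising by plain gradient descent on a smoothed TV objective; it is not an optimised solver, and the step size, smoothing constant and regularisation weight are ad hoc choices of mine.

```python
# Approximate 1-D TV denoising: minimise 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|
# with a smoothed absolute value and plain gradient descent.
import numpy as np

def tv_denoise_1d(y, lam=0.5, steps=2000, lr=0.05, eps=1e-2):
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)                          # first differences x_{i+1} - x_i
        w = d / np.sqrt(d * d + eps)            # derivative of the smoothed |.|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w                       # each difference touches two samples
        grad_tv[1:] += w
        x -= lr * ((x - y) + lam * grad_tv)     # gradient step on the full objective
    return x

# toy usage: recover a piecewise-constant signal from noisy samples
rng = np.random.default_rng(1)
clean = np.repeat([0.0, 2.0, -1.0], 100)
noisy = clean + 0.3 * rng.normal(size=clean.size)
denoised = tv_denoise_1d(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())  # error shrinks
```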

Predicting changes in the environment through time series for better robot navigation

Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen, Long-term navigation for autonomous robots based on spatio-temporal map prediction, Robotics and Autonomous Systems, Volume 179, 2024, DOI: 10.1016/j.robot.2024.104724.

The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. The time series model is introduced to learn the changing patterns of different environmental structures or objects on multiple time scales based on the historical maps and forecast the future maps for long-term navigation. Then, an improved global path planning algorithm is performed based on the time-variant predicted cost maps. During navigation, the current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.
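
The map-fusion step is essentially a per-cell Bayesian update; below is a hedged numpy sketch of fusing a predicted occupancy map with a new observation in log-odds space. The grids and sensor probabilities are invented for illustration, and this is not the authors' exact filter.

```python
# Fuse a predicted occupancy map with a current observation via a log-odds update.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def fuse_maps(p_predicted, p_observed, p_prior=0.5):
    """Combine predicted and observed occupancy probabilities per grid cell."""
    l = logodds(p_predicted) + logodds(p_observed) - logodds(p_prior)
    return 1.0 / (1.0 + np.exp(-l))             # back to probabilities

# toy usage on a 3x3 grid: prediction says a cell is likely occupied,
# the new observation agrees, so the fused belief becomes more confident
p_pred = np.full((3, 3), 0.5)
p_pred[1, 1] = 0.8
p_obs = np.full((3, 3), 0.5)
p_obs[1, 1] = 0.7
print(fuse_maps(p_pred, p_obs)[1, 1])            # > 0.8
```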

A clustering algorithm that claims to be simpler and faster than others

Yewang Chen, Yuanyuan Yang, Songwen Pei, Yi Chen, Jixiang Du, A simple rapid sample-based clustering for large-scale data, Engineering Applications of Artificial Intelligence, Volume 133, Part F, 2024, DOI: 10.1016/j.engappai.2024.108551.

Large-scale data clustering is a crucial task in addressing big data challenges. However, existing approaches often struggle to efficiently and effectively identify different types of big data, making it a significant challenge. In this paper, we propose a novel sample-based clustering algorithm, which is very simple but extremely efficient, and runs in about O(n×r) expected time, where n is the size of the dataset and r is the category number. The method is based on two key assumptions: (1) the data of each sufficient sample should have a data distribution, as well as a category distribution, similar to that of the entire data set; (2) the representatives of each category in all sufficient samples conform to a Gaussian distribution. It processes data in two stages: one is to classify the data in each local sample independently, and the other is to globally classify the data by assigning each point to the category of its nearest representative category center. The experimental results show that the proposed algorithm is effective and outperforms other current clustering algorithms.
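
Here is a loose Python sketch of the two-stage idea as I read it: cluster a few random samples locally, pool the local centres into r global representatives, then assign every point to its nearest representative. KMeans stands in for the paper's local classification step, so this is an illustration rather than the authors' algorithm.

```python
# Two-stage, sample-based clustering sketch (illustrative, not the paper's method).
import numpy as np
from sklearn.cluster import KMeans

def sample_based_clustering(X, r, n_samples=5, sample_size=500, seed=0):
    rng = np.random.default_rng(seed)
    local_centers = []
    for _ in range(n_samples):                               # stage 1: local clustering
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
        km = KMeans(n_clusters=r, n_init=10, random_state=seed).fit(X[idx])
        local_centers.append(km.cluster_centers_)
    centers = np.vstack(local_centers)                       # pool local centres
    reps = KMeans(n_clusters=r, n_init=10,
                  random_state=seed).fit(centers).cluster_centers_
    d = ((X[:, None, :] - reps[None, :, :]) ** 2).sum(-1)    # stage 2: nearest representative
    return d.argmin(axis=1), reps

# toy usage on three well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(1000, 2)) for c in ([0, 0], [6, 0], [0, 6])])
labels, reps = sample_based_clustering(X, r=3)
print(np.bincount(labels))                                   # roughly 1000 per cluster
```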

Using fractal interpolation for time series prediction

Alexandra Băicoianu, Cristina Gabriela Gavrilă, Cristina Maria Păcurar, Victor Dan Păcurar, Fractal interpolation in the context of prediction accuracy optimization, Engineering Applications of Artificial Intelligence, Volume 133, Part D, 2024, DOI: 10.1016/j.engappai.2024.108380.

This paper focuses on the hypothesis of optimizing time series predictions using fractal interpolation techniques. In general, the accuracy of machine learning model predictions is closely related to the quality and quantitative aspects of the data used, following the principle of garbage-in, garbage-out. In order to quantitatively and qualitatively augment datasets, one of the most prevalent concerns of data scientists is to generate synthetic data, which should follow as closely as possible the actual pattern of the original data. This study proposes three different data augmentation strategies based on fractal interpolation, namely the Closest Hurst Strategy, Closest Values Strategy and Formula Strategy. To validate the strategies, we used four public datasets from the literature, as well as a private dataset obtained from meteorological records in the city of Braşov, Romania. The prediction results obtained with the LSTM model using the presented interpolation strategies showed a significant accuracy improvement compared to the raw datasets, thus providing a possible answer to practical problems in the field of remote sensing and sensor sensitivity. Moreover, our methodologies answer some optimization-related open questions for the fractal interpolation step using the Optuna framework.
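
For readers unfamiliar with the building block, below is a small numpy sketch of a classical (Barnsley-style) fractal interpolation function. The constant vertical scaling factor is an arbitrary choice of mine, whereas the paper's strategies tune such choices (e.g. via the Hurst exponent).

```python
# Fractal interpolation function via an iterated function system (illustrative).
import numpy as np

def fractal_interpolate(x, y, d=0.3, iterations=6):
    """Densify the series (x, y) by iterating the IFS whose attractor interpolates it."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x) - 1
    # affine maps w_n(x, y) = (a_n*x + e_n, c_n*x + d*y + f_n), n = 1..N,
    # chosen so that w_n maps the endpoints onto consecutive data points
    a = (x[1:] - x[:-1]) / (x[-1] - x[0])
    e = (x[-1] * x[:-1] - x[0] * x[1:]) / (x[-1] - x[0])
    c = (y[1:] - y[:-1] - d * (y[-1] - y[0])) / (x[-1] - x[0])
    f = (x[-1] * y[:-1] - x[0] * y[1:] - d * (x[-1] * y[0] - x[0] * y[-1])) / (x[-1] - x[0])
    px, py = x.copy(), y.copy()
    for _ in range(iterations):                 # apply all N maps to every current point
        new_x = np.concatenate([a[n] * px + e[n] for n in range(N)])
        new_y = np.concatenate([c[n] * px + d * py + f[n] for n in range(N)])
        order = np.argsort(new_x)
        px, py = new_x[order], new_y[order]
    return px, py

# toy usage: augment a short, coarsely sampled series into a dense one
x = np.arange(6.0)
y = np.array([0.0, 1.5, 0.8, 2.2, 1.0, 1.7])
xd, yd = fractal_interpolate(x, y)
print(len(xd))   # 6 * 5**6 points; the attractor passes through the original samples
```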

Change point detection through self-supervised learning

Xiangyu Bao, Liang Chen, Jingshu Zhong, Dianliang Wu, Yu Zheng, A self-supervised contrastive change point detection method for industrial time series, Engineering Applications of Artificial Intelligence, Volume 133, Part B, 2024, DOI: 10.1016/j.engappai.2024.108217.

Manufacturing process monitoring is crucial to ensure production quality. This paper formulates the detection problem of abnormal changes in the manufacturing process as the change point detection (CPD) problem for industrial temporal data. The premise of known data properties and sufficient data annotations in existing CPD methods limits their application in complex manufacturing processes. Therefore, a self-supervised and non-parametric CPD method based on temporal trend-seasonal feature decomposition and contrastive learning (CoCPD) is proposed. CoCPD aims to solve the CPD problem in an online manner. By bringing the representations of time series segments with similar properties closer in the feature space, our model can sensitively distinguish the change points that do not conform to either the historical data distribution or temporal continuity. The proposed CoCPD is validated on a real-world body-in-white production case and compared with 10 state-of-the-art CPD methods. Overall, CoCPD achieves promising results, with a Precision of 70.6%, a Recall of 68.8%, and a mean absolute error (MAE) of 8.27. With the ability to rival the best offline baselines, CoCPD outperforms online baseline methods with improvements in Precision, Recall and MAE of 14.90%, 11.93% and 43.93%, respectively. Experiment results demonstrate that CoCPD can detect abnormal changes promptly and accurately.

See also: https://doi.org/10.1016/j.engappai.2024.108155
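
As a much-simplified illustration of the underlying idea (comparing representations of adjacent segments and flagging disagreements), here is a numpy sketch in which hand-crafted window features stand in for the learned, contrastively trained representations; it is not CoCPD.

```python
# Simplified change point scoring: embed adjacent windows and flag disagreements.
import numpy as np

def embed(window):
    """Tiny stand-in representation: level, spread and linear trend of a window."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]
    return np.array([window.mean(), window.std(), slope])

def change_scores(x, w=50):
    scores = np.zeros(len(x))
    for i in range(w, len(x) - w):
        left, right = embed(x[i - w:i]), embed(x[i:i + w])
        scores[i] = np.linalg.norm(left - right)   # dissimilarity of adjacent windows
    return scores

# toy usage: a mean shift at t=300 produces a clear peak in the score
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
s = change_scores(x)
print(int(s.argmax()))                             # close to 300
```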

A survey on neurosymbolic RL and planning

K. Acharya, W. Raza, C. Dourado, A. Velasquez and H. H. Song, Neurosymbolic Reinforcement Learning and Planning: A Survey, IEEE Transactions on Artificial Intelligence, vol. 5, no. 5, pp. 1939-1953, May 2024, DOI: 10.1109/TAI.2023.3311428.

The area of neurosymbolic artificial intelligence (Neurosymbolic AI) is rapidly developing and has become a popular research topic, encompassing subfields, such as neurosymbolic deep learning and neurosymbolic reinforcement learning (Neurosymbolic RL). Compared with traditional learning methods, Neurosymbolic AI offers significant advantages by simplifying complexity and providing transparency and explainability. Reinforcement learning (RL), a long-standing artificial intelligence (AI) concept that mimics human behavior using rewards and punishment, is a fundamental component of Neurosymbolic RL, a recent integration of the two fields that has yielded promising results. The aim of this article is to contribute to the emerging field of Neurosymbolic RL by conducting a literature survey. Our evaluation focuses on the three components that constitute Neurosymbolic RL: neural, symbolic, and RL. We categorize works based on the role played by the neural and symbolic parts in RL, into three taxonomies: learning for reasoning, reasoning for learning, and learning–reasoning. These categories are further divided into subcategories based on their applications. Furthermore, we analyze the RL components of each research work, including the state space, action space, policy module, and RL algorithm. In addition, we identify research opportunities and challenges in various applications within this dynamic field.

Estimating velocity by fusing event-camera and inertial data while dealing with noise and outliers

W. Xu, X. Peng and L. Kneip, Tight Fusion of Events and Inertial Measurements for Direct Velocity Estimation, IEEE Transactions on Robotics, vol. 40, pp. 240-256, 2024, DOI: 10.1109/TRO.2023.3333108.

Traditional visual-inertial state estimation targets absolute camera poses and spatial landmark locations while first-order kinematics are typically resolved as an implicitly estimated substate. However, this poses a risk in velocity-based control scenarios, as the quality of the estimation of kinematics depends on the stability of absolute camera and landmark coordinates estimation. To address this issue, we propose a novel solution to tight visual-inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera. More specifically, we leverage trifocal tensor geometry to establish an incidence relation that directly depends on events and camera velocity, and demonstrate how velocity estimates in highly dynamic situations can be obtained over short-time intervals. Noise and outliers are dealt with using a nested two-layer random sample consensus (RANSAC) scheme. In addition, smooth velocity signals are obtained from a tight fusion with preintegrated inertial signals using a sliding window optimizer. Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation in highly dynamic scenarios independently of absolute coordinates. Furthermore, in extreme cases, it achieves more stable and more accurate estimation of kinematics than traditional, point-position-based visual-inertial odometry.
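
The robustness mechanism here is classical RANSAC, so below is a hedged numpy sketch of RANSAC-based constant-velocity fitting on noisy 1-D displacements with outliers. It illustrates the sampling-and-consensus idea only, not the paper's event-based, trifocal-tensor formulation or its nested two-layer scheme.

```python
# RANSAC fit of a constant-velocity model p ~ p0 + v*t with gross outliers.
import numpy as np

def ransac_velocity(t, p, iters=200, thresh=0.05, seed=0):
    """Return a robust velocity estimate from time stamps t and positions p."""
    rng = np.random.default_rng(seed)
    best_v, best_inliers = 0.0, -1
    for _ in range(iters):
        i, j = rng.choice(len(t), size=2, replace=False)   # minimal sample: 2 points
        if t[i] == t[j]:
            continue
        v = (p[j] - p[i]) / (t[j] - t[i])                  # candidate velocity
        p0 = p[i] - v * t[i]
        residuals = np.abs(p - (p0 + v * t))
        inliers = int((residuals < thresh).sum())          # consensus set size
        if inliers > best_inliers:
            best_v, best_inliers = v, inliers
    return best_v

# toy usage: 1-D positions moving at 2 m/s with noise and 20% gross outliers
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 1, 100))
p = 2.0 * t + rng.normal(0, 0.01, 100)
p[rng.choice(100, 20, replace=False)] += rng.uniform(-1, 1, 20)
print(ransac_velocity(t, p))                               # close to 2.0
```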

A brief summary of the state of the art in time series clustering

Hailin Li, Zechen Liu, Xiaoji Wan, Time series clustering based on complex network with synchronous matching states, Expert Systems with Applications, Volume 211, 2023, DOI: 10.1016/j.eswa.2022.118543.

Due to the extensive existence of time series in various fields, more and more research on time series data mining, especially time series clustering, has been done in recent years. Clustering technology can extract valuable information and potential patterns from time series data. This paper proposes a time series Clustering method based on Synchronous matching of Complex networks (CSC). The method uses the density peak clustering algorithm to identify the state of each time point and obtains a state sequence following the timeline of the original time series. The state sequence is a new way to represent a time series. By comparing two state sequences synchronously, the lengths of the synchronously matching state segments are calculated and used to derive the similarity, which forms a new method to calculate the similarity of time series. Based on the obtained time series similarities, a relationship network of the time series is constructed. Community discovery techniques are then applied to cluster the relationship network and thereby complete the time series clustering. The detailed process and simulation experiments of the CSC method are given. Experimental results on different datasets show that the CSC method is superior to other traditional time series clustering methods.
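
A loose Python illustration of the pipeline as described: discretise each series into a state sequence, score similarity by how often two sequences occupy the same state at the same time, and cluster the resulting similarity network. KMeans on values and spectral clustering are my stand-ins for density peak clustering and community discovery, so this is not the authors' CSC implementation.

```python
# CSC-style clustering sketch: state sequences -> synchronous-match similarity -> graph clustering.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def csc_like_clustering(series, n_states=4, n_clusters=2, seed=0):
    X = np.asarray(series)                                        # shape: (n_series, length)
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed)
    states = km.fit_predict(X.reshape(-1, 1)).reshape(X.shape)    # per-point state labels
    sim = (states[:, None, :] == states[None, :, :]).mean(-1)     # synchronous state matches
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=seed)
    return sc.fit_predict(sim)                                    # cluster the similarity network

# toy usage: two groups of series fluctuating around different levels
rng = np.random.default_rng(4)
low = rng.normal(0.0, 0.3, size=(5, 200))
high = rng.normal(3.0, 0.3, size=(5, 200))
print(csc_like_clustering(np.vstack([low, high])))                # two blocks of equal labels
```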