Category Archives: Mathematics

Estimating velocity from event and inertial data by dealing with noise and outliers

W. Xu, X. Peng and L. Kneip, Tight Fusion of Events and Inertial Measurements for Direct Velocity Estimation, IEEE Transactions on Robotics, vol. 40, pp. 240-256, 2024, DOI: 10.1109/TRO.2023.3333108.

Traditional visual-inertial state estimation targets absolute camera poses and spatial landmark locations while first-order kinematics are typically resolved as an implicitly estimated substate. However, this poses a risk in velocity-based control scenarios, as the quality of the estimation of kinematics depends on the stability of absolute camera and landmark coordinates estimation. To address this issue, we propose a novel solution to tight visual–inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera. More specifically, we leverage trifocal tensor geometry to establish an incidence relation that directly depends on events and camera velocity, and demonstrate how velocity estimates in highly dynamic situations can be obtained over short-time intervals. Noise and outliers are dealt with using a nested two-layer random sample consensus (RANSAC) scheme. In addition, smooth velocity signals are obtained from a tight fusion with preintegrated inertial signals using a sliding window optimizer. Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation in highly dynamic scenarios independently of absolute coordinates. Furthermore, in extreme cases, it achieves more stable and more accurate estimation of kinematics than traditional, point-position-based visual-inertial odometry.
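NOTE: To illustrate the robust-estimation part, here is a minimal Python sketch (not the authors' implementation): a single-layer RANSAC fit of a 3-D velocity from assumed linear incidence constraints a_i . v = b_i. The construction of those constraints from events and the trifocal tensor, the nested second RANSAC layer and the inertial fusion are all omitted.

# Hedged sketch: robust velocity estimation from linear constraints A v = b
# contaminated by outliers. A single-layer RANSAC stands in for the paper's
# nested two-layer scheme.
import numpy as np

def ransac_velocity(A, b, iters=200, tol=1e-2, seed=None):
    """A: (N,3) constraint normals, b: (N,) offsets. Returns (v, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_v, best_inliers = None, np.zeros(len(b), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(b), size=3, replace=False)        # minimal sample
        try:
            v = np.linalg.solve(A[idx], b[idx])
        except np.linalg.LinAlgError:
            continue
        inliers = np.abs(A @ v - b) < tol
        if inliers.sum() > best_inliers.sum():
            best_v, best_inliers = v, inliers
    if best_inliers.sum() >= 3:                                 # refit on all inliers
        best_v, *_ = np.linalg.lstsq(A[best_inliers], b[best_inliers], rcond=None)
    return best_v, best_inliers

# Synthetic check: constraints generated from a known velocity plus outliers.
rng = np.random.default_rng(0)
v_true = np.array([0.5, -1.0, 2.0])
A = rng.normal(size=(100, 3))
b = A @ v_true + 0.001 * rng.normal(size=100)
b[:20] += rng.normal(scale=5.0, size=20)                        # 20 % outliers
v_est, mask = ransac_velocity(A, b)
print(v_est)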

A PID-based global optimization algorithm

Yuansheng Gao, PID-based search algorithm: A novel metaheuristic algorithm based on PID algorithm, Expert Systems with Applications, Volume 232, 2023, DOI: 10.1016/j.eswa.2023.120886.

In this paper, a metaheuristic algorithm called PID-based search algorithm (PSA) is proposed for global optimization. The algorithm is based on an incremental PID algorithm that converges the entire population to an optimal state by continuously adjusting the system deviations. PSA is mathematically modeled and implemented to achieve optimization in a wide range of search spaces. PSA is used to solve CEC2017 benchmark test functions and six constrained problems. The optimization performance of PSA is verified by comparing it with seven metaheuristics proposed in recent years. The Kruskal-Wallis, Holm and Friedman tests verified the superiority of PSA in terms of statistical significance. The results show that PSA can better balance exploration and exploitation and has strong optimization capability. Source codes of PSA are publicly available at https://ww2.mathworks.cn/matlabcentral/fileexchange/131534-pid-based-search-algorithm.
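NOTE: The following is a minimal Python sketch of the general idea of a PID-driven metaheuristic, not the published PSA: each candidate is pushed toward the best solution found so far through an incremental PID update on the deviation. The gains and the small random perturbation are illustrative assumptions.

# Hedged sketch of a PID-based metaheuristic: the deviation e = x_best - x is
# reduced with an incremental PID step, Delta u = kp*(e-e1) + ki*e + kd*(e-2*e1+e2).
import numpy as np

def pid_search(f, bounds, pop=30, iters=200, kp=0.6, ki=0.1, kd=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(pop, lo.size))
    e1 = np.zeros_like(x); e2 = np.zeros_like(x)               # previous deviations
    best = x[np.argmin([f(xi) for xi in x])].copy()
    for _ in range(iters):
        e = best - x                                            # current deviation
        du = kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)    # incremental PID
        x = np.clip(x + du + 0.01 * (hi - lo) * rng.normal(size=x.shape), lo, hi)
        e2, e1 = e1, e
        fx = np.array([f(xi) for xi in x])
        if fx.min() < f(best):
            best = x[np.argmin(fx)].copy()
    return best, f(best)

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda v: float(np.sum(v ** 2))
print(pid_search(sphere, (np.full(5, -5.0), np.full(5, 5.0))))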

A brief summary of the state of the art in time series clustering

Hailin Li, Zechen Liu, Xiaoji Wan, Time series clustering based on complex network with synchronous matching states, Expert Systems with Applications, Volume 211, 2023, DOI: 10.1016/j.eswa.2022.118543.

Due to the pervasiveness of time series in various fields, more and more research on time series data mining, especially time series clustering, has been done in recent years. Clustering technology can extract valuable information and potential patterns from time series data. This paper proposes a time series Clustering method based on Synchronous matching of Complex networks (CSC). This method uses the density peak clustering algorithm to identify the state of each time point and obtains the state sequence according to the timeline of the original time series. State sequences are a new way to represent time series. By comparing two state sequences synchronously, the lengths of matching state subsequences are calculated step by step and used to derive a similarity score, which forms a new method to calculate the similarity of time series. Based on the obtained time series similarities, the relationship network of the time series is constructed. Community detection is then applied to cluster the relationship network and thereby achieve the complete time series clustering. The detailed process and simulation experiments of the CSC method are given. Experimental results on different datasets show that the CSC method is superior to other traditional time series clustering methods.
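NOTE: A rough Python sketch of the pipeline described in the abstract, with stand-ins for the paper's components: quantile binning replaces density peak clustering for the state assignment, synchronous agreement of state sequences gives the similarity, and networkx greedy modularity replaces the paper's community discovery step.

# Hedged sketch of a CSC-like pipeline: states -> synchronous similarity ->
# similarity network -> community detection.
import numpy as np
import networkx as nx
from networkx.algorithms import community

def state_sequence(x, n_states=4):
    # Assign each time point to a state via quantile bins (simplified state model).
    edges = np.quantile(x, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.digitize(x, edges)

def synchronous_similarity(s1, s2):
    # Fraction of time points where the two state sequences agree.
    return float(np.mean(s1 == s2))

def csc_like_clustering(series, n_states=4, threshold=0.3):
    states = [state_sequence(x, n_states) for x in series]
    G = nx.Graph()
    G.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            w = synchronous_similarity(states[i], states[j])
            if w > threshold:
                G.add_edge(i, j, weight=w)
    return [set(c) for c in community.greedy_modularity_communities(G, weight="weight")]

# Toy example: two groups of series with different temporal behaviour.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
group_a = [np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size) for _ in range(5)]
group_b = [t ** 2 + 0.1 * rng.normal(size=t.size) for _ in range(5)]
print(csc_like_clustering(group_a + group_b))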

More robust KF through the use of skewed distributions

M. Bai, Y. Huang, B. Chen and Y. Zhang, A Novel Robust Kalman Filtering Framework Based on Normal-Skew Mixture Distribution, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 11, pp. 6789-6805, Nov. 2022, DOI: 10.1109/TSMC.2021.3098299.

In this article, a novel normal-skew mixture (NSM) distribution is presented to model the normal and/or heavy-tailed and/or skew nonstationary distributed noises. The NSM distribution can be formulated as a hierarchically Gaussian presentation by leveraging a Bernoulli distributed random variable. Based on this, a novel robust Kalman filtering framework can be developed utilizing the variational Bayesian method, where the one-step prediction and measurement-likelihood densities are modeled as NSM distributions. For implementation, several exemplary robust Kalman filters (KFs) are derived based on some specific cases of NSM distribution. The relationships between some existing robust KFs and the presented framework are also revealed. The superiority of the proposed robust Kalman filtering framework is validated by a target tracking simulation example.
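NOTE: A minimal Python sketch of a robust Kalman update in the same spirit: the measurement noise is iteratively re-weighted when the innovation looks like an outlier (a Student-t-style stand-in). This is only an illustration, not the paper's variational Bayesian inference under the normal-skew mixture model.

# Hedged sketch: Kalman filter step whose update down-weights outlying
# measurements by inflating R according to the normalized innovation.
import numpy as np

def robust_kf_step(x, P, z, F, Q, H, R, nu=5.0, n_iter=3):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Robust update: iteratively re-weight R, similar in spirit to a
    # Student-t measurement model with nu degrees of freedom.
    w = 1.0
    for _ in range(n_iter):
        S = H @ P @ H.T + R / w
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - H @ x
        x_upd = x + K @ innov
        P_upd = (np.eye(len(x)) - K @ H) @ P
        d2 = float(innov @ np.linalg.inv(S) @ innov)        # squared Mahalanobis
        w = (nu + len(z)) / (nu + d2)                        # down-weight outliers
    return x_upd, P_upd

# One step of a constant-velocity tracker hit by an outlying measurement.
F = np.array([[1.0, 1.0], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
print(robust_kf_step(x, P, np.array([10.0]), F, Q, H, R))    # z = 10 is an outlier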

Reducing outliers in time series with singular spectrum analysis, and using deep learning for change detection

Muktesh Gupta, Rajesh Wadhvani, Akhtar Rasool, Real-time Change-Point Detection: A deep neural network-based adaptive approach for detecting changes in multivariate time series data, Expert Systems with Applications, Volume 209, 2022, DOI: 10.1016/j.eswa.2022.118260.

The behavior of a time series may be affected by various factors. Changes in mean, variance, frequency, and auto-correlation are the most common. Change-Point Detection (CPD) aims to track down abrupt statistical characteristic changes in time series that can benefit many applications in different domains. As demonstrated in recently introduced CPD methodologies, deep learning approaches have the potential to identify more subtle changes. However, due to improper handling of data and insufficient training, these methodologies generate more false alarms and are not efficient enough in detecting change-points. In real-time CPD algorithms, preprocessed data plays a vital role in increasing the algorithm's efficiency and minimizing false alarm rates. Therefore, preprocessing of data should be a part of the algorithm, but in the existing methods, preprocessing of data is done initially, and then the whole dataset is passed to the CPD algorithm. A new three-phase architecture is proposed to address this issue, in which all phases, from preprocessing to CPD, work in an adaptive manner. The phases are integrated into a pipeline, allowing the algorithm to work in real-time. Our proposed strategy performs optimally and consistently based on performance metrics resulting from experiments on real-world datasets and artifacts. This work effectively addresses the issue of non-stationary data normalization using deep learning approaches. To reduce noise and outliers from the data, a recursive version of singular spectrum analysis is introduced. It is demonstrated that the method's performance has significantly improved by combining adaptive preprocessing with deep learning CPD techniques.
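NOTE: A short Python sketch of plain (batch) singular spectrum analysis used purely as a denoiser, with an illustrative window and rank; the recursive variant and the deep CPD network proposed in the paper are not reproduced.

# Hedged sketch of SSA denoising: embed the series into a trajectory matrix,
# keep the leading singular components, and reconstruct by diagonal averaging.
import numpy as np

def ssa_denoise(x, window=30, rank=3):
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # low-rank part
    # Diagonal (Hankel) averaging back to a 1-D series.
    out = np.zeros(n); counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

# Example: a noisy sine with a mean shift half-way through.
rng = np.random.default_rng(2)
t = np.arange(600)
x = np.sin(2 * np.pi * t / 50) + (t > 300) * 1.5 + 0.4 * rng.normal(size=t.size)
smooth = ssa_denoise(x)
print(smooth[:5])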

NOTE: See also C. Ma, L. Zhang, W. Pedrycz and W. Lu, “The Long-Term Prediction of Time Series: A Granular Computing-Based Design Approach,” in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 10, pp. 6326-6338, Oct. 2022, doi: 10.1109/TSMC.2022.3144395.

See also https://babel.isa.uma.es/kipr/?p=1548

Non-parametric detection of regimes in time series data (SODA), and its use in fuzzy forecasting

Shivani Pant, Sanjay Kumar, IFS and SODA based computational method for fuzzy time series forecasting, Expert Systems with Applications, Volume 209, 2022, DOI: 10.1016/j.eswa.2022.118213.

Time series forecasting has attracted a great deal of interest from various research communities due to its wide applications in medicine, economics, finance, engineering and many other crucial fields. Various studies in the past have shown that intuitionistic fuzzy sets (IFSs) not only handle non-stochastic non-determinism in time series forecasting but also enhance accuracy in forecasted outputs. Clustering is another method that improves the accuracy of time series forecasting. The contribution of this research work is a novel computational fuzzy time series (FTS) forecasting method which relies on IFSs and the self-organized direction aware (SODA) approach to clustering. The usage of SODA aids in making the proposed FTS forecasting method as autonomous as feasible, as it does not require human intervention or prior knowledge of the data. Forecasted outputs in the proposed FTS forecasting method are computed using a weighted formula and the weights are optimized using the grey wolf optimization (GWO) method. The proposed FTS method is applied to forecast enrolments of the University of Alabama and the market price of the State Bank of India (SBI) share at the Bombay stock exchange (BSE), India, and performance is compared in terms of root mean square error (RMSE), average forecasting error (AFE) and mean absolute deviation (MAD). The goodness of the proposed FTS forecasting method in forecasting enrolments of the University of Alabama and the market price of the SBI share is also tested using the coefficients of correlation and determination and the Akaike and Bayesian information criteria.
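NOTE: A minimal Python sketch of a first-order fuzzy time series forecaster in the classical Chen style, shown only to illustrate the FTS workflow this work builds on; the IFS modelling, the SODA-based partitioning and the GWO weight optimization of the paper are not reproduced, and the interval count is an assumption.

# Hedged sketch: equal-width fuzzification, first-order fuzzy logical
# relationships, and a midpoint-average forecast.
import numpy as np
from collections import defaultdict

def fit_fts(series, n_intervals=7):
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    labels = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)
    rules = defaultdict(set)                        # fuzzy logical relationships
    for a, b in zip(labels[:-1], labels[1:]):
        rules[a].add(b)
    return edges, mids, rules

def forecast_next(value, edges, mids, rules):
    state = int(np.clip(np.digitize([value], edges)[0] - 1, 0, len(mids) - 1))
    successors = rules.get(state)
    if not successors:                              # unseen state: keep midpoint
        return float(mids[state])
    return float(np.mean([mids[s] for s in successors]))

# Toy example on enrolment-scale data.
enrol = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]
edges, mids, rules = fit_fts(enrol)
print(forecast_next(enrol[-1], edges, mids, rules))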

See also https://babel.isa.uma.es/kipr/?p=1550

Doing more intelligent exploration in RL by measuring uncertainty through prediction

Xiaoshu Zhou, Fei Zhu, Peiyao Zhao, Within the scope of prediction: Shaping intrinsic rewards via evaluating uncertainty, Expert Systems with Applications, Volume 206, 2022, DOI: 10.1016/j.eswa.2022.117775.

The agent in reinforcement learning based approaches needs to explore to learn more about the environment and seek an optimal policy. However, simply increasing the frequency of stochastic exploration sometimes fails to work or even causes the agent to fall into traps. To solve this problem, it is essential to improve the quality of exploration. An approach, referred to as the scope of prediction based on uncertainty exploration (SPE), is proposed, taking advantage of an uncertainty mechanism and considering the stochasticity of exploration. Under this uncertainty mechanism, unexpected states generate more curiosity: the model derives higher uncertainty by projecting future scenarios and comparing them with the actual future as it explores the world. The SPE method utilizes a prediction network to predict subsequent observations and calculates the mean squared difference between the predicted and the real observations to measure uncertainty, encouraging the agent to explore unknown regions more effectively. Moreover, to reduce the noise interference caused by uncertainty, a reward-penalty model is developed that discriminates noise from the current observations and action predictions of future rewards, improving robustness against noise so that the agent can escape from noisy regions. Experimental results showed that deep reinforcement learning approaches equipped with SPE demonstrated significant improvements in simulated environments.
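NOTE: A minimal Python sketch of a prediction-error intrinsic reward: a small forward model predicts the next observation, and the mean squared prediction error is added to the external reward as a curiosity bonus. The linear model, the SGD update and the scaling factor are illustrative assumptions; the paper's prediction network and reward-penalty noise model are not reproduced.

# Hedged sketch: intrinsic reward = MSE between predicted and actual next
# observation, with the forward model updated online.
import numpy as np

class PredictionBonus:
    def __init__(self, obs_dim, act_dim, lr=1e-2, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.normal(size=(obs_dim, obs_dim + act_dim))
        self.lr, self.scale = lr, scale

    def __call__(self, obs, act, next_obs):
        x = np.concatenate([obs, act])
        pred = self.W @ x                         # predicted next observation
        err = pred - next_obs
        mse = float(np.mean(err ** 2))            # uncertainty estimate
        self.W -= self.lr * np.outer(err, x)      # one SGD step on the model
        return self.scale * mse                   # intrinsic reward bonus

# Usage inside an RL loop (sketch): total_r = env_reward + bonus(obs, act, next_obs)
bonus = PredictionBonus(obs_dim=4, act_dim=2)
print(bonus(np.zeros(4), np.ones(2), np.ones(4)))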

Semi-Markov HMMs for modelling time series in milling machines

Kai Li, Chaochao Qiu, Xinzhao Zhou, Mingsong Chen, Yongcheng Lin, Xianshi Jia, Bin Li, Modeling and tagging of time sequence signals in the milling process based on an improved hidden semi-Markov model, Expert Systems with Applications, Volume 205, 2022, DOI: 10.1016/j.eswa.2022.117758.

Vibration signals are widely used in the fields of tool wear, tool residual life prediction and health monitoring of mechanical equipment. However, current data-driven research methods mostly rely on high-value and high-density labeled data to establish the relevant models and algorithms. Therefore, it is of great significance to solve the problem of automatic tagging of data, realize automatic signal interception, and enhance the value density of manufacturing process data. The hidden semi-Markov model (HSMM) can describe the real spatial statistical characteristics of random models through observable data. As the HSMM does not need the real labels of the signal, it can reduce manual tagging work and improve labeling efficiency. In this paper, an improved HSMM was proposed to model and tag the spindle vibration signals in the milling process. First, the Mel frequency cepstral coefficients (MFCCs) were extracted as observation sequences from the collected spindle vibration signals, and the dimension of the original features was reduced by linear discriminant analysis (LDA). Subsequently, a signal automatic tagging model based on the HSMM was developed, in which the state duration can be explicitly modeled. Finally, the evaluation of the proposed methodology was carried out on laboratory and real industrial machining data. The experimental results confirmed the effectiveness and robustness of the proposed model.
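NOTE: A rough Python sketch of the tagging pipeline, with simplifications clearly flagged: PCA replaces the supervised LDA step (which needs labels), and hmmlearn's plain Gaussian HMM replaces the improved HSMM with explicit state durations. Sampling rate, feature dimensions and state count are illustrative assumptions.

# Hedged sketch: MFCC features -> dimensionality reduction -> HMM segmentation
# that assigns one state label ("tag") per frame.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

def tag_signal(signal, sr=10000, n_mfcc=13, n_dims=5, n_states=3):
    # Frame-wise MFCC features: shape (n_frames, n_mfcc).
    mfcc = librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=n_mfcc).T
    feats = PCA(n_components=n_dims).fit_transform(mfcc)     # PCA instead of LDA
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    hmm.fit(feats)                                            # plain HMM, not HSMM
    return hmm.predict(feats)

# Toy example: idle noise followed by a stronger oscillation (two regimes).
rng = np.random.default_rng(3)
idle = 0.05 * rng.normal(size=20000)
cutting = np.sin(2 * np.pi * 500 * np.arange(20000) / 10000) + 0.1 * rng.normal(size=20000)
print(tag_signal(np.concatenate([idle, cutting]), n_states=2)[::50])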

Dealing with continuous spaces in Q-learning by maintaining several partitions of the state-action space, each one corresponding to a particular time step

Joao Pedro Araujo, Mario A.T. Figueiredo, Miguel Ayala Botto, Control with adaptive Q-learning: A comparison for two classical control problems, Engineering Applications of Artificial Intelligence, Volume 112, 2022, DOI: 10.1016/j.engappai.2022.104797.

This paper evaluates adaptive Q-learning (AQL) and single-partition adaptive Q-learning (SPAQL), two algorithms for efficient model-free episodic reinforcement learning (RL), in two classical control problems (Pendulum and CartPole). AQL adaptively partitions the state–action space of a Markov decision process (MDP), while learning the control policy, i.e., the mapping from states to actions. The main difference between AQL and SPAQL is that the latter learns time-invariant policies, where the mapping from states to actions does not depend explicitly on the time step. This paper also proposes the SPAQL with terminal state (SPAQL-TS), an improved version of SPAQL tailored for the design of regulators for control problems. The time-invariant policies are shown to result in a better performance than the time-variant ones in both problems studied. These algorithms are particularly fitted to RL problems where the action space is finite, as is the case with the CartPole problem. SPAQL-TS solves the OpenAI Gym CartPole problem, while also displaying a higher sample efficiency than trust region policy optimization (TRPO), a standard RL algorithm for solving control tasks. Moreover, the policies learned by SPAQL are interpretable, while TRPO policies are typically encoded as neural networks, and therefore hard to interpret. Yielding interpretable policies while being sample-efficient are the major advantages of SPAQL. The code for the experiments is available at https://github.com/jaraujo98/SinglePartitionAdaptiveQLearning.
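NOTE: A minimal Python sketch of time-invariant tabular Q-learning on CartPole with a fixed uniform discretization of the state space. The fixed grid and the bin limits are assumptions and only a stand-in for the adaptive partitioning of AQL/SPAQL, whose actual code is in the authors' repository linked above.

# Hedged sketch: discretize CartPole observations on a uniform grid and run
# standard epsilon-greedy tabular Q-learning (policy does not depend on time).
import numpy as np
import gymnasium as gym

BINS = [np.linspace(lo, hi, 9)[1:-1] for lo, hi in
        [(-2.4, 2.4), (-3.0, 3.0), (-0.21, 0.21), (-3.0, 3.0)]]

def discretize(obs):
    return tuple(int(np.digitize(o, b)) for o, b in zip(obs, BINS))

env = gym.make("CartPole-v1")
n_actions = env.action_space.n
Q = {}                                       # maps (state, action) -> value
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)
for episode in range(300):
    obs, _ = env.reset(seed=episode)
    s, done = discretize(obs), False
    while not done:
        if rng.random() < eps:
            a = int(env.action_space.sample())                      # explore
        else:
            a = max(range(n_actions), key=lambda a_: Q.get((s, a_), 0.0))
        obs, r, terminated, truncated, _ = env.step(a)
        s2 = discretize(obs)
        done = terminated or truncated
        best_next = 0.0 if terminated else max(Q.get((s2, a_), 0.0) for a_ in range(n_actions))
        q_old = Q.get((s, a), 0.0)
        Q[(s, a)] = q_old + alpha * (r + gamma * best_next - q_old)  # Q update
        s = s2
print(len(Q), "state-action pairs visited")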

Clustering time series through the moments of the corresponding regimes using fuzzy clustering

Roy Cerqueti, Pierpaolo D'Urso, Livia De Giovanni, Massimiliano Giacalone, Raffaele Mattera, Weighted score-driven fuzzy clustering of time series with a financial application, Expert Systems with Applications, Volume 198, 2022, DOI: 10.1016/j.eswa.2022.116752.

Time series data are commonly clustered based on their distributional characteristics. The moments play a central role among such characteristics because of their relevant informative content. This paper aims to develop a novel approach that addresses open issues in moment-based clustering. First of all, we deal with a very general framework of time-varying moments rather than static quantities. Second, we include high-order moments in the clustering model. Third, we avoid implicit equal weighting of the considered moments by developing a clustering procedure that objectively computes the optimal weight for each moment. As a result, following a fuzzy approach, two weighted clustering models based on both unconditional and conditional moments are proposed. Since the Dynamic Conditional Score model is used to estimate both conditional and unconditional moments, the resulting framework is called weighted score-driven clustering. We apply the proposed method to financial time series as an empirical experiment.
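NOTE: A minimal Python sketch of the moment-based clustering idea: each series is summarized by its first four sample moments, the moments are weighted, and a basic fuzzy c-means is run on the weighted features. Static moments and fixed, assumed weights stand in for the paper's time-varying score-driven moments and the objectively optimized weights.

# Hedged sketch: moment features + hand-rolled fuzzy c-means.
import numpy as np
from scipy import stats

def moment_features(series):
    return np.array([[np.mean(x), np.var(x), stats.skew(x), stats.kurtosis(x)]
                     for x in series])

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))               # membership matrix
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                    # normalize memberships
    return U, centers

# Toy example: calm series vs. volatile, right-skewed series.
rng = np.random.default_rng(4)
calm = [0.01 * rng.normal(size=500) for _ in range(5)]
wild = [0.05 * rng.standard_gamma(2.0, size=500) - 0.1 for _ in range(5)]
weights = np.array([1.0, 1.0, 0.5, 0.5])                     # assumed moment weights
X = moment_features(calm + wild) * weights
U, _ = fuzzy_cmeans(X, c=2)
print(np.argmax(U, axis=1))                                  # hard cluster labels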