Category Archives: Communication Networks

Communication delays modelled as Gamma distributions for space–Earth applications

H. Chen, Z. Liu, P. Huang and Z. Kuang, Time-Delay Modeling and Simulation for Relay Communication-Based Space Telerobot System, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 7, pp. 4211-4222, July 2022, DOI: 10.1109/TSMC.2021.3090806.

In a space telerobot system (STS), the effectiveness of a control method in eliminating the influence of time delays should be verified under realistic conditions. However, it is difficult and costly for many scholars to obtain the confidential information that would allow them to establish an STS. A feasible alternative is to use existing results to model the time delay as realistically as possible and then to program a simulation system that generates the simulated time delay, thereby enabling validation. In this article, time-delay modeling and simulation problems for relay communication-based STS are first studied. The time delay in relay communication-based STS consists of both processing and communication time delays; the latter is divided into ground and ground-space parts. By extending the available results, processing and ground communication time delays are modeled with the probability distribution function modeling approach. An optimal communication link identification and minimum time-delay realization (OCLIMTDR) method is proposed to model the ground-space communication time delay. In this method, the novel point–vector–sphere (PVS) algorithm serves to judge link connectivity. The PVS algorithm is based on geometric theory, which gives the OCLIMTDR method good extensibility and renders it suitable for any relay communication network in theory. All three parts of the time-delay models are integrated to form the loop time-delay model of the STS. Subsequently, a time-delay simulation system is created by programming the loop time-delay model. Finally, the correctness of the simulation system is further authenticated based on simulations and some prior knowledge.
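As a rough sketch of how such simulated loop delays could be generated, the snippet below draws them from a Gamma distribution on top of a fixed propagation floor; the shape, scale and floor values are illustrative guesses of mine, not parameters identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_loop_delays(n_samples, shape=2.0, scale=0.15, floor=0.5):
    """Draw simulated loop delays (in seconds) for a space telerobot link.

    The delay is modelled as a fixed propagation floor (e.g. the minimum
    relay-path latency) plus a Gamma-distributed random component.
    All parameter values here are illustrative, not taken from the paper.
    """
    return floor + rng.gamma(shape, scale, size=n_samples)

delays = simulate_loop_delays(10000)
print(f"mean = {delays.mean():.3f} s, std = {delays.std():.3f} s")
```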

Interesting account of the “computation versus communication” trade-off in distributed computing, particularly in distributed optimization

A. S. Berahas, R. Bollapragada, N. S. Keskar and E. Wei, Balancing Communication and Computation in Distributed Optimization, IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3141-3155, Aug. 2019, DOI: 10.1109/TAC.2018.2880407.

Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains including machine learning, robotics, and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every several iterations) of a distributed algorithm, each node in the network requires some form of information exchange with its neighboring nodes (communication) and a computation step related to a (sub)gradient (computation). The standard way of judging an algorithm via only the number of iterations overlooks the complexity associated with each iteration. Moreover, various applications deploying distributed methods may prefer a different composition of communication and computation. Motivated by this discrepancy, in this paper, we propose an adaptive cost framework that adjusts the cost measure depending on the features of various applications. We present a flexible algorithmic framework, where communication and computation steps are explicitly decomposed to enable algorithm customization for various applications. We apply this framework to the well-known distributed gradient descent (DGD) method, and show that the resulting customized algorithms, which we call DGD^t, NEAR-DGD^t, and NEAR-DGD+, compare favorably to their base algorithms, both theoretically and empirically. The proposed NEAR-DGD+ algorithm is an exact first-order method where the communication and computation steps are nested, and when the number of communication steps is adaptively increased, the method converges to the optimal solution. We test the performance and illustrate the flexibility of the methods, as well as practical variants, on quadratic functions and classification problems that arise in machine learning, in terms of iterations, gradient evaluations, communications, and the proposed cost framework.
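To make the communication/computation decomposition concrete, here is a minimal sketch in the spirit of NEAR-DGD^t: each outer iteration performs a configurable number of consensus (communication) rounds followed by one local gradient (computation) step. The scalar quadratic objectives, the ring topology, its mixing weights and the step size are illustrative assumptions of mine, not the paper's experimental setup.

```python
import numpy as np

# Quadratic local objectives f_i(x) = 0.5 * a_i * x^2 - b_i * x (scalar x for clarity);
# the global minimizer of the sum is sum(b_i) / sum(a_i).
rng = np.random.default_rng(1)
n = 5
a = rng.uniform(1.0, 3.0, n)
b = rng.uniform(-1.0, 1.0, n)
x_star = b.sum() / a.sum()

# Doubly stochastic mixing matrix for a ring of n nodes (lazy, symmetric weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def near_dgd(num_iters, comm_steps, step=0.1):
    """Each iteration: `comm_steps` consensus (communication) rounds,
    then one local gradient (computation) step per node."""
    x = np.zeros(n)
    for _ in range(num_iters):
        for _ in range(comm_steps):   # communication: mixing with neighbours
            x = W @ x
        x = x - step * (a * x - b)    # computation: local gradient step
    return x

print("error, 1 comm/iter:", np.abs(near_dgd(200, 1) - x_star).max())
print("error, 5 comm/iter:", np.abs(near_dgd(200, 5) - x_star).max())
```

Increasing the number of communication rounds per iteration drives the iterates closer to the true optimum, which is the trade-off the paper's cost framework is designed to quantify.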

Interesting review of one-way message delay measurement in networks at the application level

P. Ferrari, A. Flammini, E. Sisinni, S. Rinaldi, D. Brandão and M. S. Rocha, Delay Estimation of Industrial IoT Applications Based on Messaging Protocols, IEEE Transactions on Instrumentation and Measurement, vol. 67, no. 9, pp. 2188-2199, DOI: 10.1109/TIM.2018.2813798.

Information and operational technologies merge into the so-called industrial Internet of Things, which is one of the basic pillars of the Industry 4.0 paradigm. Roughly speaking, yet-to-come services will be offered in the automation scenario by industrial devices having an internet connection for sharing data in the cloud. Currently, most efforts are in the development of protocols able to ensure horizontal interoperability among heterogeneous applications. Consequently, little attention is devoted to time-related performance. In this paper, a new, fully software-based, platform-independent approach is proposed for experimentally evaluating the delay in transferring information across local and intercontinental routes by applications leveraging messaging middleware. The application is realized using the node-RED web-based framework, due to its availability on different platforms; the widely accepted message queue telemetry transport protocol has been chosen thanks to its low overhead and complexity. For the sake of completeness, five different private and public brokers are used. The adopted industrial-grade hardware, complemented by a global positioning system time reference, permits an overall synchronization and timestamping accuracy of a few milliseconds. The vast measurement campaign highlighted that, generally, quality of service (QoS) type 1 offers low end-to-end delay (average value less than 0.5 s) with reduced variability (0.1 s). However, the maximum end-to-end one-way delay ranges from 1 s for QoS 0 to less than 1.5 s for fully acknowledged QoS 2.
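A bare-bones version of such a one-way delay measurement, assuming the paho-mqtt client library (1.x-style API), a public test broker and hosts that already share a common time reference (the paper uses GPS-disciplined clocks; here we simply call time.time()), could look like the sketch below. The broker address and topic are placeholders of mine, not the setup used in the paper.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"   # illustrative public broker
TOPIC = "delay-test/probe"      # illustrative topic

def on_message(client, userdata, msg):
    # Receiver timestamps arrival and compares with the sender timestamp
    # embedded in the payload; both hosts must share a common clock.
    t_rx = time.time()
    t_tx = json.loads(msg.payload)["t_tx"]
    print(f"one-way delay (QoS {msg.qos}): {(t_rx - t_tx) * 1e3:.1f} ms")

sub = mqtt.Client()
sub.on_message = on_message
sub.connect(BROKER)
sub.subscribe(TOPIC, qos=1)
sub.loop_start()

pub = mqtt.Client()
pub.connect(BROKER)
for _ in range(10):
    pub.publish(TOPIC, json.dumps({"t_tx": time.time()}), qos=1)
    time.sleep(1.0)

sub.loop_stop()
```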

A new algorithm that provably converges to a global clock consensus in a network

Miloš S. Stanković, Srdjan S. Stanković, Karl Henrik Johansson, Distributed time synchronization for networks with random delays and measurement noise, Automatica, Volume 93, 2018, Pages 126-137, DOI: 10.1016/j.automatica.2018.03.054.

In this paper a new distributed asynchronous algorithm is proposed for time synchronization in networks with random communication delays, measurement noise and communication dropouts. Three different types of the drift correction algorithm are introduced, based on different kinds of local time increments. Under nonrestrictive conditions concerning network properties, it is proved that all the algorithm types provide convergence in the mean square sense and with probability one (w.p.1) of the corrected drifts of all the nodes to the same value (consensus). An estimate of the convergence rate of these algorithms is derived. For offset correction, a new algorithm is proposed containing a compensation parameter coping with the influence of random delays and special terms taking care of the influence of both linearly increasing time and drift correction. It is proved that the corrected offsets of all the nodes converge in the mean square sense and w.p.1. An efficient offset correction algorithm based on consensus on local compensation parameters is also proposed. It is shown that the overall time synchronization algorithm can also be implemented as a flooding algorithm with one reference node. It is proved that it is possible to achieve bounded error between local corrected clocks in the mean square sense and w.p.1. Simulation results provide an additional practical insight into the algorithm properties and show its advantage over the existing methods.
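As a toy illustration of drift correction by consensus under measurement noise (not the paper's algorithm), the sketch below lets each node of a ring nudge a multiplicative rate correction towards its neighbours' observed corrected rates with a decreasing step size; all numerical values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
alpha = 1.0 + rng.normal(0, 1e-4, n)     # true clock drifts (small deviations from 1)
rho = np.ones(n)                          # multiplicative drift corrections, start at 1

# Ring topology; each node exchanges readings with its two neighbours.
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

eps0 = 0.5
for k in range(1, 2001):
    eps = eps0 / k                        # decreasing step size to average out noise
    new_rho = rho.copy()
    for i in range(n):
        for j in neighbours[i]:
            # Node i observes node j's corrected rate relative to its own clock,
            # corrupted by noise standing in for random delays / timestamp errors.
            observed = (alpha[j] * rho[j]) / alpha[i] + rng.normal(0, 1e-5)
            new_rho[i] += eps * (observed - rho[i])
    rho = new_rho

corrected = alpha * rho
print("spread of corrected rates:", corrected.max() - corrected.min())
```

The spread of corrected rates shrinks towards zero, i.e. the nodes approach a common (consensus) rate despite noisy pairwise observations, which is the qualitative behaviour the paper proves rigorously for its algorithms.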

On the effects of delays on the stability of a networked control plant when the sensor and controller clocks are not synchronized

K. Okano, M. Wakaiki, G. Yang and J. P. Hespanha, Stabilization of Networked Control Systems Under Clock Offsets and Quantization, IEEE Transactions on Automatic Control, vol. 63, no. 6, pp. 1708-1723, DOI: 10.1109/TAC.2017.2753938.

This paper studies the impact of clock mismatches and quantization on networked control systems. We consider a scenario where the plant’s state is measured by a sensor that communicates with the controller through a network. Variable communication delays and clock jitter do not permit a perfect synchronization between the clocks of the sensor and controller. We investigate limitations on the clock offset tolerable for stabilization of the feedback system. For a process with a scalar-valued state, we show that there exists a tight bound on the offset above which the closed-loop system cannot be stabilized with any causal controllers. For higher dimensional plants, if the plant has two distinct poles, then the effect of clock mismatches can be canceled with a finite number of measurements, and hence there is no fundamental limitation. We also consider the case where the measurements are subject to quantization in addition to clock mismatches. For first-order plants, we present necessary conditions and sufficient conditions for stabilizability, which show that a larger clock offset requires a finer quantization.
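The following toy simulation, which is my own illustration and not the controller analysed in the paper, shows the flavour of the problem: an unstable scalar plant stabilised by feedback that acts on a stale measurement, where the fixed staleness stands in for a timing mismatch, loses stability once the mismatch grows.

```python
import numpy as np

def simulate(a=1.2, K=0.8, delay=0, steps=60, x0=1.0):
    """Scalar plant x_{k+1} = a x_k + u_k with feedback acting on a stale
    measurement, u_k = -K x_{k-delay}. A toy illustration of how mis-timed
    measurements degrade closed-loop stability; parameter values are mine."""
    x = np.zeros(steps + 1)
    x[0] = x0
    for k in range(steps):
        stale = x[max(k - delay, 0)]
        x[k + 1] = a * x[k] - K * stale
    return x

for d in (0, 1, 2, 4):
    print(f"staleness {d} steps: |x_final| = {abs(simulate(delay=d)[-1]):.3g}")
```

With no or one step of staleness the state decays; from two steps onwards it diverges, echoing the paper's message that there is a limit to the timing mismatch a causal controller can tolerate.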

A novel fast algorithm for clock synchronization in a wireless network, with a nice introduction, but assuming negligible communication times and thus not directly applicable to teleoperation

Kan Xie, Qianqian Cai, Minyue Fu, A fast clock synchronization algorithm for wireless sensor networks, Automatica, Volume 92, 2018, Pages 133-142, DOI: 10.1016/j.automatica.2018.03.004.

This paper proposes a novel clock synchronization algorithm for wireless sensor networks (WSNs). The algorithm is derived using a fast finite-time average consensus idea, and is fully distributed, meaning that each node relies only on its local clock readings and reading announcements from its neighbours. For networks with an acyclic graph, the algorithm converges in only d iterations for clock rate synchronization and another d iterations for clock offset synchronization, where d is the graph diameter. The algorithm enjoys low computational and communicational complexities and robustness against transmission adversaries. Each node can execute the algorithm asynchronously without the need for global coordination. Due to its fast convergence, the algorithm is most suitable for large-scale WSNs. For WSNs with a cyclic graph, a fast distributed depth-first-search (DFS) algorithm can be applied first to form a spanning tree before applying the proposed synchronization algorithm.
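To give a flavour of why acyclic graphs allow termination in finitely many rounds, here is a minimal two-pass averaging sketch on a spanning tree (aggregate rates leaves-to-root, then broadcast the average root-to-leaves). It only illustrates that idea; it is not the distributed, asynchronous algorithm proposed in the paper, and the tree and clock rates below are invented.

```python
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]   # illustrative spanning tree (parent, child)
rates = {0: 1.00002, 1: 0.99997, 2: 1.00004, 3: 0.99999, 4: 1.00001, 5: 0.99996}

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

def aggregate(node):
    """Leaves-to-root pass: return (sum of rates, node count) for the subtree."""
    total, count = rates[node], 1
    for c in children[node]:
        s, m = aggregate(c)
        total += s
        count += m
    return total, count

total, count = aggregate(0)
common_rate = total / count                    # value broadcast root-to-leaves
corrections = {i: common_rate / rates[i] for i in rates}
print("common rate:", common_rate)
print("rate corrections:", corrections)
```

Both passes finish after a number of message rounds bounded by the tree depth, mirroring the paper's d-iteration convergence on acyclic graphs.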

Optimal routing in communication networks with probabilistic models of delays that are acquired on-line

M. S. Talebi, Z. Zou, R. Combes, A. Proutiere and M. Johansson, Stochastic Online Shortest Path Routing: The Value of Feedback, IEEE Transactions on Automatic Control, vol. 63, no. 4, pp. 915-930, DOI: 10.1109/TAC.2017.2747409.

This paper studies online shortest path routing over multihop networks. Link costs or delays are time varying and modeled by independent and identically distributed random processes, whose parameters are initially unknown. The parameters, and hence the optimal path, can only be estimated by routing packets through the network and observing the realized delays. Our aim is to find a routing policy that minimizes the regret (the cumulative difference of expected delay) between the path chosen by the policy and the unknown optimal path. We formulate the problem as a combinatorial bandit optimization problem and consider several scenarios that differ in where routing decisions are made and in the information available when making the decisions. For each scenario, we derive a tight asymptotic lower bound on the regret that has to be satisfied by any online routing policy. Three algorithms, with a tradeoff between computational complexity and performance, are proposed. The regret upper bounds of these algorithms improve over those of the existing algorithms. We also assess numerically the performance of the proposed algorithms and compare it to that of existing algorithms.
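A minimal epsilon-greedy sketch of the semi-bandit setting (per-edge delays observed only on the chosen path) is shown below; it is not one of the three algorithms proposed in the paper, and the network, delay distributions and exploration schedule are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny illustrative network: candidate source-destination paths given as edge lists.
# True mean delays are unknown to the learner and only revealed through noisy
# per-edge observations on the chosen path (semi-bandit feedback).
true_mean = {"ab": 0.3, "bd": 0.3, "ac": 0.2, "cd": 0.5, "ad": 0.7}
paths = {"a-b-d": ["ab", "bd"], "a-c-d": ["ac", "cd"], "a-d": ["ad"]}

est = {e: 0.0 for e in true_mean}   # empirical mean delay per edge
cnt = {e: 0 for e in true_mean}

def observe(edge):
    """Exponentially distributed edge delay around its true mean (illustrative)."""
    return rng.exponential(true_mean[edge])

for t in range(1, 2001):
    eps = min(1.0, 10.0 / t)        # simple epsilon-greedy exploration schedule
    if rng.random() < eps:
        chosen = rng.choice(list(paths))
    else:
        chosen = min(paths, key=lambda p: sum(est[e] for e in paths[p]))
    for e in paths[chosen]:         # semi-bandit feedback: every traversed edge is observed
        cnt[e] += 1
        est[e] += (observe(e) - est[e]) / cnt[e]

best = min(paths, key=lambda p: sum(true_mean[e] for e in paths[p]))
print("learned choice:", min(paths, key=lambda p: sum(est[e] for e in paths[p])),
      "| true best:", best)
```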

Mapping the Wi-Fi signal for robot localization, both precisely and accurately, through a complex model of the signal

Renato Miyagusuku, Atsushi Yamashita, Hajime Asama, Precise and accurate wireless signal strength mappings using Gaussian processes and path loss models, Robotics and Autonomous Systems, Volume 103, 2018, Pages 134-150, DOI: 10.1016/j.robot.2018.02.011.

In this work, we present a new modeling approach that generates precise (low variance) and accurate (low mean error) wireless signal strength mappings. In robot localization, these mappings are used to compute the likelihood of locations conditioned to new sensor measurements. Therefore, both mean and variance predictions are required. Gaussian processes have been successfully used for learning highly accurate mappings. However, they generalize poorly at locations far from their training inputs, making those predictions have high variance (low precision). In this work, we address this issue by incorporating path loss models, which are parametric functions that, although lacking in accuracy, generalize well. Path loss models are used together with Gaussian processes to compute mean predictions and, most importantly, to bound Gaussian processes’ predicted variances. Through extensive testing done with our open source framework, we demonstrate the ability of our approach to generate precise and accurate mappings, and the increased localization accuracy of Monte Carlo localization algorithms when using them; all our datasets and software are made readily available online for the community.
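One common way to combine the two models, sketched below with scikit-learn on synthetic data, is to fit a log-distance path loss model first and then a zero-mean Gaussian process to its residuals, so that predictions far from the training data fall back to the parametric model. The paper's formulation, which additionally uses the path loss model to bound the predicted variance, differs in the details; everything here (access point position, parameters, data) is invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Synthetic training data: access point at the origin, RSS decays with log-distance.
ap = np.array([0.0, 0.0])
X = rng.uniform(-10, 10, size=(60, 2))                 # training positions (m)
d = np.linalg.norm(X - ap, axis=1).clip(min=0.5)
rss = -40.0 - 10 * 2.0 * np.log10(d) + rng.normal(0, 2.0, d.shape)   # dBm + noise

# 1) Fit the parametric path loss model RSS = p0 - 10*n*log10(d) by least squares.
A = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
p0, n_exp = np.linalg.lstsq(A, rss, rcond=None)[0]

def path_loss(points):
    dist = np.linalg.norm(points - ap, axis=1).clip(min=0.5)
    return p0 - 10 * n_exp * np.log10(dist)

# 2) Fit a zero-mean GP to the residuals, so that far from the training data the
#    prediction reverts to the path loss model rather than an arbitrary constant.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1.0))
gp.fit(X, rss - path_loss(X))

query = np.array([[3.0, 4.0], [40.0, 40.0]])           # one point near, one far from data
res_mean, res_std = gp.predict(query, return_std=True)
print("predicted RSS:", path_loss(query) + res_mean)
print("predicted std:", res_std)
```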

A gentle tutorial on Industrial Ethernet

K. Langlois et al., EtherCAT Tutorial: An Introduction for Real-Time Hardware Communication on Windows [Tutorial], IEEE Robotics & Automation Magazine, vol. 25, no. 1, pp. 22-122, DOI: 10.1109/MRA.2017.2787224.

Setting up real-time hardware communication for applications such as precise motion control can be time consuming and confusing. Therefore, this tutorial introduces the deployment of an Ethernet for control automation technology (EtherCAT) protocol. We situate EtherCAT, briefly discuss the origins and working principles, and mention advantages over other widely used protocols. Additionally, the main objectives of the tutorial and the required software to complete it are presented. Online supplements are included, explaining all steps to run a Simulink model in real time on a Windows machine within a few hours.

A framework for the performance analysis of collaborative network clock synchronization

Y. Xiong, N. Wu, Y. Shen and M. Z. Win, Cooperative Network Synchronization: Asymptotic Analysis, IEEE Transactions on Signal Processing, vol. 66, no. 3, pp. 757-772, DOI: 10.1109/TSP.2017.2759098.

Accurate clock synchronization is required for collaborative operations among nodes across wireless networks. Compared with traditional layer-by-layer methods, cooperative network synchronization techniques lead to significant improvement in performance, efficiency, and robustness. This paper develops a framework for the performance analysis of cooperative network synchronization. We introduce the concepts of cooperative dilution intensity (CDI) and relative CDI to characterize the interaction between agents, which can be interpreted as properties of a random walk over the network. Our approach enables us to derive closed-form asymptotic expressions of performance limits, relating them to the quality of observations as well as the network topology.