Mathematical model of quartz crystal clocks and Kalman Filter estimation for clock synchronization

Giorgi, G., An Event-Based Kalman Filter for Clock Synchronization, IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 2, pp. 449–457, Feb. 2015, DOI: 10.1109/TIM.2014.2340631

The distribution of a time reference has long been a significant research topic in measurement, and different solutions have been proposed over the years. In this context, the design of servo clocks plays an important role in achieving better performance by smoothing the influence of the noise sources affecting a synchronization system. A servo clock is expected to provide an adaptive and conservative measure of the time distance between the local clock and the time reference while minimizing, if possible, energy consumption. In this paper, we propose a servo clock based on an efficient implementation of the Kalman filter (KF), referred to in the following as the event-based KF, which overcomes the drawbacks of existing KF-based servo clocks while also significantly reducing the computational cost. An in-depth analysis of the synchronization uncertainty is reported to completely characterize the proposed solution, and finally some guidelines on how to correctly initialize the KF are provided.
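
As a point of reference for what a KF-based servo clock estimates, here is a minimal sketch of the textbook two-state clock model (offset and skew) with a standard Kalman filter. This is not the paper's event-based variant, and all noise values and the initial covariance are assumed placeholders.

```python
# Minimal two-state clock Kalman filter: state = [offset (s), skew (s/s)].
# Noise magnitudes below are illustrative assumptions, not the paper's values.
import numpy as np

class ClockKF:
    def __init__(self, q_theta=1e-12, q_gamma=1e-14, r=1e-10):
        self.x = np.zeros(2)                   # [offset, skew]
        self.P = np.eye(2) * 1e-6              # assumed initial covariance
        self.Q = np.diag([q_theta, q_gamma])   # process noise (assumed)
        self.R = r                             # measurement noise variance (assumed)
        self.H = np.array([[1.0, 0.0]])        # only the offset is measured

    def predict(self, tau):
        """Propagate the state over a sync interval of tau seconds."""
        F = np.array([[1.0, tau], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct with a measured offset z against the time reference."""
        S = self.H @ self.P @ self.H.T + self.R
        K = (self.P @ self.H.T) / S            # Kalman gain (2x1)
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# usage sketch: one predict/update cycle per synchronization message
kf = ClockKF()
kf.predict(tau=1.0)      # one second since the last sync
kf.update(z=25e-6)       # measured offset: 25 microseconds
print(kf.x)              # estimated [offset, skew]
```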

Estimating an empirical distribution from a number of estimates distributed among several agents, minimizing the information exchange between the agents

Sarwate, A.D.; Javidi, T., Distributed Learning of Distributions via Social Sampling, IEEE Transactions on Automatic Control, vol. 60, no. 1, pp. 34–45, Jan. 2015, DOI: 10.1109/TAC.2014.2329611

A protocol for distributed estimation of discrete distributions is proposed. Each agent begins with a single sample from the distribution, and the goal is to learn the empirical distribution of the samples. The protocol is based on a simple message-passing model motivated by communication in social networks. Agents sample a message randomly from their current estimates of the distribution, resulting in a protocol with quantized messages. Using tools from stochastic approximation, the algorithm is shown to converge almost surely. Examples illustrate three regimes with different consensus phenomena. Simulations demonstrate this convergence and give some insight into the effect of network topology.
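
The protocol structure is easy to picture in code. Below is a schematic sketch of the social-sampling loop under assumed simplifications (a complete communication graph and a uniform 1/t step size); the paper's exact update weights and network model differ.

```python
# Schematic social sampling: each agent holds a distribution estimate,
# broadcasts one quantized sample from it per round, and nudges its
# estimate toward the heard messages with a decaying step size.
import numpy as np

rng = np.random.default_rng(0)
n_agents, k, rounds = 50, 4, 2000
samples = rng.integers(0, k, size=n_agents)        # one private sample each
est = np.zeros((n_agents, k))
est[np.arange(n_agents), samples] = 1.0            # initial point-mass estimates

# assumed complete communication graph: every agent hears every message
for t in range(1, rounds + 1):
    msgs = np.array([rng.choice(k, p=p) for p in est])  # quantized messages
    hist = np.bincount(msgs, minlength=k) / n_agents
    step = 1.0 / t                                  # stochastic-approximation step
    est = (1 - step) * est + step * hist            # drift toward heard messages
    est /= est.sum(axis=1, keepdims=True)           # guard against float drift

target = np.bincount(samples, minlength=k) / n_agents
print("empirical distribution:", target)
print("consensus estimate    :", est.mean(axis=0))
```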

How to bypass the NP-hardness of estimating the best explanation of given data (instantiated as MAP, i.e., Maximum A Posteriori, not as maximum likelihood) in discrete Bayesian networks, by distinguishing relevant from irrelevant variables

Johan Kwisthout, Most frugal explanations in Bayesian networks, Artificial Intelligence, Volume 218, January 2015, Pages 56-73, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.10.001

Inferring the most probable explanation to a set of variables, given a partial observation of the remaining variables, is one of the canonical computational problems in Bayesian networks, with widespread applications in AI and beyond. This problem, known as MAP, is computationally intractable (NP-hard) and remains so even when only an approximate solution is sought. We propose a heuristic formulation of the MAP problem, denoted as Inference to the Most Frugal Explanation (MFE), based on the observation that many intermediate variables (that are neither observed nor to be explained) are irrelevant with respect to the outcome of the explanatory process. An explanation based on few samples (often even a singleton sample) from these irrelevant variables is typically almost as good as an explanation based on (the computationally costly) marginalization over these variables. We show that while MFE is computationally intractable in general (as is MAP), it can be tractably approximated under plausible situational constraints, and its inferences are fairly robust with respect to which intermediate variables are considered to be relevant.
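
The contrast between MAP's marginalization and MFE's sampling can be made concrete on a toy network. The sketch below uses an invented three-variable chain H -> I -> E of binary variables with made-up CPTs; I plays the role of the "irrelevant" intermediate variable that MFE samples (uniformly here, for simplicity; a real implementation would sample from the network's distribution) instead of summing out.

```python
# Toy comparison of MAP (marginalize over I) vs. MFE (sample I and vote).
# All CPT numbers are invented for illustration.
import random
from collections import Counter

P_H = {0: 0.4, 1: 0.6}
P_I_given_H = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(I | H)
P_E_given_I = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # P(E | I)

def joint(h, i, e):
    return P_H[h] * P_I_given_H[h][i] * P_E_given_I[i][e]

def map_explanation(e):
    """Exact MAP: marginalize over the intermediate variable I."""
    return max((0, 1), key=lambda h: sum(joint(h, i, e) for i in (0, 1)))

def mfe_explanation(e, n_samples=1, rng=random.Random(0)):
    """MFE: sample I a few times; each sample votes for its best hypothesis."""
    votes = Counter()
    for _ in range(n_samples):
        i = rng.choice((0, 1))                      # uniform sample of I
        votes[max((0, 1), key=lambda h: joint(h, i, e))] += 1
    return votes.most_common(1)[0][0]

print(map_explanation(e=1), mfe_explanation(e=1, n_samples=3))
```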

On search as a consequence of the exploration-exploitation trade-off, and as a core element in human cognition

Thomas T. Hills, Peter M. Todd, David Lazer, A. David Redish, Iain D. Couzin, the Cognitive Search Research Group, Exploration versus exploitation in space, mind, and society, Trends in Cognitive Sciences, Volume 19, Issue 1, January 2015, Pages 46-54, ISSN 1364-6613, DOI: 10.1016/j.tics.2014.10.004.

Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope.
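
For readers who want the trade-off in its barest algorithmic form, here is the textbook epsilon-greedy bandit, a standard toy model of exploration versus exploitation; the arm reward probabilities and the epsilon value are arbitrary assumptions, not anything taken from the review.

```python
# Epsilon-greedy bandit: with probability epsilon explore a random arm,
# otherwise exploit the arm with the best running reward estimate.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7])   # assumed reward probabilities
counts = np.zeros(3)
values = np.zeros(3)                     # running mean reward per arm
epsilon = 0.1

for t in range(10_000):
    if rng.random() < epsilon:           # explore
        arm = int(rng.integers(3))
    else:                                # exploit
        arm = int(np.argmax(values))
    reward = rng.random() < true_means[arm]
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print("estimated arm values:", values.round(3))
```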

On the way humans reduce perceptual information during decision making, departing from statistically optimal behavior in order to cope with the overwhelming sensory flow

Christopher Summerfield, Konstantinos Tsetsos, Do humans make good decisions?, Trends in Cognitive Sciences, Volume 19, Issue 1, January 2015, Pages 27-34, ISSN 1364-6613, DOI: 10.1016/j.tics.2014.11.005

Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to 'robust' decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and we propose a general computational framework that accounts for findings across the two domains.

A new, simple method for mobile robot path planning, based on particles and inspired by bacterial foraging

Md. Arafat Hossain, Israt Ferdous, Autonomous robot path planning in dynamic environment using a new optimization technique inspired by bacterial foraging technique, Robotics and Autonomous Systems, Volume 64, February 2015, Pages 137-141, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.07.002


Path planning is one of the basic and interesting functions for a mobile robot. This paper explores the application of Bacterial Foraging Optimization (BFO) to the problem of mobile robot navigation: determining the shortest feasible path from the current position to the target position in an unknown environment with moving obstacles. It develops a new algorithm based on the BFO technique, which finds a path towards the target while avoiding obstacles, using particles randomly distributed on a circle around the robot. The criterion for selecting the best particle is the distance to the target together with the Gaussian cost function of the particle. A high-level decision strategy is then used for the selection, and the algorithm proceeds from there. It works on the local environment using a simple robot sensor, so it does not need to build an additional map, which would add cost. Furthermore, it can be implemented without algorithm tuning or complex calculations. To simulate the algorithm, the program is written in C and the environment is created with OpenGL. To test the efficiency of the proposed technique, results are compared with basic BFO and with the well-known Particle Swarm Optimization (PSO) algorithm, showing that the proposed method yields a better, more nearly optimal path.
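
The particle-on-a-circle step described above is simple enough to sketch. The following is a schematic reconstruction under assumed parameter values (circle radius, Gaussian width, obstacle weight), not the authors' C implementation.

```python
# One planning step: score candidate points on a circle around the robot
# by attraction to the target plus a Gaussian obstacle penalty; the best
# particle becomes the next waypoint.
import numpy as np

def next_waypoint(robot, target, obstacles, n_particles=16,
                  radius=0.5, sigma=0.4, w_obs=2.0):
    angles = np.random.uniform(0, 2 * np.pi, n_particles)
    pts = robot + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    cost = np.linalg.norm(pts - target, axis=1)      # attraction to target
    for obs in obstacles:                            # Gaussian repulsion
        d2 = np.sum((pts - obs) ** 2, axis=1)
        cost += w_obs * np.exp(-d2 / (2 * sigma ** 2))
    return pts[np.argmin(cost)]                      # best particle wins

# usage sketch: iterate until close enough to the target
robot, target = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.4])]
for _ in range(30):
    robot = next_waypoint(robot, target, obstacles)
    if np.linalg.norm(robot - target) < 0.5:
        break
```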

Taking into account the way a path avoids obstacles in order to improve the three main approaches to robot path planning: graph search, probabilistic sampling, and bug algorithms

Emili Hernandez, Marc Carreras, Pere Ridao, A comparison of homotopic path planning algorithms for robotic applications, Robotics and Autonomous Systems, Volume 64, February 2015, Pages 44-58, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.10.021


This paper addresses the path planning problem for robotic applications using homotopy classes. These classes provide a topological description of how paths avoid obstacles, which is an added value to the path planning problem. Homotopy classes are generated and sorted according to a lower bound heuristic estimator using a method we developed. Then, the classes are used to constrain and guide path planning algorithms. Three different path planners are presented and compared: a graph-search algorithm called Homotopic A∗ (HA∗), a probabilistic sample-based algorithm called Homotopic RRT (HRRT), and a bug-based algorithm called Homotopic Bug (HBug). Our method has been tested in simulation and in an underwater bathymetric map to compute the trajectory of an Autonomous Underwater Vehicle (AUV). A comparison with well-known path planning algorithms has also been included. Results show that our homotopic path planners improve the quality of the solutions of their respective non-homotopic versions with similar computation time while keeping the topological constraints.
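
One generic way to label homotopy classes in 2-D, in the spirit of the topological description used here, is to record signed crossings of rays cast upward from the obstacles; the sketch below uses that generic construction, which is not the exact method of the paper.

```python
# Homotopy signature of a polyline path: the ordered, signed sequence of
# crossings of vertical rays cast upward from each obstacle's reference
# point, with immediate back-and-forth crossings cancelled.
def crossing_signature(path, obstacle_refs):
    sig = []
    for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
        for k, (ox, oy) in enumerate(obstacle_refs):
            # does this segment cross the upward ray x = ox, y >= oy?
            if (x0 - ox) * (x1 - ox) < 0:
                t = (ox - x0) / (x1 - x0)
                if y0 + t * (y1 - y0) >= oy:
                    token = (k, 1 if x1 > x0 else -1)  # signed crossing
                    if sig and sig[-1] == (k, -token[1]):
                        sig.pop()                      # cancel back-and-forth
                    else:
                        sig.append(token)
    return tuple(sig)

# usage sketch: two ways around an obstacle at (1, 1) get different signatures
over  = [(0, 0), (0, 2), (2, 2)]             # passes above the obstacle
under = [(0, 0), (2, 0), (2, 2)]             # passes below it
print(crossing_signature(over,  [(1, 1)]))   # ((0, 1),)
print(crossing_signature(under, [(1, 1)]))   # ()
```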

A survey on topological localization and mapping

Emilio Garcia-Fidalgo, Alberto Ortiz, Vision-based topological mapping and localization methods: A survey, Robotics and Autonomous Systems, Volume 64, February 2015, Pages 1-20, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.11.009


Topological maps model the environment as a graph, where nodes are distinctive places of the environment and edges indicate topological relationships between them. They represent an interesting alternative to classic metric maps due to their simplicity and low storage needs, which has made topological mapping and localization an active research area. The different solutions that have been proposed over the years have been designed around several kinds of sensors. However, in the last decades, vision-based approaches have emerged owing to improvements in the technology and the amount of useful information that a camera can provide. In this paper, we review the main solutions presented in the last fifteen years and classify them according to the kind of image descriptor employed. Advantages and disadvantages of each approach are thoroughly reviewed and discussed.
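
The graph model the survey is organized around can be summarized in a few lines. This is a minimal sketch with an abstract descriptor vector per node; a real vision-based system would plug in, e.g., bag-of-words or GIST descriptors and a far more robust matching step.

```python
# Minimal topological map: nodes carry an image descriptor, edges record
# traversability, and localization is nearest-descriptor matching.
import numpy as np

class TopologicalMap:
    def __init__(self):
        self.descriptors = []            # one descriptor vector per node
        self.edges = set()               # undirected links between places

    def add_place(self, descriptor, linked_to=None):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        node = len(self.descriptors) - 1
        if linked_to is not None:
            self.edges.add((min(node, linked_to), max(node, linked_to)))
        return node

    def localize(self, descriptor):
        """Return the node whose descriptor is closest to the query."""
        d = np.linalg.norm(np.array(self.descriptors) - descriptor, axis=1)
        return int(np.argmin(d))
```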

Estimating the bandwidth of a communication channel in order to adjust the bitrate in high-definition video streaming, using Pareto and Gamma distributions (which are conjugate) in a Bayesian estimation framework

Javadtalab, A.; Semsarzadeh, M.; Khanchi, A.; Shirmohammadi, S.; Yassine, A., Continuous One-Way Detection of Available Bandwidth Changes for Video Streaming Over Best-Effort Networks, IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 1, pp. 190–203, Jan. 2015, DOI: 10.1109/TIM.2014.2331423

Video streaming over best-effort networks, such as the Internet, is now a significant application used by most Internet users. However, best-effort networks are characterized by dynamic and unpredictable changes in the available bandwidth, which adversely affect the quality of video. As such, it is important to have real-time detection mechanisms for bandwidth changes to ensure that video is adapted to the available bandwidth and transmitted at the highest quality. In this paper, we propose a Bayesian instantaneous end-to-end bandwidth change prediction model and method to detect and predict one-way bandwidth changes at the receiver. Unlike existing congestion detection mechanisms, which use network parameters such as packet loss probability, round trip time (RTT), or jitter, our approach uses the weighted interarrival time of video packets at the receiver side. Furthermore, our approach is continuous, since it measures available bandwidth changes with each incoming video packet, and therefore detects congestion occurrence in less than 200 ms on average, which is significantly faster than existing approaches. In addition, it is a one-way scheme, since it only takes into account the characteristics of the incoming path and not the outgoing path, as opposed to other approaches, which use RTT and are hence less accurate. In this paper, we provide extensive experimental simulations and a real-world network implementation. Our results indicate that the proposed detection method is superior to existing solutions.
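
The Pareto-Gamma conjugacy mentioned in the note above gives a closed-form posterior update. The sketch below shows that update for the shape parameter of a Pareto model of packet interarrival times; the prior values, the known scale x_m, and the link to the paper's actual detector are all assumptions for illustration.

```python
# With a known Pareto scale x_m, a Gamma(a, b) prior on the shape
# parameter alpha is conjugate: observing samples x_1..x_n yields the
# posterior Gamma(a + n, b + sum(log(x_i / x_m))).
import math

def gamma_pareto_update(a, b, samples, x_m):
    """Posterior Gamma(a', b') for the Pareto shape after observing samples."""
    n = len(samples)
    return a + n, b + sum(math.log(x / x_m) for x in samples)

# usage sketch: the posterior mean a/b of the shape tracks the current
# interarrival regime; a sustained shift suggests the bandwidth changed.
a, b = 2.0, 1.0                      # assumed prior
x_m = 1e-3                           # known minimum interarrival (s), assumed
a, b = gamma_pareto_update(a, b, [0.002, 0.004, 0.003], x_m)
print("posterior mean shape:", a / b)
```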

A good review of related work on graph-based SLAM algorithms that employ some reduction technique on the graph to improve long-term operation, and a proposal of a new reduction method

Carlevaris-Bianco, N.; Kaess, M.; Eustice, R.M., Generic Node Removal for Factor-Graph SLAM, IEEE Transactions on Robotics, vol. 30, no. 6, pp. 1371–1385, Dec. 2014, DOI: 10.1109/TRO.2014.2347571

This paper reports on a generic factor-based method for node removal in factor-graph simultaneous localization and mapping (SLAM), which we call generic linear constraints (GLCs). The need for a generic node removal tool is motivated by long-term SLAM applications, whereby nodes are removed in order to control the computational cost of graph optimization. GLC is able to produce a new set of linearized factors over the elimination clique that can represent either the true marginalization (i.e., dense GLC) or a sparse approximation of the true marginalization using a Chow-Liu tree (i.e., sparse GLC). The proposed algorithm improves upon commonly used methods in two key ways: First, it is not limited to graphs with strictly full-state relative-pose factors and works equally well with other low-rank factors, such as those produced by monocular vision. Second, the new factors are produced in such a way that accounts for measurement correlation, which is a problem encountered in other methods that rely strictly upon pairwise measurement composition. We evaluate the proposed method over multiple real-world SLAM graphs and show that it outperforms other recently proposed methods in terms of Kullback–Leibler divergence. Additionally, we experimentally demonstrate that the proposed GLC method provides a principled and flexible tool to control the computational complexity of long-term graph SLAM, with results shown for 34.9 h of real-world indoor–outdoor data covering 147.4 km collected over 27 mapping sessions spanning a period of 15 months.
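
The "true marginalization" that dense GLC represents reduces, in information form, to a Schur complement over the removed node. Here is a minimal sketch of that building block; re-expressing the resulting information as a new set of linear factors (and the Chow-Liu tree sparsification) is the part specific to GLC and is not shown.

```python
# Marginalizing a node from a Gaussian factor graph in information
# (inverse covariance) form: the remaining variables keep the Schur
# complement of the information matrix over the removed block.
import numpy as np

def marginalize_information(Lam, keep_idx, marg_idx):
    """Schur complement of the information matrix over marg_idx."""
    A = Lam[np.ix_(keep_idx, keep_idx)]
    B = Lam[np.ix_(keep_idx, marg_idx)]
    C = Lam[np.ix_(marg_idx, marg_idx)]
    return A - B @ np.linalg.inv(C) @ B.T

# toy usage on a 3-variable chain x0 - x1 - x2, removing x1
Lam = np.array([[ 2.0, -1.0,  0.0],
                [-1.0,  3.0, -1.0],
                [ 0.0, -1.0,  2.0]])
Lam_t = marginalize_information(Lam, keep_idx=[0, 2], marg_idx=[1])
print(Lam_t)   # note the fill-in: x0 and x2 are now directly correlated
```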