Monthly Archives: February 2024


Clock synchronization in the CAN bus

M. Akpınar and K. W. Schmidt, Predictable Timestamping for the Controller Area Network: Evaluation and Effect on Clock Synchronization Accuracy, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 3, pp. 1926-1935, March 2024. DOI: 10.1109/TSMC.2023.3332559.

Accurate timestamps are important for clock synchronization (CS) and cyber-security on the controller area network (CAN). This article proposes a new predictable timestamping (TS) method on CAN. Different from existing TS methods, our method reduces the effect of uncertainties that are caused by the CAN bit timing, oscillator drifts, and different cable lengths. Accordingly, our TS method provides an improved TS quality, which is confirmed in comprehensive hardware experiments. We further show the positive impact of our TS method on CS for CAN with clock accuracies below 100 ns.
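As a rough illustration of how frame timestamps feed into clock synchronization (a generic sketch, not the paper's timestamping method), the snippet below estimates a node's clock drift and offset from pairs of reference and local timestamps; all names, constants, and the noise model are hypothetical.

```python
# Hypothetical sketch: estimate a CAN node's clock offset and drift from
# timestamped reference frames, as a clock synchronization scheme might do
# once predictable timestamps are available. Not the paper's algorithm.
import numpy as np

def estimate_offset_and_drift(ref_timestamps, local_timestamps):
    """Least-squares fit of local = (1 + drift) * ref + offset.

    ref_timestamps   -- timestamps taken by the time master (seconds)
    local_timestamps -- timestamps of the same frames taken by the local node
    Returns (drift, offset) so the local node can correct its clock readings.
    """
    slope, offset = np.polyfit(ref_timestamps, local_timestamps, deg=1)
    return slope - 1.0, offset

def correct(local_time, drift, offset):
    """Map a local clock reading back onto the reference timescale."""
    return (local_time - offset) / (1.0 + drift)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = np.linspace(0.0, 10.0, 100)            # reference send times
    true_drift, true_offset = 50e-6, 0.002       # 50 ppm drift, 2 ms offset (assumed)
    jitter = rng.normal(0.0, 100e-9, ref.size)   # 100 ns timestamping noise (assumed)
    local = (1.0 + true_drift) * ref + true_offset + jitter

    drift, offset = estimate_offset_and_drift(ref, local)
    err = correct(local, drift, offset) - ref
    print(f"estimated drift {drift*1e6:.1f} ppm, offset {offset*1e3:.3f} ms")
    print(f"residual sync error std: {err.std()*1e9:.0f} ns")
```

The residual error in such a scheme is bounded by the timestamping jitter, which is why reducing timestamp uncertainty translates directly into better synchronization accuracy.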

Improving sample efficiency of RL through memory reconstruction

Y. Kang et al., Sample Efficient Reinforcement Learning Using Graph-Based Memory Reconstruction, IEEE Transactions on Artificial Intelligence, vol. 5, no. 2, pp. 751-762, Feb. 2024. DOI: 10.1109/TAI.2023.3268612.

Reinforcement learning (RL) algorithms typically require orders of magnitude more interactions than humans to learn effective policies. Research on memory in neuroscience suggests that humans’ learning efficiency benefits from associating their experiences and reconstructing potential events. Inspired by this finding, we introduce a human brain-like memory structure for agents and build a general learning framework based on this structure to improve RL sample efficiency. Since this framework is similar to the memory reconstruction process in psychology, we name the newly proposed RL framework graph-based memory reconstruction (GBMR). In particular, GBMR first maintains an attribute graph on the agent’s memory and then retrieves its critical nodes to build and update potential paths among these nodes. This novel pipeline drives the RL agent to learn faster with its memory-enhanced value functions and reduces interactions with the environment by reconstructing its valuable paths. Extensive experimental analyses and evaluations in the grid maze and some challenging Atari environments demonstrate GBMR's superiority over traditional RL methods. We will release the source code and trained models to facilitate further studies in this research direction.
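For intuition only, here is a toy sketch of a graph-style episodic memory that propagates values along stored transitions to produce memory-enhanced targets. It is an assumption-laden simplification, not the GBMR algorithm or its released code; all class and method names are illustrative.

```python
# Toy graph memory (illustrative, not GBMR): nodes are states, edges carry
# (reward, next_state); value sweeps over the graph stand in for the idea of
# reconstructing valuable paths through memory.
from collections import defaultdict

class MemoryGraph:
    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.edges = defaultdict(list)   # state -> [(reward, next_state), ...]
        self.value = defaultdict(float)  # memory-derived state values

    def add_transition(self, state, reward, next_state):
        self.edges[state].append((reward, next_state))

    def reconstruct(self, sweeps=20):
        """Sweep values backwards along stored edges, a crude stand-in for
        rebuilding potential paths among critical nodes."""
        for _ in range(sweeps):
            for state, outgoing in self.edges.items():
                self.value[state] = max(r + self.gamma * self.value[ns]
                                        for r, ns in outgoing)

    def enhanced_target(self, reward, next_state):
        """Value target that blends the observed reward with graph memory."""
        return reward + self.gamma * self.value[next_state]

if __name__ == "__main__":
    mem = MemoryGraph(gamma=0.9)
    # A short three-state corridor ending in a reward of +1.
    mem.add_transition("s0", 0.0, "s1")
    mem.add_transition("s1", 0.0, "s2")
    mem.add_transition("s2", 1.0, "terminal")
    mem.reconstruct()
    print(mem.enhanced_target(0.0, "s1"))  # already reflects the distant reward
```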

Correcting systematic and non-systematic errors in odometry

Bibiana Fariña, Jonay Toledo, Leopoldo Acosta, Improving odometric sensor performance by real-time error processing and variable covariance, Mechatronics, Volume 98, 2024. DOI: 10.1016/j.mechatronics.2023.103123.

This paper presents a new method to increase odometric sensor accuracy by processing systematic and non-systematic errors. Mobile robot localization is improved by combining this technique with a filter that fuses the information from several sensors characterized by their covariance. The process focuses on calculating the odometric speed difference with respect to the filter estimate in order to implement a real-time error-type detection module. The correction of systematic errors consists of an online parameter adjustment that uses the previous information and is conditioned by the filter accuracy. This information is also used to design a variable odometric covariance that describes the sensor reliability and determines the influence of both error types on the robot localization. The method is implemented in a low-cost autonomous wheelchair with a LIDAR, an IMU, and encoders fused by a UKF algorithm. The experimental results show that the estimated poses are closer to the real ones than those obtained with other well-known previous methods.
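The variable-covariance idea can be pictured with a small sketch: grow the odometry covariance handed to the fusion filter whenever the odometric speed departs from the filter's estimate, so slipping wheels lose influence on the pose estimate. The scaling rule, function name, and constants below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a variable odometric covariance: inflate the noise
# assigned to the odometry measurement when it disagrees with the filter.
import numpy as np

def variable_odometry_covariance(v_odom, v_filter, base_cov, k=4.0):
    """Return an odometry covariance grown with the speed residual.

    v_odom   -- speed reported by the wheel encoders (m/s)
    v_filter -- speed currently estimated by the fusion filter (m/s)
    base_cov -- nominal 2x2 covariance of the odometric (v, omega) measurement
    k        -- gain controlling how strongly disagreement inflates the covariance
    """
    residual = abs(v_odom - v_filter)
    scale = 1.0 + k * residual ** 2
    return np.asarray(base_cov) * scale

if __name__ == "__main__":
    base = np.diag([0.01, 0.02])                            # nominal encoder noise
    print(variable_odometry_covariance(0.52, 0.50, base))   # small mismatch: near nominal
    print(variable_odometry_covariance(0.90, 0.50, base))   # wheel slip: strongly inflated
```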

Survey on methods for learning from demonstration in robotics

M. Tavassoli, S. Katyara, M. Pozzi, N. Deshpande, D. G. Caldwell and D. Prattichizzo, Learning Skills From Demonstrations: A Trend From Motion Primitives to Experience Abstraction, IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 1, pp. 57-74, Feb. 2024. DOI: 10.1109/TCDS.2023.3296166.

Robot applications are shifting from static factory environments toward novel concepts such as human–robot collaboration in unstructured settings. Preprogramming all the functionalities for robots becomes impractical, and hence, robots need to learn how to react to new events autonomously, just like humans. However, humans, unlike machines, are naturally skilled in responding to unexpected circumstances based on either experiences or observations. Hence, embedding such anthropoid behaviors into robots entails the development of neuro-cognitive models that emulate motor skills under a robot learning paradigm. Effective encoding of these skills is bound to the proper choice of tools and techniques. This survey paper studies different motion and behavior learning methods, ranging from movement primitives (MPs) to experience abstraction (EA), applied to different robotic tasks. These methods are scrutinized and then experimentally benchmarked by reconstructing a standard pick-n-place task. Apart from providing a standard guideline for the selection of strategies and algorithms, this article aims to draw a perspective on their possible extensions and improvements.
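As a concrete entry point to the movement-primitive family covered in the survey, the following is a minimal one-dimensional dynamic movement primitive that learns a forcing term from a single demonstration and replays it toward a goal. Gains, basis placement, and the demonstration itself are illustrative choices, not taken from the paper or its benchmark.

```python
# Minimal 1-D dynamic movement primitive (DMP) sketch: learn a forcing term
# from one demonstrated trajectory, then reproduce it. Illustrative only.
import numpy as np

class SimpleDMP:
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n_basis, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        # Basis centers spread along the decaying phase variable x.
        self.centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.widths = 1.0 / np.diff(self.centers, append=self.centers[-1] * 0.5) ** 2
        self.weights = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.widths * (x - self.centers) ** 2)

    def fit(self, y_demo, dt):
        """Learn basis weights from one demonstrated trajectory y_demo."""
        T = len(y_demo)
        self.y0, self.goal, self.tau = y_demo[0], y_demo[-1], T * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)
        # Forcing term the demonstration implies for the transformation system.
        f_target = self.tau ** 2 * ydd - self.alpha * (self.beta * (self.goal - y_demo) - self.tau * yd)
        s = x * (self.goal - self.y0)
        for i in range(self.n_basis):
            psi = np.exp(-self.widths[i] * (x - self.centers[i]) ** 2)
            self.weights[i] = np.sum(s * psi * f_target) / (np.sum(s ** 2 * psi) + 1e-10)

    def rollout(self, goal=None, dt=0.01):
        """Reproduce the motion, optionally toward a new goal."""
        goal = self.goal if goal is None else goal
        y, yd, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(int(self.tau / dt)):
            psi = self._psi(x)
            f = x * (goal - self.y0) * (psi @ self.weights) / (psi.sum() + 1e-10)
            ydd = (self.alpha * (self.beta * (goal - y) - self.tau * yd) + f) / self.tau ** 2
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt / self.tau
            traj.append(y)
        return np.array(traj)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 200)
    demo = t ** 2 * (3.0 - 2.0 * t)        # smooth point-to-point reach from 0 to 1
    dmp = SimpleDMP()
    dmp.fit(demo, dt=t[1] - t[0])
    repro = dmp.rollout(dt=t[1] - t[0])
    print("reproduction error:", np.abs(repro - demo[:len(repro)]).max())
```

Probabilistic and experience-abstraction methods discussed in the survey build on the same premise, replacing the hand-crafted dynamical system and regression above with richer learned representations.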

On the complexities of RL when it confronts the real (natural) world

Toby Wise, Kara Emery, Angela Radulescu, Naturalistic reinforcement learning, Trends in Cognitive Sciences, Volume 28, Issue 2, 2024, Pages 144-158. DOI: 10.1016/j.tics.2023.08.016.

Humans possess a remarkable ability to make decisions within real-world environments that are expansive, complex, and multidimensional. Human cognitive computational neuroscience has sought to exploit reinforcement learning (RL) as a framework within which to explain human decision-making, often focusing on constrained, artificial experimental tasks. In this article, we review recent efforts that use naturalistic approaches to determine how humans make decisions in complex environments that better approximate the real world, providing a clearer picture of how humans navigate the challenges posed by real-world decisions. These studies purposely embed elements of naturalistic complexity within experimental paradigms, rather than focusing on simplification, generating insights into the processes that likely underpin humans’ ability to navigate complex, multidimensional real-world environments so successfully.

On the need to interact with the real world to acquire meaning

Giovanni Pezzulo, Thomas Parr, Paul Cisek, Andy Clark, Karl Friston, Generating meaning: active inference and the scope and limits of passive AI, Trends in Cognitive Sciences, Volume 28, Issue 2, 2024, Pages 97-112, DOI: 10.1016/j.tics.2023.10.002.

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is, we argue, essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.

On the relations between symbolic and subsymbolic systems in AI

Giuseppe Marra, Sebastijan Dumančić, Robin Manhaeve, Luc De Raedt, From statistical relational to neurosymbolic artificial intelligence: A survey, Artificial Intelligence, Volume 328, 2024. DOI: 10.1016/j.artint.2023.104062.

This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model-based or proof-based; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.