Category Archives: Computer Science

Interesting survey of floating-point arithmetic in computers

David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic, ACM Computing Surveys, vol. 23, no. 1, March 1991, https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html.

Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising because floating-point is ubiquitous in computer systems. Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with numerous examples of how computer builders can better support floating-point.
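The paper's core point about rounding error is easy to demonstrate: decimal fractions like 0.1 are not exactly representable in binary, so equality tests on computed results fail. A minimal Python illustration:

```python
# Rounding error in binary floating point: 0.1 has no exact
# binary representation, so sums of decimal fractions drift.
a = 0.1 + 0.2
print(a == 0.3)          # False
print(abs(a - 0.3))      # tiny nonzero residual

# Machine epsilon for IEEE 754 double precision (binary64):
# the gap between 1.0 and the next representable number.
import sys
print(sys.float_info.epsilon)  # 2.220446049250313e-16

# The standard remedy for comparisons: use a tolerance
# instead of exact equality.
import math
print(math.isclose(a, 0.3))  # True
```

The same behavior appears in essentially every language with an IEEE 754 `double` type, which is why the paper addresses language and compiler designers as much as numerical analysts.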

On the importance of the static structures + execution flow in learning programming languages

B. Bettin, M. Jarvie-Eggart, K. S. Steelman and C. Wallace, Preparing First-Year Engineering Students to Think About Code: A Guided Inquiry Approach, IEEE Transactions on Education, vol. 65, no. 3, pp. 309-319, Aug. 2022, DOI: 10.1109/TE.2021.3140051.

In the wake of the so-called fourth industrial revolution, computer programming has become a foundational competency across engineering disciplines. Yet engineering students often resist the notion that computer programming is a skill relevant to their future profession. This paper presents two activities aimed at supporting the early development of engineering students' attitudes and abilities regarding programming in a first-year engineering course. Both activities offer students insights into the way programs are constructed, which has been identified as a source of confusion that may negatively affect acceptance. In the first activity, a structured, language-independent way to approach programming problems through guided questions was introduced, which had previously been used successfully in introductory computer science courses. The team hypothesized that guiding students through a structured reflection on how they construct programs for their class assignments might help reveal an understandable structure to them. Results showed that students in the intervention group scored nearly a full letter grade higher on the unit's final programming assessment than those in the control condition. The second activity aimed to help students recognize how their experience with MATLAB might help them interpret code in other programming languages. In the intervention group, students were asked to review and provide comments for code written in a variety of programming languages. A qualitative analysis of their reflections examined what skills students reported using and, specifically, how prior MATLAB experience may have aided their ability to read and comment on the unfamiliar code. Overall, the ability to understand and recognize syntactic constructs was an essential skill in making sense of code written in unfamiliar programming languages. Syntactic constructs, lexical elements, and patterns were all recognized as essential landmarks used by students interpreting code they did not write, especially in new languages. Developing an understanding of the static structure and dynamic execution flow of programs was also an essential skill. Together, the results from the first activity and the insights gained from the second suggest that guided questions to build skills in reading code may help mitigate confusion about program construction, thereby better preparing engineering students for computing-intensive careers.

For compilers to be WCET-aware

Heiko Falk, Paul Lokuciejewski, A compiler framework for the reduction of worst-case execution times, Real-Time Systems, vol. 46, pp. 251-300, 2010, DOI: 10.1007/s11241-019-09337-9.

The current practice of designing software for real-time systems is tedious. There is almost no tool support that assists the designer in automatically deriving safe bounds on the worst-case execution time (WCET) of a system during code generation and in systematically optimizing code to reduce WCET. This article presents concepts and infrastructures for WCET-aware code generation and optimization techniques for WCET reduction. Altogether, they help to obtain code explicitly optimized for its worst-case timing, to automate large parts of the real-time software design flow, and to reduce the costs of a real-time system by allowing the use of tailored hardware.
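The authors' compiler framework itself is not reproduced here, but the basic ingredient of a static WCET bound, the longest path through the program's basic-block graph with loop bounds folded in, can be sketched. The graph, per-block cycle counts, and loop bound below are made up for illustration (real analyzers also model caches and pipelines):

```python
from functools import lru_cache

# Toy WCET bound: longest path through a DAG of basic blocks, each
# annotated with a worst-case cycle count. A loop is folded in by
# multiplying its body's cost by a user-supplied iteration bound.
costs = {"entry": 2, "loop_body": 10, "then": 5, "else": 8, "exit": 1}
bounds = {"loop_body": 100}           # max loop iterations (assumed)
succ = {
    "entry": ["loop_body"],
    "loop_body": ["then", "else"],    # branch after the loop
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

@lru_cache(maxsize=None)
def wcet(block):
    """Worst-case cycles from `block` to program exit."""
    c = costs[block] * bounds.get(block, 1)
    return c + max((wcet(s) for s in succ[block]), default=0)

print(wcet("entry"))  # 2 + 10*100 + 8 + 1 = 1011
```

A WCET-aware compiler works against exactly this kind of objective: an optimization is accepted only if it shrinks the longest (worst-case) path, not merely the average-case one.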

Interesting account of the “computation/communication” trade-off in distributed computing, particularly in distributed optimization

A. S. Berahas, R. Bollapragada, N. S. Keskar and E. Wei, Balancing Communication and Computation in Distributed Optimization, IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3141-3155, Aug. 2019, DOI: 10.1109/TAC.2018.2880407.

Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains including machine learning, robotics, and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every several iterations) of a distributed algorithm, each node in the network requires some form of information exchange with its neighboring nodes (communication) and the computation step related to a (sub)-gradient (computation). The standard way of judging an algorithm via only the number of iterations overlooks the complexity associated with each iteration. Moreover, various applications deploying distributed methods may prefer a different composition of communication and computation. Motivated by this discrepancy, in this paper, we propose an adaptive cost framework that adjusts the cost measure depending on the features of various applications. We present a flexible algorithmic framework, where communication and computation steps are explicitly decomposed to enable algorithm customization for various applications. We apply this framework to the well-known distributed gradient descent (DGD) method, and show that the resulting customized algorithms, which we call DGDt, NEAR-DGDt, and NEAR-DGD+, compare favorably to their base algorithms, both theoretically and empirically. The proposed NEAR-DGD+ algorithm is an exact first-order method where the communication and computation steps are nested, and when the number of communication steps is adaptively increased, the method converges to the optimal solution. We test the performance and illustrate the flexibility of the methods, as well as practical variants, on quadratic functions and classification problems that arise in machine learning, in terms of iterations, gradient evaluations, communications, and the proposed cost framework.
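The base DGD iteration the paper builds on alternates a communication step (mixing local iterates with a doubly stochastic matrix W) with a computation step (a local gradient step). A minimal sketch on local scalar quadratics, with made-up problem data and a ring topology; this is the plain DGD baseline, not the authors' NEAR-DGD variants:

```python
import numpy as np

# n nodes, each holding a local quadratic f_i(x) = 0.5*a_i*x^2 - b_i*x.
# The global optimum of sum_i f_i is x* = sum(b) / sum(a).
rng = np.random.default_rng(0)
n = 4
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
x_star = b.sum() / a.sum()

# Ring topology with doubly stochastic mixing weights.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)           # one scalar iterate per node
alpha = 0.05              # constant step: converges to a neighborhood
for k in range(2000):
    x = W @ x                     # communication step (mixing)
    x -= alpha * (a * x - b)      # computation step (local gradients)

print(np.max(np.abs(x - x_star)))  # small steady-state error
```

With a constant step size, plain DGD stops short of the exact optimum; the paper's NEAR-DGD+ recovers exactness by adaptively increasing the number of nested communication steps per gradient evaluation.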

A brief (and relatively shallow) account of computer programming as a cognitive ability

Evelina Fedorenko, Anna Ivanova, Riva Dhamala, Marina Umaschi Bers, The Language of Programming: A Cognitive Perspective, Trends in Cognitive Sciences, vol. 23, no. 7, pp. 525-528, 2019, DOI: 10.1016/j.tics.2019.04.010.

Computer programming is becoming essential across fields. Traditionally grouped with science, technology, engineering, and mathematics (STEM) disciplines, programming also bears parallels to natural languages. These parallels may translate into overlapping processing mechanisms. Investigating the cognitive basis of programming is important for understanding the human mind and could transform education practices.

High performance robotic computing (HPRC) vs. high performance computing (HPC), and its application to multi-robot systems

Leonardo Camargo-Forero, Pablo Royo, Xavier Prats, Towards high performance robotic computing, Robotics and Autonomous Systems, vol. 107, pp. 167-181, 2018, DOI: 10.1016/j.robot.2018.05.011.

Embedding a robot with a companion computer is becoming common practice nowadays. Such a computer is installed with an operating system, often a Linux distribution. Moreover, Graphics Processing Units (GPUs) can be embedded on a robot, giving it the capacity to perform complex on-board computing tasks while executing a mission. A next logical step consists of deploying a cluster among the embedded computing cards. With this approach, a multi-robot system can be set up as a High Performance Computing (HPC) cluster. The advantages of such an infrastructure are many, from providing higher computing power to enabling scalable multi-robot systems. While HPC has always been seen as a speed-up tool, we believe that HPC in the world of robotics can do much more than simply accelerate the execution of complex computing tasks. In this paper, we introduce the novel concept of High Performance Robotic Computing (HPRC), an augmentation of the ideas behind traditional HPC to fit and enhance the world of robotics. As a proof of concept, we introduce novel HPC software developed to control the motion of a set of robots using the standard parallel MPI (Message Passing Interface) library. The parallel motion software includes two operation modes: parallel motion to a specific target and swarm-like behavior. Furthermore, the HPC software is virtually scalable to control any number of moving robots, including Unmanned Aerial Vehicles, Unmanned Ground Vehicles, etc.
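The paper's software distributes control over MPI ranks, one per robot; the sketch below is a simplified single-process stand-in for its two operation modes, with invented 2-D positions and dynamics:

```python
import math

def step_to_target(pos, target, speed=0.5):
    """Mode 1: a robot moves at fixed speed toward a shared target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= speed:
        return target  # close enough: snap onto the target
    return (pos[0] + speed * dx / d, pos[1] + speed * dy / d)

def step_swarm(positions, i, speed=0.5):
    """Mode 2: swarm-like behavior, here a crude cohesion rule where
    each robot moves toward the fleet's centroid."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return step_to_target(positions[i], (cx, cy), speed)

# Mode 1: three robots converge on a common waypoint. In the MPI
# version each list entry would live on its own rank instead.
fleet = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
for _ in range(20):
    fleet = [step_to_target(p, (2.0, 2.0)) for p in fleet]
print(fleet)  # all three robots have reached (2.0, 2.0)
```

The design point of the paper is that such per-robot updates map naturally onto MPI processes, so the same code scales from a handful of robots to a large heterogeneous fleet.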

The security problems of ROS

Bernhard Dieber, Benjamin Breiling, Sebastian Taurer, Severin Kacianka, Stefan Rass, Peter Schartner, Security for the Robot Operating System, Robotics and Autonomous Systems, vol. 98, pp. 192-203, 2017, DOI: 10.1016/j.robot.2017.09.017.

Future robotic systems will be situated in highly networked environments where they communicate with industrial control systems, cloud services or other systems at remote locations. In this trend of strong digitization of industrial systems (sometimes referred to as Industry 4.0), cyber attacks are an increasing threat to the integrity of the robotic systems at the core of this new development. It is expected that the Robot Operating System (ROS) will play an important role in robotics outside of pure research-oriented scenarios. ROS, however, has significant security issues which need to be addressed before such products reach mass markets. In this paper, we present the most common vulnerabilities of ROS, attack vectors to exploit them, and several approaches to secure ROS and similar systems. We show how to secure ROS on an application level and describe a solution which is integrated directly into the ROS core. Our proposed solution has been implemented and tested with recent versions of ROS, and adds security to all communication channels without being invasive to the system kernel itself.
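The paper's actual solution is integrated into the ROS core, but the application-level idea (authenticating messages so subscribers can reject spoofed or tampered data) can be illustrated with a generic HMAC scheme; the topic name and message format below are invented for the example:

```python
import hmac, hashlib, os

# Application-level message authentication of the kind that can be
# layered on top of ROS topics: publisher and subscriber share a key,
# and every message carries an HMAC tag. A plain ROS topic has no such
# protection, which is one of the vulnerabilities the paper describes.
KEY = os.urandom(32)  # would be pre-shared in a real deployment

def publish(payload: bytes) -> bytes:
    """Prepend a 32-byte SHA-256 HMAC tag to the payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return tag + payload

def receive(message: bytes) -> bytes:
    """Verify the tag in constant time; reject forgeries."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message rejected")
    return payload

msg = publish(b'{"topic": "/cmd_vel", "linear_x": 0.2}')
print(receive(msg))                 # authentic message passes
try:
    receive(msg[:-1] + b"9")        # flip the last payload byte
except ValueError as e:
    print(e)                        # tampered message rejected
```

Authentication alone does not provide confidentiality; the approaches surveyed in the paper also cover encrypting the channels themselves.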

On the problem of the future limits of information storage

E. Cambria, A. Chattopadhyay, E. Linn et al., Storages Are Not Forever, Cognitive Computation, vol. 9, p. 646, 2017, DOI: 10.1007/s12559-017-9482-4.

Not unlike the concern over diminishing fossil fuels, information technology is bringing its own share of future worries. We chose to look closely into one concern in this paper, namely the limited amount of data storage. A simple extrapolation analysis shows that we are on the way to exhausting our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state of the art and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
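The extrapolation argument is simple compounding arithmetic: if yearly data production grows geometrically while total producible capacity is bounded, the cumulative curve crosses the ceiling after a computable number of years. The figures below are hypothetical placeholders, not the paper's estimates:

```python
# Back-of-the-envelope storage exhaustion: cumulative data generated
# versus a fixed capacity ceiling. All numbers are assumptions made up
# for this sketch, not the paper's data.
data_zb = 30.0          # data generated in year 0, zettabytes (assumed)
data_growth = 1.25      # 25% annual growth in production (assumed)
capacity_zb = 10_000.0  # total producible capacity ceiling (assumed)

total = 0.0
year = 0
while total < capacity_zb:
    total += data_zb * data_growth ** year  # add this year's output
    year += 1
print(year)  # years until cumulative data exceeds the ceiling -> 20
```

With these placeholder numbers the ceiling is hit in 20 years; the paper's point is that under any fixed technology and persistent growth rate, some such crossing date exists, which motivates its proposals on recycling, aggregation, and denser representations.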