An interesting approach to the design of complex systems based on analogies with simpler ones

Victor Ragusila, M. Reza Emami, Mechatronics by analogy and application to legged locomotion, Mechatronics, Volume 35, May 2016, Pages 173-191, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2016.02.007.

A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced. It argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behavior through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also presented. A series of simulations show that the dynamic behavior of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently.

Real-time trajectory generation for omnidirectional robots, and a good set of basic bibliographical references

Tamás Kalmár-Nagy, Real-time trajectory generation for omni-directional vehicles by constrained dynamic inversion, Mechatronics, Volume 35, May 2016, Pages 44-53, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.12.004.

This paper presents a computationally efficient algorithm for real-time trajectory generation for omni-directional vehicles. The algorithm uses a dynamic inversion based approach that incorporates vehicle dynamics, actuator saturation and bounded acceleration. The algorithm is compared with other trajectory generation algorithms for omni-directional vehicles. The method yields good quality trajectories and is implementable in real-time. Numerical and hardware tests are presented.
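As a rough, hedged sketch of what dynamic inversion with actuator constraints can look like (this is an illustration of the general idea, not the paper's algorithm), the following Python snippet drives a planar double-integrator toward a goal while saturating the commanded acceleration and velocity; the gains, bounds, and stopping tolerances are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): drive a planar
# double-integrator toward a goal by inverting its dynamics with a PD law
# and clipping the commanded acceleration/velocity to respect bounds.
import numpy as np

def generate_trajectory(p0, v0, p_goal, a_max, v_max, dt=0.01, t_max=20.0):
    """Return a list of (position, velocity) samples reaching p_goal under bounds."""
    p, v = np.array(p0, float), np.array(v0, float)
    traj = [(p.copy(), v.copy())]
    for _ in range(int(t_max / dt)):
        # "Dynamic inversion": desired acceleration from a simple PD law
        # that would cancel the error dynamics if it were unconstrained.
        a_des = 2.0 * (p_goal - p) - 2.0 * v
        # Constraint: saturate acceleration magnitude (actuator limit).
        a_norm = np.linalg.norm(a_des)
        if a_norm > a_max:
            a_des *= a_max / a_norm
        v = v + a_des * dt
        # Constraint: saturate velocity magnitude.
        v_norm = np.linalg.norm(v)
        if v_norm > v_max:
            v *= v_max / v_norm
        p = p + v * dt
        traj.append((p.copy(), v.copy()))
        if np.linalg.norm(p_goal - p) < 1e-3 and v_norm < 1e-2:
            break
    return traj

# Example: move from the origin to (2, 1) with |a| <= 1 m/s^2, |v| <= 0.5 m/s.
path = generate_trajectory([0, 0], [0, 0], np.array([2.0, 1.0]), a_max=1.0, v_max=0.5)
```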

Improvements to the ICP algorithm for point cloud registration from a low-precision RGB-D sensor

Rogério Yugo Takimoto, Marcos de Sales Guerra Tsuzuki, Renato Vogelaar, Thiago de Castro Martins, André Kubagawa Sato, Yuma Iwao, Toshiyuki Gotoh, Seiichiro Kagei, 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor, Mechatronics, Volume 35, May 2016, Pages 11-22, ISSN 0957-4158, DOI: 10.1016/j.mechatronics.2015.10.014.

A 3D reconstruction method using feature points is presented, and the parameters used to improve the reconstruction are discussed. The precision of the 3D reconstruction is improved by combining point clouds obtained from different viewpoints using structured light. A well-known algorithm for point cloud registration is ICP (Iterative Closest Point), which determines the rotation and translation that, when applied to one of the point clouds, aligns both point clouds optimally. The ICP algorithm iteratively executes two main steps: point correspondence determination and registration. If the point correspondence step is not executed properly, the ICP can converge to a local minimum. To overcome this drawback, two techniques were used: a meaningful set of 3D points was obtained with SIFT (Scale-Invariant Feature Transform), and an ICP variant was implemented that uses statistics to generate a dynamic distance and color threshold on the distance allowed between closest points. The reconstruction precision was further improved by using the meaningful point clouds and the ICP to increase the number of points in 3D space. The surface reconstruction is performed using marching cubes, with filters to remove noise and smooth the surface. The factors that influence the 3D reconstruction precision are discussed and analyzed, including a detailed discussion of the number of frames used by the ICP and of the ICP parameters.
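As a hedged illustration of the correspondence-rejection idea above, the Python sketch below implements one ICP iteration with a dynamic, statistics-based distance threshold and an SVD (Kabsch) transform estimate. The threshold rule (mean + k·std of nearest-neighbour distances) and the omission of SIFT keypoints and color cues are simplifying assumptions, not the paper's exact procedure.

```python
# Hedged sketch: an ICP step that rejects correspondences using a dynamic,
# statistics-based distance threshold, then solves for the rigid transform
# with the SVD/Kabsch method. Point clouds are Nx3 NumPy arrays.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, k=2.0):
    tree = cKDTree(target)
    dist, idx = tree.query(source)           # nearest neighbour in target
    thr = dist.mean() + k * dist.std()        # dynamic threshold (assumption)
    keep = dist < thr                         # drop likely-wrong matches
    src, dst = source[keep], target[idx[keep]]
    # Kabsch: optimal rotation/translation between matched point sets.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source, target, iters=30):
    src = source.copy()
    for _ in range(iters):
        R, t = icp_step(src, target)
        src = src @ R.T + t                   # apply current estimate
    return src
```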

Calculating (experimental) probability distributions of the execution time of sequential software

Laurent David, Isabelle Puaut, Static Determination of Probabilistic Execution Times, Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS’04). Link.

Most previous research done in probabilistic schedulability analysis assumes a known distribution of execution times for each task of a real-time application. Such a distribution is, however, not trivial to determine with a high level of confidence. Methods based on measurements are often biased, since they are not in general exhaustive over all possible execution paths, whereas methods based on static analysis are mostly Worst-Case Execution Time (WCET) oriented. Using static analysis, this work proposes a method to obtain probabilistic distributions of execution times. It assumes that the given real-time application is divided into multiple tasks whose source code is known. Ignoring hardware considerations and based only on the source code of the tasks, the proposed technique allows designers to associate with any execution path an execution time and a probability of going through that path. A source code example is presented to illustrate the method.
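A minimal illustration of the idea, under assumed per-block costs, branch probabilities, and loop counts (the hypothetical task below is invented for this sketch, not taken from the paper): enumerate the execution paths, attach to each a total time and the probability of taking it, and accumulate them into a discrete execution-time distribution.

```python
# Illustrative sketch: derive an execution-time distribution by enumerating
# paths of a hypothetical task and weighting them by path probability.
from itertools import product
from collections import defaultdict

# Hypothetical task: block A, then a branch (B with prob 0.7 / C with 0.3),
# then a loop body D executed 1 or 2 times with equal probability.
blocks = {"A": 5, "B": 3, "C": 8, "D": 2}      # assumed cycles per block
branch = [("B", 0.7), ("C", 0.3)]
loop   = [(1, 0.5), (2, 0.5)]

distribution = defaultdict(float)
for (blk, p_blk), (n_iter, p_loop) in product(branch, loop):
    time = blocks["A"] + blocks[blk] + n_iter * blocks["D"]
    distribution[time] += p_blk * p_loop        # probability of this path

for t in sorted(distribution):
    print(f"execution time {t} cycles with probability {distribution[t]:.2f}")
```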

Probability density function (pdf) of the WCET of code execution

S. Edgar and A. Burns, Statistical analysis of WCET for scheduling, Real-Time Systems Symposium, 2001. (RTSS 2001). Proceedings. 22nd IEEE, 2001, pp. 215-224. DOI: 10.1109/REAL.2001.990614.

To perform a schedulability test, scheduling analysis relies on a known worst-case execution time (WCET). This value may be difficult to compute and may be overly pessimistic. This paper offers an alternative analysis based on estimating a WCET from test data to within a specific level of probabilistic confidence. A method is presented for calculating an estimate given statistical assumptions. The implications of the level of confidence on the likelihood of schedulability are also presented.
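One common way to obtain this kind of estimate is to fit an extreme-value (Gumbel) distribution to measured execution times and read off the quantile matching the desired confidence level; the sketch below illustrates that idea with synthetic data and is not necessarily the exact estimator used in the paper.

```python
# Hedged sketch: probabilistic WCET estimate from test data via a Gumbel fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
measured = 100 + 10 * rng.gumbel(size=500)      # fake measurements (microseconds)

loc, scale = stats.gumbel_r.fit(measured)       # fit location/scale by MLE
confidence = 0.999
wcet_estimate = stats.gumbel_r.ppf(confidence, loc, scale)
print(f"Estimated WCET exceeded with probability {1 - confidence:.1e}: "
      f"{wcet_estimate:.1f} us")
```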

Dealing with multiple hypotheses in Graph-SLAM through multigraphs (as in multi-hierarchical graphs)

Max Pfingsthorn and Andreas Birk, Generalized graph SLAM: Solving local and global ambiguities through multimodal and hyperedge constraints, The International Journal of Robotics Research May 2016 35: 601-630, DOI: 10.1177/0278364915585395.

Research in Graph-based Simultaneous Localization and Mapping has experienced a recent trend towards robust methods. These methods take the combinatorial aspect of data association into account by allowing decisions of the graph topology to be made during optimization. The Generalized Graph Simultaneous Localization and Mapping framework presented in this work can represent ambiguous data on both local and global scales, i.e. it can handle multiple mutually exclusive choices in registration results and potentially erroneous loop closures. This is achieved by augmenting previous work on multimodal distributions with an extended graph structure using hyperedges to encode ambiguous loop closures. The novel representation combines both hyperedges and multimodal Mixture of Gaussian constraints to represent all sources of ambiguity in Simultaneous Localization and Mapping. Furthermore, a discrete optimization stage is introduced between the Simultaneous Localization and Mapping frontend and backend to handle these ambiguities in a unified way utilizing the novel representation of Generalized Graph Simultaneous Localization and Mapping, providing a general approach to handle all forms of outliers. The novel Generalized Prefilter method optimizes among all local and global choices and generates a traditional unimodal unambiguous pose graph for subsequent continuous optimization in the backend. Systematic experiments on synthetic datasets show that the novel representation of the Generalized Graph Simultaneous Localization and Mapping framework with the Generalized Prefilter method is significantly more robust and faster than other robust state-of-the-art methods. In addition, two experiments with real data are presented to corroborate the results observed with synthetic data. Different general strategies to construct problems from real data, utilizing the full representational power of the Generalized Graph Simultaneous Localization and Mapping framework, are also illustrated in these experiments.
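As a small, assumed illustration of how a multimodal Mixture-of-Gaussians constraint can be resolved (scored in a max-mixture style, which is not the paper's Generalized Prefilter), the sketch below evaluates the modes of an ambiguous 1-D loop closure against the current relative-pose estimate and keeps the best-explaining one.

```python
# Illustrative sketch: pick the best mode of a multimodal pose-graph edge.
import numpy as np

def mode_error(z_est, mode):
    """Negative log-likelihood of one Gaussian mode (up to a constant)."""
    mean, var, weight = mode
    return 0.5 * (z_est - mean) ** 2 / var + 0.5 * np.log(var) - np.log(weight)

def best_mode(z_est, modes):
    """Return the index and cost of the mode that best explains z_est."""
    errors = [mode_error(z_est, m) for m in modes]
    i = int(np.argmin(errors))
    return i, errors[i]

# Ambiguous loop closure: two mutually exclusive registration results,
# e.g. a relative displacement of +1.0 m (weight 0.6) or +4.0 m (weight 0.4).
modes = [(1.0, 0.1, 0.6), (4.0, 0.1, 0.4)]
current_estimate = 3.7            # relative pose currently implied by the graph
chosen, cost = best_mode(current_estimate, modes)
print(f"selected mode {chosen} with cost {cost:.2f}")
```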

Interesting survey of relevant long-term applications of service robots in real environments

Roberto Pinillos, Samuel Marcos, Raul Feliz, Eduardo Zalama, Jaime Gómez-García-Bermejo, Long-term assessment of a service robot in a hotel environment, Robotics and Autonomous Systems, Volume 79, May 2016, Pages 40-57, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.01.014.

The long-term evaluation of the Sacarino robot is presented in this paper. The study is aimed at improving the robot’s capabilities as a bellboy in a hotel: walking alongside the guests, providing information about the city and the hotel, and providing hotel-related services. The paper establishes a three-stage assessment methodology based on the continuous measurement of a set of metrics regarding navigation and interaction with guests. Sacarino has been automatically collecting information in a real hotel environment for long periods of time. The acquired information has been analyzed and used to improve the robot’s operation in the hotel through successive refinements. Some interesting considerations and useful hints for researchers of service robots have been extracted from the analysis of the results.

Theoretical models for explaining the human (quick) decision-making process

Roger Ratcliff, Philip L. Smith, Scott D. Brown, Gail McKoon, Diffusion Decision Model: Current Issues and History, Trends in Cognitive Sciences, Volume 20, Issue 4, April 2016, Pages 260-281, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.01.007.

There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this review, we relate the models to both earlier and more recent research in psychology.
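A minimal simulation sketch of the standard drift-diffusion process described above (parameter names and values are illustrative, not taken from the review): noisy evidence accumulates at a constant drift rate until it crosses an upper or lower decision boundary, yielding a choice and a reaction time.

```python
# Illustrative drift-diffusion simulation: evidence accumulates with drift
# plus Gaussian noise until it hits +boundary or -boundary.
import numpy as np

def simulate_ddm(drift=0.3, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Return (choice, reaction_time); choice is +1 or -1, or 0 on timeout."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= boundary:
            return +1, t
        if x <= -boundary:
            return -1, t
    return 0, t

rng = np.random.default_rng(1)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
upper = [rt for c, rt in trials if c == +1]
print(f"P(upper boundary) = {len(upper) / len(trials):.2f}, "
      f"mean RT = {np.mean(upper):.3f} s")
```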

Cognitive Models as Bridge between Brain and Behavior

Bradley C. Love, Cognitive Models as Bridge between Brain and Behavior, Trends in Cognitive Sciences, Volume 20, Issue 4, April 2016, Pages 247-248, ISSN 1364-6613, DOI: 10.1016/j.tics.2016.02.006.

How can disparate neural and behavioral measures be integrated? Turner and colleagues propose joint modeling as a solution. Joint modeling mutually constrains the interpretation of brain and behavioral measures by exploiting their covariation structure. Simultaneous estimation allows for more accurate prediction than would be possible by considering these measures in isolation.

Integrating humans and robots in the factories

Andrea Cherubini, Robin Passama, André Crosnier, Antoine Lasnier, Philippe Fraisse, Collaborative manufacturing with physical human–robot interaction, Robotics and Computer-Integrated Manufacturing, Volume 40, August 2016, Pages 1-13, ISSN 0736-5845, DOI: 10.1016/j.rcim.2015.12.007.

Although the concept of industrial cobots dates back to 1999, most present-day hybrid human–machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human–robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the former case, and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (and not torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Moreover, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified.