On how the brain detects similarity, with a proposal for the structure and timing of the process

Qingfei Chen, Xiuling Liang, Peng Li, Chun Ye, Fuhong Li, Yi Lei, Hong Li, 2015, The processing of perceptual similarity with different features or spatial relations as revealed by P2/P300 amplitude, International Journal of Psychophysiology, Volume 95, Issue 3, March 2015, Pages 379-387, ISSN 0167-8760, DOI: 10.1016/j.ijpsycho.2015.01.009.

Visual features such as “color” and spatial relations such as “above” or “beside” have complex effects on similarity and difference judgments. We examined the relative impact of features and spatial relations on similarity and difference judgments via ERPs in an S1–S2 paradigm. Subjects were required to compare a remembered geometric shape (S1) with a second one (S2), and made a “high” or “low” judgment of either similarity or difference in separate blocks of trials. We found three main differences suggesting that the processing of features and spatial relations engages distinct neural processes. The first is a P2 effect in fronto-central regions that is sensitive to the presence of a feature difference. The second is a P300 in centro-parietal regions that is larger for difference judgments than for similarity judgments. Finally, the P300 effect elicited by feature differences was larger than that elicited by spatial relation differences. These results support the view that similarity judgments involve structural alignment rather than simple feature and relation matching, and further indicate that the similarity judgment can be divided into three phases: feature or relation comparison (P2), structural alignment (P3 at 300–400 ms), and categorization (P3 at 450–550 ms).

On the role of emotions in cognition, in particular in cognitive control

Michael Inzlicht, Bruce D. Bartholow, Jacob B. Hirsh, 2015, Emotional foundations of cognitive control, Trends in Cognitive Sciences, Volume 19, Issue 3, March 2015, Pages 126-132, DOI: 10.1016/j.tics.2015.01.004.

Cognitive control is often seen as the paragon of higher cognition, yet here we suggest that it is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research.

On the not-so-domain-generic nature of statistical learning in the human brain

Ram Frost, Blair C. Armstrong, Noam Siegelman, Morten H. Christiansen, 2015, Domain generality versus modality specificity: the paradox of statistical learning, Trends in Cognitive Sciences, Volume 19, Issue 3, March 2015, Pages 117-125, DOI: 10.1016/j.tics.2014.12.010.

Statistical learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. However, recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal modality and stimulus specificity. Therefore, important questions are how and why a hypothesized domain-general learning mechanism systematically produces such effects. Here, we offer a theoretical framework according to which SL is not a unitary mechanism, but a set of domain-general computational principles that operate in different modalities and, therefore, are subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility.

Automatic synthesis of controllers for robotic tasks from a specification of state-machine-like missions, nonlinear models of the robot, and a representation of the robot workspace

Jonathan A. DeCastro and Hadas Kress-Gazit, 2015, Synthesis of nonlinear continuous controllers for verifiably correct high-level, reactive behaviors, The International Journal of Robotics Research, 34: 378-394, DOI: 10.1177/0278364914557736.

Planning robotic missions in environments shared by humans involves designing controllers that are reactive to the environment yet able to fulfill a complex high-level task. This paper introduces a new method for designing low-level controllers for nonlinear robotic platforms based on a discrete-state high-level controller encoding the behaviors of a reactive task specification. We build our method upon a new type of trajectory constraint which we introduce in this paper, reactive composition, to provide the guarantee that any high-level reactive behavior may be fulfilled at any moment during the continuous execution. We generate pre-computed motion controllers in a piecewise manner by adopting a sample-based synthesis method that associates a certificate of invariance with each controller in the sample set. As a demonstration of our approach, we simulate different robotic platforms executing complex tasks in a variety of environments.

A nice review of the problem of kinematic modeling of wheeled mobile robots and a new approach that delays the use of coordinate frames

Alonzo Kelly and Neal Seegmiller, 2015, Recursive kinematic propagation for wheeled mobile robots, The International Journal of Robotics Research, 34: 288-313, DOI: 10.1177/0278364914551773.

The problem of wheeled mobile robot kinematics is formulated using the transport theorem of vector algebra. Doing so postpones the introduction of coordinates until after the expressions for the relevant Jacobians have been derived. This approach simplifies the derivation while also providing the solution to the general case in 3D, including motion over rolling terrain. Angular velocity remains explicit rather than encoded as the time derivative of a rotation matrix. The equations are derived and can be implemented recursively using a single equation that applies to all cases. Acceleration kinematics are uniquely derivable with reasonable effort. The recursive formulation also leads to efficient computer implementations that reflect the modularity of real mechanisms.
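To make the transport-theorem idea concrete, here is a minimal numerical sketch, not Kelly and Seegmiller's formulation: the chain, offsets, and velocity values are invented for illustration. A single recursive equation propagates linear and angular velocity from frame to frame, with angular velocity kept explicit rather than hidden in a rotation-matrix derivative.

```python
import numpy as np

def propagate_velocity(v_parent, w_parent, r, v_rel=None, w_rel=None):
    """Transport-theorem velocity propagation to a child frame offset by r
    (all vectors expressed in a common coordinate frame):
        v_child = v_parent + w_parent x r + v_rel
        w_child = w_parent + w_rel
    """
    v_rel = np.zeros(3) if v_rel is None else v_rel
    w_rel = np.zeros(3) if w_rel is None else w_rel
    v_child = v_parent + np.cross(w_parent, r) + v_rel
    w_child = w_parent + w_rel
    return v_child, w_child

# Hypothetical chain: robot body -> axle -> wheel contact point.
v, w = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5])
for r in [np.array([0.0, 0.4, 0.0]), np.array([0.0, 0.0, -0.1])]:
    v, w = propagate_velocity(v, w, r)
```

The same two lines of algebra apply at every link, which is what makes the formulation recursive and modular in the sense the abstract describes.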

Interesting and gentle introduction to WCET analysis and synchronous design for hard real-time systems

Pascal Raymond, Claire Maiza, Catherine Parent-Vigouroux, Fabienne Carrier, Mihail Asavoae, 2015, Timing analysis enhancement for synchronous program, Real-Time Systems, Volume 51, Issue 2, pp 192-220, DOI: 10.1007/s11241-015-9219-y.

Real-time critical systems can be considered as correct if they compute both right and fast enough. Functionality aspects (computing right) can be addressed using high level design methods, such as the synchronous approach that provides languages, compilers and verification tools. Real-time aspects (computing fast enough) can be addressed with static timing analysis, that aims at discovering safe bounds on the worst-case execution time (WCET) of the binary code. In this paper, we aim at improving the estimated WCET in the case where the binary code comes from a high-level synchronous design. The key idea is that some high-level functional properties may imply that some execution paths of the binary code are actually infeasible, and thus, can be removed from the worst-case candidates. In order to automatize the method, we show (1) how to trace semantic information between the high-level design and the executable code, (2) how to use a model-checker to prove infeasibility of some execution paths, and (3) how to integrate such infeasibility information into an existing timing analysis framework. Based on a realistic example, we show that there is a large possible improvement for a reasonable computation time overhead.
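The key idea, pruning semantically infeasible paths before taking the worst case, can be sketched in a few lines. The control-flow graph, block costs, and the infeasibility property below are all hypothetical; in the paper the property would come from the synchronous design and be proved by a model checker.

```python
from itertools import product

# Hypothetical CFG: entry, two successive branches (A/B then C/D), exit.
# Per-block worst-case costs in cycles (illustrative numbers only).
cost = {"entry": 2, "A": 10, "B": 4, "C": 8, "D": 3, "exit": 1}

# High-level semantic property: blocks A and C can never both execute
# in the same reaction, so any path containing both is infeasible.
def feasible(path):
    return not ("A" in path and "C" in path)

paths = [("entry", x, y, "exit") for x, y in product("AB", "CD")]

naive_wcet = max(sum(cost[b] for b in p) for p in paths)
refined_wcet = max(sum(cost[b] for b in p) for p in paths if feasible(p))
```

Excluding the infeasible A-then-C path lowers the bound from 21 to 16 cycles, which mirrors the improvement the paper obtains by feeding infeasibility facts into the timing analysis.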

A survey of semantic mapping for mobile robots

Ioannis Kostavelis, Antonios Gasteratos, 2015, Semantic mapping for mobile robotics tasks: A survey, Robotics and Autonomous Systems, Volume 66, April 2015, Pages 86-103, ISSN 0921-8890, DOI: 10.1016/j.robot.2014.12.006.

The evolution of contemporary mobile robotics has given thrust to a series of conjunct technologies. One of these is semantic mapping, which provides an abstraction of space and a means for human–robot communication. The recent introduction and evolution of semantic mapping motivated this survey, in which an explicit analysis of the existing methods is sought. The surveyed algorithms are categorized according to their primary characteristics, namely scalability, inference model, temporal coherence and topological map usage. The applications involving semantic maps are also outlined in the work at hand, emphasizing human interaction, knowledge representation and planning. The existence of publicly available validation datasets and benchmarks suitable for the evaluation of semantic mapping techniques is also discussed in detail. Lastly, an attempt to address open issues and questions is made.

Novel recursive Bayesian estimator based on approximating pdfs with polynomials and keeping a hypothesis for each of their modes

Huang, G.; Zhou, K.; Trawny, N.; Roumeliotis, S.I., 2015, A Bank of Maximum A Posteriori (MAP) Estimators for Target Tracking, IEEE Transactions on Robotics, vol. 31, no. 1, pp. 85-103, DOI: 10.1109/TRO.2014.2378432.

Nonlinear estimation problems, such as range-only and bearing-only target tracking, are often addressed using linearized estimators, e.g., the extended Kalman filter (EKF). These estimators generally suffer from linearization errors as well as the inability to track multimodal probability density functions. In this paper, we propose a bank of batch maximum a posteriori (MAP) estimators as a general estimation framework that provides relinearization of the entire state trajectory, multihypothesis tracking, and an efficient hypothesis generation scheme. Each estimator in the bank is initialized using a locally optimal state estimate for the current time step. Every time a new measurement becomes available, we relax the original batch-MAP problem and solve it incrementally. More specifically, we convert the relaxed one-step-ahead cost function into polynomial or rational form and compute all the local minima analytically. These local minima generate highly probable hypotheses for the target’s trajectory and hence greatly improve the quality of the overall MAP estimate. Additionally, pruning of least probable hypotheses and marginalization of old states are employed to control the computational cost. Monte Carlo simulation and real-world experimental results show that the proposed approach significantly outperforms the standard EKF, the batch-MAP estimator, and the particle filter.
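The hypothesis-generation step can be illustrated with a toy polynomial cost; the coefficients below are invented, whereas the paper derives its polynomial or rational form from the relaxed one-step-ahead cost. Stationary points come from the roots of the derivative, and each real local minimum seeds one estimator in the bank.

```python
import numpy as np

# Hypothetical relaxed one-step cost in polynomial form:
# c(x) = 5 + x - 3x^2 + x^4 (coefficients in ascending order).
cost = np.polynomial.Polynomial([5.0, 1.0, -3.0, 0.0, 1.0])

dc = cost.deriv()    # c'(x): stationary points are its roots
d2c = dc.deriv()     # c''(x): sign test separates minima from maxima

# Real roots with positive second derivative are the local minima;
# each one is a highly probable hypothesis for the target state.
minima = sorted(r.real for r in dc.roots()
                if abs(r.imag) < 1e-9 and d2c(r.real) > 0)
```

Because all local minima are found analytically rather than by a single local search, a multimodal posterior cannot silently collapse onto one mode, which is the advantage over a plain EKF or single batch-MAP run.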

Novel algorithm for inexact graph matching of moderate size graphs based on Gaussian process regression

Serradell, E.; Pinheiro, M.A.; Sznitman, R.; Kybic, J.; Moreno-Noguer, F.; Fua, P., 2015, Non-Rigid Graph Registration Using Active Testing Search, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 625-638, DOI: 10.1109/TPAMI.2014.2343235.

We present a new approach for matching sets of branching curvilinear structures that form graphs embedded in R^2 or R^3 and may be subject to deformations. Unlike earlier methods, ours does not rely on local appearance similarity, nor does it require a good initial alignment. Furthermore, it can cope with non-linear deformations, topological differences, and partial graphs. To handle arbitrary non-linear deformations, we use Gaussian process regression to represent the geometrical mapping relating the two graphs. In the absence of appearance information, we iteratively establish correspondences between points, update the mapping accordingly, and use it to estimate where to find the most likely correspondences that will be used in the next step. To make the computation tractable for large graphs, the set of new potential matches considered at each iteration is not selected at random as with many RANSAC-based algorithms. Instead, we introduce a so-called Active Testing Search strategy that performs a priority search to favor the most likely matches and speed up the process. We demonstrate the effectiveness of our approach first on synthetic cases and then on angiography data, retinal fundus images, and microscopy image stacks acquired at very different resolutions.
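As a rough sketch of the geometric-mapping component only: a plain zero-noise GP regression fitted on invented 2D correspondences, not the authors' implementation. Given a few established matches, the posterior mean predicts where in the second graph to look for the next correspondence.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel between two point sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Hypothetical correspondences between nodes of two graphs; here the
# true deformation is simply a translation by (2, -1).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = X + np.array([2.0, -1.0])

# GP posterior mean, one regression per output coordinate, with a small
# jitter term for numerical stability.
mu = Y.mean(axis=0)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
alpha = np.linalg.solve(K, Y - mu)

def predict(Xq):
    # Predicted positions in graph 2 for query points of graph 1.
    return mu + rbf(Xq, X) @ alpha

y_new = predict(np.array([[0.5, 0.5]]))  # where to search for a new match
```

In the iterative scheme the abstract describes, each newly accepted correspondence would be appended to X and Y and the regression refitted, progressively tightening the predicted search regions.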

Demonstration that students benefit from color coding when learning electrical circuit analysis

Reisslein, J.; Johnson, A.M.; Reisslein, M., 2015, Color Coding of Circuit Quantities in Introductory Circuit Analysis Instruction, IEEE Transactions on Education, vol. 58, no. 1, pp. 7-14, DOI: 10.1109/TE.2014.2312674.

Learning the analysis of electrical circuits represented by circuit diagrams is often challenging for novice students. An open research question in electrical circuit analysis instruction is whether color coding of the mathematical symbols (variables) that denote electrical quantities can improve circuit analysis learning. The present study compared two groups of high school students undergoing their first introductory learning of electrical circuit analysis. One group learned with circuit variables in black font. The other group learned with colored circuit variables, with blue font indicating variables related to voltage, red font indicating those related to current, and black font indicating those related to resistance. The color group achieved significantly higher post-test scores, gave higher ratings for liking the instruction and finding it helpful, and had lower ratings of cognitive load than the black-font group. These results indicate that color coding of the notations for quantities in electrical circuit diagrams aids the circuit analysis learning of novice students.