Tag Archives: Directly Bioinspired

An orientation sensor for robot navigation that uses the sky

Julien Dupeyroux, Stéphane Viollet, Julien R. Serres, An ant-inspired celestial compass applied to autonomous outdoor robot navigation, Robotics and Autonomous Systems, Volume 117, 2019, Pages 40-56, DOI: 10.1016/j.robot.2019.04.007.

Desert ants use the polarization of skylight and a combination of stride and ventral optic flow integration processes to track the nest and food positions when traveling, achieving outstanding performance. Navigation sensors such as global positioning systems and inertial measurement units still have disadvantages such as their low resolution and drift. Taking our inspiration from ants, we developed a 2-pixel celestial compass which computes the heading angle of a mobile robot in the ultraviolet range. The output signals obtained with this optical compass were investigated under various weather and ultraviolet conditions and compared with those obtained with a magnetometer in the vicinity of our laboratory. After being embedded on board the robot, the sensor was first used to compensate for random yaw disturbances. We then used the compass to keep the Hexabot robot’s heading angle constant while it walked straight ahead over flat terrain, its walking movements imposing yaw disturbances. Experiments performed under various meteorological conditions showed steady-state heading angle errors ranging from 0.3° (with a clear sky) to 2.9° (under changeable sky conditions). The compass was also tested under canopies and showed a strong ability to determine the robot’s heading even when most of the sky was hidden by the foliage. Lastly, a waterproof, mono-pixel version of the sensor was designed and successfully tested in a preliminary underwater benchmark test. These results suggest that this new optical compass is precise and reliable in a wide range of outdoor conditions, which makes it highly suitable for autonomous outdoor robot navigation tasks.

A celestial compass and a minimalistic optic flow sensor called M2APix (based on Michaelis–Menten Auto-adaptive Pixels) were therefore embedded on board our latest insectoid robot, AntBot, to complete the ant-like homing navigation processes mentioned above. First, the robot was displaced manually and made to return to its starting point on the basis of its absolute knowledge of this point’s coordinates. Lastly, AntBot was tested in fully autonomous navigation experiments, in which it explored its environment and then returned to base using the same sensory modes as those on which desert ants rely. AntBot produced robust, precise localization performance, with a homing error as small as 0.7% of the entire trajectory.
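The homing computation the compass feeds into is classic ant-style path integration: at each step, accumulate a displacement vector whose direction comes from the celestial compass and whose length comes from stride counting or ventral optic flow; negating the accumulated vector gives the home vector. A minimal sketch of that computation (illustrative only, not the AntBot code; the function and units here are our own):

```python
import numpy as np

def home_vector(headings_rad, strides_m):
    """Ant-style path integration: accumulate step vectors, negate to home.

    headings_rad: heading at each step (e.g., from the celestial compass)
    strides_m:    distance covered at each step (e.g., from stride counting
                  or ventral optic flow)
    """
    steps = np.stack([np.cos(headings_rad), np.sin(headings_rad)], axis=1)
    position = (steps * np.asarray(strides_m)[:, None]).sum(axis=0)
    return -position  # vector from the current position back to the start

# Example: six 0.1 m strides, three heading east then three heading north
h = np.radians([0, 0, 0, 90, 90, 90])
d = [0.1] * 6
print(home_vector(h, d))  # ~[-0.3, -0.3]: walk south-west to get home
```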

Modelling emotions in adaptive agents through the action selection part of reinforcement learning, plus some references on the neurophysiological bases of RL and a good review of literature on emotions

Joost Broekens, Elmer Jacobs, Catholijn M. Jonker, A reinforcement learning model of joy, distress, hope and fear, Connection Science, Vol. 27, Iss. 3, 2015, DOI: 10.1080/09540091.2015.1031081.

In this paper we computationally study the relation between adaptive behaviour and emotion. Using the reinforcement learning framework, we propose that learned state utility, V(s), models fear (negative) and hope (positive), on the grounds that both signals concern the anticipation of loss or gain. Further, we propose that joy/distress is a signal similar to the temporal-difference error signal. We present agent-based simulation experiments that show that this model replicates psychological and behavioural dynamics of emotion. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework, coupling it to the literature on habituation, development, extinction and hope theory. Our results support the idea that the function of emotion is to provide a complex feedback signal for an organism to adapt its behaviour. Our work is relevant for understanding the relation between emotion and adaptation in animals, as well as for human–robot interaction, in particular how emotional signals can be used to communicate between adaptive agents and humans.
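A toy rendering of the proposed mapping, following the paper only in spirit (the tabular TD(0) agent and the specific readout functions below are our own simplifications): joy/distress is read off the sign of the TD error, and hope/fear off the sign of the anticipated state value.

```python
from collections import defaultdict

V = defaultdict(float)      # learned state utility V(s)
alpha, gamma = 0.1, 0.95

def td_step(s, r, s_next, terminal=False):
    """One tabular TD(0) update plus toy emotion readouts."""
    target = r + (0.0 if terminal else gamma * V[s_next])
    delta = target - V[s]          # TD error: positive ~ joy, negative ~ distress
    V[s] += alpha * delta
    hope = max(V[s_next], 0.0)     # positive anticipated value ~ hope
    fear = max(-V[s_next], 0.0)    # negative anticipated value ~ fear
    return delta, hope, fear

# A state that repeatedly precedes a punishment comes to elicit 'fear':
for _ in range(50):
    td_step("cue", 0.0, "shock")
    td_step("shock", -1.0, "end", terminal=True)
print(round(V["shock"], 2))        # approaches -1.0: anticipated loss, i.e., fear
```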

Detecting objects in images through the timing of changes in the visual sensor, rather than through the analysis of frames (which carry no time information)

Orchard, G.; Meyer, C.; Etienne-Cummings, R.; Posch, C.; Thakor, N.; Benosman, R., HFirst: A Temporal Approach to Object Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2028-2040, Oct. 2015, DOI: 10.1109/TPAMI.2015.2392947.

This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal winner-take-all rather than the more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) on a previously published four-class card pip recognition task, and an accuracy of 84.9% ± 1.9% on a new, more difficult 36-class character recognition task.
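The key mechanism is easy to state: because stronger stimulation drives a neuron to threshold sooner, “first to spike” approximates “maximum response”, so a max operation reduces to keeping the earliest event. A minimal sketch of such a temporal winner-take-all over an AER event stream (illustrative, not the HFirst implementation; the fixed inhibition window is our own simplification):

```python
def temporal_wta(events, window_us=1000):
    """Temporal winner-take-all over an AER event stream (illustrative sketch).

    events: iterable of (timestamp_us, neuron_id), assumed time-sorted.
    Within each window only the first spike is kept: stronger inputs reach
    threshold sooner, so 'first to spike' stands in for 'max response'.
    """
    winners, window_end = [], None
    for t, nid in events:
        if window_end is None or t >= window_end:
            winners.append((t, nid))      # first spike of a new window wins
            window_end = t + window_us    # later spikes in the window are inhibited
    return winners

events = [(100, 7), (180, 2), (950, 4), (1500, 1)]
print(temporal_wta(events))  # [(100, 7), (1500, 1)]
```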

Quantum probability theory as an alternative to classical (Kolmogorov) probability theory for modelling human decision-making processes, and a curious description of how the order in which decisions are made affects the final result

Peter D. Bruza, Zheng Wang, Jerome R. Busemeyer, Quantum cognition: a new theoretical approach to psychology, Trends in Cognitive Sciences, Volume 19, Issue 7, July 2015, Pages 383-393, ISSN 1364-6613, DOI: 10.1016/j.tics.2015.05.001.

What type of probability theory best describes the way humans make judgments under uncertainty and decisions under conflict? Although rational models of cognition have become prominent and have achieved much success, they adhere to the laws of classical probability theory despite the fact that human reasoning does not always conform to these laws. For this reason we have seen the recent emergence of models based on an alternative probabilistic framework drawn from quantum theory. These quantum models show promise in addressing cognitive phenomena that have proven recalcitrant to modeling by means of classical probability theory. This review compares and contrasts probabilistic models based on Bayesian or classical versus quantum principles, and highlights the advantages and disadvantages of each approach.
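The signature quantum prediction alluded to in the annotation, question-order effects, falls out of non-commuting projectors: answering a question projects the belief state, so the probability of “yes, then yes” depends on the order in which the questions are asked. A small numerical sketch (the state and projectors below are arbitrary choices for illustration):

```python
import numpy as np

psi = np.array([1.0, 0.0])                   # belief state as a unit vector

P_A = np.array([[1.0, 0.0], [0.0, 0.0]])     # projector for "yes to A"
c, s = np.cos(np.pi / 5), np.sin(np.pi / 5)
v = np.array([c, s])
P_B = np.outer(v, v)                         # "yes to B", in a rotated basis

def p_yes_then_yes(first, second, psi):
    """P(yes to first question, then yes to second): sequential projections."""
    after_first = first @ psi                # answering collapses the state
    return np.linalg.norm(second @ after_first) ** 2

print(round(p_yes_then_yes(P_A, P_B, psi), 3))  # 0.654
print(round(p_yes_then_yes(P_B, P_A, psi), 3))  # 0.428 -> order matters
```

In classical (Kolmogorov) probability the two joint probabilities would have to coincide; the mismatch is exactly the kind of order effect observed in survey data that quantum models capture.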

Semantic and syntactic bootstrapped learning for robots, inspired by similar processes in humans, that use language as a scaffolding mechanism to improve learning in unknown situations

Worgotter, F.; Geib, C.; Tamosiunaite, M.; Aksoy, E.E.; Piater, J.; Hanchen Xiong; Ude, A.; Nemec, B.; Kraft, D.; Kruger, N.; Wachter, M.; Asfour, T., Structural Bootstrapping—A Novel, Generative Mechanism for Faster and More Efficient Acquisition of Action-Knowledge, IEEE Transactions on Autonomous Mental Development, vol. 7, no. 2, pp. 140-154, June 2015, DOI: 10.1109/TAMD.2015.2427233.

Humans, but also robots, learn to improve their behavior. Without existing knowledge, learning either needs to be explorative and, thus, slow, or, to be more efficient, it needs to rely on supervision, which may not always be available. However, once some knowledge base exists, an agent can make use of it to improve learning efficiency and speed. This happens for our children at the age of around three, when they very quickly begin to assimilate new information by making guided guesses about how it fits their prior knowledge. This is a very efficient generative learning mechanism in the sense that the existing knowledge is generalized into as-yet unexplored, novel domains. So far generative learning has not been employed for robots, and robot learning remains a slow and tedious process. The goal of the current study is to devise, for the first time, a general framework for a generative process that will improve learning and which can be applied at all levels of the robot’s cognitive architecture. To this end, we introduce the concept of structural bootstrapping (borrowed and modified from child language acquisition) to define a probabilistic process that uses existing knowledge together with new observations to supplement the robot’s database with missing information about planning-, object-, as well as action-relevant entities. In a kitchen scenario, we use the example of making batter by pouring and mixing two components, and show that the agent can efficiently acquire new knowledge about planning operators, objects, and the required motor patterns for stirring by structural bootstrapping. Benchmarks demonstrating how structural bootstrapping improves performance are shown, too.
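The core inference is a guided guess: attributes the novel object is missing are inherited from the most similar known object, so the agent can plan and act without fresh exploration. A toy sketch of that step (our own simplification, not the paper’s probabilistic machinery):

```python
def bootstrap_guess(novel, knowledge_base, observed_keys):
    """Fill missing attributes of a novel object by analogy (toy sketch).

    Picks the known object that agrees most with the novel one on the
    observed attributes, then copies over the attributes still missing.
    """
    def overlap(entry):
        return sum(entry.get(k) == novel.get(k) for k in observed_keys)

    best = max(knowledge_base, key=overlap)
    guess = dict(novel)
    for k, v in best.items():
        guess.setdefault(k, v)   # guided guess: inherit unobserved attributes
    return guess

kb = [
    {"shape": "cylinder", "contains": "liquid", "pourable": True,  "stir_tool": False},
    {"shape": "rod",      "contains": None,     "pourable": False, "stir_tool": True},
]
novel = {"shape": "cylinder", "contains": "liquid"}   # a never-seen bottle
print(bootstrap_guess(novel, kb, ["shape", "contains"]))
# -> predicts pourable=True, stir_tool=False without any new exploration
```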

Developmental approach for a robot manipulator that learns in several bootstrapped stages, strongly inspired by infant development

Ugur, E.; Nagai, Y.; Sahin, E.; Oztop, E., Staged Development of Robot Skills: Behavior Formation, Affordance Learning and Imitation with Motionese, IEEE Transactions on Autonomous Mental Development, vol. 7, no. 2, pp. 119-139, June 2015, DOI: 10.1109/TAMD.2015.2426192.

Inspired by infant development, we propose a three-stage developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability, and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects and learns the effects it causes, effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages, which generate increasingly complex cognitive capabilities. The proposed framework shares a number of features with infant sensorimotor development. Furthermore, the findings obtained from the self-exploration and motionese-guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.
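Stage two, affordance learning, amounts to recording (object features, behavior) → effect triples and predicting the effect of a behavior on a new object from similar past experience. A toy sketch under that reading (the nearest-neighbour predictor and feature coding below are our own simplifications, not the paper’s learner):

```python
import numpy as np

class AffordanceLibrary:
    """Stage-two affordance learning as a nearest-neighbour table (toy sketch)."""

    def __init__(self):
        self.samples = []   # (object_features, behavior_id, observed_effect)

    def record(self, features, behavior, effect):
        self.samples.append((np.asarray(features, float), behavior, effect))

    def predict(self, features, behavior):
        """Predicted effect of applying `behavior` to an object with `features`."""
        candidates = [(f, e) for f, b, e in self.samples if b == behavior]
        f_query = np.asarray(features, float)
        nearest = min(candidates, key=lambda fe: np.linalg.norm(fe[0] - f_query))
        return nearest[1]

lib = AffordanceLibrary()
lib.record([1.0, 0.2], "push", "rolled")    # round object
lib.record([0.1, 0.9], "push", "slid")      # flat object
print(lib.predict([0.9, 0.3], "push"))      # -> 'rolled'
```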

Deducing the concept of space from the sensorimotor behaviour of a robot, and interesting related work on uninterpreted sensors and actuators in developmental robotics that deserves a deeper look

Alban Laflaquière, J. Kevin O’Regan, Sylvain Argentieri, Bruno Gas, Alexander V. Terekhov, Learning agent’s spatial configuration from sensorimotor invariants, Robotics and Autonomous Systems, Volume 71, September 2015, Pages 49-59, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.01.003.

The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot’s sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent’s exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina.
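The kernel argument can be made concrete in a few lines: motor commands whose exteroceptive readings agree in every environment must place the sensor in the same pose, so grouping commands by that signature recovers the sensor’s configuration space rather than the raw motor space. A toy sketch (our own construction, far simpler than the paper’s redundant arm-and-retina setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy agent with 2 motor dofs whose sensor pose depends only on their sum:
# motor space thus has a kernel (moves that leave the sensor in place), and
# the agent should recover the quotient space, not the raw motor space.
def sensor_pose(m):
    return round(m[0] + m[1], 6)

envs = [np.poly1d(rng.normal(size=3)) for _ in range(15)]  # random environments

grid = np.linspace(0.0, 1.0, 5)
motors = [(a, b) for a in grid for b in grid]              # 25 motor commands

# Signature of a motor command = its exteroceptive readings across environments.
sig = {m: tuple(round(e(sensor_pose(m)), 6) for e in envs) for m in motors}

print(len(motors), "motor commands collapse to", len(set(sig.values())), "poses")
# 25 -> 9: grouping motor commands whose readings agree in every environment
# recovers the sensor's configuration space (here, the possible values of a+b).
```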

Novelty detection as a way of enhancing the learning capabilities of a robot, and a brief but interesting survey of motivational theories and how they differ from attention

Y. Gatsoulis, T.M. McGinnity, Intrinsically motivated learning systems based on biologically-inspired novelty detection, Robotics and Autonomous Systems, Volume 68, June 2015, Pages 12-20, ISSN 0921-8890, DOI: 10.1016/j.robot.2015.02.006.

Intrinsic motivations play an important role in human learning, particularly in the early stages of childhood development, and ideas from this research field have influenced robotic learning and adaptability. In this paper we investigate one specific type of intrinsic motivation, that of novelty detection, and we discuss the reasons that make it a powerful facility for continuous learning. We formulate and present an original type of biologically inspired novelty detection architecture and implement it on a robotic system engaged in a perceptual classification task. The results of the real-world robot experiments we conducted show how this architecture conforms to behavioural observations and demonstrate its effectiveness in focusing the system’s attention on areas with potential for effective learning.
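A common way to realize biologically inspired novelty detection is a set of stored prototypes with habituation: an input far from every prototype elicits a full response and is stored, while repeated near-matches yield an ever-weaker response. A minimal sketch along those lines (our own simplification, not the paper’s architecture):

```python
import numpy as np

class NoveltyDetector:
    """Prototype-based novelty detector with habituation (illustrative sketch)."""

    def __init__(self, threshold=0.5, habituation_rate=0.3):
        self.prototypes = []     # stored percepts
        self.habituation = []    # 1.0 = full novelty response, decays with exposure
        self.threshold = threshold
        self.rate = habituation_rate

    def novelty(self, x):
        x = np.asarray(x, float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            i = int(np.argmin(dists))
            if dists[i] <= self.threshold:       # familiar: habituated response
                response = self.habituation[i]
                self.habituation[i] *= (1.0 - self.rate)
                return response
        self.prototypes.append(x)                # unfamiliar percept: store it
        self.habituation.append(1.0)
        return 1.0

det = NoveltyDetector()
for _ in range(4):
    print(det.novelty([0.0, 0.0]))   # 1.0, 1.0, 0.7, 0.49: response fades
print(det.novelty([3.0, 3.0]))       # 1.0: attention drawn to the new stimulus
```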

Reinforcement learning used for an adaptive attention mechanism, and integrated in an architecture with both top-down and bottom-up vision processing

Ognibene, D.; Baldassare, G., Ecological Active Vision: Four Bioinspired Principles to Integrate Bottom–Up and Adaptive Top–Down Attention Tested With a Simple Camera-Arm Robot, IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 3-25, March 2015, DOI: 10.1109/TAMD.2014.2341351.

Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture (“BITPIC”) to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob “objects.” The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
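Ingredients 1) and 3) can be caricatured in a few lines: keep a learned value map over fixation targets, mix it with a bottom-up saliency map to choose where to look, and reinforce fixations whose ensuing reach succeeded. A toy sketch under that reading (the grid, the mixing weight and the epsilon-greedy rule are our own choices, not the BITPIC architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = (4, 4)                    # coarse retinotopic map of candidate fixations
Q = np.zeros(GRID)               # top-down: learned value of fixating each cell
alpha, epsilon = 0.2, 0.2

def next_fixation(saliency):
    """Mix bottom-up saliency with learned top-down value, epsilon-greedy."""
    if rng.random() < epsilon:                       # exploratory saccade
        return tuple(rng.integers(0, GRID[0], 2))
    return np.unravel_index(np.argmax(Q + 0.5 * saliency), GRID)

def update(fixation, reward):
    """Reinforce fixations whose ensuing reach succeeded."""
    Q[fixation] += alpha * (reward - Q[fixation])

saliency = rng.random(GRID)                          # bottom-up map (color blobs)
for _ in range(400):
    fix = next_fixation(saliency)
    update(fix, 1.0 if fix == (2, 3) else 0.0)       # reachable target at (2, 3)
print(round(Q[2, 3], 2))   # grows toward 1.0 once the target has been found
```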

On the not-so-domain-generic nature of statistical learning in the human brain

Ram Frost, Blair C. Armstrong, Noam Siegelman, Morten H. Christiansen, Domain generality versus modality specificity: the paradox of statistical learning, Trends in Cognitive Sciences, Volume 19, Issue 3, March 2015, Pages 117-125, DOI: 10.1016/j.tics.2014.12.010.

Statistical learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. However, recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal modality and stimulus specificity. Therefore, important questions are how and why a hypothesized domain-general learning mechanism systematically produces such effects. Here, we offer a theoretical framework according to which SL is not a unitary mechanism, but a set of domain-general computational principles that operate in different modalities and, therefore, are subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility.
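The canonical example of “discovering distributional properties” is computing transitional probabilities between adjacent elements, as in the classic syllable-stream segmentation studies: transitions inside words are frequent, transitions across word boundaries are not. A minimal sketch of that computation (the artificial stream below is our own, modeled on such studies):

```python
from collections import Counter

# Transitional probability TP(x -> y) = count(x, y) / count(x). Words have
# high internal TPs; word boundaries show up as TP dips.
stream = ("tupiro golabu bidaku golabu tupiro bidaku " * 20).split()
syllables = [w[i:i + 2] for w in stream for i in range(0, len(w), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])
tp = {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

print(tp[("tu", "pi")])   # 1.0: within-word transition
print(tp[("ro", "go")])   # 0.5: word-boundary transition (TP dip)
```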