Architecture:Specification of BABEL Modules

From The BABEL Development Site

Revision as of 00:10, 18 May 2009

Previous: Interfaces | Next: RPDs


This page contains detailed information about a collection of modules and their roles in the control architecture. The complete list of modules available for download can be found here.


Layer 1: HAD Modules

Module (ICE) name        | Description | Implemented Driver Interfaces
HAD_MobileBase_Pioneer   | Driver for the Pioneer3 DX/AT mobile bases, through a COM serial port and the ARIA library. | DRV_WHEELS_MOBILE_BASE, DRV_IOBOARD, DRV_BATTERY_LEVEL
HAD_MobileBase_Simulator | A simulated 2D mobile base. | DRV_IOBOARD, DRV_WHEELS_MOBILE_BASE, DRV_2D_RANGE_SCANNER
HAD_Laser_Sick_USB       | Driver for the SICK laser range scanner via an RS422-USB board (hardware version JLBC/APR-05). | DRV_2D_RANGE_SCANNER
HAD_Laser_Hokuyo         | Driver for the HOKUYO laser range scanner via a COM serial port. | DRV_2D_RANGE_SCANNER
HAD_GPS                  | Driver for COM-connected GPS receivers. | DRV_GPS
HAD_GasSensors           | Driver for the custom-built e-Noses (hardware Feb-2007). | DRV_GAS_SENSOR
HAD_IMU_XSens            | Driver for the XSens IMU. | DRV_IMU_SENSOR
HAD_Generic_Camera       | Driver for any OpenCV/FFmpeg/Bumblebee camera (any camera supported by MRPT's CCameraSensor). | DRV_CAMERA_SENSOR
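
Each HAD module is an ICE module that implements one or more of the driver interfaces listed above. As a purely hypothetical illustration (the real interfaces are defined for ICE and are not reproduced on this page), the following C++ sketch shows how a module such as HAD_MobileBase_Simulator could implement several driver interfaces at once; every name and signature below is invented for the example.

// Hypothetical sketch only: a plain C++ rendering of two of the driver interfaces
// from the table above (DRV_2D_RANGE_SCANNER and DRV_BATTERY_LEVEL). The real
// interfaces are defined for ICE; all names and signatures here are invented.
#include <vector>

struct DRV_2D_RANGE_SCANNER            // hypothetical driver interface
{
    virtual ~DRV_2D_RANGE_SCANNER() {}
    // Latest scan as a vector of ranges, in meters:
    virtual std::vector<float> getLastScan() = 0;
};

struct DRV_BATTERY_LEVEL               // hypothetical driver interface
{
    virtual ~DRV_BATTERY_LEVEL() {}
    virtual double getBatteryVoltage() = 0;
};

// A single HAD module may implement several driver interfaces, e.g. a simulated
// mobile base that also emulates a 2D range scanner (cf. HAD_MobileBase_Simulator):
class SimulatedBaseSketch : public DRV_2D_RANGE_SCANNER, public DRV_BATTERY_LEVEL
{
public:
    std::vector<float> getLastScan()       { return std::vector<float>(361, 5.0f); }
    double             getBatteryVoltage() { return 12.0; }
};

In the actual architecture these interfaces are exposed remotely through ICE proxies rather than as local C++ classes.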


Layer 2: Basic Sensory (BS) Modules

Recall that all these modules share a common interface (IF), as described in the previous section.


BS_IncrementalEgoMotion

  • Description: This module collects data from odometry (and possibly visual odometry, etc.) to provide incremental estimates of the robot pose.
  • Interface: Sensory frames containing CObservationOdometry objects (http://babel.isa.uma.es/mrpt/reference/svn/classmrpt_1_1slam_1_1_c_observation_odometry.html); see the sketch below.
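
A minimal C++ sketch of the interface above, assuming the 2009-era MRPT API (mrpt::slam namespace; header paths and namespaces may differ in other MRPT versions): it packages an incremental pose estimate as a CObservationOdometry and inserts it into a sensory frame. The sensor label and pose values are invented for the example.

#include <mrpt/slam/CObservationOdometry.h>
#include <mrpt/slam/CSensoryFrame.h>
#include <mrpt/poses/CPose2D.h>
#include <mrpt/system/datetime.h>

using namespace mrpt::slam;
using namespace mrpt::poses;

int main()
{
    // Fill an odometry observation with the latest incremental pose estimate:
    CObservationOdometryPtr obs = CObservationOdometry::Create();
    obs->timestamp   = mrpt::system::now();
    obs->sensorLabel = "ODOMETRY";                // assumed label, not fixed by this page
    obs->odometry    = CPose2D(0.25, 0.00, 0.01); // x [m], y [m], phi [rad] since start-up

    // The module publishes sensory frames, i.e. collections of observations:
    CSensoryFrame sf;
    sf.insert(obs);
    return 0;
}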

BS_Vision

  • Description: This module collects images from the available cameras and:
    • Provides the raw images as CObservationImage or CObservationStereoImages objects (see the sketch below).
    • Performs real-time feature tracking from stereo images (if available) and provides the tracked landmarks as an object of the class CObservationVisualLandmarks (http://babel.isa.uma.es/mrpt/reference/svn/classmrpt_1_1slam_1_1_c_observation_visual_landmarks.html).
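
As a rough illustration of the first item (raw images), the sketch below builds a CObservationStereoImages object, again assuming the 2009-era MRPT API; the sensor label and image files are placeholders, and in the real module the images would come from the camera driver rather than from disk.

#include <mrpt/slam/CObservationStereoImages.h>
#include <mrpt/system/datetime.h>

using namespace mrpt::slam;

int main()
{
    CObservationStereoImagesPtr obs = CObservationStereoImages::Create();
    obs->timestamp   = mrpt::system::now();
    obs->sensorLabel = "STEREO_CAM";   // assumed label, not fixed by this page

    // In the real module the images come from the camera driver; here they are
    // simply loaded from disk for illustration:
    obs->imageLeft.loadFromFile("left.jpg");
    obs->imageRight.loadFromFile("right.jpg");
    return 0;
}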

BS_RangeSensors

  • Description: This module collects data from laser range scanners and ultrasonic sensors (see the sketch below).
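
A minimal sketch of how one laser scan could be wrapped as an MRPT CObservation2DRangeScan by this module, assuming the 2009-era MRPT API; all numeric values (aperture, ranges, number of readings) are arbitrary examples.

#include <mrpt/slam/CObservation2DRangeScan.h>
#include <mrpt/system/datetime.h>

using namespace mrpt::slam;

int main()
{
    CObservation2DRangeScan scan;
    scan.timestamp   = mrpt::system::now();
    scan.sensorLabel = "LASER_FRONT";   // assumed label
    scan.aperture    = 3.14159265f;     // ~180 deg field of view
    scan.rightToLeft = true;
    scan.maxRange    = 80.0f;           // SICK-like maximum range [m]

    // 361 dummy range readings (0.5 deg resolution), all marked as valid:
    scan.scan.assign(361, 5.0f);
    scan.validRange.assign(361, 1);
    return 0;
}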



Layer 3: Sensory Detector (SD) Modules

SD_PeopleDetector

  • Description: This module detects people around the robot using sensor data from range scanners, images, etc.
  • Interface: Still to be defined: possibly a new type of observation and/or a specific method to retrieve the detected people (a purely hypothetical data sketch follows below).
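
Since the interface is still open, the following fragment is only a hypothetical sketch of the data such a new observation type might carry; none of these type or field names exist in BABEL or MRPT.

#include <string>
#include <vector>

struct TDetectedPerson               // hypothetical record
{
    double x, y;                     // position relative to the robot [m]
    double confidence;               // detection confidence in [0, 1]
    std::string sourceSensor;        // e.g. "LASER_FRONT" or "STEREO_CAM"
};

struct TPeopleDetectionResult        // hypothetical "observation" payload
{
    double timestamp;                // time of the detection
    std::vector<TDetectedPerson> people;
};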


SD_Local3DMap

  • Description: This module maintains a 3D representation of the close obstacles around the robot by fusing data from sonars, laser scanners, and stereo vision.
  • Interface: TODO: a specific IF to retrieve (a rough MRPT-based sketch follows after this list):
    • A 2D obstacle point map.
    • A 2D occupancy grid.
    • A 3D point cloud.
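
A rough sketch of how this module could fuse an incoming laser scan into two of the representations listed above (an obstacle point map and a 2D occupancy grid), assuming the 2009-era MRPT metric-map API; map sizes, resolution, and the dummy scan are arbitrary examples.

#include <mrpt/slam/CSimplePointsMap.h>
#include <mrpt/slam/COccupancyGridMap2D.h>
#include <mrpt/slam/CObservation2DRangeScan.h>

using namespace mrpt::slam;

int main()
{
    // Local maps maintained by the module (sizes and resolution are arbitrary examples):
    CSimplePointsMap    pointsMap;                                     // obstacle points / point cloud
    COccupancyGridMap2D gridMap(-10.0f, 10.0f, -10.0f, 10.0f, 0.05f);  // 20x20 m, 5 cm cells

    // A dummy incoming laser scan, as published by BS_RangeSensors:
    CObservation2DRangeScan scan;
    scan.aperture    = 3.14159265f;   // ~180 deg
    scan.rightToLeft = true;
    scan.maxRange    = 80.0f;
    scan.scan.assign(361, 5.0f);
    scan.validRange.assign(361, 1);

    // Fuse the same observation into both representations:
    pointsMap.insertObservation(&scan);
    gridMap.insertObservation(&scan);
    return 0;
}

The 2D obstacle point map and the 3D point cloud listed above could then be served from the points map, and the 2D occupancy grid from the grid map; the exact retrieval IF remains marked as TODO.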