[Body Schema Modules]

Learns the body schema from sensorimotor contingencies.


This module learns the body schema (i.e., something similar to the Denavit-Hartenberg parameters) of a humanoid robot. Learning is performed online, as partly described in the associated paper. There are two ways of updating the body schema. The first is to track a marker placed on a limb (for example, the hand): knowing the position of the marker with respect to the eyes, the system can update the entire kinematic chain going from the eyes to the marker. The second is to analyze the visual flow produced by head movements and update the kinematics of the head accordingly.




Mandatory parameters

Optional parameters

File format

Ports Accessed

Ports Created

Input ports

Output ports

Input Data Files

This file specifies the kinematic structure of the manipulator. The robot is described by a "kinematic tree", having (for example) the torso as root, where edges represent rigid connections (translations) and nodes represent joints (rotations); see the paper mentioned above for more details. The file is an XML file with the following structure (see conf/icub_head_right_arm_tree.xml for the file describing the kinematics of the iCub right arm and head):

 <Segment> r_sfe
   <Axis> 1 0 0 </Axis>
   <Range> -90 90 </Range>
   <Position> -100 0 0 </Position>
   <Segment> r_saa

In the example above, r_sfe is the name of the proximal joint. The <Position> tag gives the position of the joint with respect to the previous joint, in the zero joint position. Similarly, the <Axis> tag gives the rotation axis when all previous joints are in the zero position. <Range> provides the joint angle boundaries in degrees, and <Children> announces the next link. The tip of the manipulator is specified as an additional joint, but it has no <Axis> tag, only a <Position> tag.
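Putting the tags described above together, a minimal self-contained file for a one-joint chain plus tip might look like the sketch below. The joint names and numeric values are hypothetical, and the assumption that <Children> wraps nested <Segment> elements (with standard XML closing tags) is inferred from the description above, not taken from the actual conf file:

 <Segment> j1
   <Axis> 0 0 1 </Axis>
   <Range> -90 90 </Range>
   <Position> 0 0 0 </Position>
   <Children>
     <Segment> tip
       <Position> 0 0 -150 </Position>
     </Segment>
   </Children>
 </Segment>

Consult conf/icub_head_right_arm_tree.xml for the authoritative format.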

Output Data Files

Lines corresponding to the optical-flow-based update have the following format:

 "0 <angle1> <angle2> .. <angleN> <diff_angle1> <diff_angle2> .. <diff_angleN> <rotX> <rotY> <rotZ>" 

where <anglei> are the angles of the kinematic chain from <eye> to <world> at time t0, <diff_anglei> are small angular displacements of the same joints, and <rotX> <rotY> <rotZ> describe the rotation of the visual flow that this displacement caused. <rotX> <rotY> <rotZ> are the three independent quaternion components describing the rotation (i.e., the norm of the vector is sin(theta/2), where theta is the rotation angle).

Configuration Files

All the parameters can be written in a configuration file passed with the --file <config_file> argument. Such a configuration file may look like this:

 # initial body schema
 structure conf/icub_head_right_arm_tree.xml
 #name of joint attached to the eye
 #(here we don't use eye_pan to keep stereovision accurate
 eye eye_tilt
 # name of joint attached to the world
 world neck_tilt
 # name of joint where the stereovision system is attached
 stereo eye_tilt
 # name of joint where the marker is attached
 marker r_hand
 # most proximal joint of the arm
 arm_base r_sfe
 # most distal joint of the arm
 arm_end r_hand
 # most proximal joint of the head
 head_base neck_tilt
 # most distal joint of the head 
 head_end eyes
 # size of buffer used to store proprioceptive input
 bufferSize 200
 # distance between perceived and predicted marker position above which
 # perception is discarded as a vision outlier 
 threshold 500
 # file where to log the updates
 logfile data/test8.txt

Tested OS

Developed and tested on Linux, and partially on Windows. This module uses only standard C++, with no unusual libraries or commands, so it should be portable.

Example Instantiation of the Module

 ./body_schema_module --file conf/config_body_schema.ini --name /body_schema
 sleep 1
 yarp connect /icub/head/state:o /body_schema/proprioception_head
 yarp connect /icub/right_arm/state:o /body_schema/proprioception_arm
 yarp connect /stereovision/vision1:o /body_schema/vision:i

where /stereovision/vision1:o is assumed to provide the 3D position of the marker seen by the stereovision system (e.g., stereoVisualTracker). If you want to use the learning based on optical flow, you can also add:

 yarp connect /vFlow/visualRotation:o /body_schema/visual_rotation:i

where /vFlow/visualRotation:o outputs the rotation estimated from the optical flow (see visual_rotation).

Micha Hersch

CopyPolicy: Released under the terms of the GNU GPL v2.0.

This file can be edited at src/lasaBodySchema/include/BodySchemaLearning.h


Generated on Wed Sep 22 16:51:26 2010 for Body_Schema_Learning by  doxygen 1.6.1