NERD Toolkit

Neurodynamics and Evolutionary Robotics Development Toolkit



Media

This page contains videos and screenshots of the NERD applications and will be updated frequently.

Available media files:

Closed-Chain-Animat
M-Series
A-Series
Simulator Model A-Series

Neural Behaviors for the M-Series Humanoid

M-Series Grasping


simpleHandGrasping.avi Simple Grasping Motion

This video demonstrates a simple grasping motion in simulation.
GraspingAndPutting_Myon.avi GraspingAndPutting2_Myon.avi Grasping Motions on the Myon Hardware

In this behavior the robot grasps an object previously specified by the user. The behavior does not use camera feedback.

M-Series Gestures

Gestures_Myon.avi Gesture Library on the Hardware

This neuro-controller provides several gestures (hand waving, arm waving, ...). The behaviors can be switched with the activity of a single control neuron. At the beginning and the end of the video one can see the automatic posture calibration, which uses the acceleration sensors of the robot to correct small misadjustments of the joint angle sensors. That way the behavior can be used without the need to recalibrate the joint angle sensors of the robot.
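The idea of switching behaviors with a single control neuron can be pictured with a small sketch. This is illustrative Python, not NERD code; the gesture names and the equal-sized activation bins are assumptions chosen for demonstration:

```python
# Illustrative sketch (not NERD code): selecting one of several gesture
# sub-behaviors from the activation of a single control neuron in [0, 1].
# Gesture names and the equal-sized activation bins are assumptions.

GESTURES = ["idle", "hand_waving", "arm_waving", "saluting", "bowing"]

def select_gesture(control_activity: float) -> str:
    """Map the control neuron's activation onto one gesture slot."""
    clipped = min(max(control_activity, 0.0), 1.0)
    index = min(int(clipped * len(GESTURES)), len(GESTURES) - 1)
    return GESTURES[index]

if __name__ == "__main__":
    for activity in (0.05, 0.3, 0.55, 0.75, 0.95):
        print(f"control activity {activity:.2f} -> {select_gesture(activity)}")
```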
GesturesWithNet.avi Extended Gesture Library using an Alternative Arm

The neuro-controller shown in this video uses an alternative arm version of Myon that can be built as part of the modular extension paradigm of Myon. This arm has its elbow joint rotated by 90 degrees and thus allows different gestures. Because this arm has not been built yet, the behavior only works in simulation. The neuro-controller provides all presented gestures and motions. They can be selected with the activity of a single control neuron. The videos below show the individual gestures separately.
HandWaving.avi

Hand Waving

Aerobics.avi

Aerobics

ArmWaving.avi

Arm Waving

BellyDance.avi

Belly Dance

Bowing.avi

Bowing

CrossArms.avi

Arm Crossing

Idle.avi

Idle...

Saluting.avi

Saluting

Swearing.avi

Swearing

M-Series Walking

MSeriesWalking.avi Simple Reflex Walking in Simulation

This behavior uses reflex loops to keep the robot swinging from one leg to the other and thereby move forwards.
PDW_Hanging_Myon.avi PDW_Simulation_Cantilever.avi PDW_Simulation_Colored.avi Reflex Walking with Energy Efficient Joint Release

This walking behavior uses the so-called release mode of Myon to decouple the motors in certain phases of the behavior and thus to let the joints swing freely where possible. This leads to a more natural looking walking gait and saves energy. The phases can be seen in the third video, where the limbs are color-coded to indicate torque in the positive direction, torque in the negative direction, and the release mode. The behavior could not be transferred to the physical machine yet. A first impression is shown in the first video, where the robot is hanging without ground contact.
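The phase-dependent switching between the three drive states can be illustrated with a minimal sketch. The knee joint, the phase boundaries and the mapping to gait phases are assumptions for illustration only; this is not the actual neuro-controller:

```python
# Illustrative sketch (not the NERD controller): driving one joint through the
# three drive states used in the release-mode walking, keyed to the gait phase.
# Joint choice and phase boundaries are assumptions for demonstration only.

from enum import Enum

class DriveState(Enum):
    TORQUE_POSITIVE = "torque+"   # motor drives the joint in positive direction
    TORQUE_NEGATIVE = "torque-"   # motor drives the joint in negative direction
    RELEASE = "release"           # motor decoupled, joint swings freely

def knee_drive_state(gait_phase: float) -> DriveState:
    """Return the drive state of the swing-leg knee for a gait phase in [0, 1)."""
    phase = gait_phase % 1.0
    if phase < 0.4:
        return DriveState.TORQUE_POSITIVE   # stance: support the body weight
    if phase < 0.8:
        return DriveState.RELEASE           # swing: let the lower leg pendulate
    return DriveState.TORQUE_NEGATIVE       # late swing: brake before touch-down

if __name__ == "__main__":
    for step in range(10):
        phase = step / 10
        print(f"phase {phase:.1f}: {knee_drive_state(phase).value}")
```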

M-Series Crawling

MSeriesCrawler.avi CrawlingStandUp_Simulation.avi Crawling on Elbows and Hands. Standing up.

This is a simple crawling behavior. The first video shows crawling on the elbows, the second video crawling with straight arms. The second video also shows a stand-up behavior. To stand up, the robot currently requires an object as an aid. The neural network detects the object and uses it to lift the body weight upwards. The color coding of the right arm indicates the state detections of the network: if green, the network has detected a reliable object that can be used to stand up.

M-Series Robust Standing

M-SeriesPoinint.avi
stability.avi

M-Series Pointing

Pointing_1D_Myon.avi Pointing at Objects of Interest

Pointing behavior on the M-Series hardware (Myon). This behavior points one arm in the direction the robot is looking. The speed of the movements can be influenced, leading to a smoother or more rapid, feedback-loop-controlled motion. The behavior can be stopped, here by tilting the head to the side, which leads to a controlled lowering of both arms. Also, the current pointing direction can be fixed, which allows the head to be moved elsewhere, e.g. to search for other objects. Furthermore, the behavior always selects the most suitable arm to point at the object, trying to change the pointing hand as seldom as possible. The behavior can be controlled via interface neurons, here realized with a small control box that allows setting the activation of such interface neurons with potentiometers.
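Changing the pointing hand as seldom as possible amounts to a selection with hysteresis. The following sketch only illustrates that idea; the angle convention, the switching margin and the class interface are assumptions, not part of the actual network:

```python
# Illustrative sketch: choosing the pointing arm with hysteresis so the hand is
# switched as seldom as possible. Angle convention and margin are assumptions.

class ArmSelector:
    """Keep the current arm unless the target is clearly on the other side."""

    def __init__(self, switch_margin_deg: float = 15.0):
        self.switch_margin = switch_margin_deg
        self.current_arm = "right"

    def update(self, target_azimuth_deg: float) -> str:
        # Negative azimuth: target on the robot's left; positive: on its right.
        if self.current_arm == "right" and target_azimuth_deg < -self.switch_margin:
            self.current_arm = "left"
        elif self.current_arm == "left" and target_azimuth_deg > self.switch_margin:
            self.current_arm = "right"
        # Inside the margin the previously used arm is kept (hysteresis).
        return self.current_arm

if __name__ == "__main__":
    selector = ArmSelector()
    for azimuth in (20, 5, -5, -10, -30, -5, 10, 20):
        print(f"target at {azimuth:+d} deg -> point with {selector.update(azimuth)} arm")
```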
Pointing_1D_Myon.avi Pointing at Given Coordinates (With Posture Calibration)

In this pointing behavior the x and y position of an object can be set as neural activity with the control box using potentiometers (or by higher control instances). The x and y positions are absolute positions of the object relative to the robot body. The network is trained to translate these coordinates into the corresponding joint angle positions. As in the previous behavior, the pointing arm is switched as seldom as possible.

At the beginning of the video the automatic posture calibration is shown. This sub-behavior automatically calibrates the posture of the robot according to its acceleration sensors, correcting small errors in the angle sensor settings. That way the user can simply start the robot without the need to recalibrate the angle sensors first.
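The principle behind this calibration can be sketched as follows: while the robot holds a known reference posture, the gravity direction measured by an accelerometer is compared with the direction expected for that posture, and the difference is stored as an offset for the joint angle sensor. The single-axis simplification and all names below are assumptions; this is not the actual sub-behavior:

```python
# Sketch of the posture calibration idea (not NERD code): derive a joint angle
# sensor offset from the mismatch between the expected and the measured body
# tilt while the robot holds a known reference posture.

import math

def calibrate_joint_offset(measured_accel_xz: tuple[float, float],
                           expected_tilt_deg: float) -> float:
    """Return the correction (degrees) to add to the raw joint angle reading."""
    ax, az = measured_accel_xz
    # Tilt of the limb relative to gravity, reconstructed from the accelerometer.
    observed_tilt_deg = math.degrees(math.atan2(ax, az))
    return expected_tilt_deg - observed_tilt_deg

if __name__ == "__main__":
    # Example: upright reference posture (expected tilt 0 deg); the accelerometer
    # measures a slight forward lean, so a compensating offset is computed.
    offset = calibrate_joint_offset((0.05, 0.99), expected_tilt_deg=0.0)
    print(f"apply offset of {offset:.1f} deg to the joint angle sensor")
```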
M-SeriesPoinint.avi M-SeriesPoinint2.avi Simple Pointing in Simulation

This simple network shows the first realization in simulation.

Neural Motion Capturing

The following videos demonstrate networks with the capability to capture a 'trained' motion and to recall that motion later. With such networks, arbitrary gestures, motions and movements can be modelled when they are required, e.g. in an ALEAR language game. This avoids time-consuming programming of such motions: it is sufficient to show the robot the motion. Captured motions can also be transferred to fixed motion networks when such behaviors are needed later. The videos only show motion capturing with a single arm, but the network can be extended to all limbs of the robot. In principle, a fusion of motion mimicry and reactive, sensor-driven behavior (e.g. camera-guided motions) is also possible. A small sketch of the capture-and-replay idea follows after the video list below.
ArmMovement_MotionCapture1_Myon.avi ArmMovement_MotionCapture2_Myon.avi ArmMovement_MotionCapture3_Myon.avi
ArmMovement_MotionCapture4_Myon.avi ArmMovement_MotionCapture5_Myon.avi GraspingObject_MotionCapture1_Myon.avi
GraspingObject_MotionCapture2_Myon.avi GraspingObject_MotionCapture3_Myon.avi GraspingObject_MotionCapture4_Myon.avi
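The following sketch is only a conceptual illustration of capture and replay, not the neural implementation used in the videos; the robot interface functions are placeholders:

```python
# Conceptual sketch of capture and replay (not the neural network in the videos):
# while the arm is guided by hand, joint angles are sampled once per control
# step; on replay the recorded frames are sent back as target angles.

from typing import Callable, List

def capture_motion(read_joint_angles: Callable[[], List[float]],
                   steps: int) -> List[List[float]]:
    """Record one joint-angle frame per control step while the arm is moved."""
    return [read_joint_angles() for _ in range(steps)]

def replay_motion(trajectory: List[List[float]],
                  set_joint_targets: Callable[[List[float]], None]) -> None:
    """Feed the recorded frames back to the joints, one frame per control step."""
    for frame in trajectory:
        set_joint_targets(frame)

if __name__ == "__main__":
    # Dummy interface: a two-joint arm that is slowly lifted during capture.
    state = [0.0, 0.0]

    def read() -> List[float]:
        state[0] += 0.10
        state[1] += 0.05
        return list(state)

    def write(targets: List[float]) -> None:
        print("replayed targets:", [round(t, 2) for t in targets])

    recorded = capture_motion(read, steps=5)
    replay_motion(recorded, write)
```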


Neural Behaviors for the A-Series Humanoid

These videos show some behaviors evolved for the A-Series humanoid robot.

Walking with Acceleration Sensors

ASeries_walkingFront_icone.avi ASeries_walkingFront2_icone.avi ASeries_walkingSide_icone.avi For this motion the robot initially has to be moved by an additional neural structure or by a human to trigger the walking behavior.

Stable Standing on Shaking and Tilting Platforms

ASeries_standShaking.mpg ASeries_standOnTiltedPlatform..avi These neural networks stabilize the A-Series robot to counteract shocks and tilting of the ground.
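The stabilization can be pictured as a feedback loop from the measured body tilt to the ankle joints. The sketch below only illustrates that idea; the evolved networks are more involved, and the gains and the single-axis simplification are assumptions:

```python
# Illustrative balance loop (not the evolved A-Series network): the measured body
# tilt and its rate of change are fed back to the ankle joints so the robot leans
# against disturbances. Gains and the single-axis simplification are assumptions.

def ankle_correction(tilt_deg: float, tilt_rate_deg_s: float,
                     kp: float = 1.2, kd: float = 0.3) -> float:
    """Ankle angle change (degrees) that counteracts the measured tilt."""
    return -(kp * tilt_deg + kd * tilt_rate_deg_s)

if __name__ == "__main__":
    # The platform suddenly tilts the robot 5 degrees forward.
    print(f"ankle correction: {ankle_correction(5.0, 2.0):.2f} deg")
```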

Gesture Library

ASeries_gestureLibrary.avi This single neural network contains a number of gestures and motions that can be triggered by changing a single bias value.

Standing Up

ASeries_getUp.avi ASeries_getUp_side.avi When lying on the ground, this neural network can be used to make the robot stand up again.

Pointing to Specific Directions

ASeries_pointer.avi In this network a desired pointing direction can be given by setting bias values. The robot determines autonomously which hand to use. In the redundant area in front of the robot, the arm that is currently in use is preferred.


Behavior Comparison of NERD A-SeriesSimulator with the A-Series Humanoid

The A-Series humanoid robots are based on the ROBOTIS Bioloid Construction Kit. The A-SeriesSimulator tries to simulate this robot closely enough to transfer behaviors between the simulation and the humanoid robot. The goal is to adapt the motor parameters to achieve a behavior close to that of the physical robot. The simulator will then be used to evolve sensor-driven neuro-controllers for the humanoids that should be transferable to the physical robots.

The following movies show a direct comparison of the simulated and the physical robot. For this, a tool called MotionEditor was used to control both the simulated and the physical robot simultaneously. The simulator (shown on the TFT screen behind the robot) runs at approximately real time to allow a direct comparison of the behaviors.
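The motor parameter adaptation described above can be sketched as a simple fitting problem: choose the simulated motor parameters that minimize the difference between a simulated joint trajectory and one recorded on the hardware. The sketch below uses a toy first-order motor model and a plain grid search; it is an assumption about the procedure, not the actual tooling:

```python
# Sketch of the parameter adaptation idea (an assumption, not the actual tools):
# search over simulated motor parameters so that the simulated joint trajectory
# matches a trajectory recorded on the physical robot.

from typing import List

def trajectory_error(simulated: List[float], recorded: List[float]) -> float:
    """Mean squared difference between simulated and recorded joint angles."""
    return sum((s - r) ** 2 for s, r in zip(simulated, recorded)) / len(recorded)

def simulate_motor(gain: float, steps: int = 50) -> List[float]:
    """Toy first-order motor model: the angle approaches a 1.0 rad target."""
    angle, trace = 0.0, []
    for _ in range(steps):
        angle += gain * (1.0 - angle)
        trace.append(angle)
    return trace

if __name__ == "__main__":
    recorded = simulate_motor(gain=0.12)           # stands in for hardware data
    candidates = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20]
    best = min(candidates,
               key=lambda g: trajectory_error(simulate_motor(g), recorded))
    print(f"best matching motor gain: {best}")
```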

StandUp (Keyframe Motion)

ORCS_standUpSequence.AVI ORCS_standUpSequence2.AVI This motion net enables the robot to stand up when lying on its front. The videos show two trials from different perspectives.

SitUp (Keyframe Motion)

ORCS_standUpSequence.AVI This motion net makes the robot sit up when lying on its back.

Moving Waist Down and Up (Keyframe Motion)

ORCS_waistMovement.AVI ORCS_waistMovement2.AVI This motion net moves the robot's waist down to the maximum and then sets the desired waist motor angle back to fully up. Due to the weight of the robot and the motor limitations, neither the simulated nor the physical robot can get into the upright position again. The videos show two trials from different perspectives.

Simple Walking (Keyframe Motion)

ORCS_walk.AVI This motion net makes the robots perform a simple walking pattern by tilting them to the left and to the right successively.