Overview
Neural Motion Capturing
The following videos demonstrate networks that can capture a 'trained' motion and recall it later. With such networks, arbitrary gestures, motions and movements can be modelled on demand, e.g. in an ALEAR language game. This avoids time-consuming hand-programming of such motions: it is sufficient to show the robot the motion. Captured motions can also be transferred to fixed motion networks when those behaviors are needed later. The videos only show motion capturing with a single arm, but the network can be extended to all limbs of the robot. In principle, a fusion of motion mimicry and reactive, sensor-driven behavior (e.g. camera-guided motions) is also possible.

Important Note:
To view the networks you need either the Network Editor of the NERD Toolkit (NERD format *.onn) or an SVG-capable vector graphics program such as Inkscape. The scalable vector graphics images are comparatively large and require a fast computer to render in acceptable time!

Scalable Vector Graphics of full network
Full network in NERD Toolkit XML format
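As a minimal illustration of the capture-and-recall idea described above, the sketch below records a demonstrated joint-angle trajectory and replays it on demand. This is a plain trajectory buffer, not the neural network implementation used in the NERD Toolkit; the class and method names (MotionRecorder, capture, replay) are illustrative assumptions only.

```python
class MotionRecorder:
    """Sketch of capture-and-recall: buffer a demonstrated motion,
    then hand it back as target angles. Not the NERD neural network."""

    def __init__(self):
        self._frames = []       # recorded joint-angle snapshots
        self._recording = False

    def start(self):
        self._frames = []
        self._recording = True

    def capture(self, joint_angles):
        # Called once per control cycle while the motion is demonstrated.
        if self._recording:
            self._frames.append(list(joint_angles))

    def stop(self):
        self._recording = False

    def replay(self):
        # Yield the stored snapshots in order, so a controller can
        # feed them back as target angles for the limb.
        yield from self._frames


# Demonstrate a short two-joint arm motion (angles in degrees), then recall it.
recorder = MotionRecorder()
recorder.start()
for step in range(3):
    recorder.capture([10 * step, -5 * step])  # shoulder, elbow
recorder.stop()

recalled = list(recorder.replay())
print(recalled)
```

In the actual networks, the recalled trajectory would drive the arm's motor neurons rather than being printed; a captured buffer like this corresponds to what gets transferred into a fixed motion network.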
Contributors:
- Concepts, Neural Networks and Experiments:
- Christian Rempis
- Myon Humanoid Hardware:
- Institute: