Neural Networks, Robotics, Evolutionary Algorithms, Neural Science

Personal Website of Christian Rempis

Recent Experiments and Projects

Learning in the
Sensori-Motor Loop

Fall 2012. In the context of the DFG priority program "Autonomes Lernen" (Autonomous Learning), we investigate mechanisms that enable recurrent neural networks to adapt the behavior of a situated animat. The plasticity is part of the long-term dynamics of the animat and is embedded into the sensori-motor loop of the neuro-control. The main challenge is to control the learning as part of the behavior and to find suitable network topologies that lead to the desired types of behavior. Learning here is not understood as an autonomous, multi-purpose mechanism able to solve all kinds of problems. Instead, the plasticity of the network is recognized as an essential part of a specific control network that leads to a single, specific behavioral expression (the picture shows how an obstacle avoidance behavior is learned from experience). The advantage does not lie in having a network capable of learning any kind of behavior, but in having networks that can run on different variants of similar (even morphing) animats in changing environments without further manual changes at the network level.
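As a toy illustration of the idea (not the networks used in the project), the following sketch embeds a local, Hebbian-like plasticity rule directly into the sensori-motor update of a two-sensor, two-motor animat: the avoidance synapses are strengthened whenever their sensor is active while the animat behaves, so the avoidance turns are learned from experience. All names and the learning rule are illustrative assumptions.

```python
import math

def step_controller(sensors, weights, rate=0.1):
    """One update of a minimal sensori-motor controller with online
    plasticity. 'sensors' are left/right obstacle signals in [0, 1]
    (1 = obstacle close); 'weights' feed each sensor to the opposite
    motor neuron and are adapted while the animat is behaving."""
    # Motor activations: tanh of a forward bias minus the avoidance input.
    left_motor = math.tanh(1.0 - weights[0] * sensors[1])
    right_motor = math.tanh(1.0 - weights[1] * sensors[0])
    # Local plasticity: a synapse grows toward 1 whenever its sensor is
    # active, so avoidance turns become more pronounced with experience.
    weights[0] += rate * sensors[1] * (1.0 - weights[0])
    weights[1] += rate * sensors[0] * (1.0 - weights[1])
    return left_motor, right_motor, weights
```

Because the rule only uses locally available signals, the same network can keep adapting when the body or environment changes, which is the point made above.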

Increasing the Variety
of Evolved Controllers

July 2012. In a simple locomotion scenario, we examine how different constraint masks can be used to systematically search for very specific, distinct control strategies. We can show that many very different controllers can be found that not only show different locomotion behaviors, but are also based on very different control principles.
Among others, the rolling behavior is generated by force control, by angular control, by coordination of oscillators, by traversing activation centers, by the use of different sensors, and much more. The experiments also demonstrate clearly how constraint masks allow a very specific search for concrete classes of network structures without losing the chance of getting interesting, novel and especially surprising results, despite the induced domain knowledge.
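To make the constraint-mask idea concrete, here is a minimal sketch (illustrative names, not the NERD/ICONE API): after each mutation, mirrored synapse pairs are forced to share one weight, so evolution only ever searches over symmetric controllers while the mutation operator itself stays generic.

```python
import random

def apply_symmetry_mask(weights, pairs):
    """Constraint mask sketch: for each (a, b) pair, synapse b is forced
    to copy synapse a, restricting the search to symmetric networks."""
    for a, b in pairs:
        weights[b] = weights[a]
    return weights

def mutate(weights, sigma=0.2, rng=random.Random(0)):
    """Generic Gaussian weight mutation, unaware of any constraints."""
    return [w + rng.gauss(0.0, sigma) for w in weights]

# Evolution-loop fragment: mutate freely, then re-impose the mask.
w = [0.5, -0.3, 0.5, -0.3]          # indices 0/2 and 1/3 are mirror pairs
w = apply_symmetry_mask(mutate(w), [(0, 2), (1, 3)])
```

Swapping in a different mask (e.g. sign constraints or cloned modules) steers the search toward a different structure class without touching the rest of the algorithm.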

Neural Motion-Capturing

April 2011. Short-term memory in artificial neural networks based on the activation dynamics of the networks is an interesting and useful concept. Because no plasticity mechanisms for changing synapses are needed, such a memory can be used on all robots that only support static neuro-controllers. In this experiment, a network was designed that "memorizes" motion trajectories demonstrated on a humanoid robot. These motions can then be replicated later on demand. Motion primitives can therefore be generated rapidly, simply by demonstrating the motions directly on the hardware. With an additional, optional plasticity rule, motions trained in this way can also be stored in the synaptic weights to make them persistent and reusable as behavioral building blocks.
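The core trick, memory without weight changes, can be sketched as follows (sizes and names are illustrative, not the actual network): a bank of neurons with identity self-connections latches one sample of the demonstrated trajectory each and simply holds it across arbitrarily many network updates.

```python
def record_and_replay(samples, n=8):
    """Motion memory held purely in activation dynamics: each of n
    neurons latches one trajectory sample; a self-connection of 1.0
    then preserves the activation with no synaptic change at all."""
    bank = [0.0] * n
    # Recording phase: a moving write gate copies samples into the bank.
    for i, angle in enumerate(samples[:n]):
        bank[i] = angle                 # gate open: sample latched
    # Idle phase: 100 network updates with all gates closed; the
    # identity self-connections keep every activation constant.
    for _ in range(100):
        bank = [1.0 * a for a in bank]
    return bank                         # replay reads the bank back out
```

A real controller would add read gates that play the bank back as motor targets; the point here is only that the stored values survive the network dynamics unchanged.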

Six-Legged Walking

September 2012. This series of evolution experiments examines three aspects of walking with a 6-legged insect-like animat: How can the walking of the legs be realized in the sensori-motor loop? How are the separate legs coupled to generate the desired walking pattern? And how can the evolution be influenced using constraint masks?

The preliminary results suggest that, with proper constraint masks, evolution can find numerous variants of such walking behaviors. The connectivity is often quite dense, and the robustness with respect to external disturbances (e.g. obstacles) is often much higher compared to manually constructed networks based on simple coordination models. Stay tuned for updates on the results.
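One of the simple coordination models such evolved networks are compared against can be sketched as six coupled phase oscillators, one per leg, with phase offsets that lock the legs into a tripod gait. This Kuramoto-style sketch is an assumption for illustration, not one of the evolved controllers.

```python
import math

def tripod_gait_phases(steps=2000, dt=0.01, k=2.0):
    """Six coupled leg oscillators: desired phase offsets pull the legs
    into a tripod gait with tripod A = legs 0, 3, 4 and B = legs 1, 2, 5
    in antiphase. Returns the final leg phases after 'steps' updates."""
    offsets = [0.0, math.pi, math.pi, 0.0, 0.0, math.pi]  # target phases
    phases = [0.3, 0.1, 0.5, 0.2, 0.4, 0.0]               # arbitrary start
    omega = 2 * math.pi                                    # 1 Hz step cycle
    for _ in range(steps):
        new = []
        for i in range(6):
            # Each leg is pulled toward the common relative phase.
            coupling = sum(math.sin((phases[j] - offsets[j])
                                    - (phases[i] - offsets[i]))
                           for j in range(6))
            new.append(phases[i] + dt * (omega + k * coupling))
        phases = new
    return phases
```

Such a hand-built model walks, but as noted above it tends to be less robust against disturbances than the densely connected evolved networks.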

Big Alice: A Sensor-Rich
Benchmark Animat

October 2012. Examining learning with artificial neural networks in the sensori-motor loop requires a whole set of different experiments that cover different domains of learning. For this, a benchmark animat was designed that can be used in many such experiments. A design goal of the animat was the availability of many motors and sensors that interact in a complex way through the body. This allows experiments focusing on prediction learning, (anti-)correlation, sensor fusion, homeostatic body regulation and much more. On the other hand, the animat should be simple enough that experiments can range from simple scenarios to complex learning tasks. Meeting all these demands, the BigAlice animat, with its 15 selectively choosable sensor types and multiple actuators and motors (~70 sensor neurons, ~20 motor neurons), allows the comparison of different approaches in different domains of learning on a single comparative animat.

Locomotion of a
Closed-Chain Animat

January 2012. A closed-chain animat with 10 to 15 connected segments is used as a platform to evolve neuro-controllers for a rolling locomotion. The animat provides a variety of sensors (angle, 3D acceleration, 3D gyroscope, force) and motors (angle and torque) per segment, which makes it a difficult task for neuro-evolution approaches. Network sizes range from 100 to 200 neurons and allow over 30,000 synapses. The use of constraint masks during evolution (ICONE method) still allows interesting and efficient locomotion behaviors to be evolved successfully.

NERD Toolkit: The Neurodynamics and Evolutionary Robotics Development Toolkit

2008 - now. The development of the Neurodynamics and Evolutionary Robotics Development Toolkit has been one of my major, long-lasting projects of the last few years. This software package provides a comprehensive set of applications and tools required for a wide variety of experiments in the domains of neuro-evolution, neuro-robotics and neuro-control.
The NERD Toolkit provides a physical simulator, various neuron models and multiple neuro-evolution methods, including the novel ICONE method (see article at the right).

ICONE: Interactively Constrained Neuro-Evolution

2009 - now. Evolving neuro-controllers for complex robots frequently involves large search spaces that prevent successful evolution experiments due to the sheer number of parameters that have to be optimized simultaneously. Evolution in this domain has therefore mostly been applied to quite small networks realizing simple behaviors. To overcome this limitation and to allow larger, more complex networks to evolve successfully, the ICONE method has been proposed. In addition to the usual measures that define an evolution experiment (simulation scenario, fitness function, evolution parameters), ICONE introduces so-called constraint masks to specify dependencies as constraints on the evolving networks (see figure for examples).
Such constraint masks can easily be defined directly in the networks and allow the induction of almost arbitrary domain knowledge about the expected network topologies. During evolution, all evolving networks are guaranteed to remain on the well-defined parameter manifold described by the constraint masks. This can massively restrict the search space, so that larger networks evolve in search spaces that are still manageable by the evolution algorithm. In addition to an increased success rate, ICONE also allows a systematic search for specific network topologies, network organizations and variations in a fast and intuitive way. This makes the method a powerful tool for neuro-robotics and neurocybernetics experiments and enables a whole range of novel evolution experiments for the current class of complex robots.
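The guarantee that every network stays on the constrained manifold can be sketched as a resolution loop (illustrative code, not the actual NERD implementation): after each mutation, every constraint rewrites the network in turn until all of them hold simultaneously.

```python
def resolve_constraints(weights, constraints, max_iters=20):
    """After a mutation, repeatedly apply every constraint until a
    fixpoint is reached, i.e. until all constraints are satisfied at
    once; the individual then lies on the constrained manifold again."""
    for _ in range(max_iters):
        changed = False
        for constraint in constraints:
            changed |= constraint(weights)
        if not changed:
            return weights          # fixpoint: all constraints hold
    raise RuntimeError("constraints did not converge")

def mirror(src, dst):
    """Example constraint: synapse dst must always equal synapse src.
    Returns True if it had to change the network."""
    def apply(w):
        if w[dst] != w[src]:
            w[dst] = w[src]
            return True
        return False
    return apply
```

Chained constraints interact: with mirror(0, 1) and mirror(1, 2), a change to weight 0 propagates to weights 1 and 2 over two resolution passes, which is why the loop iterates to a fixpoint instead of applying each constraint once.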

Older Projects

Short-Term Memory
in Recurrent Neural Networks

February 2007. Adaptability and learning as part of a working memory allow an agent to memorize relevant features of its environment and to adapt in a beneficial way. Such memory is assumed to be realizable without adaptation of weights or the network structure, purely in the activation dynamics of the neuro-controllers. In this series of experiments, multiple such mechanisms have been designed and tested in various scenarios. The agents memorize orders, sequences, sets and distances as part of their behavior. The memory is realized with different approaches, such as hystereses, associative memory banks, ring networks, comparison of oscillator differences, simplified neural fields, etc.
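The hysteresis approach mentioned above can be shown in a few lines (parameters chosen for illustration): a tanh neuron whose self-connection is larger than 1 is bistable, so a brief input pulse switches and latches its state without any weight change.

```python
import math

def hysteresis_neuron(inputs, w_self=2.0):
    """One-bit memory in activation dynamics: with self-weight > 1 the
    neuron has two stable states near +1 and -1; a strong input pulse
    flips it, and the state persists after the input is gone."""
    out = 0.0
    trace = []
    for x in inputs:
        out = math.tanh(w_self * out + x)
        trace.append(out)
    return trace
```

Because nothing but the activation changes, this mechanism runs unmodified on robots whose controllers only support static weights, as discussed in the Neural Motion-Capturing entry above.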

Walking with the A-Series Humanoid

Fall 2010. In this experiment, a walking behavior for a non-simplified, physical humanoid robot has been designed with the constrained ICONE neuro-evolution approach. The A-Series robot has been modelled accurately in simulation to allow the necessary evolution on a computer cluster. The network has been highly constrained with constraint masks to restrict the search space to a feasible complexity domain. The evolved controllers have then been adapted and transferred successfully to the physical hardware.

Neuro-Controllers for Effective Pointing with Humanoid Robots

March 2011. The goal of this behavior controller is to enable a humanoid robot with two arms to effectively point at specific locations in space. In the context of the ALEAR project, this is an essential prerequisite for the language games, in which the robots have to point at objects in their environment while they talk about them and develop a vocabulary for the objects and their spatial relations. The pointing behavior should be implemented in such a way that the high-level information of the external vision sensor drives the pointing behavior. The robots should also choose the pointing arm efficiently, to minimize motions. Especially in front of the robot, where both arms can point at the same locations, the arms should only be switched when this is required; otherwise, the currently pointing arm should be moved to point at the new location. The behavior has been implemented for the A-Series and the Myon humanoid robots, both in simulation and on the hardware.
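The switch-only-when-required rule can be summarized in a small decision sketch (coordinate frame, reach value and function names are assumptions, not the evolved neuro-controller): within the shared frontal region both arms could point, so the current arm keeps pointing and switches happen only for targets it cannot reach.

```python
def choose_pointing_arm(target_x, current_arm, reach=0.6):
    """target_x: lateral target position in the robot's frame (negative
    = robot's left). Each arm covers its own side plus 'reach' across
    the midline; inside that shared region the current arm is kept."""
    if current_arm == "left" and target_x <= reach:
        return "left"               # still reachable: no switch
    if current_arm == "right" and target_x >= -reach:
        return "right"
    # Target is outside the current arm's range: forced switch.
    return "left" if target_x < 0 else "right"
```

The hysteresis in this choice is what minimizes arm motions during a sequence of pointing targets in front of the robot.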