ES Recruitment Drive
ya... I like it... I wish I could fight like the robot NEAT guy
Read most of the thread. I have to say that I find the project extremely interesting (and the miscommunications at the beginning extremely entertaining). AI has always been one of my interests, though I sadly wasn't able to take the more advanced AI courses before I graduated college. I'll definitely be watching this progress with a lot of interest.

Honestly, I think your biggest obstacle is convergence speed (as you mentioned). I'm a little surprised that it doesn't parallelize at all; I would expect most modern evolutionary algorithms to parallelize to at least some extent (with a copy of the simulation on each machine, it seems like it should be a fairly easy task). I have a feeling it's possible to get natural-looking motions, but given the huge number of movements that have to be perfect, I honestly think it's for all intents and purposes infeasible without significantly more computing power. In any case, I'm greatly looking forward to seeing more progress.
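To illustrate the parallelization point: the expensive step in most evolutionary algorithms is fitness evaluation, and since each individual is evaluated in an independent copy of the simulation, that step maps cleanly onto a worker pool. This is a minimal sketch; the names (`evaluate_fitness`, the mutation scheme) are hypothetical stand-ins, not anything from the actual project.

```python
# Sketch: parallel fitness evaluation in a toy evolutionary algorithm.
# evaluate_fitness stands in for an expensive simulation rollout.
import random
from multiprocessing import Pool

def evaluate_fitness(genome):
    # Placeholder "simulation": score the genome by the sum of its weights.
    return sum(genome)

def evolve(pop_size=16, genome_len=8, generations=5, workers=4):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            # The expensive part parallelizes trivially: each worker
            # evaluates individuals in its own copy of the "simulation".
            fitnesses = pool.map(evaluate_fitness, population)
            ranked = [g for _, g in sorted(zip(fitnesses, population),
                                           key=lambda p: p[0], reverse=True)]
            # Keep the top half, refill with mutated copies of the survivors.
            survivors = ranked[:pop_size // 2]
            children = [[w + random.gauss(0, 0.1) for w in g]
                        for g in survivors]
            population = survivors + children
    return max(population, key=evaluate_fitness)
```

Only the evaluation loop is parallel here; selection and mutation are cheap enough to stay serial, which is why the speedup scales with how costly each simulation rollout is.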
Just a late addition: you should note that if you don't restrict it to changing its joint states only once every frame (or every x frames), it's not going to be effective at all in real TB situations.
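That restriction can be enforced with a thin wrapper that only queries the controller on frame boundaries and holds the last joint states in between. This is just an illustrative sketch; `HOLD_FRAMES` and the `policy` callable are assumptions, not part of the actual script.

```python
# Sketch: only let the controller change joint states every HOLD_FRAMES
# frames, holding the previous action in between.
HOLD_FRAMES = 10

class HeldController:
    def __init__(self, policy):
        self.policy = policy          # maps an observation to joint states
        self.frame = 0
        self.current_action = None

    def act(self, observation):
        # Query the policy only on frame boundaries; otherwise
        # keep returning the held joint states.
        if self.frame % HOLD_FRAMES == 0 or self.current_action is None:
            self.current_action = self.policy(observation)
        self.frame += 1
        return self.current_action
```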
Hey, some follow-up, since I haven't posted in a while. I've been preoccupied with studying neuroevolution; a lot has happened in the past little while that's really changed the prospects. It turns out several people have attempted to use NEAT to evolve neural nets as humanoid ragdoll controllers. Tyler Streeter used NEAT and ODE to generate walking with great success; his method was unconventional in that he had the input neurons generate patterns rather than act as sensors, and his neuron model was leaky with respect to time (meaning the strength of activation in a neuron fades gradually). One clever trick he used in another version was to bypass the problem of constantly falling over by putting invisible springs on the body which propped it up, sort of like training wheels.
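For anyone unfamiliar with the leaky idea: instead of replacing a neuron's activation each step, part of the old activation is carried over and fades gradually. This is a minimal sketch of that general concept; the decay constant and update rule are illustrative, not Streeter's exact formulation.

```python
# Sketch: a leaky (time-decaying) neuron. Activation fades gradually
# rather than being recomputed from scratch each step.
import math

class LeakyNeuron:
    def __init__(self, decay=0.9):
        self.decay = decay        # fraction of activation retained per step
        self.activation = 0.0

    def step(self, weighted_input):
        # Blend the faded old activation with the new input,
        # then squash to get the output signal.
        self.activation = self.decay * self.activation + weighted_input
        return math.tanh(self.activation)
```

The practical effect is a kind of short-term memory: a burst of input keeps influencing the neuron's output for several frames afterward, which suits continuous motor control better than memoryless activations.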

More recently, Petar Chervenski has implemented his own version of humanoid neuroevolution, incorporating HyperNEAT and, more recently, "novelty search".

A paper was published recently suggesting that fitness-based searches are actually inefficient and less successful overall than searches based strictly on the criterion of novelty. Specifically, it's supposed to avoid the pitfall of deceptive dead ends in the fitness landscape. There is a lively discussion over at the NEAT users group, and you can also read the papers on it if you like. You can also mess around with a couple-of-weeks-old, semi-buggy version of Toribash novelty search; I posted a link there a while ago, although that version currently works under Linux only. I will fix this in the future, once I've settled on the code I'll be using to build my Toribash AI, whenever that happens.
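The core of novelty search is the scoring: instead of a task fitness, each individual is scored by how far its behavior lies from previously seen behaviors, typically the mean distance to its k nearest neighbors in a behavior archive. A minimal sketch of that metric (the choice of k and the archive policy are illustrative assumptions):

```python
# Sketch: novelty score as mean distance to the k nearest behaviors
# in an archive of previously seen behavior descriptors.
import math

def novelty(behavior, archive, k=3):
    """Mean Euclidean distance to the k nearest behaviors in the archive."""
    if not archive:
        # Nothing seen yet: maximally novel.
        return float("inf")
    dists = sorted(math.dist(behavior, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))
```

Selection then favors high-novelty individuals, and sufficiently novel behaviors are appended to the archive, so the search keeps pushing into unexplored regions of behavior space rather than climbing a possibly deceptive fitness gradient.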
The link for the neat discussion group is
http://tech.groups.yahoo.com/group/neat/

Right now I'm working on adapting the novelty search algorithm for parallelization in an unconventional way, and if that pans out I should be able to attack the Toribash problem with some real brute force. I will post an update here when I have something to show you.
I think you should just release an alpha build of your script or maybe even a video. Something!