Hey, just following up since I haven't posted in a while. I've been preoccupied with studying neuroevolution, and a lot has happened recently that's really changed the prospects. It turns out several people have attempted to use NEAT to evolve neural nets as humanoid ragdoll controllers. Tyler Streeter used NEAT and ODE to generate walking with great success. His method was unconventional in that his input neurons generated patterns rather than acting as sensors, and his neuron model was leaky with respect to time (meaning the strength of activation in a neuron fades gradually rather than resetting each step). One clever trick he used in another version was to sidestep the problem of constantly falling over by putting invisible springs on the body to prop it up, sort of like training wheels.
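To give a rough idea of what "leaky with respect to time" means, here's a minimal sketch of one way to model it: a neuron whose activation decays exponentially toward its current input. This is my own illustration with made-up names and parameters (`dt`, `tau`), not Streeter's actual implementation.

```python
import math

def leaky_update(activation, net_input, dt=0.01, tau=0.1):
    """One Euler step of a leaky integrator: the activation decays
    toward the current net input with time constant tau.
    (Names and constants are illustrative, not from Streeter's code.)"""
    decay = math.exp(-dt / tau)
    return decay * activation + (1.0 - decay) * net_input

# With zero input, a previously active neuron fades gradually
# toward zero instead of switching off instantly:
a = 1.0
for _ in range(100):
    a = leaky_update(a, 0.0)
```

The practical effect is a kind of short-term memory: briefly activated neurons keep influencing the network for a while, which is handy for generating rhythmic motions like walking.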
More recently, Petar Chervenski has implemented his own version of humanoid neuroevolution, incorporating HyperNEAT and, more recently, "novelty search".
A recently published paper suggests that fitness-based searches are actually less efficient and less successful overall than searches based strictly on the criterion of novelty. In particular, novelty search is supposed to avoid the pitfall of deceptive dead ends in the fitness landscape. There is a lively discussion over at the NEAT users group, and you can also read the papers on it if you like. If you want, you can mess around with a couple-of-weeks-old, semi-buggy version of Toribash novelty search; I posted a link to it there a while ago, although that version currently works under Linux only. I will fix this once I've settled on the code I'll be using for my Toribash AI, whenever that happens.
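The core idea of novelty search is simple: instead of scoring individuals by how well they do, you score them by how different their behavior is from everything seen so far, typically the average distance to the k nearest neighbors in an archive of past behaviors. Here's a toy sketch under my own assumptions (1-D behaviors, arbitrary parameters); real versions use richer behavior descriptors, like the ragdoll's final body position.

```python
import random

def novelty(behavior, archive, k=15):
    """Novelty of a behavior = mean distance to its k nearest
    neighbors in the archive of previously seen behaviors.
    (1-D behaviors and parameter choices are for illustration only.)"""
    if not archive:
        return float("inf")  # nothing seen yet: maximally novel
    dists = sorted(abs(behavior - b) for b in archive)
    k = min(k, len(dists))
    return sum(dists[:k]) / k

# Toy evolutionary loop: select for novelty instead of fitness.
random.seed(0)
archive = []
population = [random.uniform(0.0, 1.0) for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=lambda b: novelty(b, archive), reverse=True)
    archive.extend(ranked[:2])            # archive the most novel behaviors
    parents = ranked[:10]                 # most novel half reproduces
    population = [p + random.gauss(0.0, 0.1) for p in parents for _ in range(2)]
```

Because the selection pressure rewards being different rather than being good, the search keeps spreading through behavior space instead of piling up at a deceptive local optimum.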
The link for the NEAT discussion group is
http://tech.groups.yahoo.com/group/neat/
Right now I'm working on adapting the novelty search algorithm for parallelization in an unconventional way. If that pans out, I should be able to attack the Toribash problem with some real brute force. I'll post an update here when I have something to show you.