Originally Posted by Solax View Post
So we're stuck at the first hurdle there.
Two possible solutions to kick off that first step:
1. Community-source the replays. Get as many people as possible to upload all their replays that meet the parameters of what we're looking for. The main issue is that there probably still wouldn't be anywhere near enough data, and it requires effort for people to contribute. Even if you made a script to crawl the replays section, grab every replay and later filter them for the desired parameters, you'd probably still be short.

2. Create a script that connects to the Toribash servers and "watches" games being played in rooms. This involves knowing the protocol used to connect to the servers and reading game data into a usable format that we can then use to train our machine. I did look into this a while back, and I think it should be possible with a bit of work. Then let it run, recording games of the mod you want until you have enough data.

The issue is that in both scenarios the machine doesn't learn from itself at all. While "studying" replays isn't half a bad idea, it means the machine could pick up just as many bad tactics and moves as good ones, which would get it nowhere.

Starting off by sourcing a few thousand replays is definitely the right thing to do, but it's only the first and smallest step. The machine would still have to play tens of thousands of games to "realise" which moves are superior to others or how to counter a certain move. The "countering" part would probably be extremely hard to overcome because it requires acting against what the ghost is suggesting. It creates a massive spiral of checking previous games to guess what the opponent is going to do, and even then all it can act against is the most statistically probable outcome. Anything out of the ordinary the opponent does instantly throws off the AI, because it can no longer act on statistical probability.

Idk, to me it seems like all too much for a system to handle at this point. Maybe in 10-20 years this would be possible on a home computer, but right now it would require loads and loads of processing power. This sort of AI would have to update its database constantly, with no end to it, since there are infinite sets of moves that can be played every match.
Last edited by Smaguris; Jul 5, 2018 at 11:45 PM.
I'm a software engineer, and creating an AI for TB really depends on what information you can extract from the API, if there even is one.
You have to take into account the four possible states of each joint to map what might happen over the next x frames the mod uses per turn, plus you have to factor in collision effects, but otherwise there is a limited number of states (4^40 = 1,208,925,819,614,629,174,706,176, basically a metric fuckton) you'd have to investigate if you want your bot to have the best winrate possible.
If it were possible to do this in a reasonable amount of time, what you'd need to do is see which state gets you closer to victory in the majority of scenarios (4^20 outcomes per state you can choose, 4^40 in total), be that more points or your opponent getting closer to disqualification, and just roll with that. Optimal (short-term) play by the bot would follow. You can improve on this by calculating each possibility for the next turn, and the next turn, and so on, and choosing your move based on that extended map instead.
Really resource-intensive, but this bot could play TB even if the guy making it couldn't.
You can simplify this model by introducing openers / pre-set moves, limiting the scenarios you investigate per turn, etc.

Edit: fractures and DMs obviously reduce the number of calculations needed.
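Enumerating all 4^20 of your own joint settings each turn is hopeless, but the "limit the scenarios you investigate" simplification is easy to sketch. Below is a minimal, hypothetical Python version: it assumes a `score()` evaluator that would roll the engine forward one turn (stubbed here with a placeholder), samples a bounded number of random joint settings, and keeps the best one.

```python
import random

JOINTS, STATES = 20, 4   # the 4^40 figure above assumes 20 controllable joints per player

def score(my_states, opp_states):
    """Hypothetical evaluator: in a real bot this would run the engine
    forward one turn and return points gained minus disqualification risk.
    Stubbed here with an arbitrary placeholder."""
    return -sum(abs(a - b) for a, b in zip(my_states, opp_states))

def pick_move(opp_guess, samples=1000):
    """Instead of enumerating all 4^20 joint settings, sample a few
    at random and keep the best-scoring candidate."""
    best, best_score = None, float("-inf")
    for _ in range(samples):
        candidate = [random.randrange(STATES) for _ in range(JOINTS)]
        s = score(candidate, opp_guess)
        if s > best_score:
            best, best_score = candidate, s
    return best

move = pick_move([0] * JOINTS)   # one full joint configuration for this turn
```

Raising `samples` trades CPU time for move quality, which is exactly the resource knob ynvaser describes.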
Last edited by ynvaser; Jul 8, 2018 at 03:52 PM.
Self learning? I don't think so.
But you could narrow down the possibilities:
1. Always play the same opener.
2. Predict the player's second move based on their opener (following replays the bot watched online, like Solax said).
3. Predict the rest of the match based on that second move (following replays the bot watched online, like Solax said).

A more in depth explanation:
Let the bot watch online games and learn from players that use one opener (e.g. the noob clap, because it's common), then follow the rest of their moves depending on the opponent's first two moves (only if the noob clapper wins, obviously).
The rest of the match after the second turn is a lot easier to predict (which in this case isn't really predicting, more like hoping it plays out like a previous match).
This would make a bot that's probably a bit challenging and fun for new players to play against.
Other than this, it's nearly impossible to make anything small enough that people could download and run it.

If things go well, you could make several bots that each learn a different opener.
Then merge the bots and have the new bot play one of the openers at random each game.
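The opener-book idea above can be sketched as a simple lookup table. Everything here is hypothetical (the move names, the two-turn key are mine): the bot counts which responses won after a given opponent line in watched replays, then replays the most successful one, falling back to its fixed opener when the line is unknown.

```python
from collections import defaultdict, Counter

# Replay-derived book: opponent's first two moves -> responses that won games.
book = defaultdict(Counter)

def learn(opp_first_two, our_response, we_won):
    """Record a watched game; only winning lines are kept, as suggested above."""
    if we_won:
        book[opp_first_two][our_response] += 1

def respond(opp_first_two):
    """Play the response that won most often after this opponent line,
    or None if we've never seen it (caller falls back to the fixed opener)."""
    seen = book.get(opp_first_two)
    return seen.most_common(1)[0][0] if seen else None

# Hypothetical training data from watched replays:
learn(("clap", "lift"), "counter_kick", we_won=True)
learn(("clap", "lift"), "counter_kick", we_won=True)
learn(("clap", "lift"), "shovel", we_won=True)
learn(("push", "push"), "shovel", we_won=False)   # losses are ignored
```

Mixing several such books, one per opener, gives exactly the "several bots merged into one" setup described above.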
Last edited by Mafi; Jul 9, 2018 at 11:11 PM.
Toribash is definitely not as complex as Dota, but it has so many possible combinations for moves that machine learning is pretty much the only solution. Since replays are not recorded (and it would literally take years to gather enough replays even with a script), self-learning seems to be the best option. This would be much easier if there was a built-in AI (not machine learning) to play against at the start, but not having one would just mean playing more games against itself.

I'm pretty positive that such an AI is within reach without an insane amount of resources, as long as Toribash exposes enough information in an API (as ynvaser said). I think a lot of you guys are not considering how quickly a Toribash AI could train itself. Sure, there are a TON of possibilities in this game, but even for players a typical ABD game takes like 5 minutes, and an AI wouldn't need the reaction time: it would come up with a move in less than a second. The only information you need is the state of all 21 joints for each player for each frame. That's not intensive at all and can easily be handled by a decent computer.
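To back up the "not intensive at all" point: each of the 21 joints has one of 4 states, so one player's whole pose packs into 2 bits per joint, 42 bits total. A minimal sketch in pure Python (the encoding scheme is mine, not anything Toribash actually uses):

```python
JOINTS = 21  # joints per player, 4 states each

def encode(states):
    """Pack a list of 21 joint states (each 0-3) into a single integer."""
    assert len(states) == JOINTS and all(0 <= s < 4 for s in states)
    packed = 0
    for s in reversed(states):          # joint 0 ends up in the low bits
        packed = (packed << 2) | s
    return packed

def decode(packed):
    """Unpack the integer back into the 21 joint states."""
    return [(packed >> (2 * i)) & 0b11 for i in range(JOINTS)]

frame = [3, 0, 1, 2] + [0] * 17
assert decode(encode(frame)) == frame
assert encode(frame) < 2 ** 42          # a whole pose fits in under 6 bytes
```

At that size, storing every turn of hundreds of thousands of self-play games is trivial for a desktop machine, which supports the claim above.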

I would think of a Toribash AI as being essentially the same as chess, except every piece can move every turn. Moving at the same time vs. taking turns actually doesn't make a difference at all since they're both turn-based, either way you're just trying to predict what your opponent will do next. You would almost certainly use a Minimax algorithm as well. With how much machine learning has progressed in recent years and the libraries available to the public, I don't think this would be nearly as hard as you guys think it is.
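For reference, the Minimax algorithm mentioned above looks like this in its textbook form. Note that it assumes alternating turns, so applying it to Toribash's simultaneous turns would need an extra assumption about what the opponent does. The game here is a toy stand-in (a number you push up or down), not Toribash:

```python
def minimax(state, depth, maximizing, moves, apply, evaluate):
    """Textbook minimax over an abstract game.
    `moves(state)` lists legal moves, `apply(state, m)` plays one,
    `evaluate(state)` scores a position for the maximizing player."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in options:
            v, _ = minimax(apply(state, m), depth - 1, False, moves, apply, evaluate)
            if v > best:
                best, best_move = v, m
    else:
        best = float("inf")
        for m in options:
            v, _ = minimax(apply(state, m), depth - 1, True, moves, apply, evaluate)
            if v < best:
                best, best_move = v, m
    return best, best_move

# Toy game: state is a number, moves add or subtract 1, higher favours us.
val, move = minimax(0, 2, True, lambda s: [1, -1],
                    lambda s, m: s + m, lambda s: s)
```

In a real bot, `moves` would propose candidate joint settings and `evaluate` would come from the physics engine or a learned value function.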
Last edited by Laser; Jul 13, 2018 at 06:46 PM.
Originally Posted by Laser View Post
Toribash is definitely not as complex as Dota, but it has so many possible combinations for moves that machine learning is pretty much the only solution. Since replays are not recorded (and it would literally take years to gather enough replays even with a script), self-learning seems to be the best option. This would be much easier if there was a built-in AI (not machine learning) to play against at the start, but not having one would just mean playing more games against itself.

I'm pretty positive that such an AI is within reach without an insane amount of resources, as long as Toribash exposes enough information in an API (as ynvaser said). I think a lot of you guys are not considering how quickly a Toribash AI could train itself. Sure, there are a TON of possibilities in this game, but even for players a typical ABD game takes like 5 minutes, and an AI wouldn't need the reaction time: it would come up with a move in less than a second. The only information you need is the state of all 21 joints for each player for each frame. That's not intensive at all and can easily be handled by a decent computer.

There are actually several factors that would make this AI harder to develop than the Dota one. Just think about this simple concept: in Dota, let's say an enemy attacks the AI with a skillshot. Using its processing power, the AI can make a split-second calculation on how to dodge that skillshot, and also whether it can fire anything back meanwhile. Now if you look at a similar scenario in Toribash, the combat works in a much more limited way, because both players play out their sets of moves at the same time, which leaves no space for perfectly calculated moves or counters. In basic terms, Dota can be almost 100% calculated, while Toribash is a game of guessing and predicting. Even if the AI knew the joint states of the opponent, it wouldn't help unless they were real-time states, something like realtimeghost.

Now when you factor that in, along with how the Dota AIs were trained on 100,000 CPUs at the same time, you can realise how big a scale this project would have to be.

Originally Posted by Laser View Post
Moving at the same time vs. taking turns actually doesn't make a difference at all since they're both turn-based, either way you're just trying to predict what your opponent will do next.

That is extremely wrong. Moving at the same time is massively more complicated than taking turns. You can't even compare chess to Toribash because of how primitive chess is in comparison. Imagine if, in the first turn of Toribash, your opponent were T-posing and not moving at all. You would almost certainly decap them or otherwise gain a huge advantage over your opponent; that's how different it is.

In the game of chess you have the whole board in front of you when making your decision for the next move. Nothing on the board changes between you observing the positions, making a decision and moving the piece, up until the opponent's turn. This way you can make an informed move. In Toribash you can never make an informed move; it just wouldn't be possible unless you were 100% certain that your opponent is afk or otherwise not moving. Honestly, I don't think I need to go on, because anyone could figure out by themselves why it's such a huge difference.
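The difference described here shows up even in the smallest possible simultaneous-move game, rock-paper-scissors. This toy example (not Toribash-specific) demonstrates both halves of the argument: with turn-taking, a winning reply always exists once you see the opponent's move; with simultaneous moves, every fixed choice can be exploited, so no "informed" pure strategy exists.

```python
# Rock-paper-scissors payoff for player A (first entry) vs player B (second).
PAYOFF = {("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
          ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
          ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0}

# Turn-taking (chess-like): if A sees B's move first, A always has a winning reply.
best_reply = {b: max("RPS", key=lambda a: PAYOFF[(a, b)]) for b in "RPS"}
turn_taking_always_wins = all(PAYOFF[(best_reply[b], b)] == 1 for b in "RPS")

# Simultaneous moves (Toribash-like): whatever A commits to, some B move beats it.
every_pure_choice_exploitable = all(
    min(PAYOFF[(a, b)] for b in "RPS") == -1 for a in "RPS"
)
```

Game theory resolves this with mixed (randomized) strategies rather than a single best move, which is why a Toribash bot can only play the odds, exactly as argued above.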
Originally Posted by Smaguris View Post
That is extremely wrong. Moving at the same time is massively more complicated than taking turns. You can't even compare chess to Toribash because of how primitive chess is in comparison. Imagine if, in the first turn of Toribash, your opponent were T-posing and not moving at all. You would almost certainly decap them or otherwise gain a huge advantage over your opponent; that's how different it is.

Yeah, my wording was sloppy; they're obviously not the same, I just meant that they're both reaction-based. Sure, there is always an element of unpredictability in Toribash, but at least in ABD you can usually tell when someone is going for a kick or a flying knee etc., and it wouldn't be very difficult for an AI to recognize that. I fail to see why the AI couldn't just prepare for the ideal move (what it would do in their situation), as this is exactly what top players do, usually successfully (at least I would assume). Some of the best Toribash players play on intuition rather than analyzing their opponent too heavily, as well.

Originally Posted by Smaguris View Post
There are actually several factors that would make this AI harder to develop than the Dota one. Just think about this simple concept: in Dota, let's say an enemy attacks the AI with a skillshot. Using its processing power, the AI can make a split-second calculation on how to dodge that skillshot, and also whether it can fire anything back meanwhile. Now if you look at a similar scenario in Toribash, the combat works in a much more limited way, because both players play out their sets of moves at the same time, which leaves no space for perfectly calculated moves or counters. In basic terms, Dota can be almost 100% calculated, while Toribash is a game of guessing and predicting. Even if the AI knew the joint states of the opponent, it wouldn't help unless they were real-time states, something like realtimeghost.

Now when you factor that in, along with how the Dota AIs were trained on 100,000 CPUs at the same time, you can realise how big a scale this project would have to be.

I'm not going to claim to know a lot about machine learning, but I do know a lot about Dota. Skillshots are an extremely small component of Dota, and OpenAI themselves stated that their bot was coded to have the reaction time of an average human so as not to make it unfair. Also, OpenAI purposely created an extremely general neural network so that it could be reused for purposes other than being good at a video game; that constraint wouldn't apply to a Toribash AI, and dropping it would speed up the process exponentially.
I may be understating how hard a Toribash AI would be to make without insane amounts of resources, but there is no way it can be more complex than Dota. There are so many mechanics their AI had to perfect in order to beat the best players in the world: abusing hero turn rate, keeping creep equilibrium, predicting where the enemy will be when they're under fog of war, etc. Their AI actually taught the best players new 1v1 strategies.
Originally Posted by SkulFuk View Post
Vox had a WIP script that sorta did it years ago (it controlled both toris), but it took days of leaving the game running before it would figure out staying upright, let alone manage to fight properly.

Either way, it was interesting to watch them slowly improve. The file it created did get horrifically big, though, since it stored everything that happened across thousands of matches.

No idea if anyone still has a copy of it handy though, it's from 2011/12 and was only passed around the staff on IRC.

The script was uploaded to one of the older dropboxes; ask snake, pretty sure he got hold of it.
Here are a couple of problems and ideas:

1. What would the input data be? Sure, replays - but how do we encode them for the AI?

I'm thinking the easiest way is to use a 3D space with the coordinates of both toris' joints at each timer stop, along with the joint states that produced them. Perhaps even input the gravity, the grab/no-grab mod setting and all the relevant physics parameters.

The 3D data could be handled by a 3D convnet and the joint states by a feed-forward stack of layers, and the two could be concatenated and fed into an LSTM or GRU type of layer. The output of the LSTM layer would be 21 groups of 4-state neurons (I don't know of an existing implementation of this right now, but I will dig a bit), which basically predict the next joint states.
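The "21 groups of 4-state neurons" output is essentially 21 independent softmaxes, one per joint. A minimal numpy sketch of just that output head, with random weights standing in for a trained network (the 128-dim feature size is an arbitrary assumption):

```python
import numpy as np

JOINTS, STATES = 21, 4

rng = np.random.default_rng(0)
# Toy stand-in for the final layer: match-summary features -> 21 joints x 4 states.
W = rng.normal(size=(128, JOINTS * STATES))

def predict_next_states(features):
    """Map a 128-dim match summary (e.g. an LSTM's last hidden state) to a
    probability distribution over the 4 states of each of the 21 joints."""
    logits = (features @ W).reshape(JOINTS, STATES)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)                 # rows sum to 1

probs = predict_next_states(rng.normal(size=128))
chosen = probs.argmax(axis=1)   # one predicted state (0-3) per joint
```

Training would then use a per-joint cross-entropy loss against the state actually played in the replay, which is the standard way to fit this kind of multi-softmax head.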

2. What would the loss function be - how do we score what is good and what is bad?

One would need to score each timer stop as well as the end of the match.

Damage dealt can be used as an input to the loss function, but it is not all that significant for many mods and has little correlation with the winner of the match. Perhaps avoid scoring at all unless a fracture or dismemberment has occurred.
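A hedged sketch of a reward along those lines, with made-up weights: ignore raw damage unless a fracture or DM occurred on that timer stop, and let the final match result dominate everything mid-match.

```python
def turn_reward(damage_dealt, damage_taken, fracture_or_dm):
    """Score one timer stop. Raw damage is ignored unless a fracture or
    dismemberment happened, since it correlates poorly with winning.
    The 0.1 weight is an arbitrary assumption."""
    if not fracture_or_dm:
        return 0.0
    return 0.1 * (damage_dealt - damage_taken)

def match_reward(turn_rewards, won):
    """Score a whole match: per-turn rewards plus a dominant win/loss term.
    The +/-10.0 terminal weight is likewise an assumption."""
    return sum(turn_rewards) + (10.0 if won else -10.0)

r = match_reward([turn_reward(5000, 2000, True),
                  turn_reward(800, 900, False)], won=True)
```

Whether these weights make a good training signal would have to be tuned empirically per mod, which ties into point 3 below about mod variety.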

3. Great variety of mods.

In my opinion it would be hard to make one overall AI; more feasible would be an AI for each class of similar mods.

4. Lack of available data.

I'd say at least 100k replays per mod would be required to even consider attempting something like this, and even then there would need to be a stringent human-level replay review to remove irrelevant replays: replays that are too short, where one player was really bad, where the players never made contact, and so on.

When I consider the amount of effort needed to attempt this, I think the time could be spent more wisely on some other AI task. Most likely the end result would be either a really bad AI that loses most of the time, or a really good AI (an unfair advantage) that no one would want to play against.

A Kaggle competition with real money prizes would be a good kick-off; however, I don't think tb is in a state to offer that kind of incentive.

If, eight years ago (when tb was the holy grail for me), I had known as much about AI as I expect to know in two years, I would definitely have attempted it.
Last edited by missuse; Aug 7, 2018 at 10:17 PM.
Sorry if I make mistakes. I think an AI for Toribash would be a good idea. It would work like chess, since in Toribash you click on joints; it's not a hard movement, so I don't think it would be that hard.