The Coders Strike Back AI Contest ended this weekend. You had 8 days to code the smartest AI able to drive 2 pods to win a race through checkpoints. Sounds pretty easy, no? But as soon as you start thinking about acceleration, paths, angles, vectors, rotations, collisions… it’s a whole different story. This is the contest where we observed the most amazing (mathematical) strategies. Eight days made for a long run, but how rewarding in the end for those who managed to reach the Top 100!



7,500 participants took part in the contest, and 4,900 of them finished in the leaderboard: congrats folks!

On the podium: Jeff06 (France, C++), Owl (Germany, C#), and pb4608 (France, C++). Congratulations to the 3 of you!


The game consisted of a pod race, 2 vs 2. You had 2 pods to control, and the aim was to activate all checkpoints in the right order, as fast as possible, to win the race. As soon as one pod completed the race, that pod’s team was declared the winner, so the second pod could be used for defense strategies or for more vicious attacks to counter the opponent…
So how could you proceed?
A majority of players opted for the following approach: target the next checkpoint and adapt the pod’s speed according to the pod’s angle and the proximity of the checkpoint. The more aligned you are, the faster you can go. The closer the checkpoint is, the more you need to slow down, to have enough time to rotate.
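This common heuristic fits in a few lines. The sketch below is one possible interpretation: the 90° cutoff and the `slow_radius` parameter are illustrative choices, not values from the game.

```python
import math

def basic_thrust(pod_x, pod_y, pod_angle_deg, cp_x, cp_y,
                 max_thrust=100, slow_radius=1500):
    """Scale thrust by alignment with, and proximity to, the checkpoint.

    A minimal sketch of the majority approach described above; the
    thresholds (slow_radius, the 90-degree cutoff) are assumptions.
    """
    # Angle between the pod's facing and the direction to the checkpoint.
    target_angle = math.degrees(math.atan2(cp_y - pod_y, cp_x - pod_x))
    diff = abs((target_angle - pod_angle_deg + 180) % 360 - 180)

    if diff >= 90:
        return 0  # mostly turning away from the checkpoint: don't accelerate
    thrust = max_thrust * (1 - diff / 90)  # the more aligned, the faster

    # Slow down near the checkpoint to leave time to rotate.
    dist = math.hypot(cp_x - pod_x, cp_y - pod_y)
    if dist < slow_radius:
        thrust *= dist / slow_radius
    return int(thrust)
```

For example, a pod facing its checkpoint dead-on from far away gets full thrust, while the same pod at half the slow radius gets half of it.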
There were a lot of possibilities to choose from for the second pod: should it take part in the race, or play a defensive strategy? Some players stationed their 2nd pod next to a checkpoint to block the opponent, others rushed on the opponent to bash it off its track.
Top-ranked players (those above rank 500) favored a different strategy: simulate the game several turns in advance, and select the move that scores best according to those simulations. The main difficulty with this approach was that there were so many combinations to analyze that isolating the most promising ones got very tricky (some players, for example, used genetic algorithms to make the right choices).
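Simulating turns in advance requires reproducing the game’s physics. Here is a sketch of a single-pod turn under the commonly reverse-engineered Coders Strike Back rules (18° maximum rotation per turn, 0.85 friction, rounded positions and truncated velocities); treat the exact constants as assumptions.

```python
import math

def simulate_turn(x, y, vx, vy, angle_deg, target_x, target_y, thrust):
    """Advance one pod by one game turn (collisions ignored).

    Sketch of the commonly reverse-engineered CSB physics; the
    constants (18 degrees, 0.85 friction) are assumptions.
    """
    # 1. Rotate toward the target, capped at 18 degrees per turn.
    desired = math.degrees(math.atan2(target_y - y, target_x - x))
    diff = (desired - angle_deg + 180) % 360 - 180
    angle_deg += max(-18.0, min(18.0, diff))

    # 2. Accelerate along the new facing direction.
    vx += thrust * math.cos(math.radians(angle_deg))
    vy += thrust * math.sin(math.radians(angle_deg))

    # 3. Move, then apply friction (positions rounded, velocities truncated).
    x, y = round(x + vx), round(y + vy)
    vx, vy = math.trunc(vx * 0.85), math.trunc(vy * 0.85)
    return x, y, vx, vy, angle_deg
```

Chaining this function over several turns, for every candidate move sequence, is the core of the search-based strategies described below.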


A lot of players shared their strategies on the forum! Here are a few of them to give you hints on how you could build your AI:

Hastings (USA, C++, Rank on Contest: 4)

“My AI was composed of two levels, a heuristic bot designed to make a decent move with very little computation, and a minmax bot that uses the heuristic bot, which plays both sides, combined with a simulator to evaluate positions.

The heuristic bot is somewhat simple. It first checks if it is the furthest along on its team. If it is, it plays offense, otherwise it plays defense.
On offense, the heuristic bot targets the point nextCheckPointPosition – 4 * velocity, and only thrusts if it is approximately facing the point it is targeting. The defense bot uses a similar formula to either block the opposing bot, or beat it to the checkpoint after the one it is currently passing. See if you can work out why this formula approximates correct behavior!
The minmax bot selects the closest enemy pod as the opponent it will minmax against. For each turn in the future, it considers four moves for itself, and four moves for the selected opponent, giving 16 combinations of moves per turn. The other two pods are controlled by the heuristic bot. These are each simulated recursively for three steps, giving 6500 positions. This is where the two level hierarchy comes in: to evaluate further into the future, the bots are assumed to all be controlled by the heuristic bot for an additional 4 moves. Each position’s score three moves in the future is given by a simple function of the predicted position after an additional 4 heuristic moves, giving a total depth of 7 moves.
Once each of the 6500 positions are evaluated, the bot simply selects the best using the standard minmax formula: maximizing the score, assuming that the opponent will select moves that minimize the score.
Obviously I have made some simplifications and omissions here, largely in the details of the “simple” score function and the details of simulating motion and collision. However, I think I’ve presented the important parts of the algorithm clearly.”
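Hastings’ offense formula can be written down directly; the helper below is a sketch, with the tuple plumbing added for illustration.

```python
def offense_target(checkpoint, velocity, k=4):
    """Aim at nextCheckPointPosition - 4 * velocity, as in Hastings' post.

    checkpoint and velocity are (x, y) tuples; k=4 is his constant.
    """
    return (checkpoint[0] - k * velocity[0],
            checkpoint[1] - k * velocity[1])
```

Subtracting a multiple of your own velocity shifts the aim point “behind” the checkpoint relative to your drift, so the resulting thrust cancels lateral velocity instead of letting the pod slide past the checkpoint.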

Frunit (Norway, Bash, Rank on Contest: 482)

“I didn’t have much time, but I made it to the Top 500. The first pod follows the checkpoints and the second pod attacks the best enemy pod. The calculations were quite easy. I measured the distance and angle to the next checkpoint (i.e. polar coordinates). The thrust was very basic: When the checkpoint is further than x units away, do full thrust, else go with a thrust of 100. If the checkpoint is not within 120 degrees of the flight direction, don’t accelerate (only turn). The aiming was a bit adjusted later because I saw that the pod would circle around checkpoints if coming too fast from a difficult direction.
No magic for the second pod: Just use full thrust directly to the best enemy pod. Not very effective, but hey, it was enough for mediocre AIs.”
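Frunit’s three-case thrust rule also fits in a few lines. Since “x units” is not given in the post, `far=4000` below is a placeholder, and full thrust is taken to be 200 to match the thrust ranges quoted elsewhere in this recap.

```python
def frunit_thrust(dist, angle_to_cp_deg, far=4000):
    """Frunit's three-case thrust rule as a sketch.

    dist and angle_to_cp_deg are the polar coordinates of the next
    checkpoint relative to the pod's flight direction; far=4000 stands
    in for the unspecified 'x units'.
    """
    if abs(angle_to_cp_deg) > 120:  # checkpoint not roughly ahead
        return 0                    # only turn, don't accelerate
    if dist > far:
        return 200                  # full thrust when far away
    return 100                      # reduced thrust when close
```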



Nergal (France, Swift, Rank on Contest: 43)

“I used a pretty simple approach:
For each turn and each of my pods:
– Check if it will collide with an opponent pod next turn, considering the current speed of all pods; if yes, and if the speeds of the pods differ enough, activate the shield
– If no collision is planned:
Generate target coordinates for a turn of -18°/-9°/0°/+9°/+18°
Use a possible thrust list of 0, 100, 200 (I also tried 0, 50, 100, 150, 200)
Now compute my next-turn positions for each combination of the above parameters, and for each of these positions, redo the same thing (total depth of 3 to 5 depending on the parameter list I chose). At the end, compute a score for each position, considering whether I have crossed a checkpoint, my distance to the next checkpoint (the second next if I crossed one), my speed, the angle between my pod’s heading and the next checkpoint, and the angle between my speed vector and the ideal vector to the next checkpoint.
Then pick the best thrust/target combination for next turn.
Pretty simple but works fine when the evaluation function is good :)”
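Nergal’s enumeration can be sketched as a depth-limited search over the 5 × 3 move grid. In this sketch, `step_fn` and `score_fn` are assumed callbacks standing in for the game simulation and his evaluation function.

```python
import itertools

ANGLES = (-18, -9, 0, 9, 18)   # degrees of rotation, per Nergal's post
THRUSTS = (0, 100, 200)        # his shorter thrust list

def best_move(state, step_fn, score_fn, depth=3):
    """Exhaustively search Nergal's move grid to the given depth.

    step_fn(state, angle, thrust) -> next state, and
    score_fn(state) -> float, are assumed callbacks.
    """
    def search(state, depth):
        if depth == 0:
            return score_fn(state)
        return max(search(step_fn(state, a, t), depth - 1)
                   for a, t in itertools.product(ANGLES, THRUSTS))

    # Return the first move whose subtree reaches the best final score.
    return max(itertools.product(ANGLES, THRUSTS),
               key=lambda m: search(step_fn(state, *m), depth - 1))
```

At depth 3 this evaluates 15³ = 3,375 leaf positions per pod, which is why keeping the move lists short mattered.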


Psyho (Poland, C++, Rank on Contest: 8)

“The core of my strategy is a pretty naive beam search for pathfinding (0/50/100/150/200 + -18/-9/0/+9/+18 for the first move and 0/200 + -18/+18 for subsequent ones). Without any big optimizations, I was able to get a depth of 60 and a width of 1000 with duplicate removal (interchanging left/right rotations). Evaluation is done based on (laps, checkpoint, distance_to_next_checkpoint). With 60 steps of simulation, I can safely ignore rotations, etc. By default the pathfinding ignores the positions of all of the ships. I’ve added “no go” zones to my beam search and a different evaluation function for tracking the enemy player.
Overall, my AI did a horrible job at blocking / pushing away enemy blockers (adding collision angles to my eval would have helped tremendously). I also didn’t simulate the collisions, so quite often my blocker helped the enemy pod by pushing it into the correct spot. On the flip side, when the enemy didn’t have any good blockers (which wasn’t the case in the top 10), I was able to race quickly through the checkpoints.”
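The beam search Psyho describes could be skeletonized as follows; `expand` and `score` are assumed callbacks, and duplicate removal here is a plain seen-set rather than his left/right symmetry trick.

```python
def beam_search(start, expand, score, depth=60, width=1000):
    """Skeleton of a beam search over move sequences.

    expand(state) yields successor states (one per candidate move);
    score(state) returns a comparable evaluation, e.g. the tuple
    (laps, checkpoint, -distance_to_next_checkpoint). Both are assumed
    callbacks standing in for the game simulation and evaluation.
    """
    beam = [start]
    for _ in range(depth):
        seen, layer = set(), []
        for state in beam:
            for child in expand(state):
                if child not in seen:  # crude duplicate removal
                    seen.add(child)
                    layer.append(child)
        # Keep only the `width` best-scoring states for the next layer.
        beam = sorted(layer, key=score, reverse=True)[:width]
    return max(beam, key=score)
```

Pruning to a fixed width each turn is what makes a depth of 60 tractable: the cost grows linearly with depth instead of exponentially.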

Mr Anderson (USA, C#, Rank on Contest: 593)

“Pretty vanilla for me: go to the next checkpoint, slow down if you had to turn, or floor it if the next checkpoint was at a similar angle. I tried having a dedicated blocker but found that having 2 racers was generally a better choice. I built an AI module that would calculate 5 turns in advance but couldn’t get it tuned well enough to outperform what I already had, and ended up reverting it a few hours before the contest ended… I finished just inside the top 25%, so a pretty poor outing overall. This was my first contest and I learned a lot, so it wasn’t a waste of time by any means.”