We gave you a mission: take on the role of Christian Wolff, aka “The Accountant”, and secure data threatened by his enemies. A simple mission, but difficult to master. The movie “The Accountant”, which inspired the hackathon game, has been in theatres for three weeks now. Time to look back at the best strategies from the contest.

For everyone, this contest was special: two weeks of competition instead of one, an overlap with Hypersonic (unfortunately we couldn’t change the dates of either contest), an optimization problem after four consecutive multiplayer bot contests… and awesome prizes. So awesome that some players tried to access the validators and based their strategies on them. This was strictly forbidden by the rules of the competition, so sadly we had to invalidate their participation.

At first the hackathon looked straightforward. At each turn of the game, Wolff had to move or to shoot an enemy. Most enemies he could one-shot (kill with a single bullet). However, some enemies had specific amounts of life, and the damage Wolff could deal depended on his distance from them. Enemies moved slowly towards the data points Wolff needed to secure. He could afford to lose some data points, but he could also die if he got within range of the enemies. The first difficulty, then, was to run away from enemies he couldn’t kill in time.

To make the game really challenging, you would score the most points by saving the most data points while using as few bullets as possible. To move or to shoot, that was the question.

Strategy of Rafbill, winner of The Accountant hackathon

My source code is available here, but it is probably not very easy to read.

The core of my algorithm is to optimize a strategy generating a move sequence, rather than the move sequence itself. The notion of strategy I settled on is an ordering of the enemies, along with which enemies to intercept and how to intercept them, and where to move in the first steps of the execution (the first move is usually more important than later ones).

I generate several base strategies that move to a close position and then repeatedly kill the closest enemy. I have a function which executes and evaluates strategies. The score of a strategy is simply the end score, plus a large negative constant if I lose. I generate additional strategies to deal with test cases similar to IDE test 26, by moving to a position required to kill an enemy before he takes his first data point.

I define several operations to randomly change strategies: swap two enemies in the order, decide to start or stop intercepting an enemy, and change the initial move position.

By hill climbing, it is then possible to find a local optimum. I originally used Simulated Annealing (SA) to avoid getting stuck in a bad local optimum. However, the changes in scores were too big, so it didn’t work very well. Repeatedly restarting from a random initial strategy (I restart after 100 iterations without improvement) gives better results. With the right tweaks, this algorithm gave scores in the 42k–44k range.
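As a rough illustration, hill climbing with random restarts can be sketched as follows. The strategy representation, mutation operators, and evaluation function are placeholders for the ones described above, not Rafbill’s actual code:

```python
import random

def hill_climb(random_strategy, mutate, evaluate, budget=10000, patience=100):
    """Hill climbing with random restarts: restart from a fresh random
    strategy after `patience` iterations without improvement."""
    best, best_score = None, float("-inf")
    current = random_strategy()
    current_score = evaluate(current)
    stale = 0
    for _ in range(budget):
        if stale >= patience:                  # stagnation: random restart
            current = random_strategy()
            current_score = evaluate(current)
            stale = 0
        candidate = mutate(current)            # e.g. swap two enemies in the order
        score = evaluate(candidate)
        if score > current_score:              # keep only improvements
            current, current_score = candidate, score
            stale = 0
        else:
            stale += 1
        if current_score > best_score:         # track the global best across restarts
            best, best_score = current, current_score
    return best, best_score
```

The same skeleton turns into simulated annealing by sometimes accepting worsening candidates, which is the variant Rafbill tried and dropped.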

At that point, I focused on the stability of the results, as I noticed large score differences between executions. As my algorithm usually found the same ordering of the enemies in both the good executions and the bad ones, I decided to use its move sequence in another algorithm that would try to turn a bad execution into a good one.

The main idea is to generate paths that are close to the original path, considering them better if they manage to kill enemies earlier or in fewer shots than the original. I used beam search: the states at iteration i are game states at time i, ordered so as to prefer move sequences that kill enemies faster. The children of a state are obtained by moving to a random position, moving towards the next enemy (or the position from which we shoot it in the original move sequence), or shooting the next enemy. This ensures the search can find the original move sequence, and potentially better ones.
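A minimal beam search skeleton looks like this; the real `expand` and `score` functions would encode the game states and the kill-speed ordering described above:

```python
import heapq

def beam_search(initial, expand, score, width, depth):
    """Keep only the `width` best states at each depth.
    `expand` yields the children of a state; `score` ranks states."""
    beam = [initial]
    for _ in range(depth):
        children = [c for s in beam for c in expand(s)]
        if not children:
            break
        beam = heapq.nlargest(width, children, key=score)
    return max(beam, key=score)
```

For example, growing a number digit by digit while maximizing the digit sum with a beam of width 2 finds 999 after three steps.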

It achieved the original goals: scores were much more stable with this second algorithm. Tweaking the scoring function for the beam search a bit, using all the available time to keep improving the solution, and a lucky submission got me my final score.

Strategy of Ghirtor: 2nd of The Accountant hackathon

First of all, I wrote a basic simulator that simulated the full game without any optimization (like trying to remove cos or sin).

Once I had fixed all the bugs in my simulator, my first strategy was to pick a random action each turn. During a turn, I simulated as many games as possible from my current position and calculated the score of each one (the one given in the rules). I then saved the sequence of actions which led to the best score and output its first action.

Then I implemented a few optimizations to be able to play more games: for example, removing slow functions (cos, sin) and trying to use fewer square roots. To handle the moves, I chose a random point on the map and used vectors. For the enemies, I tried to update the target (data point) of each enemy as little as possible: I only updated it if that enemy’s target had been removed during the previous turn.
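One of those square-root savings is a standard trick: when you only need to compare distances, compare squared distances instead, since squaring preserves the ordering. A small sketch:

```python
def dist2(a, b):
    """Squared Euclidean distance between two (x, y) points.
    The ordering matches the true distance, so comparisons need no sqrt."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    return dx * dx + dy * dy

def closest(me, points):
    # min() over squared distances gives the same answer as over real ones
    return min(points, key=lambda p: dist2(me, p))
```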

In the middle of the contest, I added a new optimization: if there was only one enemy, or only one data point, on the map, then it was possible to know exactly where each enemy would be at every turn of the game, so I calculated their paths in advance. In these situations, nothing I did had any impact on the future positions of the enemies. This let me simulate many more games (+200%) in the allocated time, since I didn’t have to update the enemy positions.
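Precomputing such a fixed route could look like the sketch below. It assumes the contest’s enemy speed of 500 units per turn, and the snap-to-target step is a simplification of the real movement rules:

```python
import math

ENEMY_SPEED = 500  # enemy movement per turn (per the contest rules)

def precompute_path(enemy, target, max_turns):
    """With a single data point, an enemy's route is fixed: walk toward
    the target at constant speed and record the position at every turn."""
    x, y = enemy
    tx, ty = target
    path = [(x, y)]
    for _ in range(max_turns):
        d = math.hypot(tx - x, ty - y)
        if d <= ENEMY_SPEED:
            x, y = tx, ty              # last step: arrive exactly on the target
        else:
            x += (tx - x) / d * ENEMY_SPEED
            y += (ty - y) / d * ENEMY_SPEED
        path.append((x, y))
        if (x, y) == (tx, ty):
            break
    return path
```

During search, looking up `path[turn]` replaces a per-simulation position update, which is where the +200% came from.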

After my optimizations, I added some more strategies. First, if I could one-shot an enemy (kill it with one bullet), then I did. If there were several enemies I could one-shot, I preferred to target the one farthest away from me. Otherwise, I randomly chose between a move and a shoot action. For a move action, I had some chance of making a random move and some chance of moving toward the closest enemy. For a shoot action, I had a (very small) chance of shooting a random enemy on the map; otherwise I shot the closest one.
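A sketch of such a randomized policy follows. The probabilities and the 16000×9000 map bounds are illustrative guesses, not Ghirtor’s exact values, and `can_one_shot` stands in for a damage check against the real rules:

```python
import random

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def choose_action(me, enemies, can_one_shot,
                  p_move=0.5, p_random_move=0.3, p_random_shot=0.05):
    """Randomized per-turn policy: take guaranteed one-shots first,
    otherwise mix random moves/shots with a bias toward the closest enemy."""
    one_shots = [e for e in enemies if can_one_shot(me, e)]
    if one_shots:
        # among guaranteed kills, prefer the farthest enemy
        return ("SHOOT", max(one_shots, key=lambda e: dist2(me, e)))
    if random.random() < p_move:
        if random.random() < p_random_move:
            return ("MOVE", (random.uniform(0, 16000), random.uniform(0, 9000)))
        return ("MOVE", min(enemies, key=lambda e: dist2(me, e)))
    if random.random() < p_random_shot:
        return ("SHOOT", random.choice(enemies))
    return ("SHOOT", min(enemies, key=lambda e: dist2(me, e)))
```

Keeping a small amount of randomness in the policy is what lets the rollout search discover non-obvious action sequences.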

My last strategy was the following: if I didn’t find any way to save a data point within 50 or 100 ms of simulation, then I switched to a new strategy (which was better for test case 26, for example). Indeed, the strategy described above focused on which enemies to shoot, not on which data points to save.

With all of these optimizations and strategies, I managed to reach a score of more than 44k by the end of the contest.

I really enjoyed the contest and I had fun doing it.

Strategy of Vadasz, 3rd of The Accountant hackathon

Thanks for the contest, it was a great amount of fun! When I first saw it, I noticed it was very close to Code vs Zombies. Even though there were some base components I could reuse, the optimization part of the game was different. I thought it would be a contest where I would use a Genetic Algorithm (GA), but I was wrong. Let me explain my strategy.

First of all, I based my strategy on game simulations. So I built a game model, where I could simulate games very fast. I think all top players did this.

When I finished my model, then came the questions: how to simulate games, how to generate random steps, and which heuristic to use? At first I tried a GA where each gene was a single step (1 action per turn). There were too many variations, so I realized I had to add more heuristics to my algorithm.

I noticed that once I started shooting an enemy, I didn’t want to start shooting another one in parallel. So I changed my genes from single steps to pairs: (EnemyId, MaxDistance). When there were 5 enemies on the map, I had 5 (EnemyId, MaxDistance) genes. A gene determined the maximum distance allowed before I started shooting the enemy: when I was farther than the max distance, I moved closer; once within that distance, I started shooting.
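Unfolding such a gene list into concrete turn actions might look like the following sketch. The `ToySim` class is a deliberately tiny 1-D stand-in for a real simulator (flat damage, no enemy movement), not the contest engine:

```python
class ToySim:
    """Tiny 1-D stand-in for a game simulator (illustrative only)."""
    MOVE_SPEED = 1000          # player movement per turn (illustrative)

    def __init__(self, player_x, enemy_xs, hp=3):
        self.player = player_x
        self.enemies = dict(enumerate(enemy_xs))
        self.hp = {i: hp for i in self.enemies}

    def alive(self, i):
        return self.hp[i] > 0

    def distance_to(self, i):
        return abs(self.enemies[i] - self.player)

    def step(self, action):
        kind, arg = action
        if kind == "MOVE":     # move toward arg, clamped to the per-turn speed
            dx = arg - self.player
            self.player += max(-self.MOVE_SPEED, min(self.MOVE_SPEED, dx))
        else:                  # ("SHOOT", enemy_id); real damage scales with distance
            self.hp[arg] -= 1

def execute_genes(sim, genes):
    """Unfold (enemy_id, max_distance) genes into concrete turn actions:
    close in while farther than max_distance, then shoot until dead."""
    actions = []
    for enemy_id, max_dist in genes:
        while sim.alive(enemy_id):
            if sim.distance_to(enemy_id) > max_dist:
                action = ("MOVE", sim.enemies[enemy_id])
            else:
                action = ("SHOOT", enemy_id)
            actions.append(action)
            sim.step(action)
    return actions
```

With this encoding, a chromosome of 5 genes fully determines a playout, which is what makes it small enough to mutate and score.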

I tried crossing over and mutating my genes, but it was not efficient enough, so I dropped the GA. Instead, I started picking the next enemy to target from the current state: the nearer an opponent is, the more likely it is to be picked. With this technique I had about 40–41k points.
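Such a distance-weighted pick could be implemented like this; the inverse-distance weighting is one plausible reading of “nearer means more likely”, not Vadasz’s exact formula:

```python
import math
import random

def pick_target(me, enemies):
    """Randomly pick a target, with nearer enemies weighted more heavily
    (inverse-distance weights; +1 avoids division by zero at distance 0)."""
    weights = [1.0 / (1.0 + math.hypot(e[0] - me[0], e[1] - me[1]))
               for e in enemies]
    return random.choices(enemies, weights=weights, k=1)[0]
```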

There were still too many enemies to pick from, so I added a filter: consider only the opponents closest to their data point. If a data point had 5 enemies around it, I chose only the closest one. My score was then around 41–42k.

My third idea came from the following observation: when I moved towards an enemy, there could be too many enemies nearby, so I had to dodge. The idea then was to one-shot enemies whenever I could. I got 42–43k with this.

My last modification was simple: instead of always killing the enemies on my route, I sometimes tried to target a random one. With this I reached 44.3k points.

Overall it was a great contest; I didn’t sleep much for 3 weeks (with Hypersonic) 🙂
I’m looking forward to the next contest!

Thank you all for sharing with us, and congratulations again! You can find strategies and feedback from many other players in this forum topic.
Unfortunately we are not going to release the game in the bot programming section of the platform, sorry.