The Codebusters coding challenge is back on the AI page. The game differs slightly from the contest version: two rules have been added in the Silver and Gold leagues. Here we highlight strategies from players who reached the top 10 of the contest leaderboard.

For those who didn’t have time to participate in the contest, this is an opportunity to learn more and dive in.

## Exploration of the Map (Romka, 5th, C++)

I coded my exploration algorithm in the first hour after the contest had started and didn’t change it during the whole contest.

I overlay a grid on the playing area with a step of 100. I keep a set of “unseen” nodes of this grid, which I update every turn. When one of my busters wants to target a new place to explore, he selects the closest node from this set, but with a random number from 0 to 3000 added to each distance.

Ultimately, I could not decide which was better: exploring large areas with all busters walking alone, or exploring small parts of the map with a group of busters in order to defend against the opponent and catch ghosts faster. This random perturbation of the distances could lead to either scenario.
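The idea above can be sketched as follows. Romka’s code was C++; this Java version, and all names in it, are illustrative only:

```java
import java.util.List;
import java.util.Random;

// Sketch of the exploration idea: a coarse grid (step 100) yields a set of
// "unseen" nodes, and a buster targets the node with the smallest distance
// after adding random noise in [0, 3000) to every candidate.
public class Exploration {
    static final Random RNG = new Random();

    // Returns the unseen node with the smallest noisy distance to the buster.
    static int[] pickTarget(int busterX, int busterY, List<int[]> unseen) {
        int[] best = null;
        double bestScore = Double.MAX_VALUE;
        for (int[] node : unseen) {
            double dist = Math.hypot(node[0] - busterX, node[1] - busterY);
            double score = dist + RNG.nextInt(3000); // noise spreads busters out
            if (score < bestScore) {
                bestScore = score;
                best = node;
            }
        }
        return best;
    }
}
```

Because the noise can outweigh moderate distance differences, nearby busters sometimes pick the same region (group play) and sometimes scatter, which is exactly the ambivalence described above.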

## Unit Testing (Hohol, 2nd, Java)

I’m quite proud of it. It allowed me to create some rather complex logic without fear of getting lost in bugs.

About half of my strategy’s features are covered by these tests. All numbers used in the tests are quite small (less than 100) because I use separate sets of game parameters for unit tests and for real runs.

```java
public static GameParameters createTestGameParameters() {
    GameParameters r = new GameParameters();
    r.W = 51;
    r.H = 51;
    r.FOG_RANGE = 7;
    r.MAX_BUST_RANGE = 6;
    r.STUN_RANGE = 5;
    r.RELEASE_RANGE = 4;
    r.MIN_BUST_RANGE = 3;
    r.MOVE_RANGE = 2;
    r.GHOST_MOVE_RANGE = 1;
    return r;
}
```

This lets me use more convenient numbers in tests, do easier calculations, and draw exact (unscaled) diagrams on graph paper.
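The payoff is concrete: with a test-sized `STUN_RANGE = 5`, a range check becomes a 3-4-5 triangle you can draw on graph paper. A minimal illustration (not Hohol’s actual tests; the helper names are assumed):

```java
// Small parameters make expected answers computable by hand.
public class StunRangeExample {
    static final int STUN_RANGE = 5; // test-sized value, as in createTestGameParameters()

    static double dist(int x1, int y1, int x2, int y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    static boolean inStunRange(int x1, int y1, int x2, int y2) {
        // A target at (3,4) from the origin is exactly 5 away: in range.
        // A target at (4,4) is sqrt(32) ~ 5.66: out of range.
        return dist(x1, y1, x2, y2) <= STUN_RANGE;
    }
}
```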

Some may say that writing such tests slows down development. For me, it’s the thing that speeds up development. When you implement a feature, you test it anyway. If you don’t have tests, you upload a new version and watch replays.

Writing a test for a feature takes about the same time as watching a replay (if you already have a reasonably good testing framework). But tests stay with you and will be run dozens of times later.

So, are unit-tests always such a great idea?

No.

Unit tests make you sure that your code does what you want it to do. But they can’t check whether what you want is the right thing. If you have a nice but risky idea, only real matches can tell whether it is good. But if you are confident in your idea, you can start with a unit test for it right away.

As it happened, this particular contest was full of such obviously good heuristics.

The other case where unit testing is not so helpful is when the game rules are just too complex. If the world state contains many parameters, it’ll be hard to set them up for a test. If game events require complex calculations (physics simulation, for example), it may be too hard to work out the right answer for your test manually.

Again, this specific contest had small game state and simple rules. It was easy to set up tests and simulate game events manually.

## Code Organization (Romka, 5th, C++)

I had a base class Entity and two derived classes, Ghost and Buster. All game information, such as the arrays of busters and ghosts and the base locations, was stored in a Game class, which could read the information for a new turn and update the corresponding fields.

I also had a class GameAI which held tactical information such as the behavior mode, a list of moving targets, a list of enemy busters to intercept, and so on. Its main method, “makeDecisions”, called in sequence a series of methods, each dedicated to one particular activity. Part of it looked like this:

```cpp
setSupportWhenEnemyNearby();
setInterceptors();
setBusting();
setStunEnemyBustingGhost();
setExplorers();
for (int index : movingToBase)
    setHerding(game.myBusters[index]);
setFancyChat();
```

Each method considered only busters that weren’t used in the previous methods (with a few exceptions, like for herding). The last one was the most useful method, of course 🙂

I didn’t use any unit tests, as I’m totally fine with keeping control of code of this small size (1.5k lines including empty ones, about 50 KB in total). I had a bunch of asserts here and there, though. Asserts are a great way to prevent methods from being used in ways you never intended.

So for those of you who think that unit tests take too much time to write, I suggest you try asserts. Nevertheless, I liked the way Hohol organized his testing, maybe I’ll try his recipe in the future.
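A tiny illustration of the precondition-style assert described above; the method and its rule are hypothetical, and an explicit check is used instead of the `assert` keyword so it stays active without the `-ea` flag:

```java
public class Preconditions {
    // Throws if a method is called in a way it was never designed for.
    static void require(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // Hypothetical example: herding only makes sense for a buster carrying a ghost home.
    static void setHerding(int busterId, boolean carryingGhost) {
        require(carryingGhost, "setHerding called for buster " + busterId + " with no ghost");
        // ... steer slightly off the direct path to push ghosts toward the base ...
    }
}
```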

I can answer any question that you may have about some details of my implementation.

## Observing the Best Strategies (csj, 3rd, Scala)

By watching countless replays, I made minor modifications throughout the contest, and a few major ones when I observed some brilliant behaviours from top AIs.

### Chain-zapping by Hohol

Despite my best efforts, Hohol always seemed to get the best of me during gun battles, and I sought to figure out why. I watched replays frame by frame and observed that in some circumstances, Hohol would zap my buster one turn before I would zap his! I panicked and thought that I was not interpreting the input correctly and that I was misjudging the cooldowns or stun durations by one turn, but it was not so.

So what happened?

He was actually zapping my guys the turn before they came out of the stunned state. That way they never got a chance to fire upon enemies on the turn they woke up. I immediately implemented this idea: when looking for enemies to stun, also consider those whose stun has exactly one turn remaining.
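The trigger condition is a one-line predicate. This sketch assumes stun state is tracked as an integer count of turns remaining (a representation the article doesn’t specify):

```java
// Chain-stun rule: an enemy is a stun candidate not only when active,
// but also when its current stun has exactly one turn left, so it is
// re-stunned before it ever gets to act.
public class ChainStun {
    static boolean isStunCandidate(int stunTurnsRemaining) {
        // 0 = active now; 1 = wakes up next turn, so zap it this turn.
        return stunTurnsRemaining <= 1;
    }
}
```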

### Herding by Recar

At 10 AM on Sunday (4 hours before the contest ended), I noticed a new leader, Recar, and immediately started studying replays against him. While carrying a ghost (even sometimes when not carrying a ghost!) his guys would gently shepherd ground ghosts towards his base!

This struck me as brilliant. While there was a cost (herded ghosts float half as fast), the dividends would show: better map position for the mid/end game and lower travel costs later on (not to mention safer returns). I set to work on this immediately, and within a few attempts I had it working. It was submitted with around 2 hours remaining.

## Learning a New Language (csj, 3rd, Scala)

I learned Scala on the job and have been using it for about 8 months. This is my first contest attempt using Scala, and I can confidently say it will not be my last. Scala is such an expressive language that it is extremely easy to communicate intent: not once during this contest (at least that I noticed) did I discover a careless error in my code.

Errors are extremely difficult to track down in a contest like this — without a debugger you’re reduced to writing a lot of console entries to diagnose the problems, and this takes up a lot of time — time that could be spent watching replays and studying top opponents. This time I was able to focus my attention where it mattered and it showed on the scoreboard.

Thanks to the CG team; it was a very fun and high-quality contest.

Hohol

I enjoyed this contest a lot because I really like writing team-oriented AIs.

Romka

An excellent contest, beautiful visuals, superb execution as always and a great pleasure to participate in!

csj

You can find many more strategies in the forum topic from which these excerpts were taken.