Fantastic Bits has been a lot of fun. Congratulations again to the three winners Magus, reCurse and pb4! They have been eager to explain how they managed to win. Ready to become a boss in bot programming? Read on…

He’s a legend on the platform but had never won a contest before. Magus finally did it with Fantastic Bits!

Magus postmortem

Introduction

Fantastic Bits was by far the hardest challenge ever created by CodinGame. You simply cannot hope to reach the top 10 without putting everything you know on the table. It’s a long, hard and dangerous road.

When I first saw the rules of the game, I immediately knew what method I was going to use for my AI.

This method has been widely used by Coders Strike Back players. I also wrote about it in another article. It goes as follows:

  • Build a complete functional engine capable of simulating every action of the game in order to emulate a turn and compute the state of the next turn
  • Use this engine to test several chains of actions and check what game state I would reach after playing these actions
  • Build a genetic algorithm to refine and maximise these chains of actions in order to pick the best possible moves

When you’re starting a new challenge and aiming for a top ranking, the first thing to do is to plan your schedule. I wasn’t on vacation, so I was working from Monday to Friday. I’m never around on Friday night (nor on Saturday morning, I need to sleep!). I could thus code for Fantastic Bits on weekends, during lunch breaks (around 1h), and on 4 week nights. To plan your schedule, you need to set milestones and say “on Monday night, I need this task to be completed”. Here is how I laid out my plan:

  • Tuesday night: simulation engine is done
  • Wednesday night: performance optimisation completed, the engine should be able to simulate a fair number of turns in 100ms

As of Friday night, I had planned to work only on the evaluation function (also called the “fitness” function). It is the function that points your AI in the direction it should take.

Everything actually went faster than I expected. I don’t know if it was luck, pure skill, or simply the rigorous mindset I have built during 5 years of professional development. On Monday night, the engine was up and running, and bug free. I tweaked performance to reach 50 000 simulated turns in 100 ms. So as of Tuesday, around lunch time, I could start working on my evaluation function. By Thursday night, I was done coding what was going to be Fantastic Bits’ winning AI. I didn’t know it yet, but after Thursday night, every single line I coded ended in failure.

At the end of the day, it means that my AI was finished in around 25 hours of work.

But what should you do if things don’t go according to plan? For starters, take a few minutes to weep, then get your head straight. You need to accept the fact that your AI will not behave entirely as you wanted it to.

Developing the game engine

When you want to make an AI based on a search algorithm (Monte Carlo, Alpha-beta, Genetic Algorithm…), you must develop a program that is able to simulate the behaviour of the game. This way, it can compute the next state of the game depending on the moves that the players will execute.

Fantastic Bits’ game engine is tricky. It uses the collision and physics engine from Coders Strike Back, while adding collisions with walls, posts and goals, plus spells! This impacts performance, and thus leads to sacrifices. My engine has “voluntary bugs”: bugs I can’t fix without ruining performance. For instance, when a Snaffle bounces off a vertical wall, it has a radius of 0 units instead of 150. This bug exists because, in order to score, the center of the Snaffle must cross the goal line, so scoring is the same as a wall collision with a radius of 0. And it is really too slow to test whether the Snaffle actually hit the wall outside the goal or inside, and then adjust the equation.
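
To make this concrete, here is a minimal sketch of vertical-wall collision timing (the names and the field width are illustrative, not taken from the actual code); passing a radius of 0 for Snaffles reproduces exactly the “voluntary bug” described above:

``` c++
#include <cmath>

// Minimal sketch of vertical-wall collision timing (illustrative names and
// values, not the actual contest code). The field is roughly 16000 units wide.
struct Unit {
    double x, y;    // position
    double vx, vy;  // velocity over one turn
    double r;       // radius
};

// Returns the fraction of the current turn (in [0, 1]) at which `u` reaches
// a vertical wall, or -1.0 if it does not reach one this turn. Passing r = 0
// for snaffles reproduces the simplification described above: the snaffle's
// *center* must cross the line, which is exactly the scoring condition.
double wallCollisionTime(const Unit& u, double width = 16000.0) {
    double t = -1.0;
    if (u.vx > 0.0) t = (width - u.r - u.x) / u.vx;  // towards the right wall
    else if (u.vx < 0.0) t = (u.r - u.x) / u.vx;     // towards the left wall
    return (t >= 0.0 && t <= 1.0) ? t : -1.0;
}
```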

When I started developing my engine, I faced 2 choices:

  1. Write clean code that is easy to maintain, but at a cost in performance
  2. Write fast code that is harder to build without bugs

I decided to go for the first solution. My C++ code is object-oriented and uses polymorphism to help comprehension. Of course this reduces performance, but it improves the readability of the code.

Please know that you can find my code on my personal GitHub. I stripped out my evaluation function and the “intelligence” of the code. It contains the complete simulation of the game’s physics, as well as the genetic algorithm. I know, I know: why C++? Well, it is the best performing language. If you know how to use C++, you get code that is as fast as C. Sorry for those who can’t read C++, that’s my best offer.

I made a UML class diagram to help you better understand my code.

I won’t dive into the details of how the game simulation works: first because I would end up repeating a lot of what I already wrote in my previous article, but mostly because it would take too much time. Just know that if you followed what I did for Coders Strike Back, you will notice a few of my functions have changed (they are faster and more stable). Here are the line numbers of the changes so you can find them easily.

  • Line 469: method to detect the next collision between a Unit instance and a wall
  • Line 503: method to detect the next collision between two Unit instances
  • Line 543: method to apply effects of a collision between two Unit instances
  • Line 543: method to apply effects of a collision between a Unit instance and a wall
  • Line 1474: the play function simulates one game turn. It plays every action in order, and makes everyone move
  • Line 1317: the move function handles displacement and collisions of units on the board (equivalent to my play method for Coders Strike Back)

Be careful as a lot of these methods are overloaded/overridden by child classes of Unit. Polymorphism and Object-Oriented Programming are the foundations of my code!
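
To give an idea of the structure, here is a simplified sketch of what such a hierarchy can look like (illustrative, not the actual classes; the friction values are the ones from the game statement):

``` c++
// Simplified sketch of the hierarchy (not the actual code): shared physics
// lives in Unit, entity-specific behaviour is virtual and overridden.
struct Unit {
    double x, y, vx, vy, r, mass;
    virtual ~Unit() = default;
    virtual double friction() const = 0;  // per-turn velocity decay
    void move(double t) { x += vx * t; y += vy * t; }  // t: fraction of a turn
};

// Friction values below are the ones given in the game statement.
struct Wizard : Unit {
    double friction() const override { return 0.75; }
};

struct Snaffle : Unit {
    bool dead = false;  // set once the snaffle has been scored
    double friction() const override { return 0.75; }
};

struct Bludger : Unit {
    double friction() const override { return 0.90; }
};
```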

Boosting performance

I can’t go into detail; you could write entire books about C++ optimisation. Just know that hunting for performance everywhere is pointless, tedious and dangerous. You will probably create bugs and not get any performance improvement at all. You need to focus on “high risk areas”. In my code, easily 60% of the time is spent on collision detection (between lines 469 and 503). Looking to optimise elsewhere would be a waste of time.

About the rest of the code, here are a few pointers:

  • Avoid unnecessary copies of values
  • new and delete are slow. Don’t use them in the hot paths of your code
  • Of course, avoid unnecessary operations
  • Do not forget, CodinGame doesn’t use -O3. The STL is slow. Very, very slow.
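
For example, two patterns in the spirit of this list (a sketch with made-up names and bounds, not lifted from the actual code):

``` c++
#include <algorithm>

// Illustrative micro-optimisations (a sketch, not the contest code).
struct Vec { double x, y; };

// 1. Compare squared distances in hot loops; take the square root only when
//    the real distance is actually needed.
inline double dist2(const Vec& a, const Vec& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// 2. Preallocate flat arrays and save/restore them instead of calling
//    new/delete for every simulated turn.
constexpr int MAX_UNITS = 32;  // illustrative bound
Vec positions[MAX_UNITS], saved[MAX_UNITS];

void save(int n)    { std::copy(positions, positions + n, saved); }
void restore(int n) { std::copy(saved, saved + n, positions); }
```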

The Genetic Algorithm

In order to build a genetic algorithm (or a Monte Carlo), you first need to define what a solution is. At first I chose to always use a thrust of 150 for a move and 500 for a throw. My AI does not try any other thrust values. I can’t say whether that was a good or a bad idea. Next, randomly picking an angle between 0° and 360° doesn’t work that well: if you’re out of luck, you will test angles that are really close together, like 90 and 90.1, and the related solutions will most probably be almost identical. If you look at my code at line 56, you will see that it only uses 36 different angles. This allows me to pre-compute the cosine/sine of these angles. My solutions are evaluated over 4 turns. They consist of:

  • 2 arrays of 4 angles, where each angle represents the direction in which each wizard will move or throw their Snaffle
  • 2 additional structures indicating at which game turn a wizard will cast a spell, along with the type and target of the spell

Each wizard can thus only cast one spell per solution. I took that tip from pb4608, who uses the same tactic for the shield in Coders Strike Back.
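
In code, such an encoding can look like the following sketch (the names are illustrative, not the actual structures):

``` c++
#include <cstdlib>

// Sketch of the solution encoding described above (illustrative names).
constexpr int DEPTH = 4;    // solutions are evaluated over 4 turns
constexpr int ANGLES = 36;  // only 36 discrete angles, cos/sin precomputed

struct SpellPlan {
    int turn = -1;   // game turn at which the spell is cast (-1: never)
    int type = 0;    // which spell to cast
    int target = 0;  // id of the targeted unit
};

struct Solution {
    int angles[2][DEPTH];  // one angle index per wizard per turn
    SpellPlan spells[2];   // at most one spell per wizard per solution
    double score = 0.0;

    void randomize() {
        for (int w = 0; w < 2; ++w)
            for (int t = 0; t < DEPTH; ++t)
                angles[w][t] = std::rand() % ANGLES;
    }
};
```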

If you have already implemented a genetic algorithm, you probably ran into the same problem I did. There are dozens and dozens of different implementations. Some are solely mutation-based, some only use crossover between candidates… Eventually I asked pb4608 for his algorithm and implemented it, adding a few tricks of my own. In my code, the genetic algorithm is between lines 1885 and 2070. Here’s a rough explanation:

  1. Generate a completely random starting population
  2. Simulate all solutions and score them using a fitness function (the “eval” function line 1501, which I stripped from my code 😉 )
  3. Randomly pick two separate solutions
  4. Keep the best out of the two, and you get your solution “A”
  5. Randomly pick two separate solutions again (make sure they are different from solution A)
  6. Keep the best of these two. You have your solution “B”
  7. Create a child solution from A and B (“merge” method line 159)
  8. If you want, you can apply a mutation on the child solution
  9. Place this child in the next generation
  10. Jump back to step 3 and repeat until you have a complete new generation
  11. Simulate every solution of the new generation and score them
  12. Repeat creating generation until the end of the 100ms

Every time you simulate a solution, check its score. Every time you see a better score, keep this solution, as it’s the winning candidate of natural selection!
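
Put into code, one generation of this loop can look like the sketch below (the population size is illustrative, and it assumes the Solution type from the previous sketch plus the crossover/mutation helpers mentioned above):

``` c++
#include <cstdlib>

// One generation of the loop above (illustrative; assumes the Solution type
// from the previous sketch plus crossover/mutation helpers).
constexpr int POP = 50;  // illustrative population size

Solution merge(const Solution& a, const Solution& b);  // step 7 (assumed)
void mutate(Solution& s);  // small random change (assumed)

int pickParent(const Solution* pop, int exclude) {
    // Steps 3-6: pick two distinct random solutions, keep the better one.
    int a, b;
    do { a = std::rand() % POP; } while (a == exclude);
    do { b = std::rand() % POP; } while (b == a || b == exclude);
    return pop[a].score > pop[b].score ? a : b;
}

void nextGeneration(const Solution* pop, Solution* next) {
    for (int i = 0; i < POP; ++i) {
        int a = pickParent(pop, -1);           // solution "A"
        int b = pickParent(pop, a);            // solution "B", distinct from A
        next[i] = merge(pop[a], pop[b]);       // step 7: create a child
        if (std::rand() % 2) mutate(next[i]);  // step 8: optional mutation
    }
    // The caller then simulates and scores `next` (step 11), keeps the best
    // solution seen so far, and repeats until the 100 ms budget runs out.
}
```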

I tweaked this algorithm a bit in my code. The 2 modifications are the following:

  • I seed the starting population of the next turn with the winning solution of the previous turn (with a random move appended at the end)
  • In every generation, I add the currently winning solution with a mutation

Predicting the opponent’s moves

Simulating is nice; simulating without predicting the opponent’s moves doesn’t work so well. For Coders Strike Back, the most common method was to run your genetic algorithm for some time (like 30 ms) to find a solution for the opponent, and then use this solution in your simulation in order to take their actions into account. But for Fantastic Bits, there is a problem: we lack a lot of information. There is no way to know how much magic the opponent has left, or which spells have been cast… I decided to predict the opponent with a simple AI:

  • Every wizard goes towards the closest Snaffle
  • If a wizard carries a Snaffle, they throw it towards the goal

You can find the code simulating these opponents in the “dummies” function line 1520.
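
A sketch of what such a dummy can look like (illustrative types, not the actual “dummies” function):

``` c++
// Sketch of the dummy opponent model (illustrative types, not the actual code).
struct Vec { double x, y; };

// Where a dummy wizard aims: the enemy goal if it carries a snaffle,
// otherwise the closest snaffle on the field.
Vec dummyTarget(const Vec& wizard, bool carriesSnaffle,
                const Vec* snaffles, int count, const Vec& enemyGoal) {
    if (carriesSnaffle || count == 0) return enemyGoal;  // throw at the goal
    int best = 0;
    double bestD2 = 1e18;
    for (int i = 0; i < count; ++i) {
        double dx = snaffles[i].x - wizard.x, dy = snaffles[i].y - wizard.y;
        double d2 = dx * dx + dy * dy;  // squared distance is enough to compare
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return snaffles[best];  // move towards the closest snaffle
}
```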

The evaluation function (or “fitness” function)

The evaluation function provides a score for a solution. It decides whether a solution is good or bad. It puts the “intelligence” in “Artificial Intelligence”. In this function, you want to encode the fact that Snaffles should get closer to the goal! I stripped this function from my code; I will let you figure out what to put in it, but I can give you some pointers.

The basis of my eval function is these lines:

for (int i = 0; i < snafflesFE; ++i) {
    if (!snaffles[i]->dead) {
        score -= dist(snaffles[i], myGoal);
    }
}

This simple loop makes your AI bring the Snaffles closer to the goal. In order to build the rest of your evaluation function, you need to use this as a foundation. For instance, you will probably want to add a criterion on mana, or else your AI will think spells are free and will spend mana constantly.

score += mana * 200;

By adding this line, you tell your AI that each mana point is only worth spending if it gets a Snaffle at least 200 units closer to the goal. For every criterion you add to your evaluation function, you need to ask yourself how much it is worth in terms of getting Snaffles closer to the goal.

Here are a few other leads and criteria from my own evaluation function:

  • You can probably add a bonus for scored goals. Do not forget that a goal scored in one turn is better than a goal scored in two!
  • Getting wizards closer to Snaffles is important
  • It is better to spread your wizards to cover bigger areas
  • Get back to your zone and defend when there are too many Snaffles close to your goal
  • Keep opponent wizards away from Snaffles
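
As an illustration, here is how a couple of these criteria could be weighted together (the coefficients and the threshold are made up, not the values actually used):

``` c++
#include <algorithm>
#include <cmath>

// Sketch combining two of the criteria above (illustrative weights/threshold).
struct Vec { double x, y; };

double dist(const Vec& a, const Vec& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

double closestSnaffleDist(const Vec& w, const Vec* snaffles, int count) {
    double best = 1e18;
    for (int i = 0; i < count; ++i)
        best = std::min(best, dist(w, snaffles[i]));
    return best;
}

double evalExtras(const Vec& w0, const Vec& w1, const Vec* snaffles, int count) {
    double score = 0.0;
    // Getting wizards closer to Snaffles is important.
    score -= closestSnaffleDist(w0, snaffles, count);
    score -= closestSnaffleDist(w1, snaffles, count);
    // Spreading out: penalise the wizards for bunching up together.
    if (dist(w0, w1) < 3000.0) score -= 1000.0;
    return score;
}
```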

How do you get ideas for your evaluation function? The secret tip is: don’t code. The best ideas will come when you aren’t coding. You wanna know how I got my best ideas for my evaluation function?

  • I found a tactic purely by mistake, by setting a wrong “if” condition
  • I had the best idea when I was cooking crêpes
  • Most of my ideas came to me in the restroom, in the shower or in the bus

In the end, the best advice I can give you is to unplug. Get your brain to do other things, and it will come naturally.

Conclusion

Fantastic Bits is a really difficult challenge. The engine is rather complex and you have to make sacrifices in order to get acceptable performance. If you want to build a powerful AI, you will need courage and rigour. Good luck!

— Magus


His first contest was Hypersonic, and he finished 6th. He took second place in Fantastic Bits. All we can expect from reCurse now is to win the next contest.

reCurse postmortem

The AI I created for the Fantastic Bits contest was, like those of most top players, based on accomplishing three things:

  1. Re-code a full simulation of the game mechanics to accurately predict the result of a turn
  2. Write an evaluation function to return a score for a given state
  3. Search for a plan that gives the best score according to the evaluation function

For the sake of brevity, I am not going to enter into the details of writing a simulation.

The mechanics are for the most part well explained in the statement, and the Coders Strike Back post-mortem of Magus already goes into most details of managing the physics.

Evaluating a state

Like most people, I started my evaluation function with these straightforward heuristics to get a good AI up and running:

  • Difference in goals scored between me and the enemy
  • Sum of distances of snaffles to enemy goal
  • Sum of distances of own wizards to their closest snaffle

With this alone, I think it is easy to reach gold league, maybe even legend with a good search algorithm. But the real challenge to get past that point is finding better complementary signals.

After chatting with others in the top 4, namely Magus, pb4 and Neumann, I was surprised to find out we have tried many similar heuristics, yet ended up with different conclusions as to their effectiveness. It seems there really is no guaranteed way to figure out a good AI without experimenting on your own, at least on Fantastic Bits.

Here are a few of the heuristics I have tried that yielded measurable improvements to my AI:

  • A better score for having mana in reserve, giving a sense of cost to using magic; otherwise the AI will not use it wisely. Casting Petrificus on a snaffle that is barely moving is a waste. I was way too conservative on this at first; towards the end I made mana much cheaper and saw massive improvements
  • More weight on “important” snaffles, the snaffles required for a team to win that are closest to the opponent’s goal. I distorted the distance calculations to make them seem closer to goals and wizards. I put too much weight on this at first, making the AI go “all-in” way too often and underestimate the enemy’s ability to stop its plans
  • A small penalty when my two wizards were too close to each other. This allows them to cover more space and avoid reaching for the same snaffle, among others
  • Taking velocity into account when calculating distances. For example, a wizard and a snaffle should not count as equally distant when they are moving away from each other as when they are moving towards each other
  • An exponential curve applied on snaffle-goal distances, so that a medium distance appears almost as threatening as a short distance. This allows snaffles to converge more quickly towards an enemy goal, and make the AI consider threats to its own goal more seriously
  • A significant negative score tied to enemy wizards’ proximity to snaffles. This usually leads to more defensive behaviour, such as ramming into an enemy wizard, and avoids throwing a snaffle towards an enemy

Of course, I also tried many other things that ended up not improving anything in the best case, and notably decreasing game quality in the worst. For example, I often tried to implement metrics to play more defensively, but the AI ended up playing too conservatively and scared, therefore losing more often. The best defence is offense…

As a rule of thumb, I always test for the possibility of removing heuristics when they become unnecessary. The simpler the AI, the better: the fewer variables there are to tweak, the faster it will converge to something great. I ended up removing at least 4 or 5 metrics after discovering they were not helping as much as they used to.

The search for a search algorithm

Very often when reading the chat and forums, I notice that a lot of people assume the top players systematically use a genetic algorithm (GA) to search for plans to then test with the evaluation function. I have lost track of the number of times I have been asked if I was using a GA. It’s probable that their groundbreaking usage during the Coders Strike Back contest, along with the post-mortem written by Magus, massively contributed to their popularity on CG.

But I feel it’s too easy to consider GA a silver bullet without truly understanding how, why and when it works. One does not simply use a GA to reach the top 5. Not only is there more to a good usage of GA than simply copying whatever implementation comes out of a Google search, it is also far from being the only option for good results. For instance, GA is only one member of a much larger class known as evolutionary algorithms, which contains many interesting alternatives, such as particle swarm optimization.

The search algorithm I ended up experimenting with is probably not revolutionary. It’s actually quite inspired by existing techniques, including GA. But building it progressively on my own, and therefore having a stronger grasp and intuition of how it works, helped me a lot in designing and tweaking it appropriately for good results. So here is how it came to be.

Monte Carlo

Starting with a Monte Carlo method, regardless of whether the final intention is to use randomness or not, is a great way to begin testing the whole simulation framework, as well as the first steps of the evaluation function. It is also trivial to implement:

``` c++
Random rnd;
Action action;
Entity myself;
// If I am grabbing a snaffle...
if (myself.status == Status::Grab) {
    // Throw at a random angle with a random strength.
    action.type = ActionType::Throw;
    action.thrust = rnd.getInt(MAX_THROW_THRUST + 1);
    action.dest = myself.pos + Vec2::fromAngle(rnd.getDouble() * 2 * M_PI) * 1000;
} else {
    // Do the same for moving at a random destination.
}
```

Then use the resulting actions to form a turn, tick the state with it, and evaluate the resulting score, keeping the actions with the best score. It is also essential to look at least a few turns ahead: the evaluation function is usually not good enough for a single turn.
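
The surrounding loop can look like the following sketch (the State/Plan interfaces are assumptions standing in for the real simulation code, not from the article):

``` c++
// Assumed interfaces (stand-ins for the real simulation code): a Plan holds
// one action per wizard for each turn of the lookahead.
constexpr int DEPTH = 4;  // how many turns to look ahead (illustrative)
struct Plan { /* actions for each of DEPTH turns */ };
struct State {
    void tick(const Plan& plan, int turn);  // simulate one full game turn
    double eval() const;                    // score the resulting state
};
Plan randomPlan();  // actions generated as in the snippet above
bool timeUp();      // true once the turn's time budget is spent

// The Monte Carlo loop itself: try random plans, keep the best-scoring one.
Plan searchBestPlan(const State& current) {
    Plan best;
    double bestScore = -1e18;
    while (!timeUp()) {
        State s = current;               // fresh copy each iteration
        Plan p = randomPlan();
        for (int t = 0; t < DEPTH; ++t)  // look several turns ahead
            s.tick(p, t);
        double score = s.eval();
        if (score > bestScore) { bestScore = score; best = p; }
    }
    return best;
}
```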

Weighted Monte Carlo

This may not be the official term for the technique described here, but that’s what I am calling it: a Monte Carlo biased through weights. While Monte Carlo is a good start, it converges very, very slowly to a proper plan, especially when looking multiple turns ahead.

One reason for the poor convergence is the amount of time wasted looking at nonsensical actions, such as throwing towards your own goal, or thrusting back and forth. Remember how the simple strategy of “move to the closest snaffle, throw towards the enemy goal” works so well? Why not have both?

And this is exactly what Weighted Monte Carlo does. With domain knowledge, if some actions are likely to perform better, then they should be tested more often according to that probability. Adding on the previous code:

``` c++
Random rnd;
Action action;
Entity myself;
// If I am grabbing a snaffle...
if (myself.status == Status::Grab) {
    if (rnd.getDouble() < 0.5) {
        // 50% chance of throwing at a random angle and thrust, just like Monte Carlo.
    } else {
        // 50% chance of throwing directly at enemy goal, because it's generally a good thing to do.
    }
}
```

A great improvement for not much added implementation complexity. Of course there are now more numbers to tweak, and it becomes increasingly difficult to strike a fine balance between trusting intuition and testing for emergent behaviours. But it starts to get really fun to watch and figure out.

Overcoming the Dory Syndrome

Remember Dory, the blue fish with short-term memory loss from the movie Finding Nemo? An AI based on Monte Carlo as described so far will act in a very similar way (just keep searching…). One turn it will figure out a genius plan, only to forget about it the next turn, and then hope to figure it out again from scratch later. The end result is often perpetual indecision, visible through moments of fidgeting where the AI keeps switching between different ideas, only to fail at doing anything of significance.

Fortunately, the fix is simple: short-term memory is necessary. Always keep the best plan from the last turn, and evaluate it again on the next turn, using it as the initial best plan found.
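
In code, this memory amounts to a couple of lines around the search (a sketch reusing the interfaces assumed in the previous snippet):

``` c++
// Reuses the State/Plan interfaces assumed in the previous sketch.
Plan previousBest;  // persists between game turns: the short-term memory

// Helper: simulate a plan over DEPTH turns and score the final state.
double evaluate(const State& start, const Plan& p) {
    State s = start;
    for (int t = 0; t < DEPTH; ++t) s.tick(p, t);
    return s.eval();
}

Plan searchWithMemory(const State& current) {
    Plan best = previousBest;  // re-evaluate last turn's plan first...
    double bestScore = evaluate(current, best);
    while (!timeUp()) {
        Plan p = randomPlan();
        double score = evaluate(current, p);
        if (score > bestScore) { bestScore = score; best = p; }
    }
    previousBest = best;  // ...and remember the new best for the next turn
    return best;
}
```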

Perfecting a good plan

It is often said in software development that 20% of the time is spent doing 80% of a task, and the rest on polishing and finishing it. The same holds true for the search: it will find a good plan easily, but what about a great one? More time should be spent on improving a good plan to see if it can become a great one.

Here’s the way I accomplished this during the search:

  • Keep a few of the best plans found so far instead of just one
  • On each search iteration, randomly decide whether to pick one of these plans and follow it, or start from scratch
  • At the beginning of each turn, if a plan is being followed, randomly decide to keep following it or discard it for the rest of the search
  • If following a plan, pick the turn of that plan, and randomly decide whether to modify the turn
  • If modifying the turn, try varying the moving or throwing angle a little bit, or maybe cast a spell if you weren’t casting one (or vice versa)
  • If not following a plan, then keep searching with Weighted Monte Carlo

With the right balance of probabilities, this algorithm will make the search converge to a great plan very quickly!
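
Here is a sketch of a single iteration of this scheme (the probabilities are illustrative, not the tuned values, and the helpers are assumptions):

``` c++
#include <vector>

// Assumed helpers: the Plan type and DEPTH from the earlier sketch, a Random
// class like the one in the snippets above, a per-turn mutation, and the
// weighted random plan generator described in the previous section.
struct Random { double getDouble(); int getInt(int maxExclusive); };
void mutateTurn(Plan& p, int turn);  // nudge an angle, toggle a spell, ...
Plan randomWeightedPlan();

// One search iteration: either perturb a known good plan, or explore anew.
Plan nextCandidate(const std::vector<Plan>& bestPlans, Random& rnd) {
    if (!bestPlans.empty() && rnd.getDouble() < 0.7) {  // follow a best plan
        Plan p = bestPlans[rnd.getInt((int)bestPlans.size())];
        for (int t = 0; t < DEPTH; ++t)
            if (rnd.getDouble() < 0.3)  // randomly decide to modify this turn
                mutateTurn(p, t);
        return p;
    }
    return randomWeightedPlan();  // otherwise keep exploring from scratch
}
```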

Knowing what to plan against

One aspect I have not covered so far is what to do with the enemy wizards during the search. At first, I used the simple ‘go to the closest snaffle and shoot at the goal’ as expected. But then my AI often acted hopelessly optimistic, putting itself between the enemy and the goal, expecting to catch the snaffle thrown by the enemy and do something crazy afterwards.

So to have something smarter to plan against, I then used my own AI to search for the enemy’s plan for 15 ms (using the simple AI above for myself), and then used that plan to play against me during the real search. It ended up having much better anticipation. It’s likely all the effort put into quick convergence helped make that possible. I even tried running 5 ms on myself, then 15 ms on the enemy using the result, and the rest as normal, but that did not help much…

We need to go deeper

All the foundations of my AI have been covered so far, but there is one more important aspect to touch upon. A large quantity of numbers controls either the evaluation or the search for a plan, and finding the right combination is very tricky, but worth the investment as it makes a big difference.

But how do you know if a new version is better than the old one? I believe sampling a large number of games and observing the win rate of both is the safest way to go about it, even if quite slow. So I reused the simulation code that was already there for the AI and made a local game simulator to pit my AI against itself, which was very easy because all the code was designed with this scenario in mind from the very beginning.

By making heavy use of optimization, multithreading and the computers at my disposal, I was able to simulate at least half a million complete games locally throughout the tournament. This allowed me to test a large number of combinations, by doing some sort of meta-optimization search for good heuristic parameters.
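
The statistics behind this are worth keeping in mind: with n games and a measured win rate p, the standard error is roughly sqrt(p(1-p)/n), so a difference from 50% should clear about two standard errors before it can be trusted. A minimal sketch:

``` c++
#include <cmath>
#include <cstdio>

// Sketch: report a sampled win rate with its 95% confidence interval.
void reportWinRate(int wins, int games) {
    double p = double(wins) / games;
    double se = std::sqrt(p * (1.0 - p) / games);  // standard error
    std::printf("win rate: %.1f%% +/- %.1f%%\n", 100.0 * p, 100.0 * 1.96 * se);
}
// e.g. 550 wins out of 1000 games: "win rate: 55.0% +/- 3.1%", a real edge;
// 55 wins out of 100 games: "win rate: 55.0% +/- 9.8%", still mostly noise.
```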

I also needed to validate against real-world opponents in order to prevent overfitting. So with the excellent CG Spunk tool, I ran batches of games against the top submissions using the most promising combinations found during the contest, making sure an improvement was real before changing the constants for good.

Magic detection

It was a real bummer not to have access to vital inputs such as enemy spells, mana, etc. I spent a whole day coding “magic detection” to guess both of these, in an 800-line horrible spaghetti function humorously called “ministryOfMagic()”. The basic idea was to play the physics in reverse using the final velocities, and make inferences from the difference between the observed previous turn and the previous turn expected from the next turn’s data.

It was able to reliably tell the casting of Petrificus, Accio and Flipendo 90% of the time, and guess the correct enemy mana maybe 60% of the time. Knowing the spells in play did a little bit of good, but not as much as I had hoped for the time I put into it. I could not make any good use of the enemy mana, though that might be due to its unreliability.

Conclusion

Even though the gains were incredible, I still ended up spending too much time on meta-optimization and reached a dead end on the Saturday before the deadline. I had not been watching any games for a long while, just going with the win rates and tweaking numbers. In retrospect that was a mistake. I then started watching games again and not only noticed a few bugs right away, but also had more ideas for new heuristics to try out. A few more hours of tweaking these numbers later, I finally broke through, ending up in first place with a large lead on Saturday night, which made me really happy and hopeful of finishing that way.

Unfortunately for me, Magus submitted his real version on Sunday morning, which left me little time to come up with a response, as the contest ended at 2PM in my timezone. With a very minor change, I had a version beating his latest at a reliable 55-60% win rate, but it then lost more often against pb4 and Neumann. Running out of time, I went with the safe option and stayed with my version from the night before, which allowed me to finish (very narrowly) in second place.

I want to take this opportunity to thank all the participants and CodinGame for making this whole contest such an awesome experience, and my girlfriend for supporting me through the long nights I spent working on my AI. 🙂

Don’t hesitate to poke me if you have questions or comments, I tried to keep it short here, but might end up writing more later if enough interest is shown. Thanks for reading!

— reCurse


I guess you all remember pb4608 for winning the Smash the Code contest. He’s still around! He ranked 3rd in Fantastic Bits.

pb4608 postmortem

“Evaluation functions sometimes feel like magic…”

(during the contest)

“Why doesn’t this work, it’s the smartest code I have ever written!”

[…]

(2 hours before the end of the contest)

“Oh well… let’s push that code from 3 days ago. Everything I have written since then has failed”

[…]

(Fantastic Bits is finished)

Finally. Now I have some time to reflect and think about everything I tried during those eight days.

[…]

(It’s been a week now)

In hindsight, I still can’t figure out what made that code better than the other versions.

[…]

Oh, I should write a post-mortem? Well, time to tell everyone how, even to me, this code feels like magic.

Writing an AI as an optimization problem

Fantastic Bits is a contest where your AI controls two wizards and strives to collect snaffles to send them towards the opponent’s goal.

Magus and reCurse have written detailed post-mortems where they explain that the game can be considered an optimisation problem: the aim is to find a 5-move-deep sequence that results in the best possible outcome.

How can you say that one situation is better than another? Rate them with an evaluation function.

Definition from ChessProgramming.com:

An evaluation function is used to heuristically determine the relative value of a position, i.e. the chances of winning. If we could see to the end of the game in every line, the evaluation would only have values of -1 (loss), 0 (draw), and 1 (win). In practice, however, we do not know the exact value of a position, so we must make an approximation.

I like to think about an evaluation function as a collection of features, correctly weighted so that two positions can be reliably compared. This post-mortem will try to show what worked for me… and what didn’t.

What worked…

Feature: Fantastic Bits is a game where you like to score goals.

Formula:

E += 100000 * (myGoals - opponentGoals)

Comment:  No comment. That’s the goal of the game.


Feature: To score goals, the snaffles must approach the goal

Formula:

E -= Σ_{all snaffles} distance(snaffle, targetGoal)

Comment: Did anybody not have this?


Feature: The wizards should approach the snaffles to grab them.

Formula:

E -= K * min_{all snaffles} distance(snaffle, wizard0)   (same for wizard1)

Comment: Special conditions can be used to prevent both wizards from targeting the same snaffle. K should be carefully chosen, otherwise your wizards won’t throw the snaffles because they prefer to keep them close!


Feature: If a wizard holds a snaffle, the snaffle’s position will likely be much better one turn later.

Formula:

E += 500 * (numberSnafflesHeldByMyTeam - numberSnafflesHeldByOpponentTeam)

Comment: Without this condition, wizards may hesitate to pick up snaffles when approaching them from behind, because for one turn the snaffle is brought back to the wizard’s position.
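
Put together, the four features above translate into something like the following sketch (the types and the weight K are illustrative):

``` c++
#include <algorithm>
#include <cmath>

// Sketch combining the four working features (illustrative types/weights).
struct Vec { double x, y; };

double dist(const Vec& a, const Vec& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

double evaluate(int myGoals, int oppGoals,
                const Vec* snaffles, int count,
                const Vec& wizard0, const Vec& wizard1,
                const Vec& targetGoal, int heldByMe, int heldByOpp) {
    double E = 100000.0 * (myGoals - oppGoals);  // scoring dominates all
    const double K = 1.0;                        // illustrative weight
    double min0 = 1e18, min1 = 1e18;
    for (int i = 0; i < count; ++i) {
        E -= dist(snaffles[i], targetGoal);      // push snaffles forward
        min0 = std::min(min0, dist(snaffles[i], wizard0));
        min1 = std::min(min1, dist(snaffles[i], wizard1));
    }
    E -= K * (min0 + min1);                      // approach the closest snaffle
    E += 500.0 * (heldByMe - heldByOpp);         // holding a snaffle is good
    return E;
}
```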

… and what didn’t

(Yes, that’s the big part!)

Feature: Specialize one wizard to attack, one wizard to defend

Formula:

Sort all remaining snaffles by ascending distance to my target goal.

E -= K * distance(snaffles[0], wizard0)

E -= K * distance(snaffles[snafflesSize - 1], wizard1)

Comment: It works, just not as well as the “closest snaffle” heuristic. I didn’t expect a miracle; I did expect an improvement.


Feature: Specialize two wizards to attack if you’re ahead, two wizards to defend otherwise

Formula:

Sort all remaining snaffles by ascending distance towards my target goal.

If you have more goals:

E -= K * distance(snaffles[snafflesSize - 1], wizard0)

E -= K * distance(snaffles[snafflesSize - 1], wizard1)

otherwise:

E -= K * distance(snaffles[0], wizard0)

E -= K * distance(snaffles[0], wizard1)

Comment: Same as before: I didn’t expect a big change, but I did at least expect an improvement, which I didn’t get.


Feature: Identify how many snaffles you need to win, and target only those snaffles.

Formula:

Sort all remaining snaffles by ascending distance towards my target goal.

E -= distance(snaffles[initialSnaffleNumber/2 - opponentScore], targetGoal)

Comment: Felt good, but lost 2-3 points on the leaderboard.


Feature: Identify how many snaffles you need to win, and target only those snaffles. (version 2)

Formula:

Sort all remaining snaffles by ascending distance towards my wizards.

E -= distance(snaffles[initialSnaffleNumber/2 - opponentScore], wizard0)

E -= distance(snaffles[initialSnaffleNumber/2 - opponentScore], wizard1)

Comment: It felt ok, but I lost 2-3 points on the leaderboard.


Feature: Identify how many snaffles you need to win, and target only those snaffles. (version 3)

Formula:

E -= distance(centerOfGravity(importantSnaffles), wizard0)

E -= distance(centerOfGravity(importantSnaffles), wizard1)

Comment: This works fine, except when the snaffles are far apart: then the wizards become stuck in between.


Feature: If many snaffles are grouped in the same zone, target them.

Formula:

E -= Σ_{all snaffles} distance(snaffle, wizard0)   (same for wizard1)

Comment: In a hypothetical case where there are as many snaffles in every direction, this leaves your wizards moving randomly. Also, this doesn’t allow any compromise: if you could spend one turn catching a solo snaffle on the way to a pack of snaffles, this evaluation will not agree to it.


Feature: Some snaffles are more important than others: they have more chances to attract the wizards.

Formula: (beware, it’s a tricky one :))

E -= distance(argmin_{all snaffles} [distance(snaffle, wizard0) + K * importance(snaffle)], wizard0)

(same for wizard1)

Comment: The importance level can be chosen arbitrarily. I tried using the distance to my goal, to the opponent’s goal, to the center of the map, and to the edge of the map. I thought this evaluation was really promising because it allowed compromises: a wizard can still target less important snaffles if they’re on the way towards the most important ones. I observed no real behaviour change and no tangible improvement when implementing this feature.


Feature: Predict the opponent’s behaviour by asking my AI what it would do if it were the opponent.

Formula: Just run your AI twice, changing the perspective.

Comment: In CSB, this is usually worth a 5-point increase in any AI’s score, because your trajectory avoids the opponent when necessary. I still don’t understand why, but this code couldn’t get past the top 20.


Feature: Delay the evaluation until 10 turns into the future, using dummy bots to represent both my and the opponent’s behaviour after the normal “genome simulation” phase.

Formula: Self-explanatory.

Comment: Positional advantages are not easy to evaluate after just a few turns of simulation. I had hoped the additional simulation would make for a more robust result. Oh boy, was I wrong! This code didn’t even enter the top 20!


Feature: Predict opponent spells before their effects are seen.

Formula: If a wizard does not hold a snaffle, he will always move unless he uses a spell. When I detect that an opponent wizard did not use the “Move” command, I simulate the outcome of all possible (spell, target) combinations. My code considers that the opponent used the spell with the best result according to that evaluation. This prediction method is approximately 90% accurate and allows my simulation engine to take ongoing spells into account. It is particularly good for avoiding wasting a Petrificus on a snaffle when an ongoing Flipendo effect will push it into the goal anyway.

Comment: I expected a LOT of improvement from this feature. I still think it should be a very good piece of code. Yet when this code was pushed into the arena, the results were abysmal. I still have nightmares about that whole episode…


Feature: Sometimes it’s better to have two well-placed snaffles than one goal scored and a badly placed snaffle. The evaluation should allow compromises like this.

Formula: Strongly decrease the coefficient in front of

E += 100000 * (myGoals - opponentGoals)

Comment: I believe it is important to allow compromises such as the one described. However, this one is difficult to implement correctly: the simple way described does not work, and I did not find a suitable replacement. Still looking… any suggestions from the readers?

Conclusion

Before you ask: the answer is “yes”.

Yes, I did implement and try every single feature described above.

Yes, I did hope that they would improve my ranking.

Yes, each and every one of them failed to do so. Sometimes spectacularly.

Yes, this left a bitter taste at the end of the contest. Similarly to Magus, it is one of my old versions that performs best, and I don’t understand why.

General feedback

As usual, a very interesting ruleset proposed by CodinGame. Congratulations on the originality of the games designed for every contest.

In no particular order:

I particularly appreciate the 1v1 nature of the game. (I have to repeat it since it’s so important)

After reading the rules, I thought they struck an interesting balance. “Simple” rules such as in Coders Strike Back give a strong advantage to meta-heuristics. “Complex” rules such as in CodeBusters promote finite-state-automata submissions. My expectation at the beginning of the contest was that the two approaches would be similarly prevalent. It turns out meta-heuristics did have a strong advantage, at the cost of having to re-implement an intricate game engine.

Speaking of which: the ruleset was regrettably incomplete, and the game engine was buggy during the first few days. It felt worse than in previous contests. Arguably, “The Accountant” contest was the best from that point of view. Improving this should be a priority for future contests.

On the new servers: those were perfect!

Hypersonic was a farming game, with top AIs neglecting the opponent. In Fantastic Bits, all my attempts to improve my opponent prediction failed miserably, which puts the game in the same category. I prefer games where interactions with the opponent are more prevalent (The Great Escape or Coders Strike Back, for example). Maybe the key lies in the simpler nature of those games?

Link to pb4608 postmortem in pdf

— pb4608


Thank you all for taking the time to share your strategies with the whole community. See you in February for the Ghost in the Cell contest.

Play Fantastic Bits now