The winners of the Platinum Rift contest, Recar and grmel, share with us their insights about their code and strategies.

NB: If you took part in Platinum Rift and wish to share your own strategy with the CodinGame community, it would be a pleasure! Feel free to use the forum 🙂

Recar (2nd place, C++, Ukraine)

1. In duel mode, the starting position is selected by simulating multiple start positions (about 20) and playing at most 30 turns for each. If one player's income exceeds the other's by 8, it counts as a win. The position with the most wins is taken as the best (see the sketch after this list).

2. Predict the enemy's moves and spawns for the current turn by running my own AI from the enemy's point of view.

3. If we can't defend a cell, don't even try to – there is no sense in wasting pods.

4. Moving pods to defend our own cells is a high priority. It is much more profitable than spawning pods for defense.

5. Compute the income of each cell and spawn the available pods on the cells with the maximum income. Spawning to defend our own cells works the same way.

6. If the enemy is predicted to spawn on the same cell this turn and we spawn 2 pods there, the cell is worth 2*platinum income: we gain it and deny it to the enemy.

7. In games with more than 2 players, each turn simulate up to 50 turns ahead to detect whether we can still win. If we can't, switch to a play-for-2nd-place strategy.

8. Attack pod blockers. Sometimes 1 enemy pod can block up to 6 defending neighbors; it is not wise to be blocked that way.
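As an illustration of point 1, here is a minimal sketch of such simulation-based start selection. It is a reconstruction, not Recar's actual C++ code: the `simulate` helper, which plays the two AIs from the given starting positions for up to `max_turns` turns and returns their final incomes, is assumed.

```python
import random

def pick_start(candidates, simulate, sample_size=20, max_turns=30, margin=8):
    """Pick the starting position with the most simulated wins.

    `simulate(my_start, opp_start, max_turns)` is an assumed helper that
    plays the AI against itself and returns (my_income, opp_income).
    """
    sampled = random.sample(candidates, min(sample_size, len(candidates)))

    def wins(start):
        count = 0
        for opp in sampled:
            if opp == start:
                continue
            my_income, opp_income = simulate(start, opp, max_turns)
            if my_income - opp_income >= margin:  # a lead of 8 counts as a win
                count += 1
        return count

    return max(sampled, key=wins)
```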

 

grmel (3rd place, Python, Russia)

I recently was lucky enough to take 3rd place in the Platinum Rift contest. However, I did not use state-of-the-art methods like minimax or Monte Carlo tree search, due to their high computational complexity. My AI was based on a very simple approach: each turn it evaluates the attractiveness of each zone and then makes local decisions (moving PODs and placing new ones) based on the calculated estimates. Now I will give some implementation details of my AI. Let's start with the zone evaluation function.
 

Evaluation function

 
The attractiveness of a neutral or enemy zone is calculated as follows (a code sketch appears after the list):

  1. attractiveness = 0.1 + numberOfPlatinumSources – this is the most important part. It is the benefit (the number of platinum sources) that we can get on the current turn if we capture this zone. The constant 0.1 gives some attractiveness to zones without platinum.
  2. The density of platinum sources in adjacent zones is estimated taking into account who owns those zones: attractiveness += 0.25*(numberOfAdjacentNeutralPlatinumSources + 0.75*numberOfAdjacentEnemyPlatinumSources)/numberOfAdjacentZones.
  3. It is also important to consider how quickly and how many platinum sources we can capture from this zone in the future, therefore: attractiveness += 0.25*numberOfPlatinumSourcesThatICanTakeInNextTwoMoves/2.
  4. If possible, it is better not to compete with other players, especially in 1vs2 or 1vs3 games, therefore: attractiveness -= 0.5*attractiveness for each uncompensated enemy POD around the zone.
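Putting the four rules together, the evaluation might look like the following sketch. The zone attributes (platinum, neighbors, owner, and the precomputed two-move and POD counts) are assumed names, not grmel's actual data structures.

```python
def zone_attractiveness(zone, me):
    """Evaluate a neutral or enemy zone, following rules 1-4 above."""
    # 1. Immediate benefit; 0.1 keeps zones without platinum attractive
    a = 0.1 + zone.platinum

    # 2. Platinum density of adjacent zones, weighted by their owner
    neutral = sum(n.platinum for n in zone.neighbors if n.owner is None)
    enemy = sum(n.platinum for n in zone.neighbors if n.owner not in (None, me))
    a += 0.25 * (neutral + 0.75 * enemy) / len(zone.neighbors)

    # 3. Platinum sources reachable from this zone within the next two moves
    a += 0.25 * zone.platinum_within_two_moves / 2

    # 4. Halve the value for each uncompensated enemy POD around the zone
    for _ in range(max(0, zone.enemy_pods_around - zone.my_pods_around)):
        a -= 0.5 * a
    return a
```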

Let me give some explanations. Looking at Figure 1a, you may think that B is the best zone for placing a POD. But if you place a POD in zone B (Figure 1b), it takes three turns to capture all three zones. Now look at Figure 1c: all three zones can be captured by a single POD within the next two turns. All three zones are good for POD placement, but A and C are a little better.

 


 


Figure 1. – Two-turn look-ahead heuristic for zone evaluation (panels a, b, c)

Look at Figure 2: zone A is preferable to B and C, because in the latter two cases we may spend a significant part of our resources on a confrontation with the purple player. However, the attractiveness of zone B is higher than that of C, since zone B is much easier to capture.

 

 


Figure 2. – Better to avoid confrontation with other players

My AI calculates two estimates for each of its own zones:

  1. The attractiveness of a zone under attack (the number of attacking PODs exceeds the number of protecting PODs). It is simply equal to attractiveness as computed above.
  2. The attractiveness of a zone in the quiescent state. Here I asked: which enemy zones can I capture from this position? The attractiveness of captured zones in the quiescent state is usually smaller than the attractiveness of neutral or enemy zones (see Figure 3, and the sketch after it).

 

 


Figure 3. – The attractiveness of captured zones
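For my own zones, the two estimates could be sketched as follows. The 0.5 damping factor for the quiescent state is an assumption (the post only says the quiescent value is usually smaller), and the attribute names follow the previous sketch.

```python
def own_zone_attractiveness(zone, me):
    """Evaluate one of my own zones using the two estimates above."""
    if zone.enemy_pods_around > zone.my_pods_around:
        # Under attack: worth exactly as much as a neutral or enemy zone
        return zone_attractiveness(zone, me)
    # Quiescent state: value comes from the enemy zones capturable from here,
    # damped so a zone already held is worth less than one yet to be taken
    capturable = [n for n in zone.neighbors if n.owner not in (None, me)]
    return 0.5 * max((zone_attractiveness(n, me) for n in capturable), default=0.0)
```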

Buying and moving PODs

Now let us consider how the PODs are placed on the map and how they move.

Buying PODs is very simple: we usually just buy and place on the map as many PODs as we can. The probability of choosing a zone for POD placement is proportional to attractiveness^power. The parameter power determines how strongly we focus on the best zones, and it was tuned for each type of game individually (by default it is equal to 5). Figure 4 illustrates the relationship between the probability distribution of zone choice and the parameter power; a sketch of the weighted draw follows the figure.

 


Figure 4. – Relationship between the probability of choosing a zone and the parameter power
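Such a weighted draw can be implemented in a few lines; the dict-based representation and the helper name are illustrative assumptions, not grmel's code.

```python
import random

def choose_zone(attract, power=5):
    """Draw a zone id with probability proportional to attractiveness**power."""
    zones = list(attract)
    weights = [max(attract[z], 0.0) ** power for z in zones]
    if sum(weights) == 0:
        return random.choice(zones)  # nothing attractive: pick uniformly
    return random.choices(zones, weights)[0]
```

With power=5, a zone twice as attractive as another is already 32 times more likely to be chosen; raising power to 40, as on the first 1vs1 turn, makes the draw almost deterministic.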

Because we can put a new POD in any zone on the map, there is not much point in long-term planning of POD movements; first of all we are interested in immediate benefit. Thus a POD only examines the attractiveness of the adjacent zones and of the zone where it is located. The probability of choosing the next zone for a POD is proportional to its attractiveness raised to the tenth power.

Note that the designed evaluation function allows my AI to think one move ahead. Look at Figure 5: the attractiveness of zone B is higher than that of A, since zone B has an adjacent neutral zone with a platinum source. But if the attractiveness values of all adjacent zones are equal to zero (the POD is deep inside my territory), my AI performs a BFS to find the nearest neutral or enemy zone (actually the algorithm was a little more complicated, and I tried to compare paths to zones with and without platinum). A minimal BFS sketch appears after Figure 5.

 
Figure 5. – Look-ahead effect of the evaluation function
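The fallback for PODs deep inside friendly territory might look like this; the adjacency-dict representation is assumed, and only the basic BFS is shown (without the platinum-aware path comparison grmel mentions).

```python
from collections import deque

def step_toward_nearest_target(start, neighbors, owner, me):
    """Return the first move on a shortest path from `start` to the nearest
    zone not owned by `me`, or None if no such zone is reachable."""
    visited = {start}
    queue = deque()
    for n in neighbors[start]:
        visited.add(n)
        queue.append((n, n))  # (current zone, first step taken from `start`)
    while queue:
        zone, first_step = queue.popleft()
        if owner.get(zone) != me:
            return first_step  # nearest neutral or enemy zone found
        for nxt in neighbors[zone]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, first_step))
    return None
```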

PODs that are surrounded by my own zones move first; the best role they can play is defending captured zones. Then the other PODs move. Once all the PODs have made their moves, new PODs are bought and placed. After every action (a move or a placement), the attractiveness of the selected zone is decreased by multiplying it by a factor smaller than 1, which lowers the probability of choosing the same zone again. In addition, it is necessary to monitor, for each of my zones, the balance of my own and enemy PODs in order to determine which attractiveness value to use: under attack or in the quiescent state. A condensed sketch of this turn loop follows.
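In the sketch below, the decay factor 0.9 is an assumed example (the post only requires a factor below 1) and the data layout is illustrative.

```python
import random

def move_pods(pods, neighbors, owner, attract, me, decay=0.9):
    """Move all PODs for one turn: inner PODs first, then border PODs.

    pods: zone -> my POD count; neighbors: zone -> adjacent zone ids;
    owner: zone -> owner id; attract: zone -> attractiveness.
    Returns a list of (source, destination) moves.
    """
    inner = [z for z in pods if all(owner[n] == me for n in neighbors[z])]
    border = [z for z in pods if z not in inner]
    moves = []
    for z in inner + border:  # PODs deep in my territory move first
        for _ in range(pods[z]):
            options = [z] + neighbors[z]  # staying put is also an option
            weights = [max(attract[o], 0.0) ** 10 for o in options]
            dest = random.choices(options, weights)[0] if sum(weights) else z
            if dest != z:
                moves.append((z, dest))
            attract[dest] *= decay  # damp after every action
    return moves  # new PODs are bought and placed after all moves
```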

 

First turn

There is one more thing to discuss: the first turn. The first turn is one of the most important things in Platinum Rift, so I, like many other participants of the contest, treated it separately.

On the first turn in 1vs3 games, my AI tries to capture the poorest continent. The parameter power is temporarily set to 3; here we try to cover as much territory as possible. However, if the selected continent is very poor, I do not place all the PODs on the map. For example, for the map in Figure 6 my AI tries to capture Eurasia. Since Eurasia is about 2 times poorer than the other continents, only five PODs are purchased and placed.

 

Figure 6. – A not very rich Eurasia

On the first turn in 1vs2 games, my AI tries to capture the richest continent. The parameter power is temporarily set to 7; here we focus on the best zones and can place two PODs on the same zone. In 1vs1 games I did not use any special strategy. However, on the first turn the parameter power is set to 40, and afterwards to 20. This is a strong focus on capturing the best zones, so two or even three PODs can be placed on the same zone. These settings are collected in the sketch below.
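Gathering the power settings described above into one helper (the consolidation is mine, the values are grmel's):

```python
def opening_power(num_players, turn):
    """Return the `power` parameter for the opening turns."""
    if turn == 1:
        if num_players == 4:   # 1vs3: spread over the poorest continent
            return 3
        if num_players == 3:   # 1vs2: focus on the richest continent
            return 7
        return 40              # 1vs1: very strong focus on the best zones
    return 20 if num_players == 2 else 5  # 1vs1 keeps 20; otherwise the default 5
```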

On the first turn in all types of games, I reduced the probability of choosing zones on small islands, because there it is difficult to capture new good zones by moving PODs alone. However, if after the 4th turn my AI understood that it would lose, it tried to capture small islands in order not to take last place in 1vs2 and 1vs3 games.

 

Conclusion

I think my AI has many drawbacks and there are many ways to improve it, but it performed quite well in the contest and took third place. After 6334 played games, I have the following statistics: 1vs1 – 56% first places and 43% second places; 1vs2 – 33/44/22; 1vs3 – 29/26/23/21. Platinum Rift has very simple mechanics and many ways to play. It was a great contest.