Why different winrates at symmetrical points?

In the example, why is G5 the best move while E7, C5, and E3 show -0.5 or -0.3? By the symmetry of the board, E7, C5, E3, and G5 should all have the same winrate.


The AI does not take symmetry into account, so it evaluates each of those moves independently. It then simply happens that some of the sequences it explores for E3 turn out worse than those for G5.

If you let the AI think for a long time, the values will eventually converge. But the automatic review on OGS uses relatively few playouts, so this kind of weirdness happens.
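To see why independent evaluations of symmetric points disagree at low playout counts, here is a toy Monte Carlo sketch (not KataGo itself; the noise model and numbers are made up for illustration). Four "moves" share the same true value, but each gets its own independent noisy playouts:

```python
import random

def noisy_eval(true_value, playouts, seed):
    """Toy model: each playout returns the true score plus noise;
    the evaluation is the average over all playouts. Purely
    illustrative, not how KataGo actually scores moves."""
    rng = random.Random(seed)
    samples = [true_value + rng.gauss(0, 2.0) for _ in range(playouts)]
    return sum(samples) / playouts

# Four symmetric moves share the same true value (say 0.0 points),
# but each is evaluated independently with its own random playouts.
for playouts in (10, 100, 10000):
    evals = [noisy_eval(0.0, playouts, seed) for seed in range(4)]
    spread = max(evals) - min(evals)
    print(playouts, [round(e, 2) for e in evals], round(spread, 2))
```

With few playouts the four estimates differ noticeably, and one "symmetric" move looks best by luck; with many playouts the spread shrinks toward zero, which is the convergence described above.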


You can consider any move that loses less than (or roughly) 0.5 points to be a good move. There is always some margin of error in the AI's evaluation. I guess that technically, if you let the bot run for unlimited time, it would eventually consider C5, E3, E7, and G5 to be exactly equal, with the same follow-ups.


The other thing you can do is run a second AI review, i.e. add another KataGo review.

It will probably agree with the first review on most key moves, but you can see fluctuations of around 0.1 points iirc (maybe it depends on the board size), and which move turns blue can differ depending on how close the candidates were and what was played out.
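A quick way to build intuition for how often the "blue" move flips between two near-equal candidates is another toy simulation (again hypothetical numbers, not KataGo's real noise model):

```python
import random

def review_top_move(true_values, playouts, rng):
    """Toy second-review simulation: score each candidate move with an
    independent noisy Monte Carlo average, and return the index of the
    move that would be coloured as 'best'. Purely illustrative."""
    evals = []
    for v in true_values:
        samples = [v + rng.gauss(0, 2.0) for _ in range(playouts)]
        evals.append(sum(samples) / playouts)
    return max(range(len(evals)), key=lambda i: evals[i])

rng = random.Random(42)
true_values = [0.0, -0.1]  # two candidates only ~0.1 points apart
flips = sum(review_top_move(true_values, 50, rng) == 1 for _ in range(1000))
print(f"second-best move ranked first in {flips}/1000 reviews")
```

With candidates only ~0.1 points apart and modest playouts, the "wrong" move comes out on top in a large fraction of runs, which matches the experience of rerunning a review and seeing a different move turn blue.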

You obviously don’t need to do this for every review to get the key points and ideas of the game, but it can be useful to build some intuition about the numbers for a fixed review strength.

This picture shows that a ±0.3 difference doesn’t really matter.
