Win rate and score inconsistency

In win % mode White wins, in score mode Black wins, at the same position. How is that possible?

At move 46, the predicted win % is B+37%, the predicted score is B+0.2, and the estimated score is B+2.


1 Like

I’ve noticed something similar in some games.

I think it might be to do with different numbers of playouts in the fast review vs the score estimator.

You can also get different values each time you press the score estimator in some cases.

I’m guessing it’s just to do with it being a capturing race with low playouts.

2 Likes

The life and death situation is still not clear 14 moves after that, so the AI is confused. It needs more playouts.

2 Likes

Two different bots may estimate the score of the same position differently.

If the neural net learned win % estimation and score estimation independently, the result would be as if we had two separate bots.

The score estimator runs each time you press Estimate; you can adjust territory in it by clicking on stones.

Katago runs only once. The win rate and the predicted score in the graph are both given by Katago, and it predicted a White win and a Black score lead simultaneously. That is what's strange.


There is only one Katago instance in this game. The same bot predicts a contradictory outcome. It looks like the AI is hallucinating.

I mean, it's possible to make “one” bot that has two independent nets, one for score estimation and one for % estimation. Then it would be the same situation as if we had two separate bots.
Kata is most likely unified, but it may have something like multiple personality disorder. So, technically, it may be correct to think of % Kata and score Kata as two different bots.
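To make the "one bot, two heads" idea concrete: a single net can share one trunk of position features while separate output heads read off win rate and score. Here is a toy numpy sketch (the features and weights are made up for illustration, not KataGo's actual architecture) showing why nothing forces the two heads to agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one vector of features describing the position (hypothetical).
features = rng.normal(size=8)

# Two heads reading the same features, trained on different targets.
winrate_head = rng.normal(size=8)  # stands in for learned winrate weights
score_head = rng.normal(size=8)    # stands in for learned score weights

winrate = 1 / (1 + np.exp(-features @ winrate_head))  # sigmoid -> P(Black wins)
score = features @ score_head                          # predicted Black point lead

# Nothing ties the heads together, so winrate < 0.5 together with score > 0
# (or the reverse) is perfectly possible in a hard-to-judge position.
print(winrate, score)
```

In training the two heads are fit to different targets (game result vs final margin), so on confusing positions like an unresolved capturing race their answers can drift apart.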

1 Like

Yes, but

Katago also does the score estimate once the game is over.

But as to why you get different outcomes/estimates:

As @jlt is saying, imagine you only let Katago look ahead 5 moves, or the playouts don't play the capturing race to the end or explore every line properly. Then it won't get the full picture of who's winning, and depending on where it stops checking it might give very different answers about who's winning.
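The effect of low playouts is easy to simulate. Suppose (hypothetically) that the capturing race, when read out to the end, goes Black's way 40% of the time; with only a handful of playouts the estimated win rate jumps around, while many playouts settle near the true value:

```python
import random

TRUE_BLACK_WINRATE = 0.4  # hypothetical: Black wins the race 40% of the time

def estimate_winrate(num_playouts, seed):
    # Each playout reads the race to the end and returns 1 (Black wins) or 0.
    rng = random.Random(seed)
    wins = sum(1 for _ in range(num_playouts)
               if rng.random() < TRUE_BLACK_WINRATE)
    return wins / num_playouts

# With 5 playouts the estimate varies a lot from run to run:
for seed in range(3):
    print(estimate_winrate(5, seed))

# With 10,000 playouts it settles close to 40%:
print(estimate_winrate(10000, 0))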

Really, it doesn't matter which outcome is predicted. Katago may predict a White win or a Black win; that is fine. The problem is in the interpretation of the outcome. Katago predicts a White win at move 46 while explaining it by saying Black has 0.2 points more. That is the inconsistency.

This is not a bug, this is how KataGo works: occasionally (and more likely with low playouts) winrate and score, being 2 different metrics, can disagree about which player is winning. Lightvector has explained it better than I could before if you search here / r/baduk / lifein19x19.com, but think of it like this: if you have a small chance of winning +100 and a big chance of losing -5, then the winrate average will tend to show you losing, while the score average could show you winning, depending on exactly how that weighted average comes out.
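The weighted-average point can be checked with a few lines of arithmetic. The outcome distribution below is hypothetical, chosen to mirror the move-46 numbers (B+37% but a positive Black score lead):

```python
# Hypothetical outcomes the search found for Black:
# a small chance of winning big, a large chance of losing narrowly.
outcomes = [
    (0.37, +15.0),  # 37% of playouts: Black wins the capturing race, +15
    (0.63, -5.0),   # 63% of playouts: Black loses it, -5
]

winrate_black = sum(p for p, score in outcomes if score > 0)
expected_score_black = sum(p * score for p, score in outcomes)

print(f"Black win rate:       {winrate_black:.0%}")          # 37% -> White favored
print(f"Black expected score: {expected_score_black:+.2f}")  # +2.40 -> Black ahead
```

Both numbers come from the same set of playouts, yet winrate says White is favored while the expected score says Black is ahead. No contradiction, just two different averages over the same distribution.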

6 Likes

How many playouts does estimate score use?

1 Like

Zero. It is a territory calculator. You can adjust its input.

However the score estimator for spectators, or for players after the game is finished, uses Katago, with 1 playout I think.

It looks more accurate to me than AI Sensei, where 50 playouts are used.

3 Likes

It wouldn’t surprise me if it was something like 50 or 100. I can’t remember exactly, but I think the fast review of the whole game after it finishes uses about that many.