AI (player) analysis shows "wrong" percentage

In this game:

At move 51 for Black (C9), the AI shows 27% for Black, but when the game is analyzed offline with various engines (especially when we play out the variation suggested by KataGo), Black is easily winning (99%).

Is the KataGo analysis really that weak? Or is there something the other engines don't see?
I don't understand. Please help. Thanks to anybody!

1 Like


In a separate thread we talked about this a little bit. There is some margin of error for the AI. This game must have fallen into that category.

1 Like

I wonder how that “margin of error” is coming into play, because with my local KataGo engine, on a weak GPU and a 10-year-old CPU, I have never seen errors like that. That game was a 9x9, and there weren’t even a lot of variations. I checked the game above and my AI immediately says 99% for Black. Something is strange. I can imagine that in a complicated 19x19 game with endless fights and unclear group status the AI would need a lot of time to get it right, but in all my games so far there has never been a position where it took more than a second to figure out. An almost finished 9x9 game should be a piece of cake for it.

Edit: After Eugene’s post I checked again. If I start a fresh instance of Lizzie/KataGo, KataGo also takes a few thousand playouts to see White's move 50 as a big mistake. So I guess in my first attempt there were still variations in memory; that’s why it immediately showed Black winning. It does seem that this position is hard for KataGo to deal with at low playouts.

1 Like

nice bunny :3

1 Like

I agree with the OP that if this game were the “test case”, we would conclude that the AI analysis doesn’t have enough playouts to give a good indication of how bad white’s move 50 was.

The KataGo on my machine takes about 1.5k playouts (I mean, the number in the “visits” display reaches about 1.5k) before it figures out that White's J7 at move 50 is worse than -50%.

This would seem to indicate that:

  • OGS Kata is using fewer playouts than that
  • This isn’t enough playouts to give a good result
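For anyone who wants to test this locally, KataGo's JSON analysis engine lets you cap the search explicitly, so you can compare low-visit and high-visit evaluations of the same position. This is only a sketch: the field names follow KataGo's analysis-engine query format, but the move list here is a placeholder (not the actual game record) and the `maxVisits` value is just chosen to sit above the ~1.5k visits mentioned above.

```python
import json

# Sketch of a query for KataGo's JSON analysis engine
# (started as: katago analysis -model <model> -config <analysis config>).
# One JSON object per line is written to the engine's stdin.
query = {
    "id": "move50-check",
    # Placeholder moves for illustration only -- substitute the real
    # game record up to the position you want to check.
    "moves": [["B", "E5"], ["W", "C3"]],
    "rules": "japanese",
    "komi": 5.5,
    "boardXSize": 9,
    "boardYSize": 9,
    "analyzeTurns": [2],   # analyze the position after the last listed move
    "maxVisits": 2000,     # above the ~1.5k visits where the blunder shows up
}

line = json.dumps(query)
print(line)
```

Running the same query twice with a low and a high `maxVisits` (say 100 vs. 2000) and comparing the reported winrates would show directly whether the server-side evaluation is visit-limited rather than wrong in principle.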

@anoek can you see anything odd about this one? Have we “turned down” playouts recently, or is this just a weird case for KataGo to work out, rather than an indication that the analysis is too weak in general?

2 Likes

@Eugene: I updated my post after reading yours. My engine also takes a few thousand playouts to see the mistake. It probably had the analysis in memory when I wrote the post; that’s why it came up with the correct estimate immediately.

2 Likes

Thank you!

And unless I somehow missed it, I got no notification about this thread whatsoever…
Where should they pop up? In my notification area? Email?