I’m very happy about this new feature but I’m also quite confused.
I’m not a supporter (for now) so I can only see the three “Top game changing moves”.
I suppose they should work like this: Leela Zero points out the moments in which a player made a move that significantly altered the win rate of the game.
From my previous understanding of how Go and AI work, that would also mean: the player made a mistake. There's no way to increase our win rate; we can only decrease it by playing wrong moves. That's what I learned from Leela.
So in this game I can understand the point: move 59 was small. Moves 130 and 131 should have focused on the fight in the middle instead of reinforcing along the sides. That's ok.
In all of these cases the players made moves that weren’t even in the top 6 (A-F) according to LZ.
Nice. That makes sense to me.
But what’s the meaning of these other “Top game changing moves”?
Players actually made the best move (A) or the second best move (B):
The player made move 16, which matched LZ's best move, and the delta is quite small (3pp)
Players actually made the best move (A)
So what’s wrong?
Best moves are bad moves?
Are the top moves not that significant after all?
Didn’t I just understand the whole thing?
Should I just pay a few bucks and look at the whole analysis? (Well, that wouldn't be very flattering for those little "game changing moves".)
What happens is that it guesses the top three using a pretty rubbish estimation, then it computes the actual score of just those three moves and shows that as the percentages you see. So it had a wrong 'hunch'.
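The two-pass idea described above can be sketched roughly like this. This is my own illustration, not the actual OGS/Leela Zero code; the function name and the use of absolute win-rate swings are assumptions:

```python
# Hypothetical sketch of the "cheap hunch, then re-score" selection.
# Not the real implementation; names and details are assumed.

def top_game_changing_moves(winrates_cheap, k=3):
    """Pick the k moves with the biggest win-rate swing in a cheap pass.

    winrates_cheap: one win rate (0-100) per position, estimated with
    very few playouts -- the rough 'hunch' that selects the candidates.
    Returns move numbers, biggest swing first.
    """
    deltas = [
        (abs(winrates_cheap[i] - winrates_cheap[i - 1]), i)
        for i in range(1, len(winrates_cheap))
    ]
    # Keep the k largest swings; these become the "game changing moves".
    return [move for _, move in sorted(deltas, reverse=True)[:k]]

# A second, deeper pass would then re-evaluate only those k positions
# with many more playouts, producing the percentages actually shown --
# which can disagree with the cheap hunch that picked them, explaining
# why a "best move" can end up listed as a game changer.
```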
Well, as it got implemented yesterday and is only now being tested on live data with feedback, I would not ditch the whole thing just yet. Anoek has been tweaking things since yesterday.
I think it is fair to assume there might be some "errors in judgement", since the whole thing is running free of charge, and even the top bots can still sometimes screw up at full power. But hopefully, once all is polished, even those 3 moves should often point to good places to think about (especially in cases of major mistakes; I guess it can be kind of tough to judge a game where one just slowly keeps falling behind).
I don't think it should be taken as a substitute for a good teacher, but as a general "get an idea" thing I am honestly completely loving it. (And not because I am expected to.)
I am just saying: do not disregard it as garbage right away. Give it a few more chances. Anoek is still looking for the perfect weights/playouts combo that would be "good enough" (it will never be flawless and should not be expected to be, IMHO) but still manageable to run on the servers for free.
It’s not entirely bad that the AI review has obvious flaws.
If an approximative system becomes too good, people tend to rely on it without question and expect it to be perfect. We regularly have these discussions about the score estimator and the dead stones suggestion in scoring.
If the AI were given 5k playouts every time, it would only reinforce the annoying trend in which even strong players take the AI's valuation as gospel.
I could be wrong, but I believe it shows the most consequential moves, in terms of changes in the predicted win/loss percentages. Therefore, even in a game that I lost, white may have made a big goof which I failed to take advantage of. The goof momentarily caused a big swing in the percentages, so it makes it into the top three. A valuable feature, because it shows moments where we overlooked an opportunity.
If your opponent has you over a barrel anyway, then even a big goof by them may not be sufficient to show up as a change in the win/loss percentage.
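That saturation effect can be illustrated with a toy model. The logistic mapping from score lead to win probability below is entirely my assumption (Leela Zero's value head is learned, not a fixed formula); it only shows why the same goof in points barely moves the win% when the game is already decided:

```python
import math

# Toy model (assumed, not Leela Zero's actual valuation): map a score
# lead in points to a win probability with a logistic curve.
def win_prob(score_lead, scale=10.0):
    return 1.0 / (1.0 + math.exp(-score_lead / scale))

# The same 8-point goof in two different game states:
even_game = win_prob(4) - win_prob(-4)     # near 50%, the win% swing is large
lost_game = win_prob(-26) - win_prob(-34)  # far behind, the same goof barely registers
```

Under this model, an 8-point blunder in an even game swings the win rate by roughly 20 percentage points, while the identical blunder when already ~30 points behind moves it by only a few. A ranking based on score difference would flag both equally, which is the motivation for the suggestion that follows.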
I'm wondering if it would make sense to instead show us the three moves that made the biggest difference to the expected final score, rather than to the win%? Does that make any kind of sense to anyone?