The Review Meta-topic: How to review?

Ah that’s fair :slight_smile: I mean the odd time it might be fun to see what my mistakes from 3 years ago looked like compared to now :slight_smile:

At the moment I don’t do any sort of storage; I just play a game, review it, try to find a couple of takeaways from the game and hope I remember them next time they come up :stuck_out_tongue: probably easier with shapes and tesuji and things, maybe less so for evaluating positions and the results of sequences on a whole board. I can imagine having images of the positions is good for that.

Really interesting! Do you use a tag system then, or search by keywords? Or just manually regroup?

Well the idea originally was just to collect all kinds of mistakes and see which ones appeared most often - I grouped them just by title at that time, so I had very general titles like “low and slow #xy” or “too aggressive #zt”.
Now I manually group them into folders, but it’s still the same concept basically - if I notice that a certain kind of error can be grouped into one subcategory, I do that and create another subfolder for it (there’s a rough sketch of scripting that filing after the list below).
Then if I feel I got a good grip on the overarching category I either
a) move on to another category or
b) move deeper into one of those subcategories.
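
Just to make that concrete, here’s a minimal sketch of how the filing could be scripted - the folder name, function, and example categories are all made up for illustration:

```python
from pathlib import Path
import shutil

# Hypothetical layout: one folder per mistake category, with optional
# subfolders for finer-grained lessons, mirroring the manual grouping above.
ROOT = Path("mistake-library")

def file_mistake(screenshot: str, category: str, subcategory: str = "") -> Path:
    """Copy a position screenshot into its category (and optional subcategory) folder."""
    dest_dir = (ROOT / category / subcategory) if subcategory else (ROOT / category)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(screenshot).name
    shutil.copy2(screenshot, dest)
    return dest

# e.g. file_mistake("game42_move87.png", "groups", "lost-track-of-weakness")
```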

For example, one big thing a couple of years ago was that I just didn’t think enough about groups (I thought more about points) and always lost track of them and how weak they were, so I first had to focus on that. Once I had my focus there, I went deeper into actually evaluating what made them weak or strong and why, and from there into when and why I could tenuki from my own groups.

Currently one focus of mine is that I’ve noticed I tend to overattack: I create shapes that are too heavy for myself and then get counterattacked.

Actually here is a situation from the same game I showed earlier


I was still more concerned with keeping the opponent separated, creating a very heavy shape for myself.
I notice and collect more and more examples of this in my games until the lesson sinks in :wink:

1 Like

I really like this idea. This looks like a great way to record reviews. I find that I review but do not necessarily learn my lesson. For example, recently I seem to be going for side extensions too early, forgetting the proverb “corners before sides”. But I keep making the same mistake! Maybe if I had a database like this I could look over it before I play to remind myself what I need to improve upon this time. I think I will give it a go :slight_smile:

One last technical question: do you use the database of some Go software? I mean, pure SGF files don’t seem that practical - or do you use the OGS library too?

It actually motivated me to play more and seriously review my games because it felt more like I was ‘creating something lasting’ I could learn from and revisit again later :slight_smile: and actually track my progress in more than one way (because especially at dan level, rank ups are way too rare to be a good indicator of progress ^^)

And cool, let me know how it goes :wink:

I just review with katago and take screenshots of the positions where I made mistakes - not using any special software other than that personally, but I know others who do - personal style I guess, I like the rather ‘raw’ approach ^^

1 Like

I’ve been beaten twenty times by Dave in long games from February 2018 to September 2020, and received a review from him every time, and I can vouch that they are very good.

It’s amusing to realise that if you were playing Dave over that span of time, you’d think that you were getting weaker because his wins were getting more and more convincing, but ofc “the scenery is being pulled away from you” as he improves.

2 Likes

Very kind of you to say :slight_smile:
And yeah - in the end it is all about finding out what works for you :wink:
And it’s just as important to find out what DOESN’T work out for you ^^

It may be useful to create joseki with the bot instead of just reviewing your game with it.
In an actual game the AI’s decisions may be biased by the rest of the board, so it’s better to recreate the corner situation from the game on an empty board.
If the bot wishes to tenuki, you can add something maximally neutral in the other corners.

Then you will have a helpful collection of corner sequences that includes your opponents’ moves and the bot’s answers. You can memorize it and play it in your next game, then update the collection …
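
If you wanted to script that, here’s a rough sketch against KataGo’s JSON analysis engine - the config and model paths are placeholders for your own setup, and the corner sequence is an invented example:

```python
import json
import subprocess

# Query KataGo's analysis engine with a corner sequence replayed on an
# empty board, as suggested above. Paths below are placeholders.
KATAGO_CMD = ["katago", "analysis", "-config", "analysis.cfg", "-model", "model.bin.gz"]

corner_sequence = [["B", "Q16"], ["W", "R14"], ["B", "O17"]]  # invented example moves

query = {
    "id": "corner-1",
    "moves": corner_sequence,
    "rules": "japanese",
    "komi": 6.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [len(corner_sequence)],  # analyse the position after the last move
}

proc = subprocess.Popen(KATAGO_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
proc.stdin.write(json.dumps(query) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.terminate()

# Print the bot's top answers to the corner position
for info in sorted(response["moveInfos"], key=lambda m: m["order"])[:3]:
    print(info["move"], "visits:", info["visits"], "scoreLead:", round(info["scoreLead"], 1))
```

Loop one query per corner position over a folder of game snippets and the collection more or less builds itself.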


It’s better to review only the games that you lost - so you will continue to play like a human in situations where your moves work. Only the moves that don’t work will be replaced with AI moves. Your collection will then include only the moves of opponents who were stronger - so those moves may be useful too. Also, I think it’s better to play one more game instead of wasting time reviewing a game that you won anyway.

1 Like

In these days of easily available reviews from superhuman AIs, I think it’s important after a game to not immediately jump in and see what the AI has to say (so disable the AI review on OGS for the first pass through the game), but first do some self-review. This doesn’t need to be particularly in depth, but should at least involve identifying the overall story and key moments of the game, such as “Opening was ok, then I felt I did badly in fight at top though not sure what I did wrong, but then he overplayed on right side and I punished him well and took the lead, then I played the wrong move to live (should have been s18) and he killed me, game over”. Only then check with AI and see how accurate your judgement was: were there big mistakes you didn’t even realise were big mistakes? Were you right your group would have lived with s18 or was that also a misread, or maybe even though you died you still had a chance? Antti Tormanen 1p has also given this advice.
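
As a toy illustration of that last comparison step (all data below is invented - in practice the score list would come from a full-game engine analysis):

```python
# Compare the moments you flagged in self-review against the biggest
# score swings an AI review found. All numbers and notes are made up.
self_review_flags = {
    3: "felt I did badly in the fight at the top",
    6: "wrong move to live, should have been s18",
}

# Black's score lead after each move, as reported by the engine
score_lead = [0.5, 0.7, 0.2, -3.1, -2.8, -2.9, -10.4, -10.1]

swings = sorted(
    ((abs(score_lead[i] - score_lead[i - 1]), i) for i in range(1, len(score_lead))),
    reverse=True,
)
for swing, move in swings[:3]:
    note = self_review_flags.get(move, "not in my self-review - a mistake I missed?")
    print(f"move {move}: {swing:.1f} point swing -- {note}")
```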

Here’s an example of me doing a self review first and then AI review in Lizzie. Leela Zero Review: Fox Game 2 - YouTube

And some tips on doing effective AI reviews. Interactivity is key. https://www.reddit.com/r/baduk/comments/hwlv7e/how_to_do_effective_ai_reviews/

7 Likes

https://senseis.xmp.net/?HusTeachingMethod


I also found Rubilia/teaching at Sensei's Library

1 Like

It is often hard to see things in a different way. One way to facilitate that is to role play. Janice [Kim] defined the following colored hats:

  • Red hat: what are your feelings about the move?
  • White hat: what will the opponent do? At least two moves ahead
  • Yellow hat: optimist: what is good about this move? what is the best that can happen?
  • Black hat: pessimist: what is bad about this move? what is the worst that can happen?
  • Blue hat: Hyun sae pan (positional judgement). What is the score (count)?
  • Green hat: creative, nay, strange solutions; think outside the box

By considering each of these perspectives for a given move, we can strive to gain the ur-perspective (ur- is a German prefix meaning original or primitive). The goal is to gain a true understanding of the position (and the correct approach to it) by considering the various aspects revealed by the roles.

As found on Kirk’s page Wear Many Hats at Sensei's Library
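
If you wanted a reminder during reviews, a trivial prompt-cycler does the job (just a sketch, with the questions paraphrased from the list above):

```python
# Cycle through the six hats' questions for one move of a review.
HATS = {
    "red": "What are your feelings about the move?",
    "white": "What will the opponent do? Read at least two moves ahead.",
    "yellow": "What is good about this move? What is the best that can happen?",
    "black": "What is bad about this move? What is the worst that can happen?",
    "blue": "Positional judgement: what is the score (count)?",
    "green": "Any creative or strange alternatives? Think outside the box.",
}

def review_move(move: str) -> None:
    """Print each hat's question for the given move."""
    print(f"Reviewing {move}:")
    for hat, question in HATS.items():
        print(f"  [{hat}] {question}")

review_move("move 34 (the pincer at the top)")  # example label, made up
```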

On the trunk page of the series, What I Learned at Janice Kim’s Workshop, I rediscovered this quote:

“For Argyris and Schön, learning involves the detection and correction of error. Where something goes wrong, it is suggested, an initial port of call for many people is to look for another strategy that will address and work within the governing variables. In other words, goals, values, plans and rules are operationalized rather than questioned. According to Argyris and Schön, this is single-loop learning. An alternative response is to question the governing variables themselves, to subject them to critical scrutiny. This they describe as double-loop learning. Such learning may then lead to an alteration in the governing variables and, thus, a shift in the way in which strategies and consequences are framed.”

I think I discussed this concept here quite a long time ago, but I’ve no idea when or where any more.

I found this advice for SDKs, from the Kerwin series, very interesting.

[When reviewing a professional game,] at the end of each engagement, look at the outcome. You know the outcome of the engagement is even. (Even if the division of spoils was uneven enough to decide a game between pros, in an amateur game it can be considered completely even.) Does it look even to you? If not, reconsider your judgment.

I’d never thought of approaching the review of a professional game this way.

Often I’d be thinking of who I thought got the best of an exchange and why, which is fine – but I wouldn’t check myself like this by taking it on faith that the result was essentially even, and questioning why I considered it otherwise.

I still feel a little uneasy with this method, and in fact I was going to post it in the controversial Go opinions thread.

3 Likes

It’s proven that AI plays better than a human if the AI plays the moves it actually chooses (and if the situation is not too unusual).

But can we really trust the score estimation of a random move (a move which the AI didn’t even plan to analyse)? Especially if you have too few playouts.

There are big bugs, and maybe there are also small bugs.

So, in an AI review, if your move shows a bigger score than the blue move (the move the AI would choose), I think it’s safer to play the blue move next time in a similar situation instead of playing your move again. Maybe your move is better, but can a buggy AI with few playouts be proof of that? I suggest ignoring the score, ignoring the %, and just looking at which move is blue.
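
In terms of KataGo’s analysis output (same `response` shape as in the corner-sequence sketch earlier in the thread; `order` is the engine’s own ranking of the moves it searched), that heuristic is simply:

```python
def blue_move(response: dict) -> str:
    """Return the engine's first choice, ignoring the scores of all other moves."""
    # order == 0 marks the move the engine itself would play (the "blue" move)
    best = min(response["moveInfos"], key=lambda info: info["order"])
    return best["move"]
```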

It’s bollocks, at least for dan players, based on too much deference to professionals. I prefer Guo Juan’s advice for reviewing pro games: try to find their mistakes (you have their professional opponent to help you).

2 Likes

I think both views are exaggerated (and interesting). Finding what a player is aiming for, and whether it works or not, is the next step. If it looks like a failure, then backtracking is a good way to search for other alternatives.

Me, trying to find a mistake in a professional game:

Move 100: White didn’t defend? Isn’t White’s group dead?
Move 101: What? Why didn’t Black attack?

Move 151: Ah, finally Black is attacking so now White is dying.
Move 162: How did White live miraculously???

8 Likes