I am from another generation. When I started studying Go, there were several books to learn from, covering basic and advanced concepts that, after 20 years, seem obsolete and no longer valid.
Even if I consider some of those books masterpieces, and most of the basic concepts are still valid, it seems to me that today this learning approach has all but disappeared.
Especially after the rise of the Korean style of play and - even more evidently - after the introduction of AI into the game of Go, it seems that the concept of a “vulgar move”, for example, no longer exists. Everything is permitted on the board because the final objective is… to win the game.
Now the question: if I want to use AI effectively to improve my Go skills, how should I use it?
Let me be more precise. I am an SDK (at least I should be, even if not yet on OGS), and I don’t think that playing against a super-human AI network would help much, because 90% of the moves made by the AI are unintelligible to me.
Would it be better - for example - to play against a weaker AI network? And even if I had such a weaker network available, how should I interpret my progress, considering that 99% of the time I will be defeated anyway? If I understood correctly how AI works, I cannot use the final score to understand whether I am progressing in my Go skills. In fact, the AI does not care at all whether it is winning by 5 points or by 50. What matters to it is the accuracy of the winning prediction: given a move with a higher probability of winning but a closer final score, the AI prefers the higher probability over the larger margin.
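That asymmetry can be shown with a toy sketch (the numbers below are hypothetical, not real engine output): a policy that maximizes win probability will always prefer the safe, narrow win over the risky, large one.

```python
# Toy illustration: an AI picks the move with the highest estimated
# win probability, ignoring the expected score margin entirely.
candidates = [
    {"move": "A", "win_prob": 0.92, "expected_margin": 1.5},   # safe, small win
    {"move": "B", "win_prob": 0.70, "expected_margin": 50.0},  # risky, huge win
]

# The engine maximizes win probability only:
best = max(candidates, key=lambda c: c["win_prob"])
print(best["move"])  # "A": a 92% chance to win by 1.5 points beats
                     # a 70% chance to win by 50
```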
So, what is the correct method to use? Maybe some dan players or higher-ranked SDKs can provide their advice. I would greatly appreciate any suggestion from those who are used to training with AI.
Well, no idea about the “correct” way, but I figured I would share what I do, and maybe you will find some inspiration.
Seriously doubt it. I claim no deeper understanding, but to me it seems that weaker bots just make silly mistakes sometimes, which is not much to learn from.
I don’t play bots (apart from on my mobile during my commute sometimes). I don’t find it fun and I can’t force myself to fully focus on a bot game. Hence - for me personally - it is not a good way to learn.
I use bots to review my normal games though, which I often find very illuminating. Just seeing the graph of the win estimate can often give a pretty good idea of where my direction went wrong (when there is a large drop). Of course I do not fully understand every suggested move, and cannot fully read them out, but nobody can. Still, they can very often show you when your move was really bad, and then you ponder (or ask) why.
Look for drops larger than, say, 10% (or percentage points, to be precise) to start with. Anything lower than that can reflect differences so minor that we cannot comprehend them at our level. But I would still ask for a human review every now and then as well :). We may not be that strong, but we are better at explaining.
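The filtering step is simple enough to sketch in a few lines (the winrate numbers below are hypothetical, just to show the shape of the idea):

```python
def find_big_drops(winrates, threshold=10.0):
    """Return (move_number, drop) pairs where the winrate fell by more
    than `threshold` percentage points versus the previous move.
    `winrates` is your side's winrate (0-100) after each move."""
    drops = []
    for i in range(1, len(winrates)):
        drop = winrates[i - 1] - winrates[i]
        if drop > threshold:
            drops.append((i + 1, drop))  # 1-based move numbering
    return drops

# Hypothetical game: one big blunder around move 4
winrates = [50.0, 52.0, 48.0, 25.0, 27.0, 26.0]
print(find_big_drops(winrates))  # [(4, 23.0)]
```

These are the moves worth pondering (or bringing to a human reviewer); everything below the threshold is probably noise at our level.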
I’ve only been on OGS for a short time, and thus have only analysed a few games using AI, but I can say this much:
- it’s interesting to study the ups and downs in the graph and to find out which of my (and the opponent’s) moves were mistakes - and what would have been better
- it can be confusing to look too closely at the AI’s opinion on every single move
- but it’s still useful to glance at the whole-game analysis, because often there are certain points suggested by the AI again and again, as it obviously finds them really important and way bigger than what was actually played, so this is imho a good chance to learn
The quickest and most concise advice is: review your games with AI and look for the moves that caused a big change in winrate.
Winrate starts even and then changes. Very good moves can slightly improve your winrate. A 0% move (no change in winrate) is a good one.
A bad move can be -20% up to -90% (when you were ahead and screwed it up completely).
Look for those moves, the bad ones, and try to figure out WHY they were so bad.
If you get it, you have learned something big!
If you don’t, ask for comments here in the forum, or ask someone stronger than you, and you will still learn something big - from your own games, correcting your own mistakes.
Actually, I review my games on OGS adopting exactly the suggested approach. If I see a -20% (or more) drop on a move, I probably made a huge mistake. It is also true that sometimes the AI considers certain moves “the only move” and remains fixed on that point of the board until one of the players occupies the spot (even if it is a trivial move that, for some reason, one deliberately postpones). I agree that one can learn a lot from this.
I decided to support OGS also for this reason. Reviewing all your played games with this analysis is really a plus. The tool is also really well integrated into the interface. I’m happy with it.
Anyway, my original post was intended not to be about analysis, but about playing against AI. Since I often read that nowadays Go pros train by playing against AI, I would simply like to understand whether this can be instructive also for a non-professional player (and in particular for an SDK player), considering the huge gap in rank.
From your answers, it seems to me that playing against AI can be quite frustrating and that you learn almost nothing. So it is better to use AI only for post-game analysis.
This is how I do it - I don’t actually play against a given AI myself - I pit two AIs against each other like bugs in a jar, and try to improve my reading and analysis skills as I try to follow the play and guess where each next move might come.
I am somewhere between 12k and 15k, so for my level of learning, I find it easier to follow games between a stronger AI and a weaker one. Two strong AIs playing against each other very quickly get into a level of high-dan play that is difficult for me to make sense of. But a strong AI encountering weaker resistance plays moves that are easier for me to follow and read.
For the strong AI I use this desktop version of Leela (the one trained on human games, rather than Lizzie, which is more tabula rasa): https://sjeng.org/leela.html. For the weaker AI I use COSUMI.
So I open up 2 different windows running these 2 different AI, and enter the same moves by hand between them. I am basically playing a game of Centaur Go against the weaker AI using the stronger AI as training wheels.
So, I make a move - I enter it into COSUMI - COSUMI responds. Before I enter that move into Leela, I take the time to do my own analysis, and try to figure out where the next best move might be.
Then I enter COSUMI’s response into Leela, it automatically responds, and I see if I was correct in my analysis or not. With the desktop version of Leela, (which auto-responds by default), I will often take its responding move back, and look at the heatmap of possible moves it offers me - to see if the move I guessed was at least one of the points on that heatmap. I then also take the time to look at all the alternate heat-map-move possibilities, and evaluate the various strategies involved in playing at those other places. I also take the time to think about why Leela chose the option it did rather than the other possibilities presented.
Then I usually take the move Leela offered, enter it into COSUMI, and get a response - rinse/repeat.
I don’t always play the moves that Leela suggests. At times, I will play one of the alternate options that Leela didn’t play, just to try out other strategies and tactics. Other times, I will get Leela’s help in reading out a variation several moves ahead - just to see how COSUMI might respond (the COSUMI site doesn’t allow one to undo moves, so I can’t try variations there).
I find this to be a very flexible learning tool, and - from the perspective of keeping up my morale and confidence - it feels a lot better to win against a weaker AI (that I can’t consistently beat on my own) with the help of a stronger AI than it does to consistently lose to a stronger AI.
Your mileage may vary - void where prohibited - some cars not for use with some sets
Your method is really interesting, @tonybe. I suppose it requires a lot of effort, jumping between the app and the browser. Maybe it is also a little bit distracting.
Just to let you know: with Sabaki (https://sabaki.yichuanshen.de/) you can install more than one AI network or bot (Leela Zero, KataGo, GNU Go, …) and assign different AI engines to different players.
So, without any tedious copy-pasting, you can enjoy a game between two different AIs.
The Leela Zero project offers all the old networks produced in the past, so you can also select appropriate levels for both AIs. I don’t know whether KataGo is also available with different networks; you can check.
Enabling the realtime analysis, you can also see the best options available on each turn, and by hovering your mouse over a point of the heat-map you can see in real time the sequence of moves the AI selected for that specific continuation. You can set the number of moves to be shown in the config file. Hope this helps.
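For reference, these engines are ordinary GTP programs, so in Sabaki’s engine manager you only need the path to the binary and its GTP arguments. Something along these lines should work (the file paths are placeholders; check each engine’s documentation for the exact flags):

```
Leela Zero:  path: leelaz   arguments: --gtp -w /path/to/network.gz
KataGo:      path: katago   arguments: gtp -model /path/to/model.bin.gz -config /path/to/gtp.cfg
GNU Go:      path: gnugo    arguments: --mode gtp
```

Assign one engine to Black and another to White, and Sabaki relays the moves between them for you.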
But my curiosity is: following your method, are you registering real progress in your games?
Hi - for me, it’s pretty easy to ALT-TAB between 2 applications so this is not a problem. I tried installing 2 different AI on Sabaki, and it was fascinating to watch them go at each other, but I couldn’t figure out how to enable heatmaps.
Also, this way I can make decisions different from Leela’s and try my own variations to see the result. This method allows for more of my own input, rather than just passively sitting back and watching two AIs battle it out.
As re: results - here is a copy/paste of my answer to a similar question posed a few weeks ago in a different thread (hopefully the formatting will work):
Some of those results are quantitative - I ranked up. I’m a slow learner - largely because I mostly play correspondence games. I like playing live games in person, with a board and stones, but on a computer I’ve just never gotten into live or blitz games. The couple of times I’ve tried it, I haven’t done very well. I enjoy correspondence games because I can really take the time to read out options and make a move I feel good about. However, the downside is that my correspondence games usually take weeks or months, and learning comes slowly.
For a long time, I was playing with a friend of mine online, and we both stumbled around from ~20kyu to ~ 17kyu. However, I noticed that my skills were kind of plateau-ing. I wasn’t really learning much, I was making the same mistakes and trying the same strategies. I stumbled onto another player who was stronger than me, and found that I learned a lot more from playing them.
Once I stumbled onto the learning-from-AI technique, I stopped playing other people for over a year, and when I came back, I noticed myself playing better, and I was able to win against stronger opponents. Here’s an older game of mine from when I was 17 or 18 kyu
Here’s a game after I took that year long break to learn stuff:
For me, the main differences between then and now are qualitative. In the past, I constantly felt lost, and didn’t know what to do next. Now that I have gotten better at analyzing the board, the game has taken on more of a narrative and I know where I am in the story. Now, granted, I still get in trouble because there are so many options for what to do at each point in that narrative, and I need to get better at picking the best strategy at each point.
My problem is that I have discovered a couple of new strategies - in my case, using cut points to create forcing moves, and using light play to move quickly across the board. Those are great, but I also need to slow down and balance them with some solid connecting moves that create shape.
But - at least now I can understand WHY and how I’m making those mistakes. I look back on my old games and find myself cringing, realizing that I was making moves with no idea what I was doing - handing my opponent an advantage with moves I thought “looked cool,” or failing to secure my territory and having big groups captured because I couldn’t pay attention to who had sente/gote.
Last year, I played a series of unranked teaching games with my friend soterios. We played something called Centaur Go - where we were both using AI training wheels, and most of the learning involved looking at the AI suggested moves, discussing which strategy each represented, and then having him pick one and play through the consequences.
Once the game was finished, I wrote up a move-by-move analysis of the game which you are welcome to read here. It’s long and dense, but it will give you a sense of the type of analysis involved. If you want to follow along, restart the game from the beginning in analyze mode, go to the top of the comments text, and then increment the moves with the text.
After I came back and played a few games, I ranked all the way up to 13 kyu. Then I lost a few games trying out new stuff/experimenting with new styles, and I’m back to 15 kyu. Just gotta get my nose to the grindstone and apply the stuff I’ve learned better, and maybe I’ll rank back up.
The Leela software I recommended does all that - and there’s an Ubuntu version
I see. Thanks for your detailed report. I’ll take my time to see the games you posted.
Following the same idea, I toggled analysis on in Sabaki while playing against Leela Zero. The analysis (heat-map), when activated, works also for the opponent (me, in this case), so I can see several options and decide which one to play. I can also see the continuation Leela proposes for each hotspot before taking a decision (even if I would never follow such a continuation if I were deciding by myself).
Often, I decide to follow a branch that is not the best in terms of heat-map, or even a place that is not on the heat-map at all.
Honestly speaking, I have played only a few games with this method and cannot conclude anything about it as a learning technique. What I can say is that - since the heat-map takes a few seconds to pop up after each Leela move - I usually have a really short time to think without being influenced. Nevertheless, it seems to me that Leela Zero very often proposes spots on the map that I would never play (you can imagine what their continuations look like…).
I should spend some more time on this method and see whether I can obtain some benefit.
It is interesting, however, to read how others are dealing with this.
At the end of the day, it seems that we all have similar issues along the learning ladder.
For me - being able to relate to the moves made a big difference in my learning. As an 18-15kyu, I was looking to figure out the tactics of 12-10kyus. Trying to learn by playing Leela against herself felt very alienating - even with her expert help, I was trying to wear 4 dan shoes I was too small for, and didn’t fully understand many of the moves I was making.
So what I’d do is find an AI closer to my target level, play them a few times, and see where I made mistakes. I would then play that same AI again - but now through Leela’s eyes. I found that in many ways Leela met me halfway because - if you don’t challenge her play-ahead, she gives you obvious answers rather than complicated ones.
At other times, I had to look at the 7 different battles offered by the heatmap, and learn to leave 3 of them alone - as they were more complex strategies than I could relate to.
So yeah, for me, it’s about giving the stronger AI input closer to the type of opponent I’m up against, and putting words and concepts around the strategies it employs to turn those situations in its favor.
I recommend against playing bots for learning/improving. Weaker bots might even be counterproductive, giving the illusion of progression, while possibly just reinforcing bad habits/intuition.
Playing against a stronger bot might avoid that pitfall, but, as you mention, it might be frustrating to always lose. As others have mentioned, AI can be used as an analysis tool (rather than just an opponent). However, a human teacher is far superior to any AI tool, simply because they can communicate with you and explain things.