How to train your brain like an AI

The title is a little misleading. :slight_smile:
A better one would be “Comparing human players’ thoughts with AI programming and using that to avoid tsumego”, but it ain’t that appealing! :wink:

I’m not going to say anything that isn’t obvious, but I thought about my personal train of thought when I play and compared that to what I know about Go AIs. I think that could be a starting point for some conversations.

AFAIK Go AIs are mainly made of two pieces: a Neural Network (NN) and some sort of Monte Carlo (MC) tree search algorithm.
The MC part is brute-force exploration of branches in the variation tree. It’s an overwhelming task when done on all possible variations, even for a computer.
That’s why the NN is key: it looks at the board and generates a bunch of possible moves, only a few, which are then passed to the MC, reducing the variation tree to a smaller, manageable size.
Of course, for each move in each variation these two parts interact: the NN comes up with some possible moves, the MC starts exploring a branch, which leads to a new board configuration and new possible moves from the NN, and so on.
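A toy sketch of that interplay (nothing like real engine code; the “policy” here is random, just to show how the pruning tames the tree):

```python
import random

# Toy sketch (not real KataGo): a "policy" that ranks candidate moves,
# and a search that only explores the few top-ranked ones.

def toy_policy(position):
    """Stand-in for the neural network: score every legal move, 0..1."""
    random.seed(position)  # deterministic per position, for illustration
    return {move: random.random() for move in range(19 * 19)}

def top_candidates(position, k=5):
    """The pruning step: keep only the k best-ranked moves for the search."""
    scores = toy_policy(position)
    return sorted(scores, key=scores.get, reverse=True)[:k]

def search(position, depth):
    """Explore only policy-approved branches instead of all ~361 moves."""
    if depth == 0:
        return 1  # count one leaf position
    leaves = 0
    for move in top_candidates(position, k=5):
        leaves += search(position * 1000 + move, depth - 1)
    return leaves

# Full-width search to depth 3 would visit ~361**3 ≈ 47 million leaves;
# with the policy pruning to 5 candidates it's only 5**3 = 125.
print(search(position=42, depth=3))  # → 125
```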

I left out a piece: the evaluation part.
At some point in this process, something states the hypothetical win rate for both players. This is necessary for pruning the variation tree, removing variations that aren’t useful. IDK if this task is done by one of the above pieces or if it’s a separate one.
This isn’t very important for what I’m discussing here. Let’s just think that we (both humans and AIs) just know when a variation is good or not for us.

My point is: when comparing AIs with human thoughts, NN is about instinct and imagining possible reasonable moves, MC is about reading.

Personally, I heavily rely on instinct. My reading skills are weak and I don’t want to improve them by doing specific training (tsumego). I am old, I am tired, I have almost no time to play, so I have to pick just what’s most important to me: playing. And I have to do that in my spare time (that’s why I only play correspondence games online, making my moves when I’m on the bus, on the toilet, in a waiting room and so on).

Having dropped any chance of improving my reading, I can only try to improve my instinct.
I already know that my imagination about possible moves is weak. I see that every time I try to solve a puzzle: when I get to puzzles at my strength, I mainly fail because I don’t even recognize the solution as a possible move. I discard it before even starting to read it out.

I once tried to solve a simple tsumego by writing down all possible variations. It’s a lot of work even for a simple puzzle. Brute force is exhausting and frustrating because it takes you through all the silly and useless variations, of which, in our beloved game, there are plenty. You get to the solution by excluding everything else, which is a drag!
So instinct is key to prune the tree, but it becomes harmful when you prune the good branch!

My training on that is using OGS AI to review my games.
It’s very easy, because for each move you’re presented with a few highlighted possible next moves. That’s just it: a handful of best options, already sorted by effectiveness. What I must do then is try to understand the underlying logic (it’s better to play elsewhere, I should place another stone on that group, I should defend, I should attack and so on) and get used to shapes (jumps, extensions, boxes and tables, hane and caps and so on).

I do this in two ways:

  • while I play, I write down in comments (Malkovich) when I’m very doubtful about direction of play. “Where should I play next?” or “Am I safe now? Can I play elsewhere?” or “That group seems unsettled, should I attack it?” is something I leave in my Malko log as a note for my later analysis with the help of AI. Then I can see if I was right or wrong and make use of it in my next game.
  • after the game, I scroll the full game comparing my choices with AI suggestions. Sometimes I’m spot on, sometimes I’m pretty near (trying to achieve the same goal with a less effective move), sometimes I’m completely out (neglecting urgent moves or, more frequently, adding stones where there’s no need to).

That’s my experience so far.
If you want to do the same, remember that strong AIs live on the edge: they have a very aggressive, fighting style and they leave many positions unsettled because they deeply understand full-board balance. I personally like a safer style, and most of the time I check the second or third options, when the AI evaluates that a safe move is only a small error compared to the best one. When there’s a prudent move that is less than 2 points behind the “fighting best choice”, I like to choose that instead. At my level (OGS 4k), 2-point mistakes are very good moves! :smiley:
I prefer points over winrate because I can always understand what N points means, while I don’t always understand what an N% change in winrate means. I’m just more comfortable with it, and I think that winrate becomes more important when you’re very strong and must hone your skills instead of building them from nowhere! :smiley:
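The habit described above can be sketched like this (the candidate moves and scores are made up, just to show the rule):

```python
# Sketch of a "prudent move" picker: among the engine's top options,
# prefer a safe move whenever it costs less than 2 points compared to
# the best, fighting-style choice. All data here is hypothetical.

def pick_move(candidates, tolerance=2.0):
    """candidates: list of (move, expected_score, is_safe), best first."""
    best_score = candidates[0][1]
    for move, score, is_safe in candidates:
        if is_safe and best_score - score < tolerance:
            return move
    return candidates[0][0]  # no safe move close enough: take the best

candidates = [
    ("R14 (cut and fight)", 5.0, False),
    ("Q13 (solid connection)", 3.5, True),   # only 1.5 pts behind
    ("C3 (tenuki)", 2.0, True),
]
print(pick_move(candidates))  # → Q13 (solid connection)
```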

What do you think?
Am I missing something crucial?
Could this piece of advice be useful?

My understanding of how NN-based engines work

I think what you’re “missing” in your understanding is that the “evaluation” part and the “instinct” part are actually the same in the neural network.

The neural network’s policy doesn’t just point out “good” moves in a binary sense; it gives out “weights” (which I think are essentially a guess for the winrate and/or score following that move) for all possible moves, and then the MC explores the tree with a bias towards the “better” moves.

I think the MC search will occasionally also explore “bad” moves, if given enough computational time.

And the evaluation is essentially just a weighted sum of the explored tree, like an “expected value” calculation, if you know about that.
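A toy version of that weighted sum, with made-up numbers:

```python
# Toy illustration of "evaluation as a weighted sum": the value reported
# for a position is (roughly) the visit-weighted average of the values
# found in the explored subtree. All numbers here are invented.

children = [
    # (visits, winrate found in that branch)
    (80, 0.62),
    (15, 0.48),
    (5,  0.30),
]

total_visits = sum(v for v, _ in children)
expected = sum(v * w for v, w in children) / total_visits
print(round(expected, 3))  # → 0.583
```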

Of course, this doesn’t really change anything in practice: humans just don’t have that type of instinct. Ours isn’t binary either, but it’s “fuzzy” and can’t be used for a numerical evaluation.

One thing that might help you, if you don’t know about it already, is the “set region of interest” function that the KaTrain software has. That way, you can ask the network “if I really wanted to play locally, what would be the best moves?” or things like that.

Another way to analyze games with AI that I think might be useful, at least to players better than me, is to use your own reading. Well, I don’t know if you can do this with the OGS tools, but with software like KaTrain or Lizzie you can explore the branch of your move to understand why it’s bad – what would the AI opponent do if you played this and then this, etc.? Why was your reading bad?

If the AI wanted to tenuki, it’s kind of annoying because it will keep telling you all the local moves are bad, and that’s again where the KaTrain function might help you.

It’s interesting that you say AIs have a very fight-oriented style. I’ve heard other people say that because AIs prioritize winrate over points, they will often prefer safe moves that ensure winning by one point over risky moves that attempt to win by a lot. I don’t know who’s right, but personally I feel you might be closer to the truth: I believe that in the end, playing for points and playing for winrate lead AIs to a very similar style of play.
If you look at the score and winrate graphs that the AI plots, the winrate graph is usually just an “amplified” version of the score one.


Yeah, I’ve noticed this too, but it’s not quite perfectly true. I think it might depend on the specifics of the game (though it does seem to be the case for a lot of games and moves), and of course there’s a flattening effect: if a game is already leaning 99% toward one player, a few more points gained won’t really be visible on the winrate graph, unlike the score graph. So handicap-game score and winrate graphs look quite different.

I suppose you could have an AI playing that is certain to win but gives away endgame points (or fractions of a point) in order to win by a 0.5-point margin, and its winrate might be increasing the whole time while its score decreases.

They certainly should be intimately tied anyway.


The question is: how to improve our evaluation function. Our brain is not an AI. To evaluate a position, the AI uses many megabytes of data and makes millions of elementary operations (even if it doesn’t use Monte Carlo search). A large part of the evaluation consists in determining whether a group is safe. Experience can tell you whether a group probably needs to be defended or not, but I don’t think a human can reliably assess the life and death status of a group without reading.


AI is making life easier, but on the other hand it has many side effects. People will become totally dependent on this technology, and in my opinion that dependence is going to be dangerous for the new generation. :slightly_smiling_face:

Maybe we could learn from Li XuanHao. He is a powerful, AI-ish Go player nowadays. And he played so well in a game against Shin JinSeo:

Since November 2020, Michiel Eijkhout (Dutch 5D) has been writing a series of articles about the differences in playing style between AIs and human beings.

Michiel Eijkhout is the author of the book Close Encounters of the Middle Game (see links below).

Unfortunately this series about differences in playing style between AIs and human beings is a) written in Dutch and b) only available to members of the NGoBO (Dutch Go Association).

Michiel Eijkhout is a member of the go club where I play. I will ask him what his plans are for his articles (and whether it is possible to also make them available to non-members of the NGoBO).

To be continued.


Actually, there has been a lot of research on this in China, Korea and Japan since 2016, but not all of it is in English. I can read a bit of Chinese via translators and Sensei’s Library Go terms. As far as I know, Cui Can (professional 5 dan) has a very nice paper.

The differences between players and bots?

Players misjudge attack/defence and strong/weak:

  1. overestimate influence
  2. underestimate large-scale framework
  3. overestimate side and underestimate center
  4. underestimate 2-x points and 5-x points
  5. overestimate local and underestimate whole board

How to learn AI-ish Go?

Opening and Joseki? Maybe.
Judgment Criteria? Yes!


AI suggestions are a bit of help for “inspiration”, if you can grasp their consequences and premises.
But without reading some sequences, I highly doubt anyone will go far. AI will not help much on that side, in my opinion.


Thank you all for your replies.
I can’t answer yet but I’ll do it asap.

Michiel Eijkhout’s November 2020 article on go: a short summary in my own words, with a few things added.

If you present a game to an AI, it will give an opinion about its moves in percentages.
That doesn’t mean much to a go player.
How to interpret these percentages (winrate lost by a move):

  • between 0% and 2%: excellent move
  • between 2% and 5%: good move
  • between 5% and 10%: mediocre move
  • more than 10%: bad move

These percentages are based on a 10+++ dan professional.
For you and me an absurdly high level of playing.
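That scale can be sketched as a tiny function (assuming the percentages mean winrate lost by a move, and reading the bands as 0–2%, 2–5%, 5–10%, 10%+):

```python
# A tiny classifier for the scale above. The band boundaries follow the
# article summary; which side a boundary value falls on is my own guess.

def grade_move(winrate_loss_pct):
    if winrate_loss_pct <= 2:
        return "excellent"
    if winrate_loss_pct <= 5:
        return "good"
    if winrate_loss_pct <= 10:
        return "mediocre"
    return "bad"

print(grade_move(1.2))   # → excellent
print(grade_move(12.0))  # → bad
```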

Michiel’s advice is:

  • focus on the -10% moves (the really bad moves)
  • try to find out if there is a pattern in those really bad moves: strategic moves, life and death, etc.
  • so that you know where your weak point is (and try to improve that)

AIs are fond of territory and not really impressed by influence (because they usually are better fighters and know how to deal with influence and big moyos).

If I have time (and people are interested in it), I can browse through his other articles in his AI & GO series and see if there are more things that are relevant for this topic.


Percentages may mean different things on different bots.
Better to use the score estimate.


Not possible. The article only provided the percentages.

It’s only possible to apply the article’s numbers with the bot that was used in the article.

Two years are a lot of time when talking about AI.
My suspicion is that, at that time, katago and the score rating weren’t so easily available.
I can’t remember that precisely.

Score estimation was a big improvement for me: my reviews with AI became easier and more useful.

Well, the KataGo “third major run” ended in June 2020, culminating in the completed training of the d12284 20-block network. I believe it was already publicly available then, but at the very least I know for sure that I downloaded Lizzie in September 2020 and it came with KataGo and all its functions.

Then again, it all depends on how long the article took to write; it may have been written before KataGo gained traction.


Yep, the series started in November 2020, but it still isn’t finished yet.
When I have time, I’ll post more summaries of episodes from this series.


Now I got it, thanks!

This looks very nice too. I didn’t know that.
So far I just used Leela, Lizgoban and now only OGS for reviewing my games.

As a subscriber you have KataGo exploring variations too. So you can explore a branch, try a different move and look at the results. You just have to wait a few seconds sometimes. It’s done on the fly.

I heard that too… a long time ago. :slight_smile:
My opinion is that AIs rarely play “honte” moves, since they can better evaluate whether a group is too unstable (in which case they just play there to reinforce it) and they can also find moves that do the job while still attacking or putting pressure on the opponent (which is obviously more efficient).
So I think that those moves are absolutely safe and quiet in an AI’s mind, but from my point of view, they’re always attacking!!! :smiley:
They leave unsettled groups all around and go attacking somewhere else, and I can barely sense that those attacking moves are actually also working on strengthening weaker groups from far away.

I don’t think KataGo would stop attacking when it reaches a half-point advantage.
Maybe in the past some earlier AI would’ve done that, but not now. If I play an even game against KataGo, I lose by more than a hundred points. It just plays whatever works. It kills whatever can be killed.
At least, that’s my experience.

I don’t think that this is actually the case.
The NN has already done its training, and when it comes to play it doesn’t compute much: it’s a sort of “black box” that takes the board status as input and throws out its weights.
I compare that exactly to human intuition: eyes and brain look at the board (and they also do a lot of work and use a lot of memory to decode that information), but then a few moves come to mind first, before we try to read anything.
So, what I’m doing now is to use best options from katago as a training for my intuition.
I already know that I miss a lot of good moves. I use KataGo to help me visualize them.
At the moment I’m learning to look at a wider range of moves and in a wider part of the board.
Still I have to read, if I want to be sure that a move is working! :smiley:

I play because I find it enjoyable. I’ll never depend on AI for that.
Quite the opposite: using AI for playing is nonsense (even though someone does it for some reason).

I think this misses a piece: those criteria are right when the winrate is quite even, say between about 30% and 70%. Outside this range, winrate evaluation is compressed by its natural limits: it can’t go below 0 or above 100. So when a player is solidly ahead, even by a handful of points, the winrate goes up to 90% and more and suddenly stops being meaningful. That player could play a terrible move and his winrate would drop by only a few percentage points.
In other words, a move that drops your winrate from 90% to 80% is definitely a worse move than one that drops it from 60% to 50%, even though both show a 10% difference.

When you use points instead of winrate, it all becomes clearer to a human: losing 20 pts when you’re ahead by 100 isn’t that important for the outcome of the game (winrate), but it still is a 20 pts mistake!
And the winrate wouldn’t even notice it!
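One way to picture this compression (not KataGo’s actual formula, just an illustrative logistic curve with a made-up scale factor):

```python
import math

# Model the winrate as a logistic function of the score lead. This is
# only an illustration; real engines estimate winrate differently.

def winrate(score_lead, scale=10.0):
    return 1 / (1 + math.exp(-score_lead / scale))

# Losing 20 points near an even game moves the winrate a lot...
print(round(winrate(10) - winrate(-10), 2))    # → 0.46
# ...while the same 20-point mistake at +100 barely registers.
print(round(winrate(100) - winrate(80), 4))    # → 0.0003
```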

So, my opinion is that that article is outdated.
It brings me back to my first attempts to use Leela or Lizgoban.
I never looked at winrate since I have points estimation on OGS katago.
But I know for sure that strong players use winrate to study the earlier phases of the game, when the game is still very even. As an example, I watched some videos from Carlo Metta (one of the strongest Italian players, very passionate about studying AI style), and in the fuseki he looked only for moves between 0% and 1%, discarding all the rest.
I find myself comfortable when I lose less than 5 pts with a single move! :smiley:
According to the AI, my opponents usually make mistakes similar to mine.


Katago networks can be downloaded here: KataGo - Networks for kata1
The strongest confidently-rated network is 166 MB. Aren’t these the data used to evaluate each position?

By the way, I don’t know if you already know this, but just to be clearer: one aspect that confused me until recently is that engines like KataGo actually use Monte Carlo tree search both during the training of the Neural Network and during real-time evaluation when you use them.

You’re right that the Neural Network is “pure instinct”, or at least that activating (or “visiting”, in technical jargon) the Neural Network is often considered akin to “consulting instinct”.

(And in case anyone has this doubt: the “engine” and the “Neural Network” are two different things: the engine uses the NN to make evaluations and decide the move if you ask it to)

But since instinct can fail, when you ask the engine to evaluate a position, it starts exploring variations at incrementally greater depth, using MCTS while biasing towards moves that are likely to be good; and every time it goes one move deeper, it “visits” the NN again and updates the total evaluation with the new information.
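That “bias towards moves that are likely to be good” can be sketched with a PUCT-style selection rule (as in AlphaZero-family engines; the constant and all the numbers here are made up):

```python
import math

# PUCT-style child selection: mix the NN's prior with the results found
# so far, so high-prior or under-explored moves get visited first.

def puct_score(prior, value, child_visits, parent_visits, c=1.5):
    """Higher = explore this child next."""
    exploration = c * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return value + exploration

# Three children of the root: (NN prior, average value so far, visits)
children = {
    "A": (0.60, 0.52, 40),
    "B": (0.30, 0.55, 10),
    "C": (0.10, 0.40, 2),
}
parent_visits = sum(v for _, _, v in children.values())
best = max(children, key=lambda m: puct_score(*children[m], parent_visits))
print(best)  # → B  (decent prior, good value, still under-explored)
```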

In the engine’s settings, you can limit the number of visits per position – and when you set it at a low amount, they say that the engine is “playing on instinct”, see for example here:

(“playouts” is different from “visits”; I honestly don’t understand exactly how it’s defined, but I think the two should be roughly proportional)
