It’s a book I would like to understand along with various other books about the endgame (Robert Jasiek’s, Antti’s, even old ones like Ogawa and Davies, or Lee Changho’s books).
I guess similarly I have an interest in Combinatorial game theory, so some knowledge from some books there could help also.

I can imagine one might need to pull in examples and discussions from other places while reading along, and to be honest, unlike other book club ideas, I don’t want to set any precise or even hopeful timeline on it, because for all we know we might not get through it.

Still, better to try, and to try together, than not to try at all.

In another post I can try to summarize a few things from early in the book that I could follow. But also, if you have it and have tried (or better, succeeded) in reading it, feel free to post your own questions, understandings, confusions, etc.

I guess we can plan to work somewhat linearly through it. However, it almost appears somewhat nonlinear in places, with statements like “I make this definition now, because it will be useful later, trust me”, or the discussion of the rules being used sitting in an appendix, etc. So it might be that one needs to jump around a bit anyway.

Edit: Some other possibly useful YouTube videos from the AGA.

If I understood Jasiek correctly, his claim is basically that his books are an attempt to interpret Mathematical Go so the methods in there can be used at the table.

which is to figure out if A or B is better as an endgame move.

The (surprising?) answer was that B is sometimes better but A is never better, and later in the explanation the emphasis is placed on “getting the last move is key”. In combinatorial games, this is actually the whole point (the winning condition) of a family of games, so maybe that might not be surprising in some ways, but I guess the Go aspect can still be surprising and/or confusing.

The proof that B is sometimes better is in an example like a symmetrical position:

Basically the top and bottom positions are identical but color-swapped (if you go back to just before the circled move). In CGT, when you do this, whatever value V you can assign to the bottom position, the top position has value -V. So with all else equal (ignoring how that works with the board size), the two positions cancel to give a value-0 position, which is a second-player “win”. A “win” here means that Black gets to play the last move if White goes first; in this case it’s more like a draw on points.

So as in other games, in a mirrored position there’s a strategy where you copy your opponent’s moves, which guarantees an equal result and playing last. So this is the argument that B can sometimes be better.

But if Black answers A rather than copying with B they are supposed to lose by 1 point.

White’s strategy is said to be just to keep pushing into Black’s area and only blocking their own territory when necessary (which I suppose means, keeping the captured stones captured).

It’s funny then that A is never better, because you’d think something similar would hold: if White plays A in the mirror position, Black could again just copy (they can), but I guess the point is that in that case Black can do better. This is Figure 1.5.

So I suppose the obvious things one might want to understand here are “why are the games played out the way they are” - which might be easier given the follow-up sections on how some counting is done with corridors etc. (or pulling in counting from elsewhere). Then, understand how this sort of proof using the “difference game”, as it’s called, is supposed to work.

I think ways you can go about that are by counting the values of the moves each player takes. In a lot of ways both players make the same moves, so the values will cancel each other out, but it still might be useful to count some examples to see the difference.

Or one could move on, come back to it later, or maybe only some people are semi interested in it. It’s up for discussion anyway.

So I guess before answering that sort of thing it might be worth moving on slightly into Chapter 2.1 on fractions.

Basically he shows a bunch of corridor examples, which, while unsettled, have fractional values. It’s summarized in the table of Figure 2.2. (Just ignore the life and death issues in these examples; imagine they’re connected to a big live group if you want to.)

The way the calculations work is that they look at what happens when White plays or Black plays and then take an average. So for the smallest corridor: if Black plays, it’s 1 point for Black; if White plays, 0 points. The average is 1/2, and some sources will call this the expected amount of territory for Black here.

In the next corridor, if Black plays it’s B+2, but if White plays it’s the first corridor, which is B+0.5, and so you again average these two to get (2+0.5)/2 = 1.25, or 1 and 1/4. Then you keep going: the average of 3 and 1.25 is 2.125, or 2 and 1/8, etc.
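The averaging above is just a short recursion, so it’s easy to check mechanically. Here’s a quick sketch in Python (my own code, not from the book) computing the expected Black territory of a blocked corridor of length n by averaging the “Black plays” and “White plays” outcomes:

```python
from fractions import Fraction

def corridor_value(n):
    """Expected Black territory of a blocked corridor of length n,
    averaging the two follow-ups: Black plays -> n points,
    White plays -> the length-(n-1) corridor."""
    if n == 0:
        return Fraction(0)  # nothing left: 0 points either way
    return (n + corridor_value(n - 1)) / 2
```

This reproduces the 1/2, 1+1/4, 2+1/8, … sequence from the text.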

I think the one thing books generally do badly here is motivate this calculation. Some books, including a Japanese book I was reading, try to motivate it like: “We don’t know who will play here next, so it’s like a 50/50 chance either will play, and so you average the two future outcomes.” I think Antti’s endgame book comes at it from a more useful or practical motivation: it turns out that when you add the expected territories, you wind up with a number which, when the position is played out correctly, really is the territory that Black gets in the game (if it’s the whole game).

When you add up the numbers you get that Black should expect 7 points of territory, and when you play it out correctly, in a sense, this actually does happen to be the case.

So in a way that’s a better motivation, but it’s still mysterious, and it doesn’t fully address what happens if you play the endgame “wrong”. Something @Jon_Ko and I played around with was trying to see where, when you play such an endgame wrong, the extra point “appears from”. It’s kind of that you have to make a total mistake of over one point over the course of the sequence; it might not be one single move that loses you a point. Even some slightly wrong move orders (judging by point values) will still lead to B+7.

Anyway, the value of a move is typically taken in these cases as the difference between the score after the move is played and the expected score of the position. So when White or Black moves in the half-point corridor, the value moves up to 1 or down to 0, by half a point, so the value of the move is 1/2. If White or Black moves in the 1+1/4 point corridor, it either jumps up to 2 points or down to 1/2 point, which means the move is worth 3/4 of a point, and so on.

These are precisely the temperature values in the table of 2.4 of Mathematical Go. There, however, they’re presented as 1-1/2, 1-1/4, etc., to show how they get increasingly close to being worth 1 point without reaching it exactly. Antti points out that the general formula for the value is (2^x-1)/2^x, where I guess x is the number of points Black can make in the corridor with one move, or the “length of the corridor”. The expected territory, I guess, is more like (x-1)+(1/2^x).
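Both closed forms are easy to sanity-check against the averaging recursion. A small sketch (my own, assuming x is the corridor length as described above):

```python
from fractions import Fraction

def expected_territory(x):
    """Closed form quoted in the text: (x-1) + 1/2^x."""
    return (x - 1) + Fraction(1, 2 ** x)

def move_value(x):
    """Closed form quoted in the text: (2^x - 1)/2^x."""
    return Fraction(2 ** x - 1, 2 ** x)

def expected_by_averaging(x):
    """The averaging recursion from the corridor discussion."""
    v = Fraction(0)
    for n in range(1, x + 1):
        v = (n + v) / 2
    return v

for x in range(1, 12):
    # the closed form matches the recursion
    assert expected_territory(x) == expected_by_averaging(x)
    # the move value is the jump from the expected score to the
    # score after Black plays (x points)
    assert move_value(x) == x - expected_territory(x)
```

So the move value being the “jump” from the expected score is exactly what makes the two formulas fit together.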

There’s a comment in Berlekamp about how a corridor that is open on both ends is kind of like a corridor one length shorter. The precise reasoning isn’t explained, but the Go player’s explanation is that the two open ends are miai: if White plays one, Black can play the other. That’s also reflected in the table.

I don’t want to jump too far ahead yet, so I’ll leave it there before one tries to read the headache of “chilling”, which is also in the book’s subtitle.

FYI I think you have this diagram a little wrong here. Specifically, the white stones should be completely blocking the corridors.

This seems like an imprecise kind of number to be using and the “average” process isn’t really useful in general. You can use the formula in Section 4.3 for example to get its actual value:

n - 2 + ∫ 2^{1-n}
= 1 + ∫ ¼ .

Whatever that means, I don’t think calling it 1¼ is consistent with the rest of their notation, so I’m confused about why they are using it at this point. I guess the authors are indicating that it’s imprecise by calling it “≈ AREA”?

Or is there more to it? If there is, why don’t they use these kinds of numbers later? I constantly have this uncertainty while trying to get through this book.

Does the book give a way to calculate the value of a move? (As opposed to the value of unclosed territory, like in the corridor example). I would be curious if it reflects the result in the example position:

Does a move at B get a higher (fractional) value than a move at A?

It might also be a ruleset thing. Technically a dame is worth * under Chinese and other area-scoring rules. It’s an infinitesimal with the property * + * = 0, because only an odd number of dame are actually worth something. If the last dame of the game is left with the rest of the board completely equal, the value of the move and the game would be *, and whoever takes it wins. * is also interpreted as a first-player win more abstractly in combinatorial games, while 0 is a second-player win.
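The * + * = 0 claim can be checked directly with a toy normal-play implementation (my own sketch, nothing from the book): represent a game as a pair (left options, right options), with the convention that a player who cannot move loses.

```python
# Toy normal-play CGT sketch: a game is (left_options, right_options),
# and a player with no available move loses.

ZERO = ((), ())            # 0 = { | }: neither player can move
STAR = ((ZERO,), (ZERO,))  # * = {0 | 0}: either player can move to 0

def add(g, h):
    """Disjunctive sum of two games: move in g or move in h."""
    (gl, gr), (hl, hr) = g, h
    left = tuple(add(x, h) for x in gl) + tuple(add(g, y) for y in hl)
    right = tuple(add(x, h) for x in gr) + tuple(add(g, y) for y in hr)
    return (left, right)

def wins_moving_first(g, player):
    """True if `player` ('L' or 'R'), moving first, can win g."""
    opts = g[0] if player == 'L' else g[1]
    other = 'R' if player == 'L' else 'L'
    # a winning move is one after which the opponent cannot win
    return any(not wins_moving_first(opt, other) for opt in opts)
```

Here * on its own is a first-player win (so it’s confused with 0), while * + * is a second-player win, i.e. it has value 0, matching * + * = 0.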

Again, I too haven’t made it fully through this book, so I’ll also be trying to figure it out as we go along.

At least what one of the comments mentions is that the idea of chilling the game seems to be to subtract integer amounts of points from the positions so that only fractional and infinitesimal values are left. Then, once you’ve worked out the infinitesimal parts, there’s a notion of heating, represented by ∫, to turn the numbers from the chilled game back into numbers in the real game.

They mention for instance that a chilled dame is worth zero but if you heat it up you get *.

So I guess there are some additional infinitesimals attached to these corridors, which one would have to work out. In other games I’ve seen, it can happen that “corridors” like this alternate in having a star attached, so it could be 1/2, 1+1/4+*, 2+1/8, etc. (that’s just an example pattern; I’ll have to read further to see if I can understand what the actual pattern should be).

It is consistent with ordinary Go books in some sense, though, which are supposed to be the Go player’s approximation to the right values.

I think the main difference would be additional dame, which in area scoring is a whole-board problem (odd or even), while in territory scoring there’s no difference? Again, ignoring that the black stones could be cut off and captured, it’s just a convenient but inaccurate presentation. The Christmas tree is slightly better, I think.

I’m also curious, but the methods of calculation I’ve seen, for example, also depend on whether the move, as part of the sequence, is sente.

If the moves are gote, there’s some averaging of future positions. If the move is sente (with some assumptions on what that means), then it’s never an option for one player to get two moves in a row in the position, so there’s nothing to average: you just replace the value higher up in the tree with the value of the result of the sente move.
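As a toy illustration of that difference (my own encoding, not the book’s notation): gote positions get averaged, while a sente position just substitutes the value of its settled follow-up.

```python
from fractions import Fraction

def count(pos):
    """Expected local score of a toy position tree.
    pos is either a number (settled), or a dict:
      {'gote': (black_follow_up, white_follow_up)} -> average the two
      {'sente': follow_up} -> the exchange is assumed answered, so
                              substitute the follow-up's value directly
    """
    if not isinstance(pos, dict):
        return Fraction(pos)
    if 'gote' in pos:
        b, w = pos['gote']
        return (count(b) + count(w)) / 2
    return count(pos['sente'])

# a simple gote: Black's move gives 6, White's gives 0 -> counts as 3
simple_gote = {'gote': (6, 0)}
# a sente exchange ending at 2 points: counts as 2, nothing to average
simple_sente = {'sente': 2}
```

The names and the tree encoding here are made up for illustration; the point is just that sente skips the averaging step.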

I’ll come back to this a bit later when I’ve some time to make a diagram or two

In other words, White starting with A is not really better than White starting at B, in the sense of theoretically optimal play.
I’m also curious how the proof goes that B is never better than A.

At least that part is supposed to be in the book, and I’ve copied a couple of the diagrams to show

It might take a little while and some more machinery to set it up, but maybe some simpler examples will also help first.

I’ll make another post about adding positions together, (expected) values, sente etc with some pictures later today.

The summary above, though, is that when calculating which move is better, you just play out the game optimally in some sense and see who wins. B was supposedly the best first response in the symmetric game, so it’s sometimes (in some positions) better than A. In that particular case, A is supposed to always lose by 1 point given optimal play.

But when A was played by White, copying with A would give a draw, though it turned out that B was just as good, also giving a draw.

I guess I’ll see if there’s a way to unpack this with a simpler example first, comparing two moves where we can usually tell which one is better.

The above position has two possible outcomes, depending on whether Black plays first and captures three white stones, or White plays and saves the three white stones. The local score is written down as 6 when Black captures (territory count: three points of territory and three prisoners), or 0 when White saves, as nobody gains any points.

The expected territory of the unsettled position is the average of these two outcomes: (6+0)/2 = 3. Now, when either player makes a move, they change the expected territory by 3 points: either Black plays the capturing move and gains three more points than expected, or White saves and keeps 3 more points than expected. So the value of the move is said to be 3 points.

In other contexts, other endgame books might just call it a 6-point move, which is the swing or difference between the two outcomes (6-0=6). I think this is supposed to work for comparing similar kinds of moves to find out which is bigger, but, if I understand, the drawback is that you have to come up with additional ad hoc rules, like a sente or reverse sente move being worth double the value of a gote move.
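For a simple gote the two notions differ only by a factor of two. A small sketch of the bookkeeping (my own code, just restating the arithmetic above):

```python
from fractions import Fraction

def miai_value(black_result, white_result):
    """Value per move: the change from the averaged expected score
    when either player plays (3 in the capture example)."""
    expected = Fraction(black_result + white_result, 2)
    return black_result - expected

def swing_value(black_result, white_result):
    """Deiri/swing value: the difference between the two outcomes
    (6 in the capture example)."""
    return black_result - white_result
```

So a simple gote’s swing is exactly twice its miai value, which lines up with the “sente counts double” rule of thumb mentioned above.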

If you know the move values, I guess you can just compare them and pick the bigger number; if you didn’t, maybe you could do a similar comparison with the method in the book and double the position with the colours swapped.

The game should be a draw, if all else is somehow equal and these are the only point making moves left.

If White starts and captures the three black stones on the top (a three-point move), Black’s only winning move is to copy and similarly capture the three stones.

Black can mirror white by playing B, and get a draw, but if Black plays A, it’s also just a draw.

So in this case B isn’t better, and from the previous case A is sometimes better.

I guess in these examples you have symmetric positions where one player has played a move, A say, and the other plays B; when you add them together (in the same game) you can compare the result to 0 under optimal play to see which move was better.

I think that’s roughly the kind of argument being presented in the book

with these kinds of diagrams. I think an added caveat, though, is that not all games are directly comparable. The values of games and moves can be a bit “fuzzy”, so it’s not always the case that you get A>B, A=B, or A<B. There’s another option, A||B, which means A is confused with B. So there’s some range of positions where you can’t exactly say which is better, and some where you can.

I think the main example is that *||0: * is confused with 0. It’s an infinitesimal that’s smaller than all positive numbers and bigger than all negative numbers, and so is arbitrarily close to zero.

What > etc. means we can dig into, but essentially, games where Black (by convention) always wins no matter who moves first are positive in value, and if White always wins no matter who moves first, they are negative.

If the second player always wins, then the game has value 0.

(it makes more sense when the rules are that you want to play last - but I guess in Go this could be like a draw in some ways. I think one also needs to make some rule adjustments to turn go into a proper combinatorial game about moving last)

————-

An additional note:

I think in games that just have normal integer or rational values, every move you make just loses points. So, for example, kind of like under Japanese rules, playing in your own territory loses a point. Or, if you were to make Go more like a combinatorial game where playing last wins, you might make it so that you can’t pass, and you can either choose to play a move or give a prisoner back to the opponent. That would also disincentivise playing in the opponent’s territory, like in No Pass Go.

In games like Amazons, at the very late stage of the game (kind of like counting in Go, if you filled in the stones)

you have boxed-off areas that have an integer number of moves left, and every move in such an area reduces your score by one or more. People tend to call this the cold phase of the game, where every move lowers your score. I think when you look at how ordinary numbers are represented as games, this is typically what happens.

Whereas in Go, and in earlier phases of Amazons etc., there are typically moves that increase your score. I think these are referred to as hot games or positions. Probably they don’t often have the usual integer or rational values; most likely they’re a bit fuzzy. A position could be worth somewhere between 3 and 5 points, say, but definitely no more than 5, in the sense that if the opponent had 6 points you’d lose no matter what, and you might always beat the opponent if they only have 2 points. So that’s some kind of fuzziness inherent in hot games.

So maybe it has some fuzzy value between 4 and 7 points. It’s hard to know without actually being better at calculating, and knowing how to calculate properly. The idea would be similar: you know you’ll lose if you give the opponent 8 or more points (if those are the last areas left), and you know you’ll win if you give them 3 or less. But if you give them between 4 and 7, it might depend a bit on the details of the game.

(The dependence on details, I think, comes up a tiny bit when Berlekamp/Wolfe talk about rounding infinitesimals in the first sets of problems they solve - so I kind of expect it to be something similar.)

My feeling that this book (I’ve not read it) is practically useless for improving your Go play was only reinforced by a Facebook conversation with a CGT expert in which he got the swing value of opening moves wrong by a factor of 2. He could calculate infinitesimal values of endgame moves, but he thought a move that was worth, say, 24 points was actually worth 12 points. And no, this wasn’t just an initial mixup from not understanding deiri/swing vs miai values; he had a fundamental misunderstanding, which was confirmed through a lengthy discussion.

So by all means read it for the mathematical exercise / curiosity, but it’s not going to make you a better Go player.

Probably, just to get better at the endgame, learning tesuji, gradually reordering the sizes of common endgame moves, and learning when to play a slightly smaller sente move over a bigger gote, etc., are ordinarily the best ways to improve. I guess also some more basic counting and estimation too.

I’d like to make a little pitch for the book and explain what I’m hoping to get out of it.

The book only covers a small class of Go endgame positions: those where moves are worth about a point. The fun thing about these positions is that you can assign them all precise values in a totally rigorous way. So,

Yes, following the method in the book you can learn how to calculate these values exactly. I think it’s not too hard to do the calculations (though I can’t really do it myself yet). It’s not about any kind of approximation like averaging - the values are precise and encapsulate everything you need to know about the position to find the best move, who will win, etc.

Sometimes the simplest way to express the value turns out to be a little move tree, but in most cases it simplifies to a single quantity. And the really cool thing is that for games that are a combination of these simple positions, you can compute the exact overall value by adding up the individual quantities using a special kind of arithmetic that’s not complicated.

They are really local values - the key to this is that they are not just normal numbers. I suppose the extra types of quantities that you need to consider (which really represent specific move trees) account for most of the ways that different types of positions can interact with each other non-locally.

As a mathematically-minded person, I like being able to precisely analyze some positions that are still complicated enough to have the flavor of Go, so vague concepts like “sente” and “gote” translate into specific mathematical features (I don’t actually understand this yet, but I’m looking forward to it). I also like how it pushes the limit of what can be rigorously analyzed, so that I can appreciate more how impossible it is to do anything totally precise for the more complicated positions that occur earlier in real games.

I do also hope to very rarely pull out these techniques to play a perfect endgame. It’s frustrating to be in a position that I’m pretty sure I should be able to analyze completely, but with a few different active regions, the move tree is just a little too complicated to visualize.

I know in Robert Jasiek’s books, though, e.g. his Endgame book 3 Accurate Local Analysis, he shows positions where you might not know if a move is sente, but you can estimate the size of the mistake from counting it as gote or as sente. It might be off by a few points.

So then I guess it’s worth trying to think of what sente means in counting.

In Antti’s book it’s written like:

a move that ‘forces a response from the opponent’ and ‘that can be responded to without taking a loss’.

If you do the whole gote-move calculation, you find that the expected territory of the whole position is 2, but if Black is also prepared to always answer A, as if it’s sente, then Black doesn’t take a loss: they move from a position of value 2 to a position of value 2. They can expect a minimum of 2 points locally.

So in the above you can see the “without taking a loss” part, and the “forces a response” part is kind of a hypothesis. It has to be at the stage of the game where a 1-point move is actually the biggest, because if not, you’d estimate the above as 1-point gote and then play the bigger move the opponent missed.

There’s another example with two stones to be saved/captured as well, but the whole global context is that it might be sente as long as there isn’t a bigger move elsewhere. In the course of a game, because of the types of moves available, you might not always get to play “your sente”, because you have to take a bigger move first and your opponent gets to play it as reverse sente as the next biggest move.

I think for me it’s similar, in that I’d like to understand the things you mention about the book. If possible, though, I would also like to draw some more connections to how Go players calculate in other books, and make connections more generally with combinatorial games and CGT.

For instance, if we look at the smallest corridor again:

The half point you get from averaging, and the values that come out of a program like CGSuite

are probably not completely unrelated. The way I’ve written “x” above is that if Black moves they get 1 point, so they move the game to the value 1. If White moves, at least under area-scoring rules, the game moves to *, which is the value of a dame, which could be worth 1 or 0 points depending on the number of them. Then there are functions to calculate the mean and temperature of such a position x = {B|W} = {1|*}, and both come out as 1/2, which is what shows up in the tables.

If you go up a corridor size, you again see similar ideas to the averaging and subtracting a Go player might use.

Black can move to take 2 points, or White can move to the smaller corridor, which was {1|*} in game notation. Again you see the mean being 5/4, or 1+1/4, and the temperature being 3/4 (the difference between 2 and 1+1/4).

In this position with Black to play, the best move is F9, followed by White D1 and Black G1. Black ends up with 4 more points than white (assuming territory scoring with 0 komi).

On the other hand, if Black plays D1 and White F9, Black only has 1 more point on the board.