Book Club: Mathematical Go: Chilling Gets the Last Point, by Elwyn Berlekamp and David Wolfe

Chapter 3 is probably going to be a fairly rough one to read through if you want to feel confident that everything makes sense.

Section 3.5 is a bunch of CGT stuff: definitions, examples, etc., all crammed into a few pages.

From looking at Siegel’s CGT books, there were a few interesting things I came across, like an “Archimedean property for infinitesimals”: for G infinitesimal you can find n so that −n↑ < G < n↑.
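To convince myself of that, I threw together a little brute-force Python check (my own toy code, not anything from the books): a game is stored as a pair (left options, right options), and winners are decided by exhaustive search. For G = *, n = 1 isn’t enough since ↑ + * is confused with 0, but n = 2 works because 2↑ + * > 0 (and since −* = *, both inequalities −2↑ < * and * < 2↑ reduce to that one check).

    # Toy CGT sketch: a game is a pair (left_options, right_options).
    zero = ((), ())
    star = ((zero,), (zero,))   # * = {0 | 0}
    up   = ((zero,), (star,))   # up = {0 | *}

    def add(g, h):
        # Disjunctive sum: a move is made in exactly one component.
        gl, gr = g
        hl, hr = h
        left  = tuple(add(x, h) for x in gl) + tuple(add(g, x) for x in hl)
        right = tuple(add(x, h) for x in gr) + tuple(add(g, x) for x in hr)
        return (left, right)

    def left_wins_moving_first(g):
        # Left wins by moving to some option in which Right, to move, loses.
        return any(not right_wins_moving_first(x) for x in g[0])

    def right_wins_moving_first(g):
        return any(not left_wins_moving_first(x) for x in g[1])

    def positive(g):
        # g > 0 iff Left wins no matter who moves first.
        return left_wins_moving_first(g) and not right_wins_moving_first(g)

    def fuzzy(g):
        # g is confused with 0 iff the first player to move wins.
        return left_wins_moving_first(g) and right_wins_moving_first(g)

    print(positive(add(add(up, up), star)))   # True: 2(up) + * > 0, so n = 2 works
    print(fuzzy(add(up, star)))               # True: n = 1 is not enough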

Siegel also says that one can think of ⧾G (tiny-G, i.e. {0 | {0 | −G}}) as a position where Right can make a threat worth G, but Left can cancel it immediately, either before the threat is played or as soon as it is.

I guess that also feels like a sente move or a ko threat etc.

Maybe after appropriately cooling the game, these sorts of positions can turn into ⧾G in some cases.

I feel like generally one could read CGT all day and maybe not necessarily get closer to understanding its relation to Go, because it tends to go off in wild tangential directions.

But anyway, at least I can try to post things like the above, where I’ve come across some examples of how to get some intuition for these strange numbers and games.

For example, the definition of stops: L(G) and R(G), or LS(G) and RS(G) in the Berlekamp book.

The idea seems to be that games, particularly when they’re a first-player win, are confused with some numbers, and there’s typically an interval of confusion associated with them.

Maybe the easiest example, which actually has an interval of confusion, is {1|0}.

It’s not a number, but it behaves a bit like 1/2: you can show that {1|0} ≹ 1/2, so it’s confused with 1/2. It also has mean 1/2, so in some way it’s kind of like 1/2.
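One way to see the confusion directly (my own check, not the book’s): in the difference {1|0} − 1/2, Left moving first goes to 1 − 1/2 = 1/2 > 0 and wins, while Right moving first goes to 0 − 1/2 = −1/2 < 0 and wins. So the difference is a first-player win, i.e. it’s confused with 0, which is exactly what {1|0} ≹ 1/2 means.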

Anyway, here is the definition of the stops:

L(G) = G if G is a number,
       otherwise max of R(GL) over the left options GL of G.

R(G) = G if G is a number,
       otherwise min of L(GR) over the right options GR of G.

I don’t exactly have an intuition for this, other than that it’s recursive and the games keep getting simpler, so eventually it terminates.

However, for {1|0} you get L({1|0}) = 1 and R({1|0}) = 0. The intuition is that these are the edges of the interval of confusion for this game, so it’s confused with games whose values lie between 0 and 1. There was a notation like C({1|0}) = [0, 1]. I think you have to check {1|0} ≹ 0 and {1|0} ≹ 1 separately, because the endpoints might or might not be included, like a half-open or open interval, etc.
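Here’s a tiny Python sketch of that recursion (again my own toy; it assumes any game that isn’t a number has options on both sides). A game is either a Fraction, meaning a number, or a pair (left options, right options).

    from fractions import Fraction

    def left_stop(g):
        if isinstance(g, Fraction):      # a number is its own stop
            return g
        return max(right_stop(gl) for gl in g[0])

    def right_stop(g):
        if isinstance(g, Fraction):
            return g
        return min(left_stop(gr) for gr in g[1])

    one_zero = ((Fraction(1),), (Fraction(0),))        # {1 | 0}
    print(left_stop(one_zero), right_stop(one_zero))   # 1 0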

Sometimes you see pictures where games are drawn as fuzzy clouds on the number line, or as particular numbers or values.


I’ve briefly looked at incentives: ΔL(G) = GL − G and ΔR(G) = G − GR. Apparently the incentives of numbers are negative, essentially because a number sits strictly between its left and right options. I suppose numbers in the simplest cases are like settled positions, where one player has a clear advantage by a certain number of moves.
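A concrete case (my own calculation, so take it with a grain of salt): the number 1/2 = {0 | 1} has ΔL = 0 − 1/2 = −1/2 and ΔR = 1/2 − 1 = −1/2, both negative. By contrast, for {1|0} both incentives come out to 1 − {1|0} = {1|0} and {1|0} − 0 = {1|0}, which is confused with 0 rather than negative.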

A negative incentive is supposed to mean that you don’t want to play there until the end of the game, when you have to, since you only stand to lose points by moving there.

There was a funny point in the book about how incentives depend on the representation of the game, but if you compute them from the unique canonical form you get unique answers, as opposed to merely “formal” ones.


The chilling and warming material is a bit too much for me, especially the proofs. Sometimes a proof in a combinatorial game theory book will just be “Right can win by moving second with a mirror strategy”, but seeing how that applies and proves what it was supposed to prove takes a lot of thinking, unpacking, and checking if you’re not that familiar with this stuff, which I’m really not.

The intuition for cooling, it says, is to add a tax t for making a move in the game:

Gt = { (GL)t − t | (GR)t + t }

I guess for the options that kind of makes sense: subtracting t from Left’s options should be better for Right, since Right prefers more negative outcomes, and the opposite for the Right options, where adding t makes them more favourable to Left, who wants positive outcomes.

Beyond that it gets a bit funky, because depending on the value of t the game changes a lot: sometimes it’s a cold game like a number, sometimes it’s still a hot game.
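The {1|0} example makes this concrete, if I’m reading the definition right: cooling gives {1|0}t = {1 − t | t}, which is still hot for t < 1/2; at t = 1/2 it becomes {1/2 | 1/2} = 1/2 + *, infinitesimally close to 1/2; and the definition then declares it to simply be 1/2 for all larger t. So the game freezes at its mean 1/2, and its temperature is 1/2.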

Thermographs seem like a cool idea for keeping track of this: you plot the pair L(Gt) and R(Gt) as t varies. While the game is hot these are distinct values, and I guess once it’s tepid or cold they collapse to the same single number.

The thermographs also seem nice in that, for sufficiently large t, L(Gt) and R(Gt) converge to the mean value m(G). It’s mentioned that this gives an algorithm for obtaining the mean m(G) and the temperature t(G).
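As a toy Python illustration (only for the one-level case G = {a | b} with numbers a > b, nothing like the book’s general algorithm): the two walls of the thermograph are just a − t and b + t until they meet, which happens at the mean (a + b)/2 at temperature (a − b)/2.

    from fractions import Fraction

    def simple_thermograph(a, b, steps=4):
        # Only valid for G = {a | b} with numbers a > b.
        a, b = Fraction(a), Fraction(b)
        temperature = (a - b) / 2
        mean = (a + b) / 2
        for i in range(steps + 1):
            t = temperature * i / steps
            print(f"t = {t}:  L(Gt) = {a - t},  R(Gt) = {b + t}")
        print(f"mean m(G) = {mean}, temperature t(G) = {temperature}")

    simple_thermograph(1, 0)   # walls 1 - t and t, meeting at 1/2 when t = 1/2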

I have a feeling these will be related to the calculations Go players do when estimating the size of moves, averaging possible futures and subtracting options from expected values, etc., at least in some cases. So I do look forward to trying to understand that a bit better.


I wonder, though: would it be better to read Chapter 3 alongside Chapter 4, since Chapter 4 at least has Go examples, while Chapter 3, as I said, is a bit abstract?