This is not quite correct. Not all sente moves represent reversible options. The point is that you do not want the opponent to get all the sente moves, and sometimes you want to threaten mutual damage. Indeed, look at the difference of your example game with itself, G + (-G) with G = {3||2|0} and -G = {0|-2||-3}: White cannot let Black get both the sente and the reverse sente, and 2 is not always preferable to {3||2|0}. From {3||2|0}+{0|-2||-3}, White moves to {2|0}+{0|-2||-3}, and Black can only win by moving to {2|0}+{0|-2} = 0.
Reversible options correspond to sente moves where one player can completely reverse any gain the other hoped to get from the sente. These are more like loss-making ko threats. For example, in your original position,
White can also make a move at B.
Then Black can respond at A,
leaving a game equal to 4.
Thus the original game can be more completely represented as {3|{2|0},{4|0}}, but the second right option is reversible, since 4 = {3|} is always at least as good for Black as {3|{2|0},{4|0}} (it is the same game with Right's options removed).
We can therefore replace {4|0} with all of White's options in 4, of which there are none, which results in the removal of {4|0}.
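For reference, the general rule as I understand it: a right option GR is reversible if it has a left option GRL with GRL ≥ G, and then GR may be replaced by all of GRL's right options. Here GR = {4|0} reverses through GRL = 4 = {3|} ≥ G, and 4 has no right options, so {4|0} just disappears and the game simplifies back to {3||2|0}.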
The point is that no matter the global board position (barring ko), Black can always respond to B at A. There may be bigger moves on the board, but then that means White lost points playing at B in the first place and Black has not taken a loss.
Right, I see what you're saying: we take the position G and -G, the color swap. If white plays the "sente" move to save one stone, black can't answer, since they also have to play the other "sente" move to save their own stone in -G, say.
Right so the reversible options business is probably mostly useful just to throw away bad options in the position and simplify the game tree?
Is there a situation in Go where you wouldn't want to throw away such a reversible option? Is it something involving capturing races, I wonder, where you have to throw in stones to eventually reduce an eye space, giving up stones as prisoners?
I'm thinking of that since it mentions that, with loss-taking moves like that, you typically want to continue locally.
(It's kind of hard to predict how a capture several moves deep affects values higher up, and whether they'd be reversible, etc., without much experience doing these sorts of calculations.)
Reversible options are not always thrown away entirely; rather, you assume the opponent will respond. This results in throwing away the option if their response leaves you with no moves (0 points). This principle is valid in all combinatorial games, including go positions with capturing races requiring throw-ins.
There are positions in Go where you can’t just reverse reversible options: ko fights. The presence of these normally bad moves is important because they provide ko threats, and the opponent may ignore even a loss-making threat to fill the ko.
Probably Chapter 3 is going to be a fairly rough one to read through, in order to feel confident that everything makes sense.
3.5 is a bunch of CGT stuff, definitions, examples, etc., all crammed into a few pages.
From looking at Siegel's CGT book, there were a few interesting things I came across, like an "Archimedean property for infinitesimals": for G infinitesimal you can find n so that -n·↑ < G < n·↑.
It also says that one can think of ⧾G = {0||0|-G} as a position where Right has the ability to make a threat worth G, but Left has the option to cancel it immediately, or as soon as it's played.
I guess that also feels like a sente move or a ko threat etc.
Maybe, after appropriately cooling the game, these sorts of positions turn into ⧾G in some cases.
I feel like generally one could read CGT all day and maybe not necessarily get closer to understanding its relation to Go, because it tends to go off in wild tangential directions.
But anyway, at least I can try to post things like the above, where I’ve come across some examples of how to get some intuition for these strange numbers and games.
For example, the definition of stops: L(G) and R(G), or LS(G) and RS(G) in the Berlekamp book.
The idea seems to be that games, particularly when they're a first-player win and confused with some numbers, typically have an interval of confusion associated with them.
Maybe the easiest example, where you actually get an interval, is {1|0}.
It's a game that algebraically behaves a bit like 1/2: you can show that {1|0} ≹ 1/2, so it's confused with 1/2, and it also has mean 1/2, so in some ways it's kind of like 1/2.
Anyway, with the definition of stops:
L(G) = G if G is a number, otherwise max R(GL) over the left options GL of G.
R(G) = G if G is a number, otherwise min L(GR) over the right options GR of G.
I don’t exactly have an intuiton for this, other than it’s recursive and the games are getting simpler and simpler so eventually it terminates.
However for {1|0} you get L( {1|0} ) = 1 and R( {1|0} )= 0. Then the intution for this is that these are the edges of the interval of confusion for this game. So it’s confused with games that have values in 0,1. There was a notation like C( {1|0} ) = [ 0,1 ]. I think you have to check that {1|0} ≹ 0 and {1|0} ≹ 1, because the endpoints might be included or not like a half open or open interval etc.
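Just to convince myself the recursion does what I think, here's a tiny Python sketch of the stops computation (my own toy representation and function names, nothing from the book): a game is either a number or a pair of option lists.

```python
from fractions import Fraction

# Toy representation: a game is either a number (int/Fraction) or a pair
# (left_options, right_options), each a non-empty list of games.
# Just enough to compute the stops of simple examples like {1|0}.

def left_stop(g):
    if not isinstance(g, tuple):              # already a number
        return Fraction(g)
    lefts, _ = g
    return max(right_stop(gl) for gl in lefts)

def right_stop(g):
    if not isinstance(g, tuple):
        return Fraction(g)
    _, rights = g
    return min(left_stop(gr) for gr in rights)

g = ([1], [0])                                # the switch {1|0}
print(left_stop(g), right_stop(g))            # -> 1 0
```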
Sometimes you see pictures where games are drawn as fuzzy clouds on the number line, or as particular numbers or values.
The incentives I've briefly looked at: ΔL(G) = GL - G and ΔR(G) = G - GR. Apparently the incentives for numbers are negative, because a number sits in between its left and right options. I suppose numbers in the simplest cases are like settled positions, where one player has a clear advantage by a certain number of moves.
The negative incentive is supposed to mean that you probably don’t want to play there until the end of the game, until you have to, since you only stand to lose points by playing there.
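A quick sanity check, if I've computed these right: for the number 1/2 = {0|1}, ΔL = 0 - 1/2 = -1/2 and ΔR = 1/2 - 1 = -1/2, both negative, so moving there only loses ground. For the switch {1|0}, on the other hand, ΔL = 1 - {1|0} = {1|0} and ΔR = {1|0} - 0 = {1|0}, with mean 1/2, so there is genuinely something to gain by moving there first.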
There was a funny thing in the book about how incentives kind of depend on the representation of the game, but if you compute it from the unique canonical form you can get unique answers as opposed to “formal” answers.
The chilling and warming is a bit too much for me, especially the proofs. Sometimes I find that the proof in a combinatorial game theory book will just be "Right can win by moving second with a mirror strategy", but how that applies and proves whatever it was supposed to prove takes a lot of thinking, unpacking, and checking if you're not that familiar with this stuff, as I'm not really.
The intuition for cooling, it says, is to add a tax t for making a move in the game.
Gt = { (GL)t - t | (GR)t + t }, at least until the cooled game becomes (infinitesimally close to) a number, after which it stays that number.
I guess in the options that kind of makes sense: subtracting t from Left's options should be better for Right, as Right prefers more negative outcomes, and the opposite for the right options, where adding t makes them more favourable to Left, who wants positive-valued outcomes.
Beyond that it gets a bit funky, because depending on the value of t the game changes a lot: sometimes it's a cold game like a number, sometimes it's still a hot game.
Thermographs seem like a cool idea to keep track of this, plotting the pair L(Gt) and R(Gt) as t varies. While the game is hot these are distinct values, and I guess once it's tepid or cold they collapse to the same single number.
It seems as well that thermographs are nice in that, for sufficiently large t, L(Gt) and R(Gt) converge to the mean value m(G). Then it's mentioned that this gives an algorithm for obtaining the mean m(G) and temperature t(G).
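To get some feel for that, here's a throwaway Python sketch for the simplest possible case, a plain switch {a|b} with numbers a > b. It ignores deeper games and the star that appears exactly at the freezing point, and the names are my own:

```python
from fractions import Fraction

# For a switch {a|b} with numbers a > b: below the temperature (a-b)/2
# the cooled game is still the switch {a-t | b+t}, so its stops are just
# a-t and b+t; at and above the temperature it freezes to the mean (a+b)/2.

def cooled_stops(a, b, t):
    a, b, t = Fraction(a), Fraction(b), Fraction(t)
    temperature = (a - b) / 2
    mean = (a + b) / 2
    if t < temperature:
        return a - t, b + t      # still hot: distinct left and right stops
    return mean, mean            # frozen: both stops sit at the mean

for t in (0, Fraction(1, 4), Fraction(1, 2), 1):
    ls, rs = cooled_stops(1, 0, t)
    print(f"t={t}: L={ls}, R={rs}")
# for {1|0} the two stops close in on the mean 1/2 and meet at t = 1/2
```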
I have a feeling these will be related to the calculations go players do when estimating sizes of moves by averaging possible futures and subtracting options from expected values etc, at least in some cases or capacities. So I do look forward to trying to understand that a bit better.
I wonder though, is it going to be better to read Chapter 3 alongside Chapter 4, since at least 4 has Go examples, while Chapter 3 as I said is a bit abstract?
I was thinking first let’s talk about how well Go can work as a combinatorial game without bringing up cooling or chilling. I’m not comfortable with those concepts either and would like to see more motivation first. The book really jumps right into it, leaving me feeling a bit lost.
I was reading somewhere that comparing Chinese to Japanese rules might be a similar kind of thing. I’d also like to mention some variations of no-pass Go.
So from what I’ve heard, or read in bits and pieces, you basically want a way for players to manually count their territory by playing moves, like you would in stone counting.
Then if you run out of moves you lose.
So you can make it no-pass Go, but the problem is that it might at times be better to play in your opponent's area rather than your own in the "counting phase", which would deviate a lot from normal Go. I believe you can disincentivise that, however, by giving players the option to return a prisoner to their opponent instead of playing a move.
So if you’re at the stage where both players would normally pass, then you next begin to play in your territory, and if your opponent plays in your territory, when you play to capture the stone you’ll at least not lose anything because you can give back the prisoner as a move sooner or later. Each stone played to avoid playing on your side and ends up captured can be returned, like a pass, so that should balance out.
I guess that also kind of incorporates the meaningfulness of prisoners when it comes to territory scoring.
Then with prisoner return you can also very easily allow for komi, because you just give white 6 or 7 prisoners at the start, which they can use instead of passing at the end.
I suppose a natural question is whether group tax matters to game results.
There is a discussion of rulesets at the back of Berlekamp’s book, appendices A and B.
I’ll have a quick look over these sections again.
Edit: the mathematical rules and B.3.5 mention prisoner return.
B.4.4 introduces an idea of earned immortality, which seems to be the way to avoid group tax. Basically you imagine giving such groups an unplayable liberty somewhere at infinity, which allows you to fill in the last two points/eyes of the territory when finishing the game.
So it's a modification of the capture rule, even, that says when a group makes two eyes it officially becomes immortal by gaining a liberty at infinity.
Anyway it’s just some technical trick that you could probably do in a number of ways to say that you can fill in your eyes in counting without consequence.
I think in summary, without worrying about all of the usual rules beasts and weird positions where rulesets might score positions differently by a point, or worrying too much about ko and its consequences… then yeah, I think Go works reasonably ok as a combinatorial game.
I don’t think the issue with ko is that it can’t be defined. At the end of the day, the games are just lists of future positions, so if a position is legal or illegal by a ko rule, then it will or won’t appear as an option in the tree.
I guess the main complication of ko is that suddenly it connects what used to be completely disjoint and possibly settled endgame positions. It turns separate and summable local positions into completely inseparable global positions, where you have to count the number of ko threats globally, etc.
I wonder if this is the OK91 game they mention in the appendix. It should be the right players and date
It’s not a bent four in the corner, but it does look like there’s a ko, that black can’t afford to fix. Like they’re winning by 0.5 as long as they don’t need to resolve the potential ko.
But then the corner dies and gets traded for some other stuff and black loses by 4.5
Chapter 4 I can imagine will also be a bit of a headache probably irrespective of understanding warming or chilling.
It'll take me a while before I post something about it. Maybe if I think of some other relevant things about Chapter 3 or the start of Chapter 4, I'll post what I'm thinking.
There’s rumours of a website connected to this forum, where people play a mini game with black and white stones
But jokes aside, I think probably just screenshots of an OGS demo board, and then uploading the image to the forums.
Also thanks for the example!
So as in you prefer answering the hane to the original position, because you get to capture a stone as a follow-up, whereas before you would just normally play a hane?
It feels like point wise it shouldn’t be different? Unless capturing maybe becomes sente at some point. I suppose prefer can also mean “not worse”, so equal or better.
I have heard that in some situations, especially near the corner, even if you plan to tenuki the "sente" hane and connect, you should still block the hane.
Position A
B B B B W W W W
B - - - - - - W
-----------------
Position B
B B B B W W W W
B - - - B W - W
-----------------
In each position, if white gets to play next:
Position A + white move
B B B B W W W W
B - - W - - - W
-----------------
Position B + white move
B B B B W W W W
B - - W - W - W
(and w captured 1 B stone)
-----------------
In position A + white move, if black blocks, it is locally sente and the expected result is 1 black point and 2 white points, net 1 point for white in value.
In position B + white move, if black blocks, it is locally gote because black only threatens a ko for 1 point. White would typically tenuki since each move in the ko is only 1/3 point. The expected result from here would be an equal chance of white moving first again with 0 black points and 2 white points for net +2 for white, and black moving first and reaching a position with black 1 point and 1/3 point of equity in the ko and 2 white points for net +2/3 for white. Averaging +2 with +2/3 for white gives us 1 and 1/3 point for white value for position B.
So position B + white move is 1/3 point better on average than position A + white move.
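(Just spelling out that averaging: white first again is 2 - 0 = +2 for white, black first is 2 - (1 + 1/3) = +2/3 for white, and the average is (2 + 2/3)/2 = 4/3 = 1 and 1/3, versus +1 for position A + white move, a difference of 1/3.)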
This implies that black should not make the 1-2 exchange unless they plan to immediately follow it up with a connection at 3:
B B B B W W W W
B - - 3 1 2 - W
-----------------
If they make only the 1-2 exchange without the followup move 3, the position is just plain worse for them than not making the exchange at all. That’s what’s meant by “reversal” in the game theory terminology.
In position B, connect/capture is gote, so position B is 1/6 point better on average than position A.
(those positions as they stand, not those positions plus white move)
A witness position is a board such that, with no captures and 0.5 komi,
Black is winning
but
Black playing the hane would lose to White blocking it.
Before yours, every image I saw seemed to start/end exactly halfway between intersections,
so I thought people were using something better than manual cropping.
So since you guys keep posting all kinds of Go positions, I’d like to talk about:
Go as a combinatorial game (Section 4.1 and Appendix A/B?)
It seems to me that the book doesn’t do a great job of explaining the basics of how we can represent Go as a combinatorial game. Maybe some of it is supposed to be obvious, and maybe some of my discomfort comes from a preference for Chinese rules, while the book mostly uses Japanese.
So I’ll make some attempt here. Does this kind of thing work to draw a game tree?
⚫⚪⚪⚪
⚫➕⚫⚪
⚫⚪⚪⚪
Black ↙ ↘ White
⚫⚪⚪⚪ ⚫⚪⚪⚪
⚫⚫⚫⚪ ⚫⚪⬛⚪
⚫⚪⚪⚪ ⚫⚪⚪⚪
0 -2
It should look like this
To represent Go as a combinatorial game, we consider all the reasonable moves (ignoring for simplicity unnecessarily filling in your own eyes or playing dead stones in the opponent’s territory). We also assume (for now at least) that there is no ko, since otherwise the games would not combine properly, and there might be cycles. Then we get a tree like the one above.
However, instead of just ending like a normal combinatorial game, Go has a score at the end. In this case it’s 0 if Black moves and -2 (2 points for White) if White moves. To take this into account, we put the 0 and -2 combinatorial games at the leaves of the tree.
Specifically, this means if Black moves the game is over, but if White moves, White gets 2 more moves afterward. You could imagine actually playing these moves with some scheme involving prisoner return and eye-filling, but for analysis of the game it’s not necessary.
At least that’s my understanding, but did the book ever really say it like this?
The example above can be written as { 0 | -2 }, which is notably a hot game, meaning that it’s to both players’ advantage to play first. Most Go positions are hot; that’s why we keep playing instead of just passing. Hot games are notoriously hard to analyze since they don’t simplify to numbers or infinitesimals.
In particular, this game is NOT equal to its average value of -1.
To see that G ≹ -1, consider G + 1, meaning the combination of this game with a single extra point for Black. Whoever plays first wins (Black moves to 0 + 1 = 1 > 0, White moves to -2 + 1 = -1 < 0), so G + 1 is confused with zero and G ≹ -1.
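If anyone wants to check that mechanically, here's a minimal Python sketch (my own toy representation, not anything from the book) that builds G + 1 as a game tree and confirms whoever moves first wins:

```python
# Games are (left_options, right_options) tuples; integers are expanded
# into their canonical game trees.  Loop-free games only, so no ko.

def num(n):
    """Game tree for the integer n."""
    if n == 0:
        return ((), ())
    if n > 0:
        return ((num(n - 1),), ())
    return ((), (num(n + 1),))

def add(g, h):
    """Disjunctive sum: move in g or in h."""
    (gl, gr), (hl, hr) = g, h
    lefts  = tuple(add(x, h) for x in gl) + tuple(add(g, y) for y in hl)
    rights = tuple(add(x, h) for x in gr) + tuple(add(g, y) for y in hr)
    return (lefts, rights)

def left_wins_first(g):
    """Does Left (Black) win moving first in g?"""
    lefts, _ = g
    return any(not right_wins_first(gl) for gl in lefts)

def right_wins_first(g):
    """Does Right (White) win moving first in g?"""
    _, rights = g
    return any(not left_wins_first(gr) for gr in rights)

G = ((num(0),), (num(-2),))       # the position {0 | -2}
S = add(G, num(1))                # G + 1

print(left_wins_first(S), right_wins_first(S))   # True True: first player wins, so G + 1 != 0
```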
What if we use Chinese rules? Since stones count, we have to decide which ones to consider, and I think it makes the most sense to measure the score relative to the starting position, assuming all stones were alive. Then Black just gains one additional point by playing first, while White converts a black stone to white territory and gains one stone, getting to -3. So the game becomes { 1 | -3 }, apparently an even hotter game.
Interestingly you can get back to the Japanese game if you impose a “tax” of one point on whoever plays. I’m not sure if this relationship was mentioned in the book, but Sensei’s says it like this:
In general, the difference between Territory and Area scoring is that stones on the board are counted in area scoring but not in territory scoring. Suppose that you play by area scoring but each board play costs one point. You would actually be playing by territory scoring, since each board play results in a stone on the board or captured.
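Concretely, with that one-point tax on each option of the Chinese-rules game above: {1 - 1 | -3 + 1} = {0 | -2}, which is exactly the Japanese-rules game we started with (and, if I'm reading the definitions right, it's also just the first step of cooling the area-scoring game by 1).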
Since we know Japanese and Chinese rules generally have the same optimal play, that suggests taxing away yet another point from each move, which is called chilling. But is there more we should talk about first just with normal unchilled Go games? Can we work out the actual mathematical scores of the reversal example above or something like that?
I think I know what you mean: it's not like white was expecting any points there, so there was not much incentive for black to play a hane-connect themselves, only when it reduces white's territory.