From this site, inspired by a very interesting tweet thread:

https://joshdata.me/iceberger.html

Stereoscopic acuity, the ability to judge the relative distances of different objects, is considerably reduced underwater, and this is affected by the field of vision (see also the reading).

So is James Galway aboard the International Space Station.

Let’s continue this discussion in this proper place…

Oh yeah, whoops! Can we move those other posts to here?

If you think that’s the case, you’re misunderstanding the game…

The value of the *game* depends heavily on your opponent’s responses… But they have no impact at all on the value of your moves.

So, if you make a move to set up a snapback, and your opponent falls for it, would that move be more valuable to you than if your opponent noticed you were setting up a snapback and played safely?

You build a wall, and are able to drive an invading group towards that wall and capture it, while securing a good “outside” position. Does the wall have more value because your opponent invaded?

Your opponent shoulder-hits one of your stones, and you are able to extend and make your corner more secure. Is your original stone more valuable because it is now stronger and better protecting your corner?

No, actually. It just means they made a mistake that gave you points. The value of a move is determined only by the continuations that could possibly be optimal, depending on the overall situation. Think of Combinatorial Game Theory’s “canonical form”: if the opponent can play a move for which there is a better local move for *all* global situations, then that move is dominated and excluded from the valuation.

No, that value was already there. It was to be gained either by attacking an invasion for profit or by securing territory if no invasion came. The opponent made the mistake of an invasion that got nothing, which is valuable in and of itself, but it cannot be counted toward the value of playing the stone (aside from the value of their *not* being able to play it).

Ok, why would this be the case at all? Sure, you’re securing the corner more (with 2 stones), but you got shoulder-hit, weakening your stone’s influence and giving it to the opponent. Not only that, but your stone’s ability to make a stronger corner by adding a stone was already there.

It would be cool if you could use AI to identify and count (ordinal data) different classes of playing mistakes for different ranks, then generate a Pareto chart of some of the most common types of errors. A personalized chart could be very helpful to DDK players who want to know where they need to improve (or you could study general patterns in the population to exploit common weaknesses among other players). Identifying some “mistakes” at higher levels of play (not counting obvious errors) would be much more difficult.
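Just as a sketch of what the tallying step might look like (the mistake categories and counts here are entirely invented, and a real version would get its labels from an AI review tool): sorting categories by frequency and accumulating percentages gives exactly the data a Pareto chart plots.

```python
from collections import Counter

# Hypothetical mistake labels, as an AI reviewer might tag them
# (all category names and data invented for illustration)
mistakes = ["missed atari", "missed atari", "bad direction", "slow move",
            "missed atari", "slow move", "bad direction", "missed atari",
            "empty triangle", "slow move"]

counts = Counter(mistakes)
ordered = counts.most_common()   # Pareto ordering: most frequent first
total = sum(counts.values())

cumulative = 0
for label, n in ordered:
    cumulative += n
    print(f"{label:15s} {n:2d}  cumulative {100 * cumulative / total:5.1f}%")
```

Feeding the same ordered counts and cumulative line into any plotting library would produce the chart itself.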

As noted by others, not really.

This is my understanding of it:

- Value of a sente move is how many points you would gain locally if opponent doesn’t respond and you get to follow through on your threat.
- Value of a gote move is local points difference between the situations of you playing first and opponent playing first.

So, for example:

Let’s say it threatens to capture three stones, so if your opponent does not respond you gain 5 points (assuming Japanese rules: 3 points for prisoners and 3 points of territory, minus 1 for your stone they capture). Then the value of the move is 5 points in sente.

If your opponent responds, you do not get the 5 points, but you do get the snapback-setup stone, which might reduce their territory by, say, one point, and you keep sente so you can do something else elsewhere. Over the course of a game, all these little one-point gains add up.

If your opponent does not respond, then you can take the 5 points (in gote), but your opponent has done something else elsewhere, so they will probably gain from that, and your net gain is probably less than 5 points.

So, the points you gain depend on your opponent’s responses, but the value of the original move does not.

And this is how I use it in games (primarily in the endgame): always play the most valuable move. If a gote move is worth more than a sente move, then it’s OK to play gote. If a gote move and a sente move have equal value, then play the sente move first. This applies throughout the game but is obviously much easier to calculate in the endgame (or rather: it is possible to calculate in the endgame and (nearly) impossible before it).
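The selection rule above can be written out mechanically. This is only a toy sketch (the move names, types, and point values are invented): pick the highest-valued move, and break a tie between sente and gote in favor of sente.

```python
# Toy endgame move chooser following the rule:
# play the most valuable move; on a tie, prefer sente.
# (All moves and values below are invented for illustration.)
moves = [
    {"name": "A", "type": "sente", "value": 5},
    {"name": "B", "type": "gote",  "value": 7},
    {"name": "C", "type": "sente", "value": 7},
]

def pick(moves):
    best = max(m["value"] for m in moves)
    top = [m for m in moves if m["value"] == best]
    for m in top:                 # tie-break: sente before gote
        if m["type"] == "sente":
            return m
    return top[0]

print(pick(moves)["name"])  # C: ties with B at 7 points, sente wins the tie
```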

Thanks for moving this over.

For the snapback example, if your opponent makes a mistake and gives you points, could that not be interpreted as increasing the value of your previously played stone(s) that set up the snapback? Go is a very dynamic game. It is impossible to predict with 100% certainty (nobody plays perfectly) how a stone will be used later in the game. A stone might become a ladder breaker 50 moves later, help to set up a better endgame position (allowing you to keep sente), provide ko threats, etc.

If you can’t assign a fixed, 100% certain value to a move, I contend a more appropriate view is that moves have a fluid, dynamic value which evolves with the game (and also depends on your opponent’s moves).

I understand it is generally good strategy to find the most efficient and solid moves to limit the risk of complications later in the game, but there is a delicate balance between playing too slow and cautious, and overplaying and being punished. If you think you can get away with a slight overplay, you should seize the opportunity.

Higher efficiency can be achieved by making slightly greedy moves in the opening and early middle game, with the understanding that you won’t be able to save everything (playing fast and loose). Determining with 100% certainty which stones you will need to sacrifice, at the time you play them, is not possible because you do not have sufficient information about how your opponent is going to play.

I suppose you could go down the statistics rabbit-hole and try to maximize the value of each play (minimize the risk), but has anyone been able to mathematically prove that playing sequential “best moves” leads to a global optimum? I would not think this would be the case, because perfect play can only be practically achieved in the simplest of situations (L&D and endgame problems).

What are your thoughts on this (in the spirit of friendly debate)?

Well, I think there’s a slight leap here. Sure, you will make mistakes in trying to play as optimally as possible, walking as finely as you can along that line of “aggressive enough, but not aggressive enough to be punished”. But I don’t think counting on clear opponent mistakes is the way to go, unless you are very behind and it’s a very difficult sequence to read out in the time given. Go is a very complicated game, and just playing *accurately* is hard enough to read out, much less trying to calculate “how likely the opponent is to throw away points here”. And while we often run the risk of our reading being wrong, or of our intuition being wrong when the reading is not clear, I find that “trick moves”, where you play a move for which you have already read out the proper punishment, are not actually helpful in most game situations, and don’t really help improvement.

I’m not quite sure what you mean, but by the laws of game theory, all perfect-information games can (theoretically) have perfect play determined by backwards induction. The issue is that if the game tree is too large, it is not practical to search the entire tree to prove you’ve found an optimal play.

Here is what I was thinking with the “global optimum” thing.

Consider a magical function called “winrate” that can accurately calculate the probability you’ll win a game from any game position, assuming both players play perfectly for the remainder of the game.

If you choose each move so it maximizes the winrate function (“best move” calculated at that time), has it been proven that no situation can arise where playing a “sub-optimal” move can cause the winrate to end up being higher than if you played only “best moves” at some later time in the game? (Kind of like the local maximum vs global maximum of a function).

soo… In go, with non-integer komi, this will always be 100% or 0%. There will always be a way one side can force a win with perfect play, and you can find it via backwards induction. If there *are* draws, it’ll either be 100%, 0%, or whatever you would write “100% chance of jigo” as, in the case where both sides can force a draw (or better, if the opponent makes a mistake). It’s just the nature of perfect-information games.

Granted, this is practically impossible to find, as proving a solution means working through the *entire* game tree of go so as to eliminate all dominated strategies.

The main thing to consider is that there is no chance involved, and neither of the players knows anything that the other player could not know.

As a result of this, anything you could think of is something your opponent could think of as well. Thus, if you make a “losing move” but hope that your opponent will make a mistake, then your opponent could have exactly the same thoughts as you: that it is a losing move, and that they shouldn’t make the mistake. The same idea applies to sequences of moves, so if you’re thinking *mathematically*, then any possible sequence of moves is a sequence that both you and your opponent can consider. Choosing the best one is then a matter of finding a move such that your opponent cannot prevent you from winning. Because of the nature of the game, one of the players must be able to make such a move.

In practice, this won’t work of course, since practical players cannot consider all possible sequences of moves (there’s too many). But mathematically there’s no problem with practical limitations, and thus the magical function must be either 100% accurately predicting that you are the player with a winning strategy, or 0% if your opponent is the one with the winning strategy.

I’d go a step further, and think that for professionals it probably barely matters if they’re thinking out loud, or playing in silence.

Thanks for the explanation. I didn’t consider that the winrate function can only return two values, 0% and 100%, under perfect play (forehead slap). That makes talking about marginally higher or lower winrates for perfect play nonsensical.

If a perfect player is playing a non-perfect player, is there any scenario (corner case) that could cause the perfect player’s winrate to momentarily toggle to 0%?

Thanks.

I got my head twisted around thinking about calculating probabilities of making non-perfect moves. It’s been a long day.

Thanks for the help.

Probably not toggle, but if the perfect player starts out on the disadvantaged side, it will briefly be 0% until the opponent makes a mistake that turns it to 100%, and then by definition there is no way for the imperfect player to force the perfect one back into a 0% state, unless the perfect player turns out not to be perfect.

I wouldn’t go that far. Yes, the majority of their reading would be similar, but if they’re thinking out loud, they’re probably mentioning the most interesting parts of their reading, which are less likely to be noticed by the other player, and that *would* influence the other person’s judgement (since they cannot mention literally everything they’re reading unless they speak inhumanly fast).

I just got the “nice topic” award for the Go Memes Pedantry thread, which officially confers on me the title of Tenured Pedant. Thank you for your attention.