This topic is for all the armchair joseki theorists out there, and also because reliable point estimation is becoming possible thanks to KataGo. I’m curious what y’all think the thresholds should be for the three basic flags in the forthcoming joseki tool (with some dictionary definitions in parens):
- Ideal (perfect; most suitable; best possible)
- Good (adequate; satisfactory)
- Mistake (misguided; wrong; erroneous)
Let’s say you or I have a question about a corner move and run it through a score-sensitive AI to check the output in some test positions. You’ll see numbers like “+1.9” and “+2.4.” If you’re proposing that move for addition to the dictionary, you’d have to categorize it as ideal, good, or mistake (or sometimes trick). That’s what this poll is about; it’s also just for the theory and fun of it. So what do you think?
How many points must a move lose on average before it is no longer “ideal”?
- Less than 1.0 points
- 1.0 points
- 1.1 to 1.4 points
- 1.5 points
- 1.6 to 1.9 points
- 2.0 points
- More than 2.0 points
How many points must a move lose on average before it is no longer “good”?
- Less than 2.0 points
- 2.0 points
- 2.1 to 2.4 points
- 2.5 points
- 2.6 to 2.9 points
- 3.0 points
- 3.1 to 3.4 points
- 3.5 points
- 3.6 to 3.9 points
- 4.0 points
- 4.1 to 4.4 points
- 4.5 points
- 4.6 to 4.9 points
- 5.0 points
- 5.1 to 5.4 points
- 5.5 points
- 5.6 to 5.9 points
- 6.0 points
- More than 6.0 points
Is a move still “ideal” if it requires a working ladder?
- Yes (Josekipedia standard)
- I’d prefer a “requires ladder” flag
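In implementation terms, choosing these thresholds amounts to defining a tiny classifier over average point loss. A minimal sketch in Python, with placeholder cutoffs (the threshold values here are assumptions, not settled answers; picking them is exactly what the poll asks):

```python
# Hypothetical classifier: map a move's average local point loss
# (relative to the best move the AI finds) onto the proposed flags.
# The two cutoffs below are placeholders, not recommendations.

IDEAL_MAX_LOSS = 0.5  # assumption: "ideal" tolerates up to half a point
GOOD_MAX_LOSS = 2.0   # assumption: "good" tolerates up to two points

def classify(avg_point_loss: float) -> str:
    """Return the dictionary flag for a move's average point loss."""
    if avg_point_loss <= IDEAL_MAX_LOSS:
        return "ideal"
    if avg_point_loss <= GOOD_MAX_LOSS:
        return "good"
    return "mistake"

print(classify(0.3))  # ideal
print(classify(1.9))  # good
print(classify(2.4))  # mistake
```

Whatever numbers win the poll would just replace the two constants.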
I voted, but…
The problem is that most (corner) joseki depend heavily on the outcome of the other corners. I can imagine a wall-building sequence being perfectly ideal if the wall faces your own corner and the sequence ends in sente, but ending in disaster if the middle stone is left out and the opponent plays there because the second-to-last move wasn’t as sente as you hoped…
I definitely require a ladder-flag, however.
By the way, to explain my vote: I think an ideal move should not lose any points, whereas a good move should not lose more points than you can reasonably make up in sente during the opening phase (i.e. 5–10 points).
Good thoughts. I think a joseki dictionary should have some basic assumptions to make it maximally useful.
First, all four corners are occupied. This is the point at which the first joseki of a game usually arises; otherwise, tenuki/pass would too often be the best move in the dictionary.
Second, no corner has been developed further, so that the sequences can be treated as joseki in isolation, as independent of context as possible.
Otherwise, I agree that more than one test position would be required for an objective analysis.
I’d vote but I have absolutely no experience with KataGo-like bots and don’t know how picky they are when it comes to playing moves they don’t like.
I’ve exchanged a few pages with Eugene about roughly this topic, except that I strongly oppose the idea of value judgements.
In summary, I think the OGS “joseki dictionary” (and even that is subject to much debate) should be as objective and descriptive as possible, steering clear of value judgements like “ideal/good/bad/crude” and especially point estimates, because
- value judgements will (ironically) lose value over time as the level of analysis improves, which will make maintenance pretty much impossible
- value judgements on local variations devoid of context are presumptuous at best
- point estimates are meaningless unless the whole board position is taken into consideration
I would much rather steer toward what hasn’t been done before: a descriptive (and therefore easy-to-verify) outcome-based system.
If you’d like to read my elaborate opinion on this maximally descriptive and minimally inferential pattern explorer, I’ll edit my conversation with Eugene into a coherent whole.
Thanks for sharing. But isn’t the decision to include or exclude a move itself a value judgment? You can punt that decision to the pros, but that relies on their value judgments, and they’re increasingly adopting AI value judgments, so I don’t see much value in hiding the ball on this.
Besides this, any dictionary worth its salt should identify misplays. It’s not a quantum leap from there to flags for “joseki” and “mistake,” and you’re two-thirds of the way there already.
Wouldn’t it be the case that anything considered joseki (new or old) is already either good or ideal?
It’s more or less even for both sides; otherwise it’s not joseki, right? Sure, in some circumstances there are slight advantages to some joseki, but does the dictionary plan to include extra side stones or extra fuseki stones so that the point gain or loss can be estimated?
Also, are the points supposed to be percentage points toward winning the game?
Even if you didn’t edit it, I’d love to read that.
I would definitely like to see more examples of identified misplays and explanations of why they’re bad. Especially where there are options to cut that don’t work (not that it’ll stop some of us kyus from trying), it’d be nice to see more rebuttals than Josekipedia offers.
I think it would be especially valuable if the natural kyu/amateur-dan move of choice were flagged as a mistake whenever it is one. Josekipedia has a “Question” tag for those moves that seem obviously good to some, but might be terrible until some strong (enough) player gives their value judgement (whatever it might be).
It might even be possible to scrape the first 20 moves of all games that are played to see which moves are commonly played as “kyu joseki”, just to find these common mistakes.
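A rough sketch of that scraping idea, assuming a local directory of SGF game records (the directory name is hypothetical, and the regex-based move extraction is a simplification; a real SGF parser would handle setup stones, escapes, and variations properly):

```python
import re
from collections import Counter
from pathlib import Path

# Tally the first 20 moves of a pile of SGF records to surface
# commonly played early moves ("kyu joseki" candidates).
MOVE_RE = re.compile(r";[BW]\[([a-s]{2})\]")  # basic SGF move property

def early_move_counts(sgf_dir: str, n_moves: int = 20) -> Counter:
    """Count move coordinates appearing in the first n_moves of each game."""
    counts = Counter()
    for path in Path(sgf_dir).glob("*.sgf"):
        moves = MOVE_RE.findall(path.read_text(errors="ignore"))
        counts.update(moves[:n_moves])
    return counts

# counts = early_move_counts("games/")  # hypothetical directory of SGFs
# print(counts.most_common(10))
```

Comparing the most common early sequences against the dictionary’s flags would then surface the frequent mistakes.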
No, it’s supposed to be actual (fractional) point loss in the local position, in general cases where the board is very open. Percentage points is AI fiction.
For example, here’s KataGo’s estimated point loss at move 1:
- 4-4 (no loss)
- 3-4 (0.4 points)
- 3-3 (0.8 points)
- 4-5, 3-5 (1.2 points)
- 4-6 (1.7 points)
- 3-6 (2.2 points)
- 5-5 (2.9 points)
- tengen (3.0 points)
Most other moves above the second line lose about 3.0 points.
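A table like this falls out of score estimates directly: point loss is just the drop in estimated score relative to the best-scoring move. A minimal sketch with made-up inputs (the `score_lead` numbers below are illustrative, not real KataGo output):

```python
# Turn per-move score estimates (e.g. KataGo's scoreLead, from the
# mover's perspective) into point losses relative to the best move.
# These scoreLead values are invented for illustration.
score_lead = {
    "4-4": 6.5,
    "3-4": 6.1,
    "3-3": 5.7,
    "tengen": 3.5,
}

best = max(score_lead.values())
point_loss = {move: round(best - s, 1) for move, s in score_lead.items()}

for move, loss in sorted(point_loss.items(), key=lambda kv: kv[1]):
    print(f"{move}: {loss} points" if loss else f"{move}: no loss")
```

Averaging such losses over several test positions would give the “on average” figure the poll asks about.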
Are the “actual” points from KataGo not AI fiction?
I have to back @smurph here - there is no way to take an AI judgement out of its positional context and apply it to “a joseki”.
I am adding quotes because even the definition of a joseki is not free of context. A move might be joseki in one situation and a mistake in another.
Here is my definition:
- An ideal move is a move that a professional might play in a variety of situations.
- A good move is a move that a professional might play in some limited, specific situation that calls for it.
We should not give up the human element yet. It is valuable because it can be reasoned and verified.
It also allows us to back up the claim to a sequence with a professional game.
So putting a stone anywhere on the board loses points. Reminds me of WarGames: “The only way to win is not to play.”
I mean, it would be nice to estimate how many points one side or the other loses, but in some instances, like a 3-3 invasion, one side just gets thickness, right?
Does it see playing a 4-4 point and the standard old joseki as even, or as black (say) losing a corner’s worth of points and white gaining a corner’s worth?
I guess if I understood the numbers I’d vote on them
smurph and I got onto this topic in a PM exchange when we were originally talking about his suggestion of “display the percentage of different outcomes from each position”.
That conversation led into the topic of “is this thing we are making a joseki dictionary, or something else (maybe broader)?”.
If it’s a joseki dictionary, then the definition of “what is joseki?” matters, and guides definitions like “Ideal vs Good”.
For example, Josekipedia says: “Figuring out when to use a Good move type is one of the trickiest parts of Josekipedia. We would have loved to not force this distinction, but the cost would have been too great. Some moves simply cannot be called correct or joseki, and yet cannot be called bad.”
Here the distinction is whether or not the move is “joseki”.
Personally, I struggle with what even counts as a joseki in these post-AlphaGo times. But on the other hand, I long for help with sequences where someone else did the hard work of figuring out whether they are worth knowing (learning) because they are, in some sense, established as optimal in a range of situations. That’s why I want some sense, in this collection of sequences, of whether the darn sequence is established as GOOD or not.
Haha, I broke it down further. (The 4-4 point was 0.0 points loss. I just didn’t call it out.)
Yes and no. Unlike win rate, score is part of the game and is quantifiable. If you say that KataGo’s estimate is fiction, I won’t disagree. I’d insist that my own estimate is fiction, too, and pro players’ estimates are fiction, since none of us has solved Go. But since AI has achieved superhuman strength, we know its fiction is superior to our fiction. Sinan Djepov documented 53 AI joseki innovations, and Viktor Lin identified 20 instances where human joseki were flawed. All of these moves were identified by looking at AI value judgments. So to ignore AI value judgments, to me, would ignore the reality of how sequences become josekis in the age of AI. Given that professional players all the way to #1 are spending up to 5 hours a day bending their value judgments towards AI, I feel there is a place for AI in determining what sequences are joseki. Granted, directly transcribing score estimates onto a joseki dictionary is probably the wrong way of doing it.