Auto-score improvements

Hello OGS!

The way we automatically score bot games, as well as make stone removal suggestions at the end of the game, has, I believe, been improved a lot. The improvements build on a lot of learnings and theories put together by @Feijoa and @Vsotvep over the past couple of years, so a big thanks to both of them.

The new system should be a lot smarter in our edge cases, should be a lot better at sealing, and should largely avoid highlighting potential invasion points, instead trusting that when both players believe an area to be territory, it is treated as such.

However, with any big update like this I’m sure there are some cases that still aren’t being handled like we’d want, so if you run into problems please share a game link here or directly message it to me and we’ll add it to the test cases and work on further refining the logic.

– anoek


What does this mean? If it means the players end and score with unsealed territories, and the scoring system doesn’t then score that position according to the rules of go, but assumes continued skilled play to close the borders and then scores that later position, then getting better at sealing means doing no sealing at all. Improving OGS' scoring system - #32 by Uberdude


I get the purist perspective, but in practice the endgame is already the most confusing and frustrating part of the game for beginners; the more we can help them through the process, the better. I feel the list in the link to your post is not reflective of reality. Step 4 is not “beginner learns about the need to close borders”; I think more often than not it’s “beginner is confused and frustrated about why an area didn’t get scored and quits the game.” We need to ease them into it where we can.

As for easing them into the game and teaching them what they should be doing, the sealing markings do provide immediate visual feedback, drawing the players’ attention to areas that should have been sealed. I think teaching them through example like this is better than berating them by trashing their endgame.


we have the whole DDK range for that experience


Does this mean the AI will assume less-than-perfect play and allow the human factor to be part of the game? As in, if I’m not able to judge that my opponent’s group isn’t solidly alive, and they don’t question my reading skills either, the AI respects our limitations? So the final result is closer to what we would agree on over a real goban without supervision?

If so, I’m very happy with this change.


That’s the idea yes.

Now, we still balance this with removing “obviously” dead stones, so it’s still likely not perfect, but it should be a lot better at respecting player intent. In particular, the biggest problem I used to see a lot was positions where one player could cut, and that cutting stone would be marked dead even when it was obvious that both players considered things settled. That cut would often end up causing whole swaths of territory to be considered contested, basically ruining the game if the players didn’t fix it during the stone removal phase. Those kinds of situations should be much less common now; instead, the territory should just be marked as territory like the humans expected.
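To make the trade-off described above concrete, here is a minimal illustrative sketch; the names, types, and thresholds are all hypothetical, not the actual OGS algorithm. It only removes a stone outright when an engine's ownership estimate is very confident it is dead, and otherwise defers to the players in areas they both treat as settled:

```typescript
// Illustrative sketch only, not OGS code. All names and thresholds are invented.

interface Point {
    ownership: number; // hypothetical engine estimate: +1 = surely Black's, -1 = surely White's
    stone: "b" | "w" | ".";
    playersAgreeSettled: boolean; // both players left this area alone during stone removal
}

const OBVIOUS_DEAD = 0.95; // hypothetical confidence cutoff for "obviously" dead
const MAYBE_DEAD = 0.5;    // hypothetical cutoff for marginal cases

function shouldRemoveStone(p: Point): boolean {
    if (p.stone === ".") return false;
    // Ownership strongly favoring the opposite color suggests a dead stone.
    const opposedOwnership = p.stone === "b" ? -p.ownership : p.ownership;
    if (opposedOwnership > OBVIOUS_DEAD) return true; // clearly dead: always remove
    // Marginal cases (e.g. a stone that "could" cut) are left alone when
    // both players have treated the surrounding area as settled.
    return opposedOwnership > MAYBE_DEAD && !p.playersAgreeSettled;
}
```

The key design choice is that mutual player agreement overrides speculative engine judgments in the marginal band, which is what keeps a single cuttable stone from invalidating a whole swath of territory.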


I’m also curious about the details.

Is there possibly a way to run autoscore on a particular position without having to play out the full game with another player? Like if we could create a position in post-game analysis and paste some code snippet into the console to run the algorithm, that would be great.


you can fork any game position, create a new user, invite them, then pass

90% of the way there, not as nice as your interface by a long shot though.

Within the goban repository, run

./fetch_game_for_autoscore_testing.ts <game_id> from within the scripts directory

That’s going to download the game along with the score estimations and bundle them into a .json file. The expectation is that you’d edit the expected output accordingly for a test case.

Drop that file in test/autoscore_test_files.

Once the file is in place, you can run ./test_autoscore.ts <game_id> from within the test directory, and that’ll spit out some colorful output of the various steps and whatnot. Not nearly as nice as the autoscore v3 interface, but it sufficed for my purposes :slight_smile: . Note: running ./test_autoscore.ts without a game id will run the autoscore code on all of the test cases and verify the output is what we expect for each of them, which is good for regression testing if you or anyone else wants to play with the autoscore code in there.


Presumably it’s a small step to detect when a game is scored without the borders being sealed, and to show a little pop-up during the scoring phase highlighting the unclosed border and recommending that the players resume the game and close it.

  • If the players ignore the warning or don’t understand it, then the game is closed according to their intentions.

  • If one of the players listens to the warning, they learn about closing the borders.

Alternatively, if this is too much of “AI assisting before the game is finished”, then one could have such a pop-up happen after the scoring has been agreed to.
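For what it’s worth, the detection part of this suggestion seems tractable. Here is a minimal sketch, purely illustrative and not OGS code (the function name and board representation are invented): under the rules, an empty region that touches stones of both colors is nobody’s territory, which usually indicates an unsealed border, so a flood fill over empty regions can find the spots to highlight:

```typescript
// Illustrative sketch only. An empty region bordered by both colors is
// neutral/contested, i.e. a border was likely left unsealed there.

type Color = "b" | "w" | ".";

function findUnsealedRegions(board: Color[][]): Array<[number, number][]> {
    const rows = board.length;
    const cols = board[0].length;
    const seen = Array.from({ length: rows }, () => new Array<boolean>(cols).fill(false));
    const unsealed: Array<[number, number][]> = [];

    for (let r = 0; r < rows; r++) {
        for (let c = 0; c < cols; c++) {
            if (board[r][c] !== "." || seen[r][c]) continue;
            // Flood-fill this empty region, noting which colors border it.
            const region: [number, number][] = [];
            const stack: [number, number][] = [[r, c]];
            seen[r][c] = true;
            let touchesBlack = false;
            let touchesWhite = false;
            while (stack.length) {
                const [y, x] = stack.pop()!;
                region.push([y, x]);
                for (const [dy, dx] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
                    const ny = y + dy, nx = x + dx;
                    if (ny < 0 || ny >= rows || nx < 0 || nx >= cols) continue;
                    const v = board[ny][nx];
                    if (v === "b") touchesBlack = true;
                    else if (v === "w") touchesWhite = true;
                    else if (!seen[ny][nx]) {
                        seen[ny][nx] = true;
                        stack.push([ny, nx]);
                    }
                }
            }
            // A region bordered by both colors is not sealed territory: warn here.
            if (touchesBlack && touchesWhite) unsealed.push(region);
        }
    }
    return unsealed;
}
```

A pop-up could then highlight the returned points and suggest resuming the game to close the border.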


[image]


Ok fair, that’s probably a better solution.


Very upset this is not in the meme section :rofl:


It’s actually a crop from a meme that I posted in Nov 2021


How would you do that?


One of the major advantages of OGS (for me and my friends who play Go at ~1d level) is that you can do a simple analysis of the game at the end without using Katrain. The system shows you 3 key moves (or all moves if you have a subscription) and you can play the suggested variation and see the balance by clicking on “Estimate score”.
It is also very interesting to watch the game and predict the best moves in difficult positions. It works like this: you come up with a move, then play it, click on “Estimate score” and see if you guessed the best move or not. And one more thing: it’s always interesting to see the real balance of the game by pressing “Estimate score” (it works quite accurately, about the level of Katago with low settings).

Now it doesn’t work at all. Could you please return the functionality of the “Estimate score” button to its previous state? As I understand it, the new improvements may work separately from the score estimate functionality.


Good idea. Here are some test cases forked from the compendium thread (board images linked to the actual games):

This one is good now:

Not following the rules of Go, only slightly better than the original:

Surprise invasions still happen - particularly the weird part in the lower-right:

But not always. (I tried with both Black and White to play.)

I do like how there is apparently only one kind of dame marking now. But I’m still curious about what changed with the life/death assessment!


Thanks for providing examples.

I know the idea is to help beginners by avoiding harsh unexpected results, but in my view being harsh is not really the issue as long as the result is clear and consistent.

Those results are not consistent. The score doesn’t follow the rules, and as a beginner I would inevitably be confused. Go is already hard enough to figure out; it’s worse if you have to figure out the quirks of a weird scoring algorithm on top of the rules…

Still, it seems better than before, so hurray for that I suppose, but I’m a bit puzzled as to why we don’t take this opportunity to change the system altogether, as was recently discussed in another thread (and a helpful pop-up could still be included after the game to help players understand the results, if it’s easy enough to implement).


Hooray for the system getting better at respecting the players when it comes to life and death!

I agree with @Uberdude and @qnpnpmqppnp that the automatic sealing is doing newer players a disservice. The scoring should follow the rules of go. There have been several good suggestions for new UI features to make it clearer what is going on in the scoring. In my opinion this is the correct direction to go in if we want to do more to “ease players into it”. For players that do not yet know the rules of go, we should make the interface as clear and intuitive as possible, to help them learn by doing.

Making the system more complicated in order to try to guess what the players might have wanted to score (even though this scoring breaks the rules of go) is a step in the wrong direction, because this makes go seem more complicated than it is. Playing go is hard, scoring a game of go shouldn’t be!