A compendium of beginners getting discouraged and confused by OGS's "new accounts are 12k" design

It would be nice to have an absolute ranking system, but as far as I know it hasn’t been invented yet. There’s no “highest rank”, since the bots keep getting better and better. (Maybe there’s a highest displayed rank, but internally the ratings can go arbitrarily high.) To make an absolute system work, you would need to pick a reference player and then inject/remove ranking points from the system dynamically to keep their rank pegged at some chosen value. Or maybe it could be a set of players pegged to fit as much as possible to a set of chosen ranks.
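The "inject/remove ranking points" idea could be sketched like this (a toy illustration only; the player names, numbers, and rescaling logic are all made up, not any real server's implementation):

```python
# Toy sketch of an "absolute" rating anchor: after the usual rating
# updates, shift everyone so a chosen reference player stays pegged
# at a fixed target value.

def peg_to_reference(ratings, reference, target):
    """Shift all ratings so the reference player sits at `target`."""
    offset = target - ratings[reference]
    return {player: r + offset for player, r in ratings.items()}

ratings = {"gnugo": 1480.0, "alice": 1720.0, "bob": 1350.0}
ratings = peg_to_reference(ratings, "gnugo", 1500.0)  # peg the reference at 1500
print(ratings["gnugo"])  # 1500.0
print(ratings["alice"])  # 1740.0
```

A uniform shift keeps every rating *difference* (and hence every win probability) unchanged, which is why pegging one reference is enough to anchor the whole scale.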

Personally I think it would be fun to peg some final version of GnuGo from the pre-AI era at 10k and make everything relative to that.


I think it has been established that there is such a thing as perfect play, which implies a highest level of play.

Bots don’t play perfectly, but I don’t think that a perfect player would be able to give a very large handicap to the strongest bots. My guess is that the gap between the strongest bots and a perfect player is less than 20 points (a handicap of 2 stones).

Also, the gap between the strongest bots and the strongest humans seems to be about 2-3 stones handicap (roughly 30 points).

If a standard 1d were to be defined as requiring 8-9 stones handicap against the strongest humans for roughly 50% winrate, then one could infer that the gap between that standard 1d and a top bot is about 10-11 ranks, and that the gap between that standard 1d and a perfect player is about 12 ranks.
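Spelling out that arithmetic (all numbers are the rough guesses quoted above, not measurements, using the usual "one handicap stone ≈ one rank" convention):

```python
# Restating the post's back-of-envelope rank gaps.
gap_1d_to_top_human = (8, 9)  # handicap stones ~ rank gaps
gap_human_to_bot = 2          # "about 2-3 stones", take 2
gap_bot_to_perfect = 2        # "<20 points" ~ 2 stones

lo = gap_1d_to_top_human[0] + gap_human_to_bot
hi = gap_1d_to_top_human[1] + gap_human_to_bot
print(lo, hi)                   # 10 11 -> "about 10-11 ranks" to a top bot
print(lo + gap_bot_to_perfect)  # 12    -> "about 12 ranks" to perfect play
```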

These inferences were built into the EGF rating system in 2021: the highest possible EGF rank is now 13d (the underlying Elo rating conversion goes to infinity at EGF 13d), which is supposed to correspond to perfect play. So since 2021 the EGF rating system has effectively been an absolute ranking system (albeit anchored to a hypothetical level of play, from which 1d is removed by about 1.3 points of loss per move, as determined by some aggregation method).

(This topic has been discussed multiple times before, but I don't really feel like looking those threads up again.)


Too many people in this thread are falling for a common fallacy:

Having brand-new beginners get spanked by 12 kyus is borderline sadistic.


This is getting a bit off topic, but do you know of any mechanism that prevents the EGF rating system from drifting?


Yes, it has a mechanism for that. For each game there is a small bonus added to the ratings of both players, based on the assumption that on average, players gradually improve. This bonus is rating dependent. It is much larger for 30k than it is for 1k (a typical 30k is assumed to improve more quickly than a typical 1k). For dan players the bonus is basically 0. The details can be found at EGF ratings system | E.G.D. - European Go Database

The formula for this bonus (as a function of rating) was determined empirically using the historical data of the EGF rating system.
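For reference, the bonus on the EGF page has roughly this shape (reproduced from memory; treat the constants as assumptions and check the linked page for the authoritative formula). A quick check confirms it behaves as described above: large for weak ratings, essentially zero for dan ratings.

```python
import math

# Per-game rating bonus with the shape described on the EGF ratings page
# (constants from memory, so verify against the official documentation).
def bonus(rating):
    return math.log(1 + math.exp((2300 - rating) / 80)) / 5

print(round(bonus(100), 2))   # ~5.5 points/game around 20k (rating 100)
print(round(bonus(2100), 2))  # ~0.52 around 1d (rating 2100)
print(round(bonus(2700), 3))  # ~0.001 around 7d: basically zero
```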

Another mechanism that reduces rating deflation is rating resets triggered by double promotions (though policies for those aren't really consistent across EGF member countries).

Without such mechanisms, improving players would gradually deflate EGF ratings.


Would loosely tethering the algorithm to very good and very bad bots make any sense? Of course some people would have to play them.

Or what about establishing a provisional rank by playing a few bot games, maybe 9x9 games? First you play a 12 kyu bot, and then the system gives you stronger or weaker bots as it deems appropriate. It could be optional: if you don't want a beginner rank, establish your rank with a few quick bot games.


Having to play a bot makes it even worse for me.

So far, the three entry points seem like an interesting evolution to avoid too much distortion (according to this thread).

Players will always be able to calibrate their rank with bots (instead of humans) if they want to; there's no need to say "they have to". I'm fine with top-level bots, but not with low-level ones that play an uninspired and tasteless go.


My guess is that it makes more sense to have the tether in the DDK-SDK range, so we could gather good statistics from a lot of balanced matches. And a difficulty with the very good bots might be agreeing on a software version that we would be able to maintain at a constant level of performance for many years despite hardware updates.


The truth is probably somewhere between those two opposite opinions, and the answer can be inferred from past debates in older threads.

One opinion (for example) was that offering fairness from the start was a negligible concern, because if you can't stand suffering you aren't cut out to be a go player. Not my words, and I can't say the whole community shared that opinion.
Edit: I spent some time trying to find some of the heated debates from the past; I didn't find them, so I'll leave it as a memory. Instead I found a lot of interest in finding ways to improve, and acknowledgement that the system must change. So I won't dig deeper, and will just appreciate the changes emerging now (hopefully).

but this: “if you don’t want a beginner rank, establish your rank with a few quick bot games”

That will be possible in the proposal (see the other thread)

Yeah, there have been quite a few suggestions about making newly registered users play some 9x9 games with a bot, or solve some tsumego, when they register, but stuff like that would make OGS, and go in general, less accessible…

Probably worth mentioning: since OGS is currently one of the main online platforms for go (outside CJK), we have hundreds of brand-new beginners registering on a daily basis. Many of them actually want to start by playing with the weakest bots, which isn't currently possible for ranked games.

And while "playing with a weak bot" is nowadays maybe the most normal way for beginners to start, there are also quite a few people who hate being forced into something like that. Many people want to learn by jumping in at the deep end, and focus more on the social aspects of go.

Also, there are a lot of not-so-new users: some who just want to check out what kind of place OGS is before deciding whether they want to play here, some who are playing in a tournament or league held on OGS, and some who have been OGS users in the past but have simply forgotten the username they once had. For people like them, it's good to keep OGS more accessible, not less.


Now, if that is finally part of the near future, I fear some disappointment, as those weakest bots are so unfriendly and spiritless. It's still very hard to create a worthwhile weakest bot.

Absolutely, and your post gives quite a good view of many aspects of this.
It's crucial not to become a small family protecting itself (I'm thinking of things like "our fantastic rating system"…) and instead to welcome and integrate newcomers.


Theoretically it's possible to estimate rank from a single game, by measuring with an AI how bad your moves are. The result of the game, and who it was against, don't matter much. There are many moves, so it's more data than the results of 5 games.


I don't think it's that easy, at least not from a single game, even with some 120 moves to scrutinise.
I remember an OGS forum game where we had to guess a player's level from one game. It was not uncommon for the guesses to be off by several ranks.

Also, when I check my own games, there is a decent amount of variability. In some games my average score loss is twice as big as in some of my other games. And for weaker players, I expect this variability to be even bigger. So I think 1 game is not quite enough. The data is too noisy for that.
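That noisiness can be illustrated with a toy simulation (the distribution and all numbers here are invented, purely to show the scatter, not a model of real games):

```python
import random

random.seed(1)

# Suppose a player's true mean point loss per move is 3.0, with heavy
# per-move variance. Even averaged over ~120 moves, single-game means
# still scatter noticeably from game to game.
def game_average(true_mean=3.0, moves=120):
    return sum(random.expovariate(1 / true_mean) for _ in range(moves)) / moves

samples = [game_average() for _ in range(5)]
print([round(s, 2) for s in samples])  # five single-game means around 3.0
```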

If it were easy, I think someone would have already made an app where you can upload a game and then it just outputs your level (expressed as a number of ranks removed from the strong bot used to evaluate your uploaded game).


Averaging means throwing away data. Neural nets can analyse the data better than that: they can take the move number into account (at which move kyu players typically make how big an error compared to dan players, and so on).
And as with face recognition, it's possible to look at the shapes of patterns, not only at the size of the error.


That would require training an AI on a large number of games annotated with the players' levels. It seems possible, but someone would need to invest time and money into such a development project.


Face recognition isn’t an easy problem either. It exists because a lot of research and money went into it.


There are other ways to fit data; I'm not sure a neural network or AI would be necessary.


A neural-network AI is necessary to get a high-quality estimate,
but for the purpose of just assigning a starting rank, something simpler may be enough.