Unofficial OGS rank histogram 2021

As I understand it, the ranks shown in chat aren’t overall ranks either; they’re the category ranks, which are famously lower than the overall rank.

The old Elo remnants survive under the codeword “egf”. For KoBa’s game T:3183 R:1 (ennuiaboo vs KoBa) we get egf=742 → 742 / 100 = 7 → 7 + 9 = 16 → 30 - 16 = 14k!

Or the game that was played right after the screenshot in A short travel through OGS history - #28 by S_Alexander was taken: correspondence egf=1685.58 (matching the screenshot) → 1685 / 100 = 16 → 16 + 9 = 25 → 30 - 25 = 5k!

Again these are category ranks.
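The arithmetic above can be collected into a tiny helper. This is a sketch of the apparent conversion, reverse-engineered from the two examples; the formula is an assumption, not documented OGS behaviour.

```python
def egf_to_kyu(egf_rating):
    """Convert an old OGS 'egf' rating to a kyu rank.

    Assumed formula, inferred from the examples above:
    floor(egf / 100) + 9 gives a rank index, and 30 - index is the kyu.
    """
    index = int(egf_rating // 100) + 9
    return 30 - index

print(egf_to_kyu(742))      # 14, i.e. 14k
print(egf_to_kyu(1685.58))  # 5, i.e. 5k
```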


The noobs are swelling because we are sampling more noobs from the nearly infinite universe of potential noobs (see my post on normal versus power laws above). More people are playing Go. It’s a good thing.


So they might be more or less accurate for those of us playing in more or less one category (e.g. mostly correspondence 19x19, or mostly live 9x9, etc.).

I know they won’t be identical, but one can imagine them being close-ish, with a small number of games in other categories just throwing them off a bit.

Ok, so “working with OGS statistics” has turned into “learning Python”, which, although a worthy endeavor, has not let me demonstrate any concrete results (yet). Trawling through OGS code and cogitating on rank/rating issues has led me to some conclusions, and some ideas. I would appreciate some feedback.

  1. There is NOT an infinitely bad Go player. AlphaGo learned from scratch, making random moves. This excludes the pathological case of players trying to lose (which may be wrong to exclude, as players may try to manipulate their rating. Hmmm, what if both players are trying to lose…?)
    1a) OGS currently maps rating->rank such that there is a minimum possible rank
    1b) Rating points at very low ranks don’t mean much, they are “play money”
    1c) Two-tailed distributions are exploded - Elo and Glicko are clearly wrong. Note that the USCF has reported problems at both ends of the rating scale over the years. Glicko’s range (uncertainty) feature is desirable, and should be addressed.
  2. Given there is not an infinitely bad Go player, there is a limit, there is a worst Go player. This limit lets us introduce the possibility of a very simple power law distribution. This theoretical distribution has been proven to be common in “complex” situations, situations involving many actors employing many strategies. Go, anyone?
    2a) A contest between two players can be represented as two samplings from any distribution. Bayes’ theorem, and the observation that Black wins → White loses and vice versa, give us a flexible (unassailable?) probability model.
  3. It is natural to map the minimum OGS rank to the worst possible Go player. Given that fix, one can map expected results to actual results, and estimate the real probability distribution (based on rank, not rating).
  4. It is convenient to have “play money” at the lower ranks (see 1b). An eager noob should not run out of points to bet. The current mapping between rank and rating seems to make this nearly certain.
  5. So rankings are significant (with a theoretical minimum), and ratings are “play money” (at least at the lower end). I propose that rating changes be based on “fair” bets, based on rank (not on rank differences; the relationships are not linear). What is the most a higher-rated player would be willing to risk against an unranked/unknown player? 100? Maybe too small. 1000? Surely too big.
  6. New player… Theoretical minimum rank, arbitrary new rating… dunno.
    I suspect that a proper modelling of a ranking based power law will fix perceived problems at high skill levels. I also suspect that adopting the Bayes model and recognizing that there are many bad Go players we’ve never met will fix many perceived problems at lower skill levels.
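Point 2a can be sketched with a Monte Carlo simulation. In this sketch each player’s performance in a game is a single draw from a Pareto (power-law) distribution, and the higher draw wins; the shape parameters are made-up stand-ins for rank, not anything fitted to OGS data.

```python
import random

def win_prob(alpha_a, alpha_b, trials=100_000, seed=0):
    """Estimate P(A beats B) when each player's performance is one draw
    from a Pareto distribution with the given shape parameter.
    Smaller alpha means a heavier tail, i.e. a stronger player here."""
    rng = random.Random(seed)
    wins = sum(
        rng.paretovariate(alpha_a) > rng.paretovariate(alpha_b)
        for _ in range(trials)
    )
    return wins / trials

print(win_prob(2.0, 2.0))  # ~0.5: evenly matched
print(win_prob(1.5, 3.0))  # ~0.67: heavier-tailed A wins more often
```

For two Paretos with the same scale there is a closed form, P(A wins) = alpha_b / (alpha_a + alpha_b), which the simulation approximates; 3 / (1.5 + 3) = 2/3 for the second call.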

WRT 5. Could players choose the size of their bets, with insight from their ratings?

One could declare how many points one is willing to lose… Odds, based on ranking, create the fair ratio. The opponents’ risk choices then determine the actual rating changes.
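The fair-bet idea reduces to one line of algebra: if A wins with probability p, a zero-expected-value bet needs p · stake_B = (1 − p) · stake_A. A minimal sketch, assuming some rank-based win probability is supplied from elsewhere:

```python
def fair_stake_for_b(p_a_wins, stake_a):
    """Rating points B must risk for the bet to be fair (zero expected
    value for both sides), given A's win probability and the stake A
    has chosen to risk."""
    return stake_a * (1 - p_a_wins) / p_a_wins

# If A wins 75% of the time and risks 30 points, B need only risk 10:
# A's expected change is 0.75 * 10 - 0.25 * 30 = 0.
print(fair_stake_for_b(0.75, 30))  # 10.0
```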

I wouldn’t mind fixing ranks to a few stable bots.

Just had a thought that using percentiles, I can set myself a more achievable goal than ‘1 dan’… I’m currently around 6k (65th percentile) and I’d be more than happy to reach the 70th, then the 80th percentile. So my new goal is now 2.9k!
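Percentile goals like this can be read off a rank histogram with simple linear interpolation. A sketch, using a made-up uniform histogram rather than the real 2021 OGS data:

```python
def rank_at_percentile(hist, target_pct):
    """Interpolate the kyu rank at a target percentile (the percent of
    players you would be stronger than) from a rank -> count histogram."""
    ranks = sorted(hist, reverse=True)   # weakest (largest kyu) first
    total = sum(hist.values())
    cum, prev_pct, prev_rank = 0, 0.0, ranks[0]
    for rank in ranks:
        cum += hist[rank]
        pct = 100 * cum / total
        if pct >= target_pct:
            # Linear interpolation within this rank's bucket
            frac = (target_pct - prev_pct) / (pct - prev_pct)
            return prev_rank - frac * (prev_rank - rank)
        prev_pct, prev_rank = pct, rank
    return ranks[-1]

toy_hist = {k: 100 for k in range(1, 21)}   # uniform 20k..1k, fake data
print(rank_at_percentile(toy_hist, 65))     # 8.0
print(rank_at_percentile(toy_hist, 80))     # 5.0
```

With real histogram counts the same lookup would turn “80th percentile” into a concrete fractional kyu target.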


Sample means of the rank would be normally distributed (by the central limit theorem), but the underlying distribution generally isn’t, even with a large number of samples.
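A quick standard-library illustration of this point: draws from a heavily skewed (exponential) distribution stay skewed no matter how many you take, but means of batches of 100 draws cluster tightly and symmetrically around the true mean.

```python
import random
import statistics

rng = random.Random(42)

# 10,000 raw draws from a skewed distribution (true mean 1, stdev 1)
raw = [rng.expovariate(1.0) for _ in range(10_000)]

# 1,000 sample means, each over 100 draws: roughly normal,
# with stdev shrunk by a factor of sqrt(100) = 10
means = [statistics.fmean(rng.expovariate(1.0) for _ in range(100))
         for _ in range(1_000)]

print(f"raw draws:    stdev={statistics.stdev(raw):.2f}")
print(f"sample means: stdev={statistics.stdev(means):.2f}")
```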


Update… my critique of two-tailed distributions is correct, but irrelevant, because Go is not well modelled as a random process. My naive application of Bayes predicted a 10k beating a 9d a third of the time. I continue to tinker.


I do not really understand what you are trying to do, but wouldn’t it be better to start from something more realistic and more easily modelled, so that you can get some initial parameters that are more reliable?

I mean, who knows what the real odds of a 10k beating a 9d are (some variation of 0.0…0x%) and how they differ from the odds of a 10k beating an 8d (also some variation of 0.0…0x%)?
Try, for example, a 10k beating a 9k one time out of three, or 2 times out of 5, and start extrapolating from things that are closer to reality?


There is good data on 1–2k differences. I am basing my work on that. “Trying to accomplish”… Well, just killing time being geeky. Surely a Go player can appreciate that :wink: