New EGF algorithm implemented

On April 4th, 2021, the EGD changed its rating algorithm. We will update / rewrite the EGD documentation accordingly soon!

For stronger dan players there is often almost no difference. For the big bulk of players around 1 dan, there is a change of a bit less than 1 rank upwards: for example, a 1 kyu with 1992 rating points (GoR) now has 2068 GoR and can play as 1 dan, a difference of about 60 to 100 points. For single or double digit kyus the change can be up to 140 rating points, depending a bit on how many tournaments / games they have played.

We also expanded the rank floor from 20 kyu to 30 kyu. The intention of this change is to give beginners an easier start into the world of rated games, as until now a beginner typically not only lost most of their games but also stayed at 20 kyu for a long time.


I want my rating back!

On Reddit some questions came up about it:

I’d be happy to answer questions here as well (when I’m awake).


(Maybe I should be more of a snoop, I knew y’all were knowledgeable and stuff but I hadn’t realized you are important. :flushed: )


Only a little bump for me, not enough to affect anything. Still, the Great Fox awaits us.



Woah cool friends in high places


Just wanna say that I’m glad it’s finally out, fantastic job! Let’s hope the new 30k floor helps newer players enjoy tournaments more and stay with the game ^^

ps. thanks for the ~30 GoR I got ^______^


I’m neither important nor powerful in the EGF. I’m more of a meddler with perhaps some analytical abilities to make some of my points more persuasive. :wink:


I’m not important. Just lazy. I copy-pasted bits of the email from the EGF but couldn’t be bothered to change the “we” to make clear it’s nothing to do with me…
Having said that, the bar to be important is pretty low in European Go I think. President of your national association? Just needs a bit of keenness in most cases I suspect. Joining the team who does all this stuff? If you know the right computer stuff, you’re good to be on board…

In fact I should probably have included this bit for all the clever programmer people here:

"If you are knowledged in the areas of PHP programming, databases, web design or simply want to help update the documentation or help validating tournaments please don’t hesitate to contact us! There is also room for new projects like e.g. having an integrated web application for doing the tournament pairings instead of entering them in an offline java program.

Please, if you know some programmers let them know that we are very interested to have more people in our team!"


I don’t think there’s any programmer left that’s not included in the Go player base. :stuck_out_tongue:


I’d love to work with you guys on the integrated tournament pairings via web application. I’m building this now via BadukClub and I hope to have something available in a few months’ time.

Not to mention, one day, an exhaustive map of go clubs and players around the world. What’s the best way to join efforts with you all?


Nothing to do with me but the email address on the webpage is
I’m sure they’d be delighted to hear from you @Devin_Fraze


Ahhh, I see, that text was copied from their email… duh. Thanks for sending me in the right direction!


I think it is a good update.
The only slight problem is that people with fairly recent rank resets (improving players, who are often under-ranked) were left behind and are now even more under-ranked. (This is only my intuition.)


I admit that we didn’t really consider that situation, but I understand what you mean. If they recently made a reset from (say) 10k to 8k, their new rating will still be close to 8k after the system update. But at the same time, previously 8k veterans may have gotten a 7k rating from the system update.

So in hindsight, it might have been better for the 10k resetter to have reset to 7k. I don’t think much can be done to fix their rating. Hopefully they will have a relatively easy win in the 1st round of their next tournament and the issue will fix itself.


Is there a comparison between the different ranking systems with the same database?

I think the best system is the one that best predicts the winning player.

According to this criterion, it would seem that the Glicko-2 algorithm (used by OGS, if I am not mistaken) is the best:
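One simple form of this criterion can be computed over any shared game database: for each rating system, score the fraction of games won by the player that system rated higher. A hypothetical sketch; `games` and the rating tables are made-up names, not anything from the EGD:

```python
def prediction_accuracy(games, rating_of):
    """Fraction of games in which the higher-rated player won.

    `games` is a list of (winner, loser) pairs and `rating_of` maps a
    player to their rating under the system being evaluated; both are
    illustrative placeholders.
    """
    correct = sum(1 for winner, loser in games
                  if rating_of[winner] > rating_of[loser])
    return correct / len(games)

# Toy data: system A rates the eventual winners higher more often.
games = [("a", "b"), ("c", "b"), ("a", "c")]
system_a = {"a": 2100, "b": 1900, "c": 2000}
system_b = {"a": 1900, "b": 2100, "c": 2000}
print(prediction_accuracy(games, system_a))  # 1.0
print(prediction_accuracy(games, system_b))  # 0.0
```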

Another comment:

I think an automatic adjustment function should be implemented.

The GoR system was created in 1998 and was well suited to the typical profile of the go player of that time.

However, the internet has changed this typical profile.

Indeed, the volatility parameter does not sufficiently account for players who play few games counted for the European rating but play a lot of games on the internet. Often, the EGF rating is too slow to follow the real progress of the player. I think the Glicko-2 algorithm is better at this.

PS: English is not my mother tongue, so I used a translation tool to write this message.


Yes, on player pages you can switch between the old and the new database with a link on the page. Try it out on my page if you want: Dave_de Vos | Player card | E.G.D. - European Go Database

Yes, one of the adaptations was to improve the alignment between predicted winning probabilities and observed winning probabilities (between different ratings).
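That kind of alignment can be checked by bucketing games by rating difference and comparing the model’s expected score against the observed win rate in each bucket. A minimal sketch, assuming a logistic winning-expectancy formula as a stand-in (the EGF’s actual expectancy formula differs in detail):

```python
import math
from collections import defaultdict

def expected_score(rating_diff: float, scale: float = 110.0) -> float:
    """Logistic winning expectancy for the higher-rated player.
    Illustrative stand-in, not the official EGF formula."""
    return 1.0 / (1.0 + math.exp(-rating_diff / scale))

def calibration_table(games, bucket_width=50):
    """games: (rating_diff, stronger_player_won) pairs.
    Returns {bucket_start: (mean predicted score, observed win rate)},
    so a well-calibrated system has the two numbers close in each bucket."""
    buckets = defaultdict(list)
    for diff, won in games:
        buckets[int(diff // bucket_width)].append((expected_score(diff), won))
    return {b * bucket_width: (sum(p for p, _ in rs) / len(rs),
                               sum(w for _, w in rs) / len(rs))
            for b, rs in sorted(buckets.items())}
```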

We didn’t see a clear necessity to change to a more complicated rating system such as Glicko-2 or the AGA rating system. The update even made the EGF rating system slightly simpler (one policy rule became mostly redundant, so we removed it).

The system uses a volatility function (“con”) depending only on the player rating. It doesn’t take into account that some players play a lot on the internet. But how could it possibly know where and how much everyone plays on the internet? The system is not clairvoyant. And if it somehow knew, how would it take that information into account?

Everything on the internet tends to be more fleeting and to change more quickly than its real life counterpart, so quicker changes may be more desirable for online ratings and long term rating stability would be less important (people expect more instant gratification online).

But I don’t think Glicko is inherently quicker. The EGF rating system could also be made quicker by increasing the volatility, but the drawback would be that EGF ratings would oscillate more. The ratings would become more “noisy” and many players would feel forced to change their declared rank up and down every time they participate in a tournament. Many long time tournament players feel some attachment to their EGF rank and they would prefer it to be fairly stable over time.
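The trade-off shows up directly in the basic update step, which in GoR-style systems has the shape `new = old + con * (result - expected)`. A hedged sketch; the constants and the expectancy formula here are illustrative, not the EGF’s actual tables:

```python
import math

def expected(r_player: float, r_opponent: float, scale: float = 110.0) -> float:
    # Illustrative logistic expectancy, not the official EGF formula.
    return 1.0 / (1.0 + math.exp((r_opponent - r_player) / scale))

def update(rating: float, opponent: float, won: bool, con: float) -> float:
    """One GoR-style update: the rating moves by con * (result - expectancy)."""
    return rating + con * ((1.0 if won else 0.0) - expected(rating, opponent))

# Same game, two volatility settings: the larger con reacts faster
# to real improvement, but also swings harder on every single result.
calm   = update(2000, 2000, won=True, con=20)  # 2000 + 20 * 0.5 = 2010
swingy = update(2000, 2000, won=True, con=60)  # 2000 + 60 * 0.5 = 2030
print(calm, swingy)
```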

So EGF ratings tend to be more official and more focussed on the longer term (potentially spanning multiple decades). Also take into account that a typical EGF player might play only a few tens of EGF rated games a year, while a typical online player might play hundreds or even thousands of rated internet games a year. So the EGF rating system has to work with much less data than online rating systems, and you don’t want single game results to cause wild rating swings.

In summary: We made a conscious trade-off between stability and responsiveness of EGF ratings.



Thank you very much for your answer :+1:

My question was: is there a scientific approach to compare the different ranking systems? (old EGF, new EGF, AGA, Glicko 2, WHR, Asian method server, …)

I understand that it is impossible to take into account the games on the internet. So I wonder if there is an automatic reset/adjustment function in the algorithm.

With the covid crisis, some players did not participate in EGF tournaments in 2020 and they have progressed a lot on the internet.



I don’t know of any formal rating system research and comparison method.
There are occasional informal rating comparison tables that people make, but those are published on go related internet fora, not in scientific papers (as far as I know). You can find some of those informal rating system comparison tables in the OGS fora.

The purpose of this EGF rating system commission was not to research rating systems. It was more like tweaking the EGF rating system to improve its internal consistency. To do that, we used simulations and data analysis to collect statistics for evaluating the system and the various tweaks. But that process was more similar to trial and error than a “certified scientific method” (though perhaps trial and error qualifies as a sound scientific method, albeit not the most efficient one).

Yes, a rating reset/adjustment algorithm is part of the EGF rating system and it’s fairly important to keep ratings aligned overall to real life EGF ranks over time (EGF ranks existed long before EGF ratings). Players get to declare their rank when they participate in real life tournaments in Europe. There are 2 situations where their rating gets adjusted to their declared rank.

  1. If they play for the first time and declare their rank as 10k (for example), they automatically get an EGF rating corresponding to that rank.

  2. If they declare a rank 2+ ranks higher than their previous highest declared rank, they automatically get an EGF rating corresponding to their new rank. This is called a “rating reset”. Depending on your country’s go association and the tournament organisation, you are allowed to decide for yourself to reset your rating, or there is some commission who decides (I don’t know what the latest Belgian promotion policies are).

Such things also happened before the covid crisis (some people improving a lot, without participating in any EGF tournaments).
In that case they can just register for their next tournament with their new rank and trigger a “rating reset” (see above). Depending on their country, they may need permission for that. But perhaps it is sufficient to have a little talk with the tournament organisers beforehand. Anyway, such decisions are outside of the rating system itself.