Though I can’t give a mathematical proof, I do think there is sufficient empirical (statistical) evidence to support my conjectures.
Would you agree that go ranks (i.e. based on handicap game results) have a finite upper limit? I’m quite sure there will be a limit to how many handicap stones a perfect player can give to (say) a top bot. Or do you think it’s possible for a perfect player (i.e. a player that never loses points) to give 9 stones handicap to a strong contender that typically only loses a few points per game?
On the other hand, I think Elo ratings (i.e. based on even game results with perfect komi, possibly 6 under Japanese rules and 7 under Chinese rules) can go arbitrarily high.
When only playing even games with perfect komi, a strong contender that loses a few points per game will lose all the time against a perfect player that never loses points. By chance there may be an occasional jigo (when the strong contender happens to also make no mistakes in a game), so in practice the perfect player may not actually reach an infinite Elo rating, but their rating can still become arbitrarily large as they continue playing only even games.
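To illustrate that last point, here is a minimal sketch under standard Elo updates (the K-factor, the starting rating and the fixed opponent rating of 3000 are illustrative choices, not real ratings): the per-game gain shrinks as the gap widens but never reaches zero, so the winner’s rating keeps climbing, roughly logarithmically in the number of games.

```python
# Minimal sketch: a player who wins every even game under standard Elo updates.
# K-factor, starting rating and the fixed opponent rating are illustrative.

def expected_score(r_a, r_b):
    """Standard Elo expected score of A against B (logistic, 400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

K = 16.0
opponent = 3000.0   # contenders' rating, kept fixed here
rating = 3000.0     # the perfect player's starting rating

for game in range(1, 1_000_001):
    rating += K * (1.0 - expected_score(rating, opponent))  # always wins
    if game in (1_000, 100_000, 1_000_000):
        print(f"after {game:>9,} games: rating ≈ {rating:,.0f}")
```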
Also, from a dataset of game results annotated with the players’ ranks (which are based on handicap game results that may or may not be in the dataset), one can use logistic regression to determine the Elo gap per rank over the skill range of the players in the dataset (a sketch of the approach follows after the numbers below). I did that in 2019 for the games in the EGF tournament games database (about 1 million games, with arguably a higher quality representation of high dan games than OGS data has):
For ~10k players the Elo gap per rank is about 50 points. For 1d players the Elo gap per rank is about 100 points. For pros the Elo gap per rank is about 300 points. Based on the same dataset (I assume), a (French) American mathematician came to similar conclusions in 2016, including the apparent perfect play asymptote at ~13d EGF (see Elo Win Probability Calculator in section Elo per Stone).
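For what it’s worth, here is a rough sketch of that kind of regression (not my actual 2019 code; the file name and column names are made up, and scikit-learn is used purely for illustration). In the Elo model the log-odds of winning equal (ln 10 / 400) times the rating difference, so a logistic fit of game outcomes against rank difference gives the Elo gap per rank directly; fitting separately per rating band is what produces the ~50/100/300 progression above.

```python
# Hedged sketch, not the actual 2019 analysis: estimate the Elo gap per rank
# from even-game results annotated with both players' ranks.
# "games.csv", "rank_black", "rank_white" and "black_won" are made-up names.
import math

import pandas as pd
from sklearn.linear_model import LogisticRegression

games = pd.read_csv("games.csv")
x = (games["rank_black"] - games["rank_white"]).to_numpy().reshape(-1, 1)
y = games["black_won"].to_numpy()

# No intercept: equal ranks should mean 50% winning chances (perfect komi).
# Very weak regularisation, so this is essentially plain logistic regression.
model = LogisticRegression(fit_intercept=False, C=1e9).fit(x, y)
beta = model.coef_[0][0]          # log-odds per rank of difference

# Elo model: log-odds = (ln 10 / 400) * Elo difference,
# so the Elo gap per rank = beta * 400 / ln(10).
print(f"Elo gap per rank ≈ {beta * 400.0 / math.log(10.0):.0f}")
```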
Also, the Elo gap between AlphaGo Zero and AlphaGo Lee is already about 1500 points (according to DeepMind), while I think it’s safe to assume that the required handicap between them is probably not more than 2 stones (or do you think AlphaGo Zero could give AlphaGo Lee a much higher handicap and win 50% of the games?). This would mean that the rank gap between them is not more than about 1.5 ranks and the Elo gap per rank is at least about 1000 points around the level of those strong bots.
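Just to make that arithmetic explicit (the 1500 Elo figure is DeepMind’s; the 1.5 rank upper bound is my assumption):

```python
# Back-of-the-envelope check of the numbers above.
elo_gap = 1500.0    # AlphaGo Zero vs AlphaGo Lee, per DeepMind
rank_gap = 1.5      # assumed upper bound on the rank gap between them

win_prob = 1.0 / (1.0 + 10 ** (-elo_gap / 400.0))
print(f"expected even-game win rate at +1500 Elo: {win_prob:.2%}")     # ~99.98%
print(f"implied Elo gap per rank: at least {elo_gap / rank_gap:.0f}")  # ~1000
```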
Based on all of that, I don’t think it’s too speculative to assume that the Elo gap per rank tends to infinity in the limit of perfect play, while the handicaps required (to even up winning chances) between progressively stronger players become smaller and smaller, tending to 0 in the limit of perfect play (note that handicaps can be made fractional by using komi handicap and switching between different komi values with some chosen frequency).
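As a made-up illustration of that fractional-handicap remark: to realise an average komi (and hence an effective handicap) that falls between two legal komi values, you can alternate between the two adjacent values with weights chosen so the average hits the target. A sketch, with a purely illustrative target:

```python
# Made-up illustration of "fractional handicap via komi switching": alternate
# between two adjacent half-point komi values so that the average komi over a
# series of games hits an arbitrary fractional target.
import math

def komi_schedule(target, games):
    """List of per-game komi values (half-point steps) averaging ~target."""
    low = math.floor(target * 2) / 2     # nearest half-point at or below target
    high = low + 0.5
    p_high = (target - low) / 0.5        # fraction of games at the higher komi
    schedule, err = [], 0.0
    for _ in range(games):
        err += p_high
        if err >= 0.5:                   # simple error diffusion spreads them evenly
            schedule.append(high)
            err -= 1.0
        else:
            schedule.append(low)
    return schedule

print(komi_schedule(6.8, 10))   # mixes 6.5 and 7.0 komi, averaging 6.8
```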
I don’t understand what exactly you mean by that. Would you expect a perfect player (which never loses points) to beat a top bot from any initial board position, no matter how unfavourable?
Exactly. There seems to be a wide consensus that strong pros can’t give more than about 3 stones handicap to weak pros (though Shin JinSeo may be exceptional in that he may be able to give 4 stones handicap to weak pros; still, expecting him to give Lukas Podpera 6 or 7 stones handicap seems too much).
So I think OGS’ rank conversion formula underestimates the Elo gap per rank in the high dan range. Or conversely, it overestimates rank gaps (required handicaps) between players with very high Elo ratings.
Also see 2021 Rating and rank adjustments - #59 by gennan (and there are several other discussions in the forums where this topic came up).
The alternative graphs (see links above) also look smooth and good, and they are based on (probably) higher quality data for high dan ratings.