My friend noticed a surprising fact: some of the tournament players have stayed at the 20k~15k rank for almost 10 years!
I’m not saying I’m above the expected jokes, but after those died out we finally tried to find a serious answer to “how is that possible?”.
As an educator, I try to avoid the “it’s just the limit of some people” answer. One reason I can think of is the environment: they probably never had a chance to meet stronger players, nor access to better go resources.
This brings me to another group of people who, for a long time, never had access to stronger players: the strongest players themselves.
Now they do though.
Is there any information on the growth of pro players in the post-AI era? Has there been significant growth now that they have a stronger player readily available 24/7 in their bedrooms?
(My researching ability definitely doesn’t get any stronger despite having the Philosopher’s Stone readily available 24/7 in my pocket )
I did some research, but it was inconclusive. Spot-checking some players’ rating graphs, I found no obvious spike between 2015 and 2017. Just for fun, here are some well-known players:
Lee Sedol. You expected this breakdown after AlphaGo, didn’t you?
There is a constant increase (inflation?!) in ratings, but there seems to be some acceleration in the last decade. This might be partially an impact of AI.
Or of rating inflation.
On that note, I did find an interesting study on rating inflation in a strange, inferior game (some “chess” or whatever). The finding there was that no, there is no rating inflation: players in the top league really do get better.
Improvement over generations is the natural order for most if not all human endeavors (although we may be close to the end in some, such as the 100 meter dash, based on natural physical limitations). In my lifetime, the two that most astounded me were the breaking of Bob Beamon’s long jump record and Jim Ryun’s high school mile record. In sports, improvement has come from better training methods, better general health and nutrition, and greater leisure to train. In mind games, the accumulated experience is crucial, as well as an increasing pool of players.
It’s generally recognized that a more fair way to judge past performers is by their comparative record and historical achievements (i.e., innovations). For example, Al Oerter’s four consecutive Olympic gold medals, Lasse Viren’s “Double Double,” Zatopek’s phenomenal dominance throughout his career, Capablanca going undefeated for some 10 years, or Fischer’s 12 consecutive wins in the Candidates Matches. Or think Go Seigen.
I think it’d be worth comparing the Elo graph against a graph of the number of professional players over the years as well.
I would expect some rating inflation if more players flood into the system: imagine there are more tournaments/ranked games to accommodate them, where the top players still beat the up-and-comers and their ratings increase as a result.
As I understand it, one of the strengths of WHR is that if you have two populations, A and B, with some non-empty intersection C, the ratings of players in A-but-not-C and B-but-not-C can still be compared, thanks to the linking provided by the players in C, even when C is remarkably small relative to A and B. If this understanding is wrong, my conclusion may well be wrong as well.
If, then, one can group historical and modern go players into sets X1, X2, X3, …, Xn such that each set Xk corresponds to a time period and has a non-empty intersection with Xk-1 (excepting X1), then any two sequential sets can play the roles of A and B in the first paragraph, and the ratings of the players within them can be compared. By induction, the ratings of all players between any two of these sets can then be compared, no?
If we assume rating inflation applies, then X1 would not be comparable to Xn. But by the above reasoning (if correct), X1 is comparable to Xn. Therefore the initial assumption is wrong and rating inflation does not apply to WHR.
In my understanding, past ratings are not fixed in WHR. The processing of modern players will affect the ratings of the historical players even before the point that the historical players started to interact with modern players. So yes, WHR allows comparison of modern players with historical players, but the absolute ratings of the whole population can still drift. There is no anchor.
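Both points can be seen in a toy Bradley-Terry maximum-likelihood fit (the static special case of WHR’s model). In this sketch, groups A and B never play each other directly but share one player “c”: the fit recovers rating *differences* across the groups through that link, while the *absolute* level stays arbitrary, because shifting every rating by a constant leaves every win probability, and hence the likelihood, unchanged. All player names and skill values below are made up for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical latent skills; only differences matter in the model.
true_skill = {"a1": 0.0, "a2": 0.5, "c": 1.0, "b1": 1.5, "b2": 2.0}
players = list(true_skill)

def win_prob(diff):
    # Logistic (Bradley-Terry) model: P(i beats j) from the rating gap.
    return 1.0 / (1.0 + math.exp(-diff))

# Group A = {a1, a2, c} and group B = {c, b1, b2} only play internally.
pairs = [(i, j) for group in (["a1", "a2", "c"], ["c", "b1", "b2"])
         for i in group for j in group if i < j]
games = []  # list of (winner, loser)
for i, j in pairs:
    for _ in range(200):
        if random.random() < win_prob(true_skill[i] - true_skill[j]):
            games.append((i, j))
        else:
            games.append((j, i))

# Fit ratings by gradient ascent on the concave log-likelihood.
rating = {p: 0.0 for p in players}
for _ in range(1500):
    grad = {p: 0.0 for p in players}
    for w, l in games:
        p = win_prob(rating[w] - rating[l])
        grad[w] += 1.0 - p
        grad[l] -= 1.0 - p
    for p in players:
        rating[p] += 0.005 * grad[p]

# The cross-group gap is recovered via the shared player "c"...
print("estimated b2 - a1 gap:", round(rating["b2"] - rating["a1"], 2))
# ...but nothing pins down the absolute level: adding +5 to everyone
# fits the games exactly as well. That is the "no anchor" drift.
```

The true b2−a1 gap here is 2.0, and the estimate lands near it despite those two players never meeting. WHR adds time-varying ratings on top of this model, but the shift-invariance (and hence the possibility of drift in absolute ratings) carries over.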
I think there are several effects involved in a younger generation of pros replacing an older generation:
1: reading skills
2: positional judgement
3: new openings / joseki / whole board strategy
1: pros seem to agree that the reading skills of top pros have always been great. Modern top pros are not clearly better at reading than Honinbo Dosaku. AI don’t really outread top pros all the time. AI beat top pros mostly with their superior positional judgement.
But age is a factor when time is limited. Young pros tend to read faster than older players. Also, young players have more stamina to maintain the level of concentration required for many hours. This has nothing to do with AI. I think it has always been the case that top players start declining in reading speed as they become middle aged.
2: this is an area where older pros may have an advantage over younger pros because of experience. I think this was the reason why Shuko could still teach his younger pro students a thing or two even though his advanced age meant that many of his students would beat him in a real match (see 1).
AI have vastly more experience than any human, because of the millions of training games they have played. A human would need to live for a thousand years to gain as much experience as a top AI.
3: this is an area where a newer generation would have a clear advantage. But it is not as big as you might think. When you replay an old top pro’s game with an AI, the AI will point out some moves that are inferior by modern theory, but the actual point loss tends to be small. These errors are rarely greater than 1 or 2 points, and there are not many such errors.
I’m quite sure that none of the current top 3 could give Honinbo Dosaku a 2 stone handicap, even with all the knowledge that has been added in the past 350 years. Perhaps they could not even beat him with a 1 stone handicap (black without komi).
Are you aware of any larger-scale exercise that replayed historical pro games with a top-level AI and compared the evaluations with those of contemporary pro games?
The idea would be that if a player plays more of the AI-optimal moves, and does so consistently, then he/she should be a better player than someone who strays from the AI-optimal path more frequently.
I believe this method could be also used to answer OP’s question.
Caveat: Jeff Sonas postulates that this method would just measure how “computer-like” the player’s play is - but who cares, if computer-like means 11D+?
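The metric proposed above could be computed along these lines: given per-move AI evaluations for a game record, report the fraction of moves matching the AI’s top choice and the average point loss per move. The data structure below is invented for illustration; a real analysis would pull these numbers from an engine such as KataGo.

```python
# Each entry: (move played, AI's preferred move, estimated point loss).
# The moves and losses here are hypothetical placeholder values.
moves = [
    ("Q16", "Q16", 0.0),
    ("D4",  "D4",  0.0),
    ("C17", "D17", 0.8),
    ("R4",  "R4",  0.0),
    ("K10", "Q10", 1.5),
]

# Fraction of moves agreeing with the AI's top choice.
match_rate = sum(played == best for played, best, _ in moves) / len(moves)
# Average estimated point loss per move.
mean_loss = sum(loss for _, _, loss in moves) / len(moves)

print(f"AI-match rate: {match_rate:.0%}, mean point loss: {mean_loss:.2f}")
# prints "AI-match rate: 60%, mean point loss: 0.46"
```

Mean point loss is probably the more robust of the two numbers, since it partially sidesteps the “computer-likeness” objection: a player can deviate from the AI’s first choice often and still lose very little.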