Have pros been getting stronger in the AI era?

I’m having a discussion with my bros, starting from this topic: The neverending story uhuhu uhuhu uhuhu

My friend noticed a surprising fact: some of the tournament players have stayed at the 20k~15k rank for almost 10 years!

I’m not saying I’m above the expected jokes, but after those died down we finally tried to find a serious reason for how that is possible.

As an educator I try to avoid the “it’s just the limit of some people” answer. One reason I can think of is the environment: they probably never had a chance to meet stronger players, nor access to better go resources.

This brings me to another group of people who, for a long time, never had access to stronger players: the strongest players themselves.

Now they do though.

Is there any information regarding the growth of pro players in the post-AI era? Has there been significant growth now that they have a stronger player readily available 24/7 in their bedroom?

(My researching ability definitely doesn’t get any stronger despite having the Philosopher’s Stone readily available 24/7 in my pocket :stuck_out_tongue: )

12 Likes

Regarding the first part:

Or they simply don’t care and just play for fun.

It’s like those that just play ARAM or unranked once every few days, and don’t bother trying to learn the meta or optimal builds, don’t spend time practicing CS, etc.

To some people, Go is just a game that’s meant to be a pastime.

Regarding the Pros:

At the very least, I don’t think they’re getting weaker. Since pros are now using AI moves and sequences, it’s fair to say they’re stronger overall than they were before AlphaGo.

11 Likes

Isn’t it the case that Pros were always getting stronger anyhow?

Each generation of Pros would have wiped out the previous one, as I understand it - whole new styles emerged from time to time even before AI brought its innovations.

6 Likes

I did some research, but it was inconclusive. Spot-checking some of the players’ rating graphs, I found no obvious spike between 2015 and 2017. Just for fun, here are some well-known players:
Lee Sedol. You expected this breakdown after AlphaGo, didn’t you?

Ke Jie. Hm… not convincing. However, it’s difficult to grow if your play is already God-like.

Cho Chikun. Maybe he doesn’t own a computer. Retired Gods don’t need it.

However, it is suspicious that the overall rating of top players is steadily growing, as suggested by Eugene:

This seems to be true. By checking the top 3 players by decade, we find this:

There is a constant increase (inflation?!) in ratings, but there seems to be some acceleration in the last decade. This might partially be an impact of AI.

Or of rating inflation.

On that note, I did find an interesting study on rating inflation in a strange, inferior game (some “chess” or whatever). There, the finding was that no, there is no rating inflation: players in the top league really do get better.

6 Likes

Improvement over generations is the natural order for most if not all human endeavors (although we may be close to the end in some, such as the 100 meter dash, based on natural physical limitations). In my lifetime, the two that most astounded me were the breaking of Bob Beamon’s long jump record and Jim Ryun’s high school mile record. In sports, improvement has come from better training methods, better general health and nutrition, and greater leisure to train. In mind games, the accumulated experience is crucial, as well as an increasing pool of players.

It’s generally recognized that a fairer way to judge past performers is by their comparative record and historical achievements (i.e., innovations). For example, Al Oerter’s four consecutive Olympic gold medals, Lasse Viren’s “Double Double,” Zatopek’s phenomenal dominance throughout his career, Capablanca going undefeated for some 10 years, or Fischer’s 12 consecutive wins in the Candidates Matches. Or think Go Seigen.

2 Likes

Well that’s why I put “significant” in the question.

Experts in any field are the ones who constantly push the boundaries of human limits. They get better over time by “standing on the shoulders of giants,” plus their own effort.

But it is also commonly agreed that the mere existence of a better player/expert can boost the growth of those around them. I’m just curious how much effect AI has had on the top pros.

2 Likes

I think it’d be worth comparing the Elo graph against a graph of the number of professional players over the years as well.

I would expect some rating inflation if more players flood into the system - imagine there are more tournaments/ranked games to accommodate them, where the top players still beat the up-and-comers and see their ratings increase as a result. A toy model of that mechanism is sketched below.
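
To be clear about what I mean, here is a minimal toy simulation (plain Elo, not WHR; every number is invented). Incumbents of fixed true strength face a stream of newcomers who enter at the default rating, lose a few games, and quit. Because the entrants come in overrated relative to their actual strength, points flow to the incumbents and their ratings drift upward:

```python
import random

random.seed(1)
K = 24
incumbents = [1800.0] * 10  # ratings only; true strength never changes

def expected(ra, rb):
    """Standard Elo expected score for a player rated ra against rb."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400))

for season in range(10):
    for _ in range(100):                  # 100 newcomers per season
        newcomer = 1500.0                 # default entry rating
        for _ in range(5):                # each plays 5 games, then quits
            i = random.randrange(len(incumbents))
            e = expected(incumbents[i], newcomer)
            win = random.random() < 0.9   # incumbents' true win rate
            incumbents[i] += K * ((1.0 if win else 0.0) - e)
            newcomer -= K * ((1.0 if win else 0.0) - e)
    avg = sum(incumbents) / len(incumbents)
    print(f"after season {season + 1}: mean incumbent Elo = {avg:.0f}")
```

The drift stops once the ratings fully price in the 90% win rate, but while the influx lasts, the top ratings climb without anyone actually getting stronger.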

2 Likes

Isn’t the methodology they use for these sorts of things (WHR) immune to rating inflation?

1 Like

Why do you think so? I don’t think Rémi Coulom made any statements about inflation or deflation in his paper (https://www.remi-coulom.fr/WHR/WHR.pdf)

1 Like

As I understand it, one of the strengths of WHR is that if you have two populations, A and B, with some non-empty intersection C, the ratings of players in A-but-not-C and B-but-not-C can be compared thanks to the linking players in C, even when C is remarkably small relative to the sizes of A and B. If this understanding is wrong, my conclusion may well be wrong as well.

If, then, one can group historical and modern go players into sets X1, X2, X3, …, Xn such that each set Xk corresponds to a time period which has a non-empty intersection with Xk-1 (excepting X1), then any two sequential sets can be the “A” and “B” of the first paragraph, and the ratings of the players within them can be compared. Therefore the ratings of all players between any two of these sets can be compared, by induction, no?

If we assume rating inflation applies, then X1 would not be comparable to Xn. But by the above reasoning (if correct), X1 is comparable to Xn. Therefore the initial assumption is wrong and rating inflation does not apply to WHR.
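
For what it’s worth, here is a minimal sketch of that bridging argument (a static Bradley-Terry fit, not full WHR, which additionally models skill drift over time; all players and numbers are invented). Era A and era B never play each other directly, yet the fit recovers their relative strengths through the bridge players in C:

```python
import math
import random

random.seed(0)

# Invented true skills: era A is players 0-4, the bridge C is players 5-6
# (they play in both eras), and era B is players 7-11.
true_skill = [0.0, 0.3, 0.6, 0.9, 1.2,   # era A
              1.0, 1.5,                  # bridge C
              1.1, 1.4, 1.7, 2.0, 2.3]   # era B
n = len(true_skill)

def simulate(i, j):
    """One game under the Bradley-Terry model; True if i beats j."""
    p = 1.0 / (1.0 + math.exp(-(true_skill[i] - true_skill[j])))
    return random.random() < p

games = []  # list of (winner, loser) pairs
for era in (list(range(0, 7)), list(range(5, 12))):  # A+C, then C+B
    for _ in range(2000):
        i, j = random.sample(era, 2)
        games.append((i, j) if simulate(i, j) else (j, i))

# Maximum-likelihood Bradley-Terry fit by normalized gradient ascent.
counts = [0] * n
for w, l in games:
    counts[w] += 1
    counts[l] += 1

ratings = [0.0] * n
for _ in range(1000):
    grad = [0.0] * n
    for w, l in games:
        p = 1.0 / (1.0 + math.exp(-(ratings[w] - ratings[l])))
        grad[w] += 1.0 - p   # winner's rating should rise
        grad[l] -= 1.0 - p   # loser's rating should fall
    ratings = [r + 0.5 * g / c for r, g, c in zip(ratings, grad, counts)]

# Ratings are only identified up to an additive constant; pin player 0 at 0.
base = ratings[0]
print([round(r - base, 2) for r in ratings])  # ~ the true skill gaps
```

Players in A and B that never met still land in the right relative order, because both groups are tied to the bridge players. Note the last step, though: the fit only determines rating differences, which leaves the absolute level unpinned.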

4 Likes

Ah right - I missed that emphasis.

1 Like

In my understanding, past ratings are not fixed in WHR. The processing of modern players will affect the ratings of the historical players even before the point that the historical players started to interact with modern players. So yes, WHR allows comparison of modern players with historical players, but the absolute ratings of the whole population can still drift. There is no anchor.
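
A quick way to see the “no anchor” point: the Bradley-Terry game model underlying WHR depends only on rating differences, so shifting every rating, past and present, by the same constant changes nothing the game results can detect. A tiny check (ratings invented):

```python
import math

def p_win(ri, rj):
    """Bradley-Terry win probability from a rating difference."""
    return 1.0 / (1.0 + math.exp(-(ri - rj)))

ratings = {"historical": 1.2, "bridge": 1.8, "modern": 2.5}   # invented
shifted = {k: v + 100.0 for k, v in ratings.items()}          # uniform shift

print(p_win(ratings["modern"], ratings["historical"]))   # some probability
print(p_win(shifted["modern"], shifted["historical"]))   # exactly the same
```

WHR’s prior ties each player’s ratings together over time, but nothing pins down the absolute level, so the scale of the whole population is indeed free to drift.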

3 Likes

I think there are several effects involved in a younger generation of pros replacing an older generation:
1: reading skills
2: positional judgement
3: new openings / joseki / whole board strategy

1: pros seem to agree that the reading skills of top pros have always been great. Modern top pros are not clearly better at reading than Honinbo Dosaku. AI don’t really outread top pros all the time. AI beat top pros mostly with their superior positional judgement.
But age is a factor when time is limited. Young pros tend to read faster than older players. Also, young players have more stamina to maintain the level of concentration required for many hours. This has nothing to do with AI. I think it has always been the case that top players start declining in reading speed as they become middle aged.

2: this is an area where older pros may have an advantage over younger pros because of experience. I think this was the reason why Shuko could still teach his younger pro students a thing or two even though his advanced age meant that many of his students would beat him in a real match (see 1).
AI have vastly more experience than any human, because of the millions of training games they have played. A human would need to live for a thousand years to gain as much experience as a top AI.

3: this is an area where a newer generation would have a clear advantage, but it is not as big as you might think. When you replay an old top pro game with an AI, the AI will point out some moves that are inferior by modern theory, but the actual point loss tends to be small. These errors are rarely greater than 1 or 2 points, and there are not many such errors.
I’m quite sure that the knowledge added in the past 350 years would not let any of the current top 3 give Honinbo Dosaku a 2 stone handicap. Perhaps they could not even beat him at a 1 stone handicap (Dosaku taking black without komi).

6 Likes

Are you aware of any larger-scale exercise that replayed historical pro games with top-level AI and compared those evaluations with evaluations of contemporary pro games?

The idea would be that if a player plays more of the AI-optimal moves, and does so consistently, then he/she should be a better player than someone who strays from the AI-optimal path more frequently.

I believe this method could also be used to answer the OP’s question.

Caveat: Jeff Sonas postulates that this method would just measure how “computer-like” the player’s play is - but who cares, if computer-like means 11D+?
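
To make the measurement concrete, here’s a minimal sketch. The `engine.analyze(moves)` interface is hypothetical - a stand-in for whatever analysis engine wrapper you use (KataGo, say) - and all details are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    best_move: str  # the engine's preferred move in this position
    score: float    # expected score in points, for the player to move

def game_quality(engine, moves):
    """Return (match_rate, mean_point_loss) for one game record.

    `engine.analyze(prefix)` is assumed to evaluate the position reached
    after the moves in `prefix` and return an Analysis for the player
    to move. This interface is hypothetical.
    """
    matches, losses = 0, []
    for k, played in enumerate(moves):
        before = engine.analyze(moves[:k])      # mover's view, before
        after = engine.analyze(moves[:k + 1])   # opponent's view, after
        if played == before.best_move:
            matches += 1
        # after.score is from the opponent's perspective, so the mover's
        # score after the move is -after.score; the loss is the drop:
        losses.append(max(0.0, before.score + after.score))
    return matches / len(moves), sum(losses) / len(losses)
```

Averaged over many games per era, those two numbers would give exactly the comparison described above: if post-2016 pros match the engine more often and lose fewer points per move than pre-AlphaGo pros, that is at least suggestive of real improvement (modulo the Sonas caveat).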

1 Like

I don’t know. I’m also not sure how to quantify that on a strength scale.

Also, we don’t win games by our best moves, we lose games by our worst moves. So perhaps playing strength would be more accurately determined by how bad a player’s worst moves are.
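
With per-move point losses already in hand (as in the sketch a few posts up), that is just a different summary statistic over the same data, e.g. (loss values invented):

```python
# Per-move point losses for one hypothetical game (values invented).
losses = [0.1, 0.0, 0.3, 0.2, 4.5, 0.1, 0.0, 7.0, 0.4, 0.2]

mean_loss = sum(losses) / len(losses)          # rewards overall consistency
worst_3 = sorted(losses, reverse=True)[:3]     # the biggest blunders
blunders = sum(1 for x in losses if x >= 2.0)  # moves losing >= 2 points

print(f"mean loss:            {mean_loss:.2f} pts/move")
print(f"avg of 3 worst moves: {sum(worst_3) / 3:.2f} pts")
print(f"blunders (>= 2 pts):  {blunders}")
```

Two players can have the same mean loss while one of them throws the occasional game away with a single large blunder, which the tail metrics would expose.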

6 Likes

That’s actually such a nice quote, I thought I’d highlight it again.

4 Likes

https://arxiv.org/abs/2311.11388

[figures from the linked paper]

12 Likes

also:
https://www.pnas.org/doi/epdf/10.1073/pnas.2214840120

More graphs

[additional figures from the linked paper]

3 Likes

If I understand the methods correctly, one could argue that what is measured is how AI-like people have been playing. Of course, more AI-like probably IS better, but how can we be sure that a player who plays more AI-like on average would actually beat a less AI-like human player? Maybe they are just better at mimicking to a certain degree, without really understanding what they are doing. I don’t really believe that; I’m just playing devil’s advocate.

8 Likes

It’s also the first thing that comes to mind when seeing them use the word “quality”, since that sounds like a hard-to-define quantity, while AI-likeness seems much easier.

4 Likes