If the current Shin Jinseo were to play the Lee Sedol version of AlphaGo, what do you think the score would be?

Current Shin Jinseo VS AlphaGo Lee Sedol.

5-game match. What will the result be?

By AlphaGo Lee Sedol, I mean the exact version that Lee Sedol played in 2016 without any improvements.

  • Shin 5-0 AlphaGo
  • Shin 4-1 AlphaGo
  • Shin 3-2 AlphaGo
  • Shin 2-3 AlphaGo
  • Shin 1-4 AlphaGo
  • Shin 0-5 AlphaGo
4 Likes

By the way in case anyone hasn’t heard of it yet, this may become a reality, though nothing seems to be confirmed at the moment. I guess there may also be some difficulties in configuring the exact AlphaGo that played Lee Sedol.

5 Likes

And an article

1 Like

Aren’t there some special positions AIs have big trouble with? I remember some pro saying it would be no trouble for them to beat a computer by abusing certain variations.

I think you are talking about the looping blunder.

AIs are not good at recognising loops and there’s a high chance that AlphaGo has this problem too.

But I don’t think people would want to watch SJS exploit the AI’s bug in such a way to win, haha. (Though you could also say that Lee Sedol exploited a bug, but that happened through normal play and wasn’t intentional.)

2 Likes

Also, it would be even sillier if you tried to abuse the loop exploit and it didn’t work.

Even when you watch players who are good at setting up the exploit, they still don’t have a 100% success rate.

2 Likes

As of now, we’re predicting on average that Shin Jinseo will win exactly 2.5 of 5 games.
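That average just comes from weighting each poll option by its share of the votes. A toy check of the arithmetic, with hypothetical vote counts (the real tallies aren’t shown in this thread):

```python
# Expected number of Shin wins, weighting each poll option by its votes.
# The vote counts below are invented, purely to illustrate the arithmetic.
shin_wins = [5, 4, 3, 2, 1, 0]   # the six poll options (Shin's wins out of 5)
votes     = [2, 5, 8, 8, 5, 2]   # hypothetical tallies per option

expected = sum(w * v for w, v in zip(shin_wins, votes)) / sum(votes)
print(expected)  # 2.5 for this symmetric hypothetical split
```

Any vote distribution that is symmetric around the middle two options gives exactly 2.5 like this.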

3 Likes

Well maybe one of the games will end up in a draw due to triple ko or quadruple ko. AIs are not good at those things either.

1 Like

IIRC, under Korean competition rules, if a game ends in a draw or no result, the players use the remaining time to start a new game, repeating until a winner is determined.

5 Likes

I’ve posted and asked people who know Aja Huang, and the latest news coming back is that Aja said he hasn’t seen an official invitation from the KBA, and hasn’t decided whether to cooperate with the KBA yet. We would likely know more, with some concrete news, early next year.

As for Aja’s personal opinion about the AlphaGo Lee version vs SJS, he is still confident in the AlphaGo Lee version, but not as much as when it played Lee Sedol. (So likely in the camp of AlphaGo Lee winning 3-2 or 4-1 vs SJS? Though 2-3 is not out of the question?)

2 Likes

Also, we have Shin Jinseo’s own opinion (although from last year)

Basically, SJS himself thinks that if he can find the weakness (given enough practice), he would win 5-0, and lose 0-5 if he cannot. So I’m not really sure that, even if SJS learned about the circular loop bug, he wouldn’t use it.

2 Likes

I think in one of the interviews someone asked him the question, and he said he would make an extreme prediction of winning 3-2.

What’s the point of beating an old version of AlphaGo? Do you think they will bet a million dollars again?

1 Like

It would be interesting to see how much humans have improved with the help of publicly available AI.

9 Likes

And the AlphaGo Lee version was actually supervised-trained, and it didn’t play the 3-3 joseki (3-3 variations actually appeared after the AlphaGo Master 60 games). It would be interesting to know how it would behave when facing new joseki, developed with the help of later self-trained AIs, that now appear very frequently in human pro games.

3 Likes

Shin 2-3 AlphaGo was winning at one point but Shin 3-2 AlphaGo took over!

1 Like

And the AlphaGo Lee version was actually supervised-trained, and it didn’t play the 3-3 joseki (3-3 variations actually appeared after the AlphaGo Master 60 games). It would be interesting to know how it would behave when facing new joseki, developed with the help of later self-trained AIs, that now appear very frequently in human pro games.

Yeah, this came up on reddit earlier too and I commented there:

Even though Shin Jinseo is stronger and players today have the benefit of learning from modern AI, I wonder if there might be one aspect that significantly adds to the challenge: AlphaGo Lee, by indications from the game records we have, mostly does *not* play the kind of “AI-style” that strong modern bots play - early 3-3 invasions, heavy emphasis on certain corner patterns and modern joseki - and instead plays some of the joseki that were more popular among pros pre-AI-era, with a bit more emphasis on side extensions, etc.

This is presumably because AlphaGo Lee is only some steps of reinforcement-learning improvement away from AlphaGo Fan and earlier versions that leveraged supervised learning from human games (as we see from DeepMind’s earlier papers), giving it neural nets significantly biased towards the popular pre-AI-era joseki that humans played.

But, in other respects, e.g. fighting strength and judgment, AlphaGo Lee was stronger than all humans of the time, and a partial step towards the strength of modern bots. I would be curious whether it would pose a challenge unlike any opponent Shin Jinseo has practiced against or is familiar with - still playing a pre-AI-era, human-resembling style and leading the game into opening patterns or joseki that today are rare and no longer as deeply practiced or studied, but playing them with a strength and accuracy well above all the humans who used to play them all the time.

If this were actually going to happen and I were SJS, I might consider trying to have someone set me up some decently strong bots but heavily biased towards pre-AlphaZero human pro supervised learning in the opening for sparring practice, or analyze/practice with modern bots but starting from various forced openings specifically taken from pre-AlphaZero pro play.

3 Likes

@hexahedron what are your thoughts on whether AlphaGo Lee would also suffer from the cyclic groups weakness KataGo naturally did? Perhaps the Monte Carlo rollouts that early version used which later Zero-style bots dispensed with would ameliorate it? Or is it inherent in their shared neural network architecture?

1 Like

It would depend on how exactly they’re mixed and weighted. The neural net is most likely very vulnerable, because, as far as I know, every neural net of every bot that anyone has taken the time to test and inspect, and that wasn’t specifically trained against it, has heavy misevaluations there (including even neural nets of different architectures - vision transformer instead of CNN). But yes, full-game MCTS rollouts would help a ton, because they’re going to play out all the liberty filling and see that running out of liberties will result in the cyclic group’s death.

Full-game MCTS rollouts might not always help on the finer points, though, if you want accurate evaluation of cyclic-group situations. Generally those rollouts are still local-pattern-biased, and, for example, the cyclic group can often escape trouble by making two-headed-dragon life, but doing that sometimes requires playing shapes you normally don’t see in life-and-death fights, where you are trying to fight to form a “false” eye. (I would guess that, finer points aside, the rollouts will usually be sufficient to spot trouble on the horizon, as long as they have a non-trivial fraction of the evaluation weight.)
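For intuition on the weighting: if I recall the 2016 AlphaGo paper correctly, the leaf evaluation was a simple blend, V(s) = (1 − λ)·v_net(s) + λ·(rollout mean), with λ = 0.5. A toy sketch (all numbers invented, not from any real position) of how rollouts that see the cyclic group die can pull a misevaluating net back towards reality:

```python
# Toy illustration of AlphaGo-style leaf-value mixing:
#   V(s) = (1 - lam) * v_net(s) + lam * mean(rollout outcomes)
# Suppose the value net badly misevaluates a cyclic-group position
# (optimistic +0.9 for the attacker), while full-game rollouts that
# fill liberties to the end mostly see the cyclic group die (-1).

def mixed_value(v_net, rollout_results, lam=0.5):
    """Blend a static value-net estimate with averaged rollout outcomes."""
    rollout_mean = sum(rollout_results) / len(rollout_results)
    return (1 - lam) * v_net + lam * rollout_mean

v_net = 0.9                                   # hypothetical net blind spot
rollouts = [-1, -1, -1, 1, -1, -1, -1, -1]    # most playouts: group dies

print(mixed_value(v_net, rollouts))  # ~0.075: near even, not the net's +0.9
```

With λ near 1 the rollouts dominate and the blind spot washes out; with λ near 0 the net’s misevaluation wins - which is exactly why the rollouts need a non-trivial fraction of the evaluation weight.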

3 Likes