Thought Experiment

I posed this question to a friend and couldn’t get a decent response from him, so I thought I would ask here as my first ever thread :grin:

The premise is similar to the thought experiment in which chimps, given enough time sat in front of typewriters, eventually produce the works of Shakespeare.

If AlphaZero repeatedly played a regular 30-kyu bot in normal 19x19 games, and there were no computer malfunctions on either side, would the 30-kyu bot ever win? If yes, how?


It depends on some assumptions, but I would say yes, given infinite time and a 30-kyu bot with a sufficient degree of randomness that it has some non-zero chance of getting extraordinarily lucky and playing every move perfectly by pure chance.

However, like the monkeys typing Shakespeare, there is no practical chance of this ever happening within the lifespan of the universe, even assuming trillions of immortal monkeys each tapping trillions of keys per second from the big bang to the heat death.

Yet, in practice, if we take away the assumption of no computer malfunctions, then I think the most likely way for AlphaZero to lose against a very weak bot would be simply due to a software bug or hardware malfunction. A team managing to write software without bugs would be a far more impressive (and perhaps practically unobtainable) achievement than a mere superhuman Go bot.


Fun thing is: human beginners often play worse than a random generator, because they don’t play randomly; they systematically try to reach the wrong goal with the wrong tools.

(If they have learned the rules properly, they will win in the endgame, but playing properly against a random opening may be hard even for experienced players.)


A very rough estimate: suppose that at each move, the weak bot has a 1% chance of making a move that is good enough, and that the game lasts 200 moves (100 for each side). Then after about 10^200 games (= (10^2)^100), the weak bot will win.

The estimate is very rough, since if we replace 1% by 0.5% in the above, then the estimate becomes 10^230 games.

The number is very big, but considerably smaller than the number of times the chimp has to type before writing a work of Shakespeare (this is not surprising since the number of moves in a go game is much less than the number of letters in a work of Shakespeare).
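As a sanity check on that back-of-the-envelope arithmetic, here is a small sketch (the 1% and 0.5% figures are just the assumptions above, not measured values):

```python
from math import log10

# If the weak bot must find a "good enough" move with probability p at
# each of its 100 moves, it plays a winning game with probability p**100,
# so roughly (1/p)**100 games are needed before one win is expected.
def log10_games_needed(p_good_move, moves_per_side=100):
    """Base-10 logarithm of the expected number of games until a win."""
    return moves_per_side * log10(1 / p_good_move)

print(log10_games_needed(0.01))   # 200.0  -> about 10^200 games
print(log10_games_needed(0.005))  # ~230.1 -> about 10^230 games
```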


My assumption is that a 30k bot isn’t a bot that plays randomly. Instead, it’s a bot that plays badly.

So: no, there’s no chance that it’ll win against a strong bot, even with infinite games.



So, a bit OT but how strong is a random bot? Better than 30k it seems. How much better?


If we tackle it as a question of probability theory, then, provided the 30-kyu bot is sufficiently non-deterministic that the chance it plays a perfect game is non-zero (which is the case, for example, if it chooses every move randomly and uniformly distributed), the answer is almost surely yes.
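A minimal sketch of why “almost surely” holds, assuming some fixed per-game win probability p > 0 (the value 1e-4 below is arbitrary, just for illustration):

```python
# For any per-game win probability p > 0, the probability of at least
# one win in n independent games is 1 - (1 - p)**n, which tends to 1
# as n grows -- this is the "almost surely" in the limit.
def prob_at_least_one_win(p, n):
    return 1 - (1 - p) ** n

for n in (10, 1_000, 100_000):
    print(n, prob_at_least_one_win(1e-4, n))
```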

If we tackle the question as a decision problem, then I fear that the answer cannot be computed. One would need to conduct the experiment, which may take an infinite amount of time.


On the contrary, I think that a random bot is much worse than 30k. My guess would be something around 60k. The first versions of LeelaZero were weaker than idiotbot on KGS.


Is that even a thing??? :rofl:

From the games I’ve seen, every human player wins against idiotbot. That bot plays almost randomly, except that it gets out of atari and knows how to capture a group in atari. I don’t know if it has other skills; I didn’t play enough against it (too boring).


I remember analysing, with Leela Zero, games of beginners against a 100% random opening, and the beginners were behind. Only in the midgame/endgame did the beginners strike back.
Because Go works like this: you can choose chaotically where to start creating groups, but you cannot complete a group by chaos alone.

A bot that plays only its first 100 moves randomly would be much stronger than 30k.


You need to find some challenge, like not killing any of its stones

Thanks for the responses :slight_smile:

I purposely didn’t want to say “infinity”, as that biases the question to an extent: even if the thing is impossible, infinity never ends, so if it never happens you’d never get a definitive yes or no lol.

Arguments for the 30-kyu bot beating AlphaZero

AlphaZero isn’t entirely perfect yet, so there would be billions of billions of final board positions that could beat it

If the 30-kyu bot plays so badly/randomly that its moves actually help it win later in the game, having stones in the perfect spots for the mid/endgame before reaching that point

Interestingly, AlphaZero, being self-learning, could end up destroying itself by taking in so many bad-move responses that it might prioritise them and become a 25 kyu itself, therefore becoming beatable

Counter-arguments I’ve come up with for why AlphaZero could be unbeaten in this matchup

One bot is playing to win while the other is purposely playing compromising moves every so often, so they don’t have the same goal (the 30-kyu bot plays such that it would lose to a 20-kyu player)

As they are bots, wouldn’t they have a limited number of pre-programmed moves to play in any given situation? So if it doesn’t win within, say, 10 million games, it may never win, as the games will just be repeats

AlphaZero would be getting infinitely better, while the 30-kyu bot would stay at the same standard, resigning after 70 moves each game

When talking about stuff like monkeys eventually typing out the works of Shakespeare or random play eventually beating a super-strong AI, it really only becomes feasible at timescales that far exceed the lifetime of the universe. As a thought experiment, if we imagine an infinite sequence of trials, where each trial has an incredibly small, but non-zero, probability of success, then success will eventually happen. In fact, success will happen infinitely often over the infinite sequence (even while still only occurring in a vanishing fraction of the cases).

If you want to talk about whether something could happen in a finite number of trials, then the key question is figuring out the likelihood (which would probably be very difficult to approximate without some very strong assumptions) or determining whether that likelihood is exactly zero.

Note that if you want to show that something is impossible (probability equal to zero) in one trial, then it is the same as showing that it is impossible in infinite independent and identical trials.

When reasoning about the infinite, of course one cannot exhaustively try everything; instead one needs to apply reasoning to determine whether something is possible with non-zero probability, or whether it could never happen. For example, I can say that multiplying randomly chosen odd numbers together will never produce an even number, even after infinitely many tries.
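A toy illustration of that last point (my own hypothetical code, just to make the zero-probability case concrete):

```python
import random

# The product of two odd numbers is always odd, so no number of random
# trials can ever produce an even result -- the probability is exactly zero.
random.seed(0)
for _ in range(100_000):
    a = 2 * random.randrange(1_000) + 1  # a random odd number
    b = 2 * random.randrange(1_000) + 1
    assert (a * b) % 2 == 1  # never even, no matter how often we try
print("no even product found in 100,000 trials")
```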

I think you are misunderstanding how AlphaZero works. AlphaZero was trained by having it play against itself; however, this training process aims to search for the best paths through the game tree. The training aims to explore the best moves, and the learning process could effectively discard bad examples. Also, AlphaZero does not need to continuously learn from all of the games that it plays, so while having it play against a weak bot, it could simply ignore those games.

It seems that you are implying certain assumptions about the 30-kyu bot. That is the most ambiguous part of the question: it crucially depends on what a 30-kyu bot means. A bot that plays completely random moves would have some non-zero chance of beating AlphaZero just by being lucky. A 30-kyu bot that essentially runs another copy of AlphaZero, but then picks deliberately weak moves, could possibly be argued to have no chance of winning at all.

Bots do not have to be deterministic. Pre-programming a move for every situation is not an effective way to build a bot. For any given position, the bot might pick its move with some degree of randomness. Even AlphaZero relies on randomness in its “reading ahead”, which affects its evaluations and move choices.

In practical terms, AlphaZero’s skill is limited by the size of its networks, and ultimately the finite capacity of the computing system that it runs on. Conceptually, there is the skill ceiling of perfect play, but it does not seem feasible that any practical computing system could achieve that.


Is it possible to build a bot that purposely mimics human 30k players? Would that bot be better or worse than a random bot?

And what constitutes a human 30k player, though?

I think a random bot would be much weaker than some other bot that could achieve a 30 kyu rating, and while random play could theoretically beat the 30 kyu, or even a much stronger player, by pure luck, it would almost never happen in practice.

As an analogy, imagine a game where one tries to pick the largest number, and three players that have different strategies:

  1. Player A rolls 100 six-sided dice and adds those up.
  2. Player B flips a coin that says 500 on one side and 550 on the other.
  3. Player C always just picks 599.

Player A is very likely to lose to player B, and would almost always lose to player C, but has a non-zero probability of actually beating either. However, Player B would never win against player C.
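A quick Monte Carlo sketch of this toy game (my own illustration, nothing rigorous) makes the point: the theoretical non-zero chance never shows up in any realistic number of trials:

```python
import random

random.seed(42)

def player_a():            # sum of 100 six-sided dice (mean 350, sd ~17)
    return sum(random.randint(1, 6) for _ in range(100))

def player_b():            # coin flip: 500 or 550
    return random.choice([500, 550])

PLAYER_C = 599             # always picks 599

trials = 100_000
a_beats_c = sum(player_a() > PLAYER_C for _ in range(trials))
b_beats_c = sum(player_b() > PLAYER_C for _ in range(trials))
print(a_beats_c, b_beats_c)  # both 0: A's win is possible but absurdly
                             # unlikely; B's win is outright impossible
```

Player A can only beat C by rolling all sixes (probability 6^-100), which is the Go-bot situation in miniature.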


By the way, if AlphaZero has a rating of, say, 3300 vs our weak bot around 500, I believe that the glicko system estimates the win probability at about 1 in 10^((3300-500)/400) = 10 million.
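A sketch of that arithmetic, using the standard logistic Elo expected-score curve (Glicko uses the same base-10 form; the ratings 3300 and 500 are just the guesses above):

```python
# Expected score of the weaker player under the Elo/Glicko logistic curve:
# P(weak wins) = 1 / (1 + 10 ** ((R_strong - R_weak) / 400)).
def win_probability(r_weak, r_strong):
    return 1 / (1 + 10 ** ((r_strong - r_weak) / 400))

p = win_probability(500, 3300)
print(f"about 1 in {round(1 / p):,}")  # about 1 in 10,000,001
```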


By the way, if the three players’ abilities instead behaved like intransitive dice, then one could not even put them in any sort of meaningfully ranked order.


If it is a human pro (in their normal competition condition) against a human 30k, I would say there is no chance at all.

Humans seem to be a lot more predictable once they learn some basics, and I doubt any human player, even a beginner, is going to play truly random moves.


Even human players would lose against a random bot that happens to play a perfect game, which has a non-zero chance and is thus almost certain to happen in an infinite sequence of games (which exists in theory only, as far as I know).

That being said, for any practical purpose the chance is negligible, and it’s “practically” impossible for a 30-kyu bot to win against a human pro.