When talking about scenarios like monkeys eventually typing out the works of Shakespeare, or random play eventually beating a super-strong AI, these only become feasible at timescales that far exceed the lifetime of the universe. As a thought experiment, if we imagine an infinite sequence of trials, where each trial has an incredibly small but non-zero probability of success, then success will eventually happen. In fact, success will happen infinitely often over the infinite sequence (even while still only occurring in a vanishing fraction of the trials).
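To make the thought experiment concrete, here is a small sketch (the success probability `p` is an arbitrary illustrative value, not tied to any of the examples above): with any fixed non-zero `p`, the wait until the first success is always finite, with expected length 1/p.

```python
import random

random.seed(0)  # fixed seed so the demo is reproducible
p = 1e-4        # arbitrary tiny, but non-zero, success probability


def trials_until_success(p):
    """Count independent Bernoulli(p) trials until the first success."""
    n = 0
    while True:
        n += 1
        if random.random() < p:
            return n


# Each wait is finite; the expected wait is 1/p = 10,000 trials.
waits = [trials_until_success(p) for _ in range(5)]
print(waits)
```

Running the same experiment again after a success would produce another finite wait, which is the "success happens infinitely often" point in miniature.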

If you want to talk about whether something could happen in a finite number of trials, then the key question is figuring out the likelihood (which would probably be very difficult to approximate without some very strong assumptions) or determining whether that likelihood is exactly zero.
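The finite-trials case reduces to a one-line formula: the chance of at least one success in n independent trials, each with success probability p, is 1 - (1 - p)^n. A quick sketch (the specific p and n values are arbitrary illustrations):

```python
def prob_at_least_one_success(p, n):
    """P(at least one success in n independent trials), each with success prob p."""
    return 1.0 - (1.0 - p) ** n


# With p > 0 this approaches 1 as n grows; with p == 0 it stays 0 forever.
print(prob_at_least_one_success(1e-6, 10**6))  # ≈ 0.632, roughly 1 - 1/e
print(prob_at_least_one_success(0.0, 10**9))   # 0.0
```

This is why the "exactly zero" question matters so much: any non-zero p, however tiny, eventually drives the probability toward 1, while p = 0 keeps it at 0 no matter how many trials you run.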

Note that showing something is impossible (probability equal to zero) in one trial is the same as showing that it is impossible over infinitely many independent, identical trials.

When reasoning about the infinite, one of course cannot exhaustively try everything; instead, one must apply reasoning to determine whether the event has non-zero probability or can never happen. For example, someone multiplying randomly chosen odd numbers together will never get an even number, even after infinitely many attempts.
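The odd-numbers example can be sketched directly: the parity argument (2m+1)(2n+1) = 2(2mn+m+n) + 1 shows the product of two odd numbers is always odd, and a randomized spot-check agrees.

```python
import random

random.seed(1)  # fixed seed for reproducibility
for _ in range(100_000):
    a = 2 * random.randrange(1, 10**6) + 1  # random odd number
    b = 2 * random.randrange(1, 10**6) + 1  # another random odd number
    # (2m+1)(2n+1) = 2(2mn + m + n) + 1, so the product is always odd
    assert (a * b) % 2 == 1
print("no even product found")
```

The simulation is only illustrative, of course; it is the algebraic argument, not the finite sample, that establishes the probability is exactly zero.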

I think you are misunderstanding how AlphaZero works. AlphaZero was trained by having it play against itself, but that training process aims to search for the best paths through the game tree: it explores promising moves, and the learning process can effectively discard bad examples. Also, AlphaZero does not need to keep learning from every game it plays, so if it were set against a weak bot, it could simply ignore those games.

The most ambiguous part of the question is the implied assumptions about the 30 kyu bot; the answer crucially depends on what a 30 kyu bot means. A bot that plays completely random moves would have some non-zero chance of beating AlphaZero just by being lucky. A 30 kyu bot that essentially runs another copy of AlphaZero but then deliberately picks weak moves could arguably have no chance of winning at all.

Bots do not have to be deterministic. Preprogramming a fixed move for every situation is not a practical way to build a bot; in any given position, a bot might pick its move with some degree of randomness. Even AlphaZero relies on randomness in its “reading ahead”, which affects its evaluations and move choices.
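As a hypothetical sketch of such a non-deterministic bot (the move names and evaluation scores below are invented for illustration, not from any real engine): instead of always playing its top-rated move, it samples a move with probability weighted by score, so weaker moves keep a non-zero chance of being played.

```python
import math
import random


def pick_move(scored_moves, temperature=1.0, rng=random):
    """Sample a move with probability proportional to exp(score / temperature).

    scored_moves: list of (move, score) pairs for the current position.
    Higher temperature flattens the distribution toward uniform randomness.
    """
    weights = [math.exp(score / temperature) for _, score in scored_moves]
    return rng.choices([m for m, _ in scored_moves], weights=weights, k=1)[0]


# Illustrative position: two strong candidate moves and one weak one.
moves = [("D4", 0.9), ("Q16", 0.8), ("K10", 0.1)]
random.seed(0)
print(pick_move(moves))  # usually a strong move, but "K10" is still possible
```

The same sampling idea is how a bot can be made deliberately weaker (raise the temperature) or stronger (lower it) without changing its underlying evaluations.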

In practical terms, AlphaZero’s skill is limited by the size of its networks and, ultimately, by the finite capacity of the computing system it runs on. Conceptually, there is a skill ceiling of perfect play, but it does not seem feasible that any practical computing system could achieve it.