Pretty self explanatory question. Can a strong amateur beat it? Like 6d?
From the OGS support page:
I’ve heard it said that amateur 7 dan is about beginning professional level. So an amateur 6 dan could beat the AI… sometimes… but it would be a tough game.
The AI uses 400 playouts per move. I don’t know if it is superhuman, but I guess it is at least strong pro level. My version of KataGo, which is weaker (a 15-block network instead of the 20-block one on the OGS site), can detect many mistakes by “weak” pros.
400 playouts means that it doesn’t read very much, so it plays mostly by intuition. So it can make mistakes in complicated situations, like positions with many unsettled groups and a big ko fight along the side.
Maybe it’s similar to a top pro playing under 10 seconds byoyomi the whole game. Very strong, but under that kind of time pressure, sometimes they blunder.
Much less often than any kyu player under no time pressure, though.
Yes, I think you need to be a high dan amateur to have some chance of winning at 400 playouts. Maybe a bit weaker if you specialize in beating KataGo at low playouts (finding the type of opening it might be weaker at).
Complicated positions which confuse the bot at 400 playouts are not so frequent, so I believe that the bot has a high winrate against a weak pro. However I can’t test that hypothesis since my computer is not powerful enough. One way to determine the strength of the bot would be to look at a pro game like
and analyze it with a strong bot (say 40 blocks and 2000 playouts), determine how many points the humans lose on average compared to the strong bot, and how many points the weaker bot (20 blocks and 400 playouts) loses on average compared to the strong bot.
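The comparison described above could be sketched roughly as follows. All numbers here are invented purely for illustration; real point-loss figures would come from the strong reference bot's score estimates before and after each move.

```python
# Toy sketch of the proposed strength test: average per-move point loss
# relative to a strong reference bot. The loss lists are made-up numbers.

def mean_point_loss(losses_per_move):
    """Average points lost per move compared to the reference bot's choice."""
    return sum(losses_per_move) / len(losses_per_move)

human_losses    = [0.0, 1.5, 0.2, 3.0, 0.0]  # invented losses for the pro
weak_bot_losses = [0.0, 0.3, 0.0, 1.1, 0.2]  # invented losses for the 400-playout bot

print(mean_point_loss(human_losses))     # 0.94
print(mean_point_loss(weak_bot_losses))  # 0.32
```

Whichever player has the lower average loss against the reference bot would be judged stronger.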
kata-one-playout (so basically no reading at all) has a mid to high dan rating on OGS. I expect that one to be significantly weaker than KataGo with 400 playouts.
What exactly is a playout? It plays the game out till the end? Because that’s a lot of reading.
A playout is sort of the smallest unit of reading for AI. In earlier versions it was a single random-ish game branch generated until the end of the game (so not a whole mini-max-ed tree or something like that) and used as a single data point for evaluating the current position.
If for instance the AI just reads two sequences A-B-C and D-E-F-G-H, that’s 8 playouts.
You can perhaps compare it to the term “nodes” used as a fundamental calculation unit of chess engines. The derived “nodes per second” is used to express chess engine speeds.
However, a go AI typically does far more computation per playout than a chess engine does per node. That makes a single playout much slower than a single node calculation, but the result is much more accurate, especially for modern AI.
So you typically see millions to billions of nodes calculated per move by a chess engine, but go AI typically use only thousands to millions of playouts per move.
The neural network is artificial intuition. It instantly feels that move A is maybe the best move and move B maybe slightly worse, but it isn’t sure. The AI takes the candidates suggested by the neural network, runs simulations after move A and after move B, and then chooses the genuinely better move based on the simulation results.
With only 1 playout, it just chooses whatever the neural network likes.
You can choose between the results of 2 simulations, but with a single simulation there is nothing to compare it against, so it doesn’t matter how long that simulation is.
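That idea can be sketched in a few lines. This is not KataGo's actual search (real MCTS allocates playouts adaptively and mixes in value-network estimates); it is a toy illustration with a hypothetical policy prior and a fake simulator, showing that 1 playout reduces to pure network intuition while more playouts let simulation results override it.

```python
import random

# Toy sketch (not KataGo's real algorithm): the policy net gives each
# candidate move a prior; simulations then estimate a win rate per move.
# With only 1 playout there is nothing to compare, so the choice falls
# back to the raw policy prior.

def choose_move(policy_priors, simulate, num_playouts):
    if num_playouts <= 1:
        # No comparison possible: trust the network's intuition directly.
        return max(policy_priors, key=policy_priors.get)
    # Split playouts evenly across candidates and compare average results.
    per_move = max(1, num_playouts // len(policy_priors))
    winrates = {
        move: sum(simulate(move) for _ in range(per_move)) / per_move
        for move in policy_priors
    }
    return max(winrates, key=winrates.get)

# Hypothetical priors; a fake simulator where move "B" actually wins more.
priors = {"A": 0.6, "B": 0.4}
rng = random.Random(0)
fake_sim = lambda move: rng.random() < (0.7 if move == "B" else 0.3)

print(choose_move(priors, fake_sim, 1))    # "A": pure intuition
print(choose_move(priors, fake_sim, 400))  # almost certainly "B" after simulating
```

With enough playouts, the simulations reveal that the intuitively second-best move is actually stronger, which is exactly what extra playouts buy.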