I frequently see talk of how, prior to AlphaGo, Go AIs were notoriously easy for any experienced player to beat. While that is likely true, for almost any beginner, even the lowest-level Go AI is very difficult.
I think the next frontier of AI game-related development shouldn’t be pursuing further dominance. Rather, I hope it’ll focus on the subtler task of providing a realistic, human-like challenge across the skill spectrum.
Elo ratings are not comparable between different setups and parameter settings, let alone between games. I believe on OGS the 50th percentile is somewhere around 10k.
In any case, I’ve developed my share of weaker AIs to cater to players from high DDK to low dan, and it is very difficult to balance making ‘sensible’ moves with being weaker.
The current 18k AI in KaTrain works something like:
Choose 8 random points on the board.
Pick the one KataGo likes best according to its policy network, without any reading.
Play that move.
If there is something super urgent / the policy is highly concentrated in one move, play that instead.
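The steps above can be sketched roughly like this. Note this is a minimal sketch, not KaTrain’s actual code: `policy` is a hypothetical dict mapping board points to policy probabilities, and the override threshold is an illustrative value.

```python
import random

def weak_bot_move(policy, n_candidates=8, override_threshold=0.8):
    """Pick a weakened move from a policy distribution.

    policy: dict mapping moves (e.g. "D4") to policy probabilities.
    """
    # If the policy is highly concentrated in one move (something
    # super urgent), play the top move instead of a weakened one.
    best_move = max(policy, key=policy.get)
    if policy[best_move] >= override_threshold:
        return best_move

    # Otherwise: choose some random candidate points on the board...
    candidates = random.sample(list(policy), min(n_candidates, len(policy)))
    # ...and play the one the policy network likes best, with no reading.
    return max(candidates, key=policy.get)

# Toy example: a 'policy' over a handful of points.
policy = {"D4": 0.30, "Q16": 0.25, "C3": 0.20, "K10": 0.15, "A1": 0.10}
print(weak_bot_move(policy))
```

The weakening comes entirely from the random pre-filter: with only 8 candidates on a 361-point board, the globally best move is usually not even available, yet every move the bot does play still looks locally ‘sensible’ to the policy network.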
Sorry for not answering your question, but I think you should know that, as @sanderl said, the Elo system is relative: some chess sites have “harsher” ratings than others, and the difference between chess and Go in this regard cannot even be compared. Furthermore, OGS does not even use an Elo system, but a rating system called Glicko. So you cannot compare these numbers in any sense.
I am ~13kyu now, but I was more in the 15-20kyu range when I started learning by trying to play against the Leela AI with the heatmap move-suggestions turned on. What I found is that the AI wanted to play at a much higher level than I could relate to (it defaults to ~4 dan out of the box with no training), and that made it difficult for me to pull out concrete strategy tips that I could bring to my unassisted games.
The big change for me was learning to pit two AIs of different strengths against each other. So, for my Opponent AI I like to use COSUMI because it plays around 9-11kyu https://www.cosumi.net/en/
I load up a game, and then I use some flavor of Leela as the “training wheels” to show me a better way to respond (either the desktop version I linked above, or this browser version, which only shows “best move” and not a heat map of all possible good moves): http://leela-one-playout.herokuapp.com/
That way, Leela is helping me respond to moves that I might encounter in a game that is closer to my skill level, and shows me ways to handle situations that I can learn from and take into unassisted games.
I’m really curious about KaTrain and will have to take the time/effort to install that eventually. It looks like a much more information-rich tool for the type of learning I’m trying to do.
@tonybe If you’re running Windows, it’s simply a matter of downloading the .exe here and running it.
I think you’d enjoy the teaching mode, it’s meant to give you that extra edge against opponents 4-6 stones stronger than you.
Well, actually, Glicko-1 and Glicko-2 were designed with the Elo scale in mind, so that their ratings would be directly comparable and familiar to Elo users. In fact, the model behind Glicko is said to be a generalization of Elo, with the main difference that it also tracks a confidence interval (the rating deviation) for each player’s rating.
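To make the “generalization” concrete: in Elo, the expected score depends only on the rating difference, and every game moves the rating by a fixed K-factor; Glicko keeps the same scale but scales the update by each player’s rating deviation. A rough sketch of the Elo side (the 400 and K=32 constants are the conventional chess defaults, not anything OGS uses):

```python
def elo_expected(r_a, r_b):
    # Expected score (win probability) of player A against player B
    # under the Elo model: a 400-point gap ~ 10:1 odds.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    # score_a: 1 for a win, 0.5 for a draw, 0 for a loss.
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# Equal ratings: 50% expected score, so a win gains exactly k/2 points.
print(elo_expected(1500, 1500))   # 0.5
print(elo_update(1500, 1500, 1))  # 1516.0
```

In Glicko, the fixed `k` is effectively replaced by a per-player quantity derived from the rating deviation: an uncertain (high-RD) rating moves a lot per game, a well-established one barely moves, which is exactly the confidence-interval tracking mentioned above.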