The Korea Baduk Association recently released a new app called Legend of Baduk!
It’s designed to help beginners reach up to 15-kyu, and it’s available in both Korean and English.
The app is structured like a game: you clear stages to move up, learning Go as you go. It also offers training features and even lets you play against an AI designed for beginners.
I tried it for about an hour today, and I think it’s a great tool for teaching newcomers and kids. You can find it on both the Google Play Store and the App Store!
Unfortunately, due to initial budget constraints, the app was not built with localization in mind. It was developed on a limited budget and is still quite buggy, so I assume we (KBA) will not work on localization for now. However, we are very happy to see your interest in our app, and we will certainly make localization a top priority for future content if our budget allows.
I’ve tested the play-against-AI feature and noticed that during the game it runs locally (it keeps working with the internet connection off); only when I click counting does it “sync” to the server (I think just to store game states and items). I wonder if it would be possible to isolate the play-against-AI feature and make it an independent app (or even a paid feature that isn’t needed for all ranks). I feel that practicing whole games might be more helpful for beginners than gatekeeping them behind limited AI games per day (or purchases).
In short, unfortunately, we cannot achieve that at the moment.
Only the stages for 30~21 Kyu are run locally, and counting always requires an internet connection.
Since we do not have our own development team, nor much experience developing Go features, we do not have our own library to help with counting. The AI from 20 Kyu and the counting feature run on our server (on the computer right next to me).
Counting would certainly be an easy feature to build if users could accurately judge and select dead stones themselves, but we did not expect beginners (our main target) to manage that. It would also be inappropriate to let users mark dead stones, since that could be abused. So we allow users to request counting whenever they want (with a minimum move count restriction), but rely on the AI’s judgement for the count itself.
Limiting AI games per day was necessary, as running the game server burns our money.
From a service-continuity and operations perspective, I can see the reasoning for locking the feature behind ads with daily limits. But from a Go-learning and education perspective, especially with KBA involved and presumably non-profit, it would probably hurt students’ ability to apply what they learn and to finish games themselves for a sense of accomplishment; otherwise the app risks becoming a glorified tsumego UI. (As someone who teaches Go, I can see the shadow of a Go textbook here, with chapters as levels and highly repetitive quizzes/homework at the end of sessions; in a classroom we would usually have a more varied quiz pool than the teaching material pool.)
Although, if I were designing it myself, I would move counting for TPK games onto the device (which would also reduce bandwidth and server costs), and there are several easy ways to achieve that. One is to use Atari Go rules, not just first capture but comparing which side has more captures, so a single IF statement determines the result. Another is a Pure Go rule (whoever has more stones on the board wins), which two nested FOR loops can compute. A third is Tromp-Taylor rules, with students asked to fill the boundaries and capture dead stones manually before counting (rather than marking stones dead, which would end up much like Atari Go anyway, since TPK bots just keep throwing in at any open intersection).
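For illustration, here is a minimal sketch of those three local scoring ideas. It assumes a square board stored as a list of lists with "B", "W", or None at each point, and that captures per side were tracked during play; all names here are mine, not anything from the app.

```python
def pure_go_winner(board):
    """Pure Go rule: whoever has more stones on the board wins."""
    black = white = 0
    for row in board:            # the two nested FOR loops
        for point in row:
            if point == "B":
                black += 1
            elif point == "W":
                white += 1
    if black == white:
        return "draw"
    return "B" if black > white else "W"

def atari_go_winner(black_captures, white_captures):
    """Atari Go (capture-comparison) rule: more captures wins."""
    if black_captures == white_captures:
        return "draw"
    # the single IF comparison
    return "B" if black_captures > white_captures else "W"

def tromp_taylor_score(board):
    """Tromp-Taylor area count: every stone scores one point, and an
    empty region scores for a colour iff it borders only that colour."""
    n = len(board)
    score = {"B": 0, "W": 0}
    seen = set()
    for r in range(n):
        for c in range(n):
            point = board[r][c]
            if point in score:
                score[point] += 1
            elif (r, c) not in seen:
                # flood-fill this empty region, noting bordering colours
                region, borders, stack = 0, set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    region += 1
                    for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                        if 0 <= nr < n and 0 <= nc < n:
                            q = board[nr][nc]
                            if q is None and (nr, nc) not in seen:
                                seen.add((nr, nc))
                                stack.append((nr, nc))
                            elif q is not None:
                                borders.add(q)
                if len(borders) == 1:        # region touches one colour only
                    score[borders.pop()] += region
    return score

# Tiny examples
board = [
    ["B", "B", None],
    ["W", "B", None],
    [None, "W", None],
]
print(pure_go_winner(board))                              # "B" (3 stones vs 2)
print(atari_go_winner(2, 1))                              # "B"
print(tromp_taylor_score([["B", "B"], ["B", None]]))      # {'B': 4, 'W': 0}
```

The Tromp-Taylor version only gives a sensible result after boundaries are filled and dead stones are captured, which is exactly the manual step I’d ask students to do; the other two need no cleanup at all.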
Just my 2 cents: when the user fails a tsumego, I think it would be nice to wait a few seconds before restarting the position. That would help the user understand why the attempt failed.
I guess it would be an inexpensive feature to implement.
I know I am stronger than the intended target, but I ran into a bug where the puffer fish simply refuses to play and the timer doesn’t move at all; the game is just stuck. Restarting the app doesn’t help either: it resumes and gets stuck in the same place.
Thank you for the report. We will investigate it.
It would be a great help if you could clarify the problem further: when you are stuck, do counting or the other buttons, like Settings (top right) and going back (top left), still work?
This is indeed an inexpensive feature, as we would just have to adjust the delay, but I cannot really guarantee it will be implemented. Our team has probably discussed this matter before; one or two seconds can make users feel very differently about the app. Still, we will gather all the feedback and discuss what would be appropriate.
Just out of curiosity, may I ask what kind of Go AIs are running on your servers? I doubt it is anything as simple as GNU Go, which is actually stronger than 15k, and from other people’s screenshots it doesn’t seem to invade mindlessly, as many of the earlier weak bots did. Is it the KataGo model supervised-trained on human games (which would explain starting at 20k on the server, since that is its lowest setting)? And what about the locally run Go AIs below 21k?
The Kata-human model is not well calibrated at its lowest rank settings; it performs very unevenly across different stages and skills (a very poor opening, but fairly good fighting, and strange endgames), and probably only starts to match its label around 5k to 10k. So I suppose the 15k setting is actually tuned closer to 13k to make its performance reasonable?
I did notice with the lowest-ranked bots (I didn’t have the time to climb the ladder) that it’s almost like playing randomly after a certain stage. The “first generation” of Go-playing programs was very similar, built on very simple rule-based programming (detecting atari and string length, rudimentary eye detection, and so on), while the second generation started to include local pattern matching and opening databases so they could play on larger boards (they often needed handicap stones, and were around 20k+ strength in general). Later came other methods, like hashing board-position patterns for tree search, early Monte Carlo, and neural networks, all around the 1980s, and we started to get programs like GNU Go that could reach SDK-level play. There was quite a wide range of programs back then, though most are probably very hard to convert and run on modern systems, or even to find source code for.