AI and bigger boards

I am surprised that no professional player has challenged AlphaGo on a bigger board. It’s a feature of the game of go that goes unused, and according to some assumptions, around 22x22 would already be enough to give the human player a chance.

There could be some good money in it for the player to win, at least. It would be refreshing for us too to see that humans can still get a win.

1 Like

Which assumptions are those?

I don’t expect that a relatively small increase in board size will affect AI like KataGo much. Even with lower playouts, AI are very strong. AI don’t suffer as much from combinatorial explosion as classic alpha-beta engines would.
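As a rough illustration of why (a toy sketch, not a benchmark; the depth, visit budget, and branching numbers are made-up assumptions, and real alpha-beta would prune much of the tree, though growth stays exponential in depth):

```python
# Toy comparison: a fixed-depth full-width search explodes with board size,
# while a policy-guided MCTS engine just spends a fixed visit budget per move.
# All numbers here are illustrative assumptions, not measurements.

DEPTH = 6           # hypothetical full-width lookahead depth
MCTS_VISITS = 1600  # hypothetical fixed visit budget per move

for size in (19, 22, 29):
    branching = size * size          # crude upper bound on legal moves
    full_width = branching ** DEPTH  # nodes a naive full-width search would face
    print(f"{size}x{size}: ~{full_width:.1e} full-width nodes vs {MCTS_VISITS} visits")
```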

3 Likes

I’ve met that assumption a few times already from Chinese players on websites, but yes, maybe the first step is to determine at which size things would change.
I’ll try to find one of those posts, by the way.

It’s hard, though. I had a post in WeChat referring to an article on 163.com, but the link just leads to today’s news. So I took screenshots of that article; I can’t do better than that.

1 Like

There’s a public KataGo available at 29x29, and an older version of KaTrain included a 52x52 one. It’s not that hard to recompile with a larger size, but since the net hasn’t been trained on it, things get pretty weird. Games also get awfully long.

3 Likes

Increasing the board size will not allow humans to become better than computers again. KataGo’s net right now is probably superhuman up through board sizes in the low 20s despite zero training on those board sizes, and probably holds up into the 30s before the net starts to diverge majorly from reality.

But actually training on those board sizes or larger ones would quickly anchor the neural net to reality and improve its accuracy rapidly. All the low-level features in the net are already there; the net mostly needs to learn to anchor in the right proportions to match the data, and then, through training, develop a bit more capacity for handling even larger groups and dragons.

For example, 39x39 training would at worst probably take a factor of at most 64-256ish longer to generate data of comparable quality (4x longer games, 4x more expensive to evaluate the net, and 4-16x more visits needed to compensate for the larger board area). But current bot runs are easily 10x past where they first could challenge top pros, if not already 64x, so at least 64x is not a difficult goal. And in reality, it’s going to be a lot less than that, since you can rely on knowledge transfer from smaller boards - not unlike how humans would rely on that, because humans wouldn’t be specifically experienced at the larger size either.
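To make the arithmetic explicit (just a back-of-the-envelope restatement of the factors above, using the post’s own rough numbers):

```python
# Back-of-the-envelope cost of generating 39x39 self-play data of quality
# comparable to 19x19. All factors are the rough assumptions from the post.

area_ratio = (39 * 39) / (19 * 19)  # ~4.2x more board points

game_length_factor = 4           # games run ~4x more moves
net_eval_factor = 4              # each net evaluation costs ~4x more
visits_low, visits_high = 4, 16  # 4-16x more visits to cover the larger area

low = game_length_factor * net_eval_factor * visits_low    # 64
high = game_length_factor * net_eval_factor * visits_high  # 256

print(f"area ratio: {area_ratio:.1f}x")
print(f"data-generation cost: {low}x to {high}x that of 19x19")
```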

3 Likes

It’s worth noting - I’m pretty sure merely a week of training while gradually escalating the board size (25, 30, 35, 40, 45…) would immediately make the net no longer weird at 52x52.
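Something like this, schematically (a hypothetical sketch of that schedule; the stage durations and the run_selfplay_stage placeholder are invented for illustration, not KataGo’s actual pipeline):

```python
# Hypothetical escalating board-size curriculum, roughly a week in total.
# The sizes follow the schedule described above; the durations are made up.

CURRICULUM = [25, 30, 35, 40, 45, 52]  # board sizes, smallest first
HOURS_PER_STAGE = 28                   # 6 stages * 28h = ~1 week

def run_selfplay_stage(board_size: int, hours: float) -> None:
    """Placeholder standing in for generating self-play games at one
    board size and training the net on them."""
    print(f"~{hours}h of self-play training at {board_size}x{board_size}")

for size in CURRICULUM:
    run_selfplay_stage(size, HOURS_PER_STAGE)
```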

Internally, it should still be computing almost all the same superhuman-quality move judgments and features; they just get washed out because the small number of things that control “how does this scale with board size” wiggle around in uncontrolled ways at that board size and take on random extreme values. The net has zero incentive to have them curve in controlled ways because it is never asked in training to do so. Even small amounts of training data should quickly “set those knobs” to reasonable values and let the rest of the extremely strong shape knowledge and judgment shine through.

Getting used to parsing larger dragons though would take a bit longer, of course.

2 Likes

Thanks for that info… I find it interesting.

1 Like