If you want to show the suggestions of both networks at the same time on one Goban, that would be possible, though I don’t know of any toolchain that does it out of the box.
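If you wanted to roll it yourself, the core is simple once you can read each engine's candidate moves: merge the two result sets and label every point with both win rates. The sketch below is purely illustrative; the `{move: winrate}` dicts stand in for whatever analysis output your two engines actually produce, and hooking them up to real engine output and to an actual Goban renderer is exactly the part no toolchain seems to cover.

```python
# Minimal sketch: merge the move suggestions of two engines so one Goban
# can display both. Inputs are hypothetical {move: winrate} dicts standing
# in for whatever analysis output the engines actually produce.

def merged_overlay(a: dict[str, float], b: dict[str, float]):
    """Pair both engines' win rates per move; None where one engine is silent."""
    return {move: (a.get(move), b.get(move)) for move in a.keys() | b.keys()}

# Made-up numbers: the engines mostly agree, but each also suggests one
# move the other does not consider at all.
engine_a = {"Q16": 0.54, "R14": 0.51, "C3": 0.49}
engine_b = {"Q16": 0.57, "R14": 0.50, "D4": 0.52}

for move, (wr_a, wr_b) in sorted(merged_overlay(engine_a, engine_b).items()):
    print(f"{move}: A={wr_a}  B={wr_b}")
```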
If you want to combine the two into a stronger AI, there are several problems. You can use both to explore the game tree by expanding the best moves of both AIs. But when it comes to deciding which of the explored moves is the most promising, you have to come up with a decision mechanism based on two estimated win rates (one per AI), without knowing which of the two is more accurate. Some options (see the code sketch after this list):
- You could go with the best move of the stronger net. This would probably result in a weaker AI, because you spend some playouts on branches with lower win probability (following suggestions of the weaker AI that turn out not to be promising in the eyes of the stronger one).
- You combine both win rates into a single one:
  - One possibility would be to always pick the higher win rate, hoping that the lower value was wrong.
  - You use the lower win rate, just to be safe. Here you follow the more conservative estimate. You wouldn’t get the best move, only a good move that both AIs agree is not too bad.
  - You walk some middle ground by averaging both win rates with whatever function you think works best, but this wouldn’t lead to better moves either.
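To make these options concrete, here is a minimal sketch of the three combination rules. The function names, the toy candidate list, and the numbers are my own illustration, not part of any existing engine; a real implementation would apply such a rule inside the tree search rather than to a flat list of root moves.

```python
# Three hypothetical rules for reducing two win rate estimates (one per
# network, each in [0, 1]) to a single number for move selection.

def optimistic(wr_a: float, wr_b: float) -> float:
    """Always take the higher win rate, hoping the lower value was wrong."""
    return max(wr_a, wr_b)

def conservative(wr_a: float, wr_b: float) -> float:
    """Take the lower win rate: only moves both nets like score well."""
    return min(wr_a, wr_b)

def averaged(wr_a: float, wr_b: float, weight_a: float = 0.5) -> float:
    """Middle ground: a weighted mean; weight_a could favour the stronger net."""
    return weight_a * wr_a + (1.0 - weight_a) * wr_b

# Toy example: pick the move whose combined win rate is best under the
# conservative rule. Each candidate maps to (net A's estimate, net B's).
candidates = {"Q16": (0.54, 0.57), "R14": (0.51, 0.50), "C3": (0.49, 0.52)}
best = max(candidates, key=lambda move: conservative(*candidates[move]))
print("conservative choice:", best)  # Q16: both nets give it at least 0.54
```

Note that none of these rules adds information; they only trade optimism against safety, which is why none of them should be expected to produce genuinely better moves.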
The problem with combining two AIs is that they aren’t good at reasoning. They tell you this move is good and that move is bad, but they cannot tell you why they think so. Even less do they have a way to convince the other AI of why they think a position is good or bad.
Even humans could have a hard time finding common ground (“this move is the best because it increases our influence/territory”). Just imagine combining two human Go players. Like the two AIs, they would have to share the same brain (the AIs share the same CPU, RAM, GPU, …). Neither of the combined humans could play at their full potential, because some thinking time goes to the other.