ELF OpenGo weirdness

If you want to show the suggestions of both networks at the same time on one goban, that would be possible. I don’t know of any toolchain to do it, though.

If you want to combine the two into a stronger AI, there are several problems. You can use both to explore the game tree by expanding the best moves of both AIs. But when it comes to deciding which of the explored moves is most promising, you have to come up with a decision mechanism based on two estimated winrates (one per AI), without knowing which one is more accurate.

  1. You could go for the best move of the stronger net. This would probably result in a weaker AI, because you would spend some playouts on branches with a lower win probability (following suggestions of the weaker AI which turned out not to be promising in the eyes of the stronger one).

  2. You combine both winrates into a single winrate.

    1. One possibility would be to always choose the higher winrate, hoping the lower value was wrong.
    2. You use the lower winrate, just to be safe. Here you follow the more conservative estimate. You wouldn’t get the best move, only a good move that both AIs agree is not too bad.
    3. You walk some middle ground by averaging both winrates with whatever function you think works best, but this wouldn’t lead to better moves either.

    The problem with combining two AIs is that they aren’t good at reasoning. They tell you this move is good and that move is bad, but they cannot tell you why they think so. Still less do they have a way to convince the other AI why they think a position is good or bad.

    Even humans could have a hard time finding common ground (“This move is best because it increases our influence/territory”). Just imagine combining two human Go players. Like the two AIs, they would have to share the same brain (the AIs share the same CPU, RAM, GPU, …). Neither of the combined humans could play at their full potential, because some thinking time goes to the other.
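The three combination rules above can be sketched in a few lines. This is a minimal illustration, not real engine code: the function name and the example candidate moves and winrates are made up, and winrates are assumed to be plain floats in [0, 1] already extracted from each engine.

```python
def combine_winrates(wr_a: float, wr_b: float, mode: str = "min") -> float:
    """Combine two engines' winrate estimates for the same move."""
    if mode == "max":    # option 2.1: trust the higher estimate
        return max(wr_a, wr_b)
    if mode == "min":    # option 2.2: play it safe, take the lower one
        return min(wr_a, wr_b)
    if mode == "mean":   # option 2.3: middle ground
        return (wr_a + wr_b) / 2
    raise ValueError(f"unknown mode: {mode}")

# Pick the move with the best combined winrate among explored candidates.
# Hypothetical numbers: (winrate from AI a, winrate from AI b) per move.
candidates = {"Q16": (0.55, 0.48), "D4": (0.52, 0.53)}
best = max(candidates, key=lambda m: combine_winrates(*candidates[m], "min"))
# With the conservative "min" rule, D4 wins here: both AIs agree it is
# at least 0.52, while the weaker estimate for Q16 is only 0.48.
```

Note how the "min" rule picks the move neither AI dislikes, which is exactly the conservative behaviour described in option 2.2.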


Golois bot, apparently using ELF v2 weights, played diagonal approach in a Fuzhao Go AI tournament game today:



I’m not sure ELF v2 weights are actually stronger than LZ’s own latest networks.

I played out the second position to the end with KataGo on ZBaduk. At the given position, it sees Black as having 35 % (vs just 29 % for Leela Zero).

I always played KataGo’s favourite move once it had at least 12,000 playouts and was preferred over its second-favourite move by at least 0.3 pp in both decision share and winrate (which often meant far more playouts, up to 1.7 million in one case).
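That stopping rule can be written out as a small predicate. This is a sketch under assumptions, not a KataGo API: the names `should_commit` and `visits_pct` are hypothetical, and "decision" is assumed to mean the engine's visit share for a move, with both margins measured in percentage points.

```python
def should_commit(top, second, min_playouts=12_000, margin_pp=0.3):
    """Commit to the top move only once it is clearly ahead of move #2.

    top/second: dicts with 'playouts', 'visits_pct' and 'winrate'
    (the last two in percent, so differences are percentage points).
    """
    return (top["playouts"] >= min_playouts
            and top["visits_pct"] - second["visits_pct"] >= margin_pp
            and top["winrate"] - second["winrate"] >= margin_pp)

# Hypothetical analysis snapshot: the top move is well ahead on all counts.
a = {"playouts": 40_000, "visits_pct": 62.0, "winrate": 35.4}
b = {"playouts": 15_000, "visits_pct": 30.0, "winrate": 34.9}
```

With these numbers `should_commit(a, b)` is true; if the two moves were within 0.3 pp of each other on either metric, you would keep analysing instead of playing the move.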

ZBaduk screenshot: https://i.ibb.co/Xtmyt2z/Screen-Shot-2020-01-14-at-10-10-12-AM.png

Do current ELF OpenGo v2 weights still produce similarly weird fuseki?


Anton Christenson, EGF 3d, did a 65-minute YouTube review of this game!

I’ve added all of his (commented) variations, plus more KataGo variations, to the OGS review as well: