Dark Go Bug?

According to the DarkGo website, it is a plain neural network inspired by the (first) AlphaGo paper.

  • There is no search, so the bot gets only one shot at evaluating the moves on the current board. It does not look ahead to see what the outcome of a move might be.
  • It is not explicitly stated, but the network was most likely trained on professional games, possibly with some self-play on top.
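To make the first point concrete, here is a minimal sketch of search-free, policy-only move selection: score every legal move once and play the top one, with no simulation of the opponent's reply. The `fake_policy` function is an invented stand-in, not DarkGo's actual network.

```python
import random

BOARD_SIZE = 9

def fake_policy(board, legal_moves):
    """Stand-in for a neural policy: one score per legal move.
    A real network would compute these from the board position."""
    rng = random.Random(42)  # fixed seed so the example is deterministic
    return {move: rng.random() for move in legal_moves}

def select_move(board, legal_moves, policy=fake_policy):
    """One-shot selection: no lookahead, no tree search.
    The bot never examines what happens after the move is played."""
    scores = policy(board, legal_moves)
    return max(scores, key=scores.get)

legal = [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)]
print(select_move(board=None, legal_moves=legal))
```

Everything the bot "knows" about a ladder has to already be baked into those per-move scores.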

During training, it probably observed players running a laddered stone out when the ladder was broken, and abandoning it when the ladder worked. To imitate that, it has to learn the connection between a ladder breaker existing somewhere on the board and saving the stone being a viable move, without any good way to grasp the concept of a ladder as a forcing sequence.

Even with the tree search implemented in most Go programs, long ladders and capture races are something of a blind spot, because the deciding move, the one that reveals the value of the first stone in the sequence, is so far down the line that it can fall beyond the search horizon.
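A toy calculation shows why. A ladder advances one intersection per exchange, and each exchange is two plies (an escape attempt and an atari), so a ladder running across the board is dozens of plies of forced moves before its outcome is visible. The geometry below is simplified to a diagonal chase toward the far corner; a real reader would track stones and liberties.

```python
def ladder_plies(start_row, start_col, size=19):
    """Plies until the chased stone reaches the edge (where it is
    captured), assuming the ladder runs toward the far corner and each
    diagonal step costs two moves: one escape attempt, one atari."""
    steps = min(size - 1 - start_row, size - 1 - start_col)
    return 2 * steps

# A ladder started near one corner of a 19x19 board:
print(ladder_plies(2, 2))  # 32 plies before the capture is on the board
```

A search that only reaches, say, 20 plies in that region never sees the capture, so the value of the very first ladder move is invisible to it.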

I believe the authors of search-based engines usually put measures in place to mitigate that specific problem, such as giving priority to local responses in the search tree and quickly reading forcing sequences out to a great depth. DarkGo does not have any crutches like that.
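As a hedged sketch of one such mitigation, not taken from any particular engine: bias move ordering toward replies near the last move, so that ladder-like exchanges get searched first and therefore deepest. The scoring scheme and `local_bonus` value here are invented for illustration.

```python
def order_moves(candidates, last_move, local_bonus=10.0):
    """Sort (move, prior) candidates, boosting moves adjacent to the
    last move (Chebyshev distance <= 1), so forcing local exchanges
    such as ladders are explored before distant alternatives."""
    def score(item):
        (r, c), prior = item
        lr, lc = last_move
        is_local = max(abs(r - lr), abs(c - lc)) <= 1
        return prior + (local_bonus if is_local else 0.0)
    return sorted(candidates, key=score, reverse=True)

cands = [((3, 3), 1.0), ((10, 10), 5.0), ((3, 4), 0.5)]
ordered = order_moves(cands, last_move=(3, 3))
print([m for m, _ in ordered])  # [(3, 3), (3, 4), (10, 10)]
```

The low-prior move at (3, 4) jumps ahead of the high-prior distant one, which is exactly the behavior needed to read a ladder out before the horizon cuts it off.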

So what you are seeing is not a bug, but an inherent design limitation that cannot really be fixed within this architecture.
