Leela Zero progress thread

Yeah, a playout is very close to a simulation. In a playout, LZ plays a sequence of moves from the root position and evaluates the result of that sequence.
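For intuition, here's a minimal sketch of the playout idea on a toy game (not Go; the game, function names, and numbers here are all made up): run many random playouts from a position and average the results to estimate a winrate.

```python
import random

def random_playout(total, target, rng):
    """One random playout of a toy game: from `total`, add random
    steps of 1 or 2 until reaching `target`; it's a 'win' (1.0) only
    if we land exactly on the target. A stand-in for playing random
    Go moves to the end and scoring the final position."""
    while total < target:
        total += rng.randint(1, 2)
    return 1.0 if total == target else 0.0

def estimate_winrate(start, target, n_playouts, seed=0):
    """Average many random playouts to estimate the win probability
    from a given position, the classic Monte Carlo evaluation."""
    rng = random.Random(seed)
    return sum(random_playout(start, target, rng)
               for _ in range(n_playouts)) / n_playouts
```

The more playouts you run, the less noisy the estimate; the point of the strong networks discussed below is that they can skip most of this sampling.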

As the fancy new neural networks get smarter and larger, they don't need as many playouts to evaluate a position correctly; they just "know".

Just play around with LZ. If you see the win% change significantly with new playouts, then it clearly hasn't decided yet. If it stays more or less the same, then what's the point? Yeah, it might change its mind after a billion playouts, but is the waiting really worth it?

1 Like

I found this in the Leela Zero README but I can't fully understand it:

Using MCTS (but without Monte Carlo playouts)

Uses the tree search but without checking each branch until the end of the game?

I would read the AlphaGo Zero paper; it's very elaborate about the process, and as far as I know it describes what Leela Zero uses as well.

It does play each branch until the game has been decided, but instead of random play it uses a NN to find the next move, and it uses another network to judge when the game is over.

1 Like

Monte Carlo means that it involves some sort of randomisation.

3 Likes

Last time I checked, it doesn't play each branch until the game is decided. And it uses one network to find both the probabilities of the possible moves and the value (win%). Just to clarify.
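To illustrate the "one network, two heads" point, here's a toy sketch (the weights and function names are invented for illustration, not LZ's actual architecture): a single forward pass returns both a move-probability distribution (policy head) and a scalar win estimate (value head), computed from the same shared features.

```python
import math

def dual_head(features):
    """Toy stand-in for a single network with two heads: from shared
    `features`, the policy head returns move probabilities and the
    value head returns a win estimate in (-1, 1)."""
    # "Shared trunk": here just the raw input features.
    policy_logits = [f * 2.0 for f in features]      # policy head
    z = max(policy_logits)                           # softmax, stably
    exps = [math.exp(l - z) for l in policy_logits]
    total = sum(exps)
    policy = [e / total for e in exps]
    value = math.tanh(sum(features) / len(features)) # value head
    return policy, value
```

The real networks share a deep residual trunk; the important point from the post is simply that one forward pass yields both outputs.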

Does anybody know the command-line options used for the 400-game validations? In particular, if these run on one thread without randomization, why do they not play the exact same game 400 times over? There must be a random element, but I do not know what it is.

Leela Zero does not do any Monte Carlo rollouts. Instead of rolling out a game to the end N times and then looking at the resulting winning percentage, it just asks the value head of the network what winrate it expects from the current position. It expands the tree one move at a time, always expanding from positions where the winrate is high and previous exploration is low. Every time a position is expanded by one move, there is one call to the network. This is called UCT search (upper confidence bounds applied to trees). AlphaGo Zero works the same way. I do not think Monte Carlo Tree Search is a good name for this algorithm, given that it does no random rollouts at all.
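For the curious, the "winrate high, exploration low" selection rule can be sketched roughly like this (an AlphaGo Zero-style PUCT formula; the constant and field names are illustrative, not LZ's exact code): each child's score combines its average value Q with an exploration bonus U that grows with the policy prior and shrinks with visit count.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where U favors moves with a
    high network prior and a low visit count. Each child is a dict
    with 'prior', 'visits', and 'value_sum' (made-up field names)."""
    total_visits = sum(ch["visits"] for ch in children)

    def score(ch):
        # Q: mean value of this child so far (0 if never visited).
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        # U: exploration bonus, large for high-prior, rarely-visited moves.
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return q + u

    return max(children, key=score)
```

Descending the tree with this rule and calling the network once at each newly expanded leaf is the whole search loop, with no random rollouts anywhere.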

4 Likes

FYI, CloudyGo just updated, providing third-party verification that LeelaZero is still improving.

3 Likes

Is anyone familiar with this program?

No.

1 Like

LeelaZero continues improving. See https://cloudygo.com/leela-zero-eval/eval-graphs

6 Likes

How does Leela Zero compare to other top bots?

I think she's in the top 8 but not the top 4.

2 Likes

I wonder what will happen to the LZ project now that:

  1. gcp has quit the project
  2. KataGo has emerged as very (even more?) successful.

Does that mean that many volunteers who have spent time and energy developing LZ up till now will switch to KataGo?

1 Like

Perhaps I'm wrong about this, but I think there is a fundamental difference between how Leela Zero and KataGo are being trained:

  1. Leela Zero involves a distributed training effort, where the community contributes to the training.
  2. KataGo is being trained by its developer, with computing resources and funding provided by a corporate sponsor.

However, perhaps in the future, item 2 might change. Since everything is open-source and the network weights are released, I suppose the community could in principle perform distributed training of KataGo; however, that might be unlikely without initiative, organization, and support from the developer.

Both bots are open-source, so the community can and does contribute to the coding effort in both projects (although currently, the development of KataGo is almost entirely the work of one developer).

Since KataGo provides a broader feature set (e.g., support for custom rules and settings, territory and score estimation, different playing styles) and claims to be more efficient to train, I do hope that more attention and community effort is drawn toward that project.

5 Likes

You might want to take a look at this GitHub thread

2 Likes

Thanks for your answer, Yebellz.
I did not know KataGo was developed by only one person.
I am impressed that he has performed better than a whole team of volunteers (cf. the LZ project)… and in less time. Congratulations to him.
I hope it is not so, but I would be rather demotivated if I were one of the LZ project volunteers…

1 Like

@Couchi There’s an expression for that: “standing on the shoulders of giants”.

Sure, the next generation does better, or else you wouldn't even hear about it. That doesn't diminish the accomplishments of the previous generations.

6 Likes

The Leela Zero team has a lot to be proud of. They were the first to deliver superhuman Go AI through an open-source, community-driven project. While we saw tech titans, like Google, pour enormous resources into developing and training strong AI, the Leela Zero project proved that the community, working together, could achieve this as well.

Leela Zero is and forever will be an amazing chapter in the history of Go. Although it feels sad that it may be approaching an end, we should all be so glad that it happened.

10 Likes