Leela Zero progress thread

More trouble possibly on the horizon. In addition to the ELF games quickly running out (hitting the self-imposed maximum of a quarter million), there is also the fact that apparently the vast majority (70%+) of the training games come from a single client: a student who has his university's supercomputer to himself now that it's summer and no other students need it for their projects, so he gets to use it for LZ training to his heart's content. But just as Google throttled its own Colab when the Chess Zero folks started using it in earnest, the university can put a stop to this at any time without notice, and once summer ends it doesn't seem he will be able to sustain anything near the current level of output…


But I guess lowering gating to 52% "is not an option", even though it has been shown mathematically that a 52% gate is more effective than a 55% gate, and this is even more true considering that the 15-block network is already saturated and the whole LZ project has reached a level where it is unreasonable to keep expecting a 5% improvement from every new network; smaller, more granular gains must be accepted. In other words, when you've been bodybuilding for a decade, all the noob gains are gone, and if you did a Groundhog Day of snapshotting back every time you couldn't add 5% of strength or mass in a week, you'd never improve again. It's way past time to face reality: the days of adding two pounds of muscle per week are gone, and it will only be a fraction of that. The problem with gating at 55% is that anything lower never even gets a chance to see the light of day, and we'll simply be stuck here forever.
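To put rough numbers on that (my own back-of-the-envelope, assuming a simple fixed 400-game match where the candidate must win at least the gate fraction of games; the real pipeline uses a sequential test, but the qualitative picture is similar):

```python
from math import comb

def pass_probability(true_winrate, gate, games=400):
    """Chance that a candidate with the given true per-game winrate wins at
    least `gate` fraction of a fixed `games`-game match (draws ignored)."""
    needed = int(round(gate * games))  # minimum number of wins to clear the gate
    return sum(comb(games, k) * true_winrate**k * (1 - true_winrate)**(games - k)
               for k in range(needed, games + 1))

for p in (0.50, 0.52, 0.53, 0.55):
    print(f"true winrate {p:.0%}: clears 55% gate {pass_probability(p, 0.55):.1%} "
          f"of the time, 52% gate {pass_probability(p, 0.52):.1%}")
```

A network that is genuinely a bit stronger (say a true 53% winrate) only rarely clears a 55% gate over 400 games, while it clears a 52% gate most of the time; that is the whole argument in one picture.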

Case in point: it's been over ten days and NOTHING. Just tonight a new hopeful was on the horizon, but alas it didn't make it.

This is the end of LZ folks.

1 Like

The end is niiiiiiiiiiigh :open_mouth:
Time to start rioting and looting I guess

2 Likes

The creator of the project (gcp) seems to be reluctant to move to a bigger network for some reason I don't really understand (maybe it would make training slower and that might discourage users?). So he's sticking with 192x15 for now, but that's already giving diminishing returns. And of course there's no chance to beat ELF with 192x15 because ELF is a maxed-out 224x20. So LZ is boring right now.

1 Like

Isn't that because experimental 20-block nets are barely better than 15-block ones?

1 Like

Iā€™m not really sure.

If I understood this thread right, one user has been training 256x20 networks on his own hardware and his latest network is stronger than the current best 192x15 (with less training effort).

There's also the issue of time parity. Larger nets are much slower, so if you are playing with normal time controls on consumer-level PCs (without fancy high-end GPUs), the smaller net might be stronger… Maybe that's why they are sticking with 192x15.
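To put rough numbers on the time-parity point (my own approximation, assuming the forward-pass cost is dominated by the residual tower and scales roughly with blocks × filters²):

```python
def relative_cost(blocks, filters, ref_blocks=15, ref_filters=192):
    """Approximate cost of one network evaluation relative to a 15x192 net,
    assuming the residual tower dominates: cost ~ blocks * filters^2."""
    return (blocks * filters ** 2) / (ref_blocks * ref_filters ** 2)

for blocks, filters in [(20, 224), (20, 256), (40, 256)]:
    c = relative_cost(blocks, filters)
    print(f"{blocks}x{filters}: ~{c:.1f}x the cost per evaluation, "
          f"so roughly {1 / c:.0%} of the visits in the same thinking time")
```

So at equal time on the same GPU, a 20x256 net gets well under half the visits of a 15x192 net; it has to be quite a bit stronger per evaluation just to break even.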

2 Likes

An update courtesy of gcp:

4 Likes

For anyone curious, LZ has promoted two networks in quick succession and appears to be advancing rapidly again. It took substantially longer to promote than the preceding promotions did, but that is similar to what has been seen in the past when a network starts to saturate for its given block size.

What likely caused this hiccup (my conjecture, based on GCP's posts) is that the number of ELF games used for training steadily decreased as more self-play games were added, because the training window is limited to the most recent games. A fix has been added to ensure that the ELF games stay in the training window, which should help the 15x192 networks continue to progress quickly for a while longer. This isn't the first time a bug like this has stopped progress (temporarily), and it likely won't be the last. However, they keep getting fixed, and LZ keeps getting better. I've been enjoying watching the progress, and I, for one, applaud the efforts of everyone involved and look forward to seeing just how much stronger the LZ project can get.
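My loose reading of the fix (a hypothetical sketch of the idea only; the window size and names below are made up, not the project's actual training code):

```python
import random

def build_training_window(self_play_games, elf_games, window_size=500_000):
    """Hypothetical sketch: build a 'most recent N games' training window that
    pins the ELF games instead of letting new self-play games push them out."""
    # Old behaviour (the bug): the window was simply the newest window_size games
    # overall, so the older ELF games gradually dropped out as self-play grew.
    budget = max(window_size - len(elf_games), 0)   # room left for self-play games
    recent_self_play = self_play_games[-budget:] if budget else []
    window = list(elf_games) + list(recent_self_play)
    random.shuffle(window)  # mix the two sources before drawing training batches
    return window
```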

tldr: LZ is not dead. Long live LZ.

7 Likes

Also, despite the alarmist doomsday prophecies, LZ chess has now more or less returned to its strength from before the dive.

7 Likes

Surely you both are not suggesting that alarmist doomsayers could be… wrong :open_mouth:

5 Likes

I love the optimism. Somehow I think there's a comparison to be made with this:

8 Likes

Leela Zero placed 2nd in the preliminary round of the Tencent World AI Go Competition held on June 23-24, 2018. LZ won 6/7 games, losing only to Fine Art. LZ won games against Elf OpenGo, Golaxy, and AQ.


Rumor has it Fine Art used an 80-block zero version. LZ used an unofficial 40-block version with 6x1080ti. Elf OpenGo used only 1 GPU.

Here is the game LZ lost to Fine Art:

More game records: https://lifein19x19.com/viewtopic.php?p=233215#p233215

4 Likes

Really? Seems to me that if you want to have a bot tournament you should require all engines to use the same hardware… otherwise it's more a tournament of who can afford the best hardware?

3 Likes

Interesting rumors about Leela Chess Zero:

https://www.reddit.com/r/chess/comments/8vz6b1/prediction_leela_version_test10_will_win_tcec_13/

4 Likes

An experimental build of LZ can now handle high handicap. (See GitHub.) It adjusts komi dynamically so that LZ's win rate stays between 10% and 50% until it catches up.

This build has scored some high-handicap wins against dan players on KGS. (See the petgo3 game archive.)
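Roughly how I understand the dynamic-komi idea from that description (a hypothetical sketch, not the code from the experimental branch; the step size is made up):

```python
def adjust_virtual_komi(virtual_komi, winrate, low=0.10, high=0.50, step=0.5):
    """Hypothetical sketch: keep the engine's reported win rate inside
    [low, high] by granting or withdrawing virtual komi in its favour."""
    if winrate < low:
        return virtual_komi + step               # position looks hopeless: add a bonus
    if winrate > high:
        return max(0.0, virtual_komi - step)     # catching up: walk back toward real komi
    return virtual_komi
```

The point is that with a realistic target to play for, the search stops seeing every move as equally lost and keeps fighting for points until the handicap has been eroded.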

8 Likes

Leela Zero just upgraded its network size to 20x256 from 15x192. The promotion match was a disappointing 45.5% versus the best 15x192 network, but the new network will have more capacity for improvement.

http://zero.sjeng.org/
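To give a rough sense of what "more capacity" means here (an approximation counting only the 3x3 convolutions of the residual tower, which dominate the weight count; heads and batch-norm parameters are ignored):

```python
def tower_weights(blocks, filters):
    """Approximate weight count of the residual tower: each block has two
    3x3 convolutions with `filters` input and output channels."""
    return blocks * 2 * (3 * 3 * filters * filters)

small, large = tower_weights(15, 192), tower_weights(20, 256)
print(f"15x192: ~{small / 1e6:.1f}M weights, 20x256: ~{large / 1e6:.1f}M weights "
      f"({large / small:.1f}x)")
```

Roughly 2.4x the weights means a lot more room to keep absorbing training data, even though the current 20x256 weights are not yet better than the best 15x192 ones.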

1 Like

If the best 20x256 network we have is not better than the best 15x192 network, how do we justify the belief that it has more capacity for improvement when it cannot even match the current skill level?

It might have been a mistake, because after three matches with the 20x256 network, it's now back to the 192x15 network that the 20x256 one lost to.

1 Like

The three current matches are a variety of new 20x256 networks versus the old 15x192; they're still trying to train up a strong 20-block network.

This 62b5417b network seems to be a beast. 72% vs strongest 15b, 80% vs next strongest 20b. :o

O wait nvm it's Facebook's OpenGo. XD

2 Likes

elf neural net v2

2 Likes