I’m not an AI expert, so this is just how I understand it.
Simulations in Leela and playouts in LZ are approximately the same thing (each playout explores one candidate continuation of the game).
The reason why LZ only needs a few thousand playouts (1600 for selfplay training) while Leela needs tens of thousands is that they work quite differently.
Leela relies heavily on MCTS (Monte Carlo Tree Search). Its positional judgement is rather poor, so it has to simulate (read out) the likely variations to the end of the game (or at least very deep) to get a good estimate. Leela therefore needs a lot of playouts to get good estimates. Luckily, playouts in Leela are quite fast, so you get many of them in a reasonable time.
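The random-rollout idea can be sketched in a few lines. The `ToyGame` below is a hypothetical stand-in I made up (a tiny take-1-or-2-stones game, not Go) just so the sketch runs; the point is only the shape of the estimate: play cheap random moves to the end, repeat many times, average the results.

```python
import random

class ToyGame:
    """Hypothetical stand-in for a Go position: take 1 or 2 stones,
    whoever takes the last stone wins. Not Go -- just enough game
    to make the rollout sketch runnable."""
    def __init__(self, stones=5, to_move="black"):
        self.stones, self.to_move = stones, to_move
    def copy(self):
        return ToyGame(self.stones, self.to_move)
    def legal_moves(self):
        return [1, 2]
    def play(self, take):
        self.stones -= take
        self.last_player = self.to_move
        self.to_move = "white" if self.to_move == "black" else "black"
    def is_over(self):
        return self.stones <= 0
    def winner(self):
        return self.last_player  # taking the last stone wins

def random_rollout(state):
    """One Leela-style playout: random moves to the very end of the game,
    then score the final position (1 = Black won, 0 = White won)."""
    while not state.is_over():
        state.play(random.choice(state.legal_moves()))
    return 1 if state.winner() == "black" else 0

def mc_estimate(state, n_playouts=1000):
    """Average many cheap rollouts to estimate Black's win rate.
    Accuracy comes purely from quantity -- hence the huge playout counts."""
    wins = sum(random_rollout(state.copy()) for _ in range(n_playouts))
    return wins / n_playouts
```

Each rollout is nearly free, which is why Leela can afford tens of thousands of them.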
LZ’s positional judgement is much better. It uses a NN (neural network) which can reliably evaluate where to play and who is ahead. For a given board position (actually the last 8 or so positions, because of ko) the NN gives you 362 (19x19 + pass) move probabilities and a win probability. In a human we would probably call this positional judgement.
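The shape of that NN output looks like this. The “network” here is just a random stub I wrote for illustration (a real net is trained on selfplay games); only the input/output shapes match the description: 362 move probabilities plus one scalar win probability.

```python
import math
import random

BOARD_MOVES = 19 * 19 + 1  # 361 intersections + pass = 362 outputs

def evaluate(position, rng=random.Random(0)):
    """Stub with the same output shape as LZ's NN evaluation:
    a probability for each of the 362 moves (the policy) and a single
    win probability for the side to move (the value). The random logits
    stand in for a trained network."""
    logits = [rng.gauss(0, 1) for _ in range(BOARD_MOVES)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    policy = [e / total for e in exps]             # softmax over all 362 moves
    value = 1 / (1 + math.exp(-rng.gauss(0, 1)))   # sigmoid -> win probability
    return policy, value
```

One call gives both “where to play” (policy) and “who is ahead” (value), which is exactly what random rollouts are trying to approximate the hard way.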
LZ then does something similar to MCTS by playing out the most probable variations, evaluating the NN on every board position along the way. Let’s call this reading.
Since LZ has a good estimate of who is ahead along the way, it doesn’t have to read as deep as Leela to tell whether a variation is good.
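How the NN’s judgement actually steers the reading can be sketched with the PUCT selection rule AlphaGo Zero and LZ use: each candidate move is scored by the NN’s average value so far (Q) plus an exploration bonus scaled by the NN’s policy prior (P), so the search spends its few playouts on the most promising lines. The data layout below (a dict of visit count N, total value W, prior P per move) is my own simplification, not LZ’s actual code.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child move maximising Q + U, AlphaGo-Zero style.
    children: move -> {"N": visit count, "W": total value, "P": NN prior}.
    Q rewards moves the NN has judged good; U rewards moves the NN's
    policy likes but the search has barely visited yet."""
    total_visits = sum(ch["N"] for ch in children.values())
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0               # mean NN value
        u = c_puct * ch["P"] * math.sqrt(total_visits + 1) / (1 + ch["N"])
        return q + u
    return max(children, key=lambda move: score(children[move]))
```

No rollout to the end of the game is needed: the NN’s value estimate replaces the final score, so a visit can stop as soon as a new position has been evaluated.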
Since evaluating a board position with the NN takes a lot of computing power, LZ has to spend much more time on each playout than Leela, so its total number of playouts is much lower.
In the end it’s quality versus quantity. And it seems that quality is the way to go right now.