Oh, and it looks like you’re hiding the number of playouts. I strongly recommend you not do that. The number of playouts a bot invests in a move should carry as much weight as, if not more than, the winrate or score reported for that move when you’re trying to understand the bot’s “preferences” among its top few suggestions. It also lets you judge when a move has too few playouts for its evaluation to be trustworthy.
Mainly for its top few moves, that is - if a bot puts only 2 playouts into A and 3 into B while other moves have thousands, the difference doesn’t mean much. It just means the bot doesn’t like either move; it doesn’t tell you which one is “worse”. And with that few playouts, neither evaluation will be trustworthy anyway (you’d need to actually play each move out on the board and watch what happens to the winrate to judge between them).
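To make that concrete, here’s a minimal sketch of filtering out moves whose evaluations are too noisy to compare. The move list and field names (`move`, `playouts`, `winrate`) are hypothetical stand-ins for whatever your engine’s analysis output actually provides, and the 5% threshold is an arbitrary illustrative choice, not a recommendation from any particular engine:

```python
def trustworthy_moves(candidates, min_fraction=0.05):
    """Keep only moves whose playout count is at least some fraction of the
    most-searched move's playouts; the rest are too noisy to compare by
    winrate alone."""
    top = max(c["playouts"] for c in candidates)
    return [c for c in candidates if c["playouts"] >= min_fraction * top]

candidates = [
    {"move": "D4",  "playouts": 4200, "winrate": 0.52},
    {"move": "Q16", "playouts": 3100, "winrate": 0.53},
    # High winrate, but with 3 playouts the number is basically noise:
    {"move": "K10", "playouts": 3,    "winrate": 0.61},
]
print([c["move"] for c in trustworthy_moves(candidates)])  # → ['D4', 'Q16']
```

Note that K10’s 0.61 winrate gets discarded here despite being the highest number on the list - exactly the situation where trusting the raw winrate would mislead you.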
But among the top few moves, it’s a clear signal of the bot’s own preference, and you can take it into account alongside the winrate and score. Indeed, when you make an engine play a real game (rather than just analyze), playouts are such a strong signal that you get the strongest actual play by starting from “play the move with the highest playouts” rather than “play the move with the highest winrate”.
This is because playouts also capture the bot’s “intuition” beyond the judged score/winrate, and because low-playout moves can be noisy and get high winrates just by “chance”, such that if they were searched deeper, the winrate could fall way back down. (That is precisely why you shouldn’t trust evaluations with low playouts - instead, actually play those moves out if you want to find out just how bad they are.)
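The two selection rules above can be contrasted in a few lines. This is a hedged sketch, not any engine’s actual move-selection code; the data is made up so that the barely-searched move happens to show the top raw winrate:

```python
def pick_by_playouts(candidates):
    # Select the move the search invested the most effort in.
    return max(candidates, key=lambda c: c["playouts"])["move"]

def pick_by_winrate(candidates):
    # Select the move with the best reported winrate, playouts ignored.
    return max(candidates, key=lambda c: c["winrate"])["move"]

candidates = [
    {"move": "D4",  "playouts": 4200, "winrate": 0.52},
    {"move": "Q16", "playouts": 3100, "winrate": 0.53},
    {"move": "K10", "playouts": 3,    "winrate": 0.61},  # noisy outlier
]
print(pick_by_playouts(candidates))  # → D4
print(pick_by_winrate(candidates))   # → K10
```

The winrate rule happily grabs the 3-playout outlier; the playout rule picks the move the engine actually spent its search on, which is why it tends to give stronger real-game play.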