Unfortunately, asking an LLM isn’t a way to provide greater objectivity. LLMs vary greatly in how often they hallucinate, depending both on the task at hand and on how they’re prompted.
One can even bias them with the initial prompt (not saying you did that here, but it’s a point to be aware of).
Generally I believe we’re pushing toward blocking these kinds of AI summary posts that don’t add much to the discussion.
I will defer to other moderators for that decision, since I’m actively involved in the conversation here and I don’t want to come across like I’m just censoring something for the sake of it.
Basically though, if you want to summarise the posts, do it yourself.
We generally can’t trust an LLM to accurately claim that 66% of something has a certain tone when it often can’t even count the letters in a word correctly.