That may or may not be true. We had a lot of publicity some time back about the biases of AI based on its programming and based upon the prompts (similar to the bias in polls based upon the questions).
Consequently, the first problem with any use of it is whether it is objective in that particular case and, if so, to what degree. And even more problematic: what are the protocols for deciding?
What I'm saying is:
if our real wish is to have a lot of human-to-human interaction, then it absolutely doesn't matter whether AI is very smart or very stupid.
We should state our wishes directly, like SomeGoGuy did,
and not say possibly incorrect things about AI instead.
Certainly. But if you and I are debating some point on this forum, then unlike either of us, ChatGPT has no prior interest in supporting or refuting the point. It has all sorts of other biases, of course, but it has no horse in the race, so to speak.
But part of human interaction is talking about the outside world. AI has become part of the outside world.
And I'm saying: some of the things that some people think AI can't do now, it actually already can. They haven't tried the best models, and they haven't read how they work.
Some things AI indeed still can't do, but it will be able to in the future.
Yes, it is itself a horse in the race, dragged in by one of the people in the discussion. And since computers command more reverence than people, that is a tactical move.
But how would you use it to support an argument? This seems to cover a lot of ground.
I'd argue it would be bad to quote it either way; if you have double-checked the evidence against some other source, just quote that source.
But does it need to be posted on the forums? If anyone is in need of a summary, they can easily get it themselves. Not allowing these posts doesn't have much of an impact on those who want one.
With conversations between humans, the question of accurately representing another user may arise. When I see AI summaries brought out, instead of helping to resolve that question, they raise a new one: whether the AI itself is making mistakes and misrepresenting someone.
Look at your recent use of a summary in the thread "Do we really need a chess.com for Go?" It didn't really help the conversation; instead it started conversations about the reliability of the AI:
But, as some users have asked, how can we trust the AI? As you said above, the basis for trusting it is that it aligns with your own personal assessment of the thread. So your own personal assessment of the thread is supported by the AI, and the AI's claims were trustworthy because they aligned with your own personal assessment. This is circular.
I point this out because this is an example of exactly why we shouldn't allow AI summary posts. Your point ended up being circular; cut out the middleman and you could have done the same thing AI-free. With AI, the point you made was no better, and no worse, than if you hadn't used it at all. In that sense it was superfluous; worse, it started this whole side discussion about the trustworthiness of AI.
This is the kind of thing we are going to deal with if we allow AI summaries to be posted. We will keep having discussions about whether or not someone is being misrepresented, or some other factual claim is right or wrong. We may already have this issue with other humans, but this kind of use just exacerbates it. Instead of arguing over whether a human is misrepresenting the situation, we'll have arguments about whether a human is misrepresenting the situation AND arguments about whether the AI is as well.
Well, that's more because it had come up before and was already under discussion.
If the community voted (and I'm not sure this vote covers it) to allow AI summaries and the like, well, I won't be bringing it up, except maybe to point to the result of the vote if there's disagreement.
But I think I see what you and Conrad are saying: one might call it an outside perspective or unbiased, but it might be more likely to be posted when it's favourable or agrees with the poster (not commenting on any particular user or post). That, and how it can come across as authoritative.
I wouldn't like it, but I wouldn't bring up the fact that I want it banned either. But if the AI makes a claim you think is false, would you push back against that?
I don't think anyone's expecting that level of formality. Just a simple "Translated by DeepL" or "Touched up with Google Translate" would suffice for "Labeled Clearly", imo; there's definitely no need to label individual words.
100%. Also a good point that we shouldn't fault people for typos; this still counts as clearly labeled, imo.
I think some early patterns are emerging: there is wide agreement that AI is acceptable for translation (though anyone reading the thread could have predicted that), and, regardless of other opinions, most people think AI should not go unlabeled (with a possible exception for translation).
It's so textbook that the Project and Portfolio Management product we make at my job includes a Risk Register with these fields as part of the default install of useful stuff you'll probably want.
This is a big part of why I generally label it as such on the occasions I do it, though I also try to mitigate the risk by back-translating the output to check that the meaning carried through.
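That back-translation check can be sketched in a few lines. Everything here is hypothetical: a toy dictionary stands in for whatever translation service you actually use (DeepL, Google Translate, etc.), and the similarity threshold is arbitrary; the point is just the round-trip idea.

```python
from difflib import SequenceMatcher

# Toy stand-in for a real translation service; in practice you would
# replace translate() with an actual API call.
TOY_DICT = {"hello": "bonjour", "world": "monde"}
REVERSE = {v: k for k, v in TOY_DICT.items()}

def translate(text, reverse=False):
    # Word-by-word lookup; unknown words pass through unchanged.
    table = REVERSE if reverse else TOY_DICT
    return " ".join(table.get(word, word) for word in text.split())

def round_trip_ok(original, threshold=0.8):
    """Translate, translate back, and compare. A low similarity
    ratio suggests the meaning may not have survived."""
    forward = translate(original)
    back = translate(forward, reverse=True)
    return SequenceMatcher(None, original, back).ratio() >= threshold

print(round_trip_ok("hello world"))  # → True for this toy dictionary
```

A real check would compare meaning rather than characters (a good back-translation can legitimately use different wording), so treat a failed round trip as a prompt to re-read the translation, not as proof it's wrong.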
I marked "other", as this seems a little underspecified. Like, I'd want to allow AI within the parameters of the thread, but not necessarily "all use of AI".
Like, a thread called "I generated images based on baduk/weiqi/igo" might be filled with AI images, but perhaps not AI commentary.
However, a thread titled "What does the community want regarding AI" is "Explicitly about AI" but should follow the standard rules of the forums.
I would not consider that non-trivial, as it's just part of getting the AI output. I mean actual content independent of AI: an actual contribution to the discussion.