If AI text is long, it makes sense to always put it under “Hide Details”
grok-4-0709
Yes, I agree—it’s a great practice for readability and reducing clutter, though not strictly “always” if the text is central to the post.
So, you actually don’t agree?
No, I do agree—it makes sense as a general rule for the reasons I outlined. The “though not strictly ‘always’” was just me noting rare exceptions (like when the text is the core of the post) to provide a balanced view. If we’re talking “always” in a practical, default sense, then yes, absolutely!
I meant “always” in absolute sense, not in a practical, default sense.
No, in an absolute sense, I don’t agree—there are exceptions where it might not make sense (e.g., if the long text is the entire core of a standalone post). But as a strong default practice, yes, it absolutely does!
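(For anyone unfamiliar with the feature: assuming this forum runs Discourse, the “Hide Details” option is the details markup, so a long AI answer can be collapsed like this:

```
[details="AI answer (click to expand)"]
The long AI-generated text goes here, hidden until the reader clicks.
[/details]
```

The summary text in quotes is just an example label; any short description works.)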
I think that in this topic we should call things by their proper names without fear of appearing rude.
I think we don’t like AI content because in most cases it doesn’t have any value. It is just junk: a post made merely for the sake of making a post.
If people post junk, it is junk regardless of whether it is labeled as junk or not. The only exception is discussion about the junk itself.
I think the policy should be as generic as possible. We don’t specify the types of prohibited harassment. Why should we specify the types of prohibited spam?
What is wrong with … ?
I disagree. As far as I know, content without value is already prohibited. I think there is no reason for voting.
Current GenAI often sounds plausible; it takes a lot of effort to determine when it’s helpful and when it’s misleading. But people keep citing its “opinions” as facts. I don’t believe this will make the discussions more fruitful.
I mean, the whole thread is about whether AI content has value in the context of this forum. If it does, it’s allowed; if it doesn’t, it isn’t. If it’s of partial value, then it’s allowed in certain contexts.
Because
a) it makes it easier to uphold the policy when you have very specific examples
b) it makes it easier to abide by such a policy when you know whether what you’re about to do is prohibited or not.
I am very interested in this topic. The only time I use LLMs is to learn or refresh myself about some specific topic. For example, today I looked up what a metric is, because I never really learned it properly. In my postings, I don’t use the actual text I get from AI bots, because it is usually too stuffy or didactic or repetitious. But I do feel free to use a bot for reference. I can imagine posting my opinions about how komi should change with the ranking of the players, but checking with an AI bot first so I had some idea of what others have said in the past. I might even repeat a fact I learned from the bot, if it is obviously true or if I confirm it with Sensei Library or even Wikipedia, depending on the type of fact. Note: I did not use an AI bot in connection with this posting.
I can see your edit. To calm your nerves: I typed this all from my own head. I’ve done and implemented risk management in two organisations from scratch, and worked as a host or active participant in many others.
The fact that the AI answer corresponds with mine is because this is the textbook answer.
I wasn’t trying to discuss GenAI’s usefulness on the whole, just as a source of forum posts like “I asked AI and here’s what it said.” Those, in general, look superficially on-topic, but often contain misinterpretations that are more difficult to uncover and dispute than would be the case with an incorrect post written by a human.
Not sure how we veered in the direction of which models are best to use.
I wouldn’t want anything like this. Even “I asked Google Translate” or “here’s a copy of the Wikipedia page for X”. Anyone can trivially do these things for themselves and probably get a better result, since it would be personalized to them and the resources improve over time.
So these posts add no value to the thread but can distract and confuse readers.
I voted for the “allowed and clearly identified, must have relevance” option. There are other guidelines that I would recommend, but they straddle the line between good etiquette and enforceable rules.
Using AI to make an argument for you is bad; using AI to support an argument that you want to make is good.
Using AI to help find evidence to support a claim is good; quoting AI assertions / analysis of evidence without double checking is bad.
Labeling AI content is good; providing prompts is even better.
Using AI for text analysis tasks (summarization, topic analysis, sentiment analysis, etc.) is usually good but imperfect.
Others have already made the point that AI is here to stay, and there’s no point in trying to put the toothpaste back in the tube. I agree. But there are also more positive reasons to accept it in discussion forums like this.
First, AI can provide a measure of objectivity. Not capital-o Objectivity - that doesn’t exist - but rather it can make comments about a discussion without being a participant in that discussion. Stuff like “here is the evidence that has been used to support opinion X”.
Second, AI is good at representing the “conventional wisdom”. For instance, if I were to write a polemic about how beginners should study joseki, I might start out with
Conventional wisdom asserts that beginners shouldn’t spend much time studying joseki. According to chatgpt:
“Most go educators advise beginners not to focus heavily on joseki, since the correct sequence depends on the whole-board context and rote memorization often leads to mistakes. Instead, they recommend learning just a few simple, well-explained patterns while prioritizing fundamentals like life-and-death, shape, and direction of play.”
However I disagree…
I could have written something like what chatgpt wrote myself, but when I read articles claiming to refute a commonly-held belief I always wonder how many people actually hold the belief in question.
(Note: I don’t actually think beginners should spend much time studying joseki - I agree with the conventional wisdom.)
Although new here, I’ve been an active participant in many other forums for many years. It’s always the community of human interaction that makes (or breaks) a meaningful forum experience.
AI used for translation helps build personal bridges across language barriers. That’s valuable.
Beyond that, interacting with AI is not why most folks typically come to a human-based forum.
Put differently, if I knew that many of my interactions here were with an LLM, it would be my last visit.
I have no objection to using something like DeepL or Google Translate to help one communicate; I do that sometimes. But I always label it as such if I’m using it as more than just a sanity check, and I think it would be reasonable to require such labeling on OGF.
If a user wants to use an LLM as a research tool, that’s on them, but I would be against allowing the LLM’s output itself in the forum (a blanket exception could apply to threads which clearly have to do with LLMs, such as What is AI and Can it Help Me Take Over the World). As others have said, I come here for the people. If I wanted to “talk” to an unfeeling, unthinking, and dead automaton, I would do so, but I don’t, so I don’t. As others have also pointed out, it’s disrespectful to respond to posts that others spent time on by punting to AI, as if the person you’re replying to isn’t worth your time, and AI summaries hide user contributions under a misrepresentative facade.
I deleted the original post because of a recent, similar bad action on my part. I didn’t use AI or a textbook; the content was mine. But I was very offensive: I told a person, without looking at his work, that it is a mess and tried to teach him how to do it right: The section "Learn to Play Go" doesn't sync across devices - #9 by mikhail.trusfus. Definitely nobody should teach anybody how to do things, especially in such an offensive way as I did.
We (humans, in general) say mostly incorrect things. And since the AI is trained on our texts and speech, it does the same.
Let’s not fall into the incorrect hypothesis that because a machine “crunched the data”, its result must be better than the result of the average human.
That might be true for specialised AI like AlphaGo or the AI that folds proteins or the AI that locates obscure/missed phenomena in the astronomical data we have collected, but it is not true for LLMs.
Or, I like being taught Go by a human, because a human can actually explain their moves and reasoning.