What does the community want regarding AI-generated content in the forum?

Recently, an increasing amount of AI-generated content has been appearing in posts here in the forums, creating concern about the effect this will have.

One argument for why this is Not A Good Thing goes like this:

The main driving factor behind the OGF community is, to my mind, the sense of community that it has; in particular the human-to-human interactions we have with each other here.

AI-generated content is truly capable of destroying healthy communities by essentially removing the humans from them, and we should take care that no such harm comes to OGF. I don’t think it will, but we should at least be vigilant.

The idea that “AI generated content is not a good thing” means that any post with identifiable AI-generated content attracts objections and flags.

This won’t be sustainable, so we need to get some idea of the right course to steer, and reach a common understanding about it.

I’d like to see some indication of what people feel right now:

  • No AI generated content at all please
  • No AI generated content, except for translation
  • AI generated content can be allowed, but must be clearly identified as such, and have some relevance to the thread in which it appears
  • I don’t mind if AI is allowed here without any control

If the community is overwhelmingly in favour of either of the first two options, then flagging on any identifiable AI content would be justified, and moderation of that would be clear.

If the community sees value in being able to quote ChatGPT’s opinion, or share an experiment with AI here, then “Hey, this is AI” would not be a valid flag. Instead, some argument about why it is “a bad AI example” would be needed. In that scenario I would be allowed to quote ChatGPT’s summary of a thread and make an argument based on it. You could respond by pointing out that ChatGPT is hallucinating, but it would not be a “disallowed AI” flag.

Let’s see the numbers, in the poll above.

9 Likes

I am a non-English speaker. Sometimes I need to use Google Translate to translate an entire sentence into English. I suppose there should be something like a “fair use” provision for AI. I notice that sometimes my language sounds rude, but that happens when I do not use Google Translate’s help.

I do not think that using AI as a helper to state thoughts in English is wrong. That is not generated content; the source is not the AI.

7 Likes

I voted for “must be clearly identified as AI generated”, though that option also says “and must have relevance to the thread”, which I don’t care about nearly as much. I also don’t mind when humans go off topic a little bit.
The other poll options, I don’t care for personally.

9 Likes

I’d generally like AI content to be flagged as such; however, I’m also okay with some exceptions, such as for translation purposes.

I’d like AI media to be labeled as such, unless there’s a particular thread where it’s clear the thread revolves around AI media.

I hate the AI summaries of threads that have been posted, as well as the “I’ve asked ChatGPT to make a response to your post” posts. If someone wanted a reply from an AI, they would have asked it themselves, and if they feel they need a summary of the conversation, they can likewise ask for it themselves. Given that it’s free to sign up for an account on things like ChatGPT, and that it’s quite easy to copy and paste some text, banning this content would be no real impediment to those who want it. Maybe if there’s a line or two you want to quote, but no replies that are almost completely written by AI.

I can understand that it’s a tool, but there are very few circumstances where the AI needs to be quoted directly. E.g., if you used it as a tool to find out some information, ask it for a source and verify the info is real and not a hallucination. If you do that, you can just post a quote from the source instead of the AI.

I guess what I’m saying is I’d prefer a more granular approach, if that’s feasible for mods. We don’t have to ban it outright, or require users who don’t speak English to state on all of their posts that they had the help of AI to translate. We could ban some things, require some things to be marked as AI, and give other things an exemption.

13 Likes

Do we now really think that we can manage AI’s seeping into every aspect of life, culture, society, etc.? The AI-ization of the world is an irreversible process. AI is here to stay. Get used to it and learn to live with it.

4 Likes

So far, AI hasn’t proved fruitful in the discussions we have, IMHO. The generated content makes the debates heavier, with new elements that are far from always relevant or part of any interesting logic.

It’s not a good tool, to me, in its basic use. I’m sure it could be put to some interesting use if there were some rework of the query’s result: if the publisher were more involved than a simple copy/paste, and instead studied, checked, and produced something of their own.

But this would be hard to moderate, I dunno. Where are the limits? Can it be enforced? There is a lot of subjectivity.

Mention of AI use: it may have some good consequences, but at the same time it shouldn’t be enough to allow anything whatsoever to be published under that label.

6 Likes

I agree it’s here to stay. But just because it’s here to stay doesn’t mean we can’t manage it, and just because we can’t manage it perfectly doesn’t mean we shouldn’t try to manage it at all.

12 Likes

That’s what we are doing with this thread: seeing how we could manage the use of AI, not everywhere, just in this forum.

3 Likes

Considering the choices, I am voting for “AI generated content can be allowed, but must be clearly identified as such, and have some relevance to the thread in which it appears”, since that includes AI advancements in Go, and moves and joseki made by AI.

Some nuance to it:
a) AI responses to posts shouldn’t be covered by the above, unless the user states that this is the only way they can post (e.g. due to translation issues or some other hindrance), in which case, it is what it is.
b) AI thread summaries should be outright discouraged as post content. Being called to “debug” an AI summary is literally being asked to argue with yourself. :thinking: If people enjoy this, fair enough, but I am not doing it. I consider it extremely rude to ignore someone’s post, and I always read the whole text even if I disagree with them, out of respect for their time and opinion; however, an AI is not a person and it lacks thoughts and opinions, so I will automatically skip any AI-generated summary of a topic.
c) Similarly, “content dump” posts by AI are also being skipped, as far as I am concerned. I will happily spend hours looking into sources and fact-checking posts if they are made by humans and accompanied by their opinions and reasoning, but I am not going through the “AI info” unless that “info dump” is accompanied by the opinion and reasoning of the human making the post.

So, as far as I am concerned:

This is fair use of AI.

We cannot manage that, for sure, but in this tiny corner of the internet where humans gather to discuss things, we can at least try.

I wouldn’t really bet on that. The hype is certainly huge, but will its use be worth it in all those facets of life?
It is still a gamble.
The current iteration of AI has its merits, but it is overhyped, in the hope that the next versions/improvements will make all the hype come true (which is why all those billions are being poured into it).

7 Likes

I fear that, instead of debating with our own different views, we are going to compete on the knowledge of how to make a good query to get efficient arguments. Will this give us better content in the threads?

On the content itself, I fear the complacency that comes from trusting what an AI produces. That’s what has already happened in our forum a few times very recently: I want a summary, so I won’t even take the time to select and modify, or add my own view, etc. Just take it as the AI wrote it, and you go deal with that yourselves.
My own experience is that this leads to poor content, besides disturbing the discussion involved.

4 Likes

Maybe we should also ask an AI about it? I am definitely against giving AI voting rights.

I did it; here is what I think about its (MS Copilot) answer

1 Like

I think it’s important to state exactly which AI generated something.
Imagine someone posting “AI joseki”, where the “AI” is GNU Go.

7 Likes

I don’t mind use of generative AI here. I do think it should be clearly labeled.

3 Likes

I think this is an example of the kind of AI content I’d rather not see. You can ask the AI, read the answer, and see if it helps you understand something; if it does, incorporate that into your post. But I don’t want to be reading a wall of text from an AI that is parroting some generic points. Basically none of the things the Copilot AI said there were interesting or insightful.

It’s like if someone asked a question about joseki and you answered with “I have a book about joseki, here is a screenshot of the introductory chapter”. Probably not very helpful, except perhaps for especially textbook questions.

12 Likes

That’s exactly why I posted it here: as an example. AI is just a search machine. An improved version.

I just want to reinforce my point. A forum is person-to-person communication.

Anything else is wrong.

May I disagree with you here? I think a screenshot of the introductory chapter may be an appropriate answer. Even if the book is not about joseki. It may be about cooking. But it is still an answer from a person. Not a good one, but person-to-person communication.

(A screenshot of a book is ok as long as it’s clear that it’s a screenshot. Presenting the contents of a book as your own answer is not ok.)

3 Likes

Two days ago, something new happened when I was doing an online search using Secure Search. The result at the top of the list said “AI Summary,” and it consisted of a ChatGPT summary of sources responsive to my query. It had the expected disclaimer that people should verify the results for themselves, but of course most won’t do so. I have never downloaded ChatGPT, nor used it indirectly, but now it is foisted upon me and others very much against my will.

The problem with this is that information vacuum cleaners like ChatGPT present a mountain of trash mixed with the valid information. As I looked over the summary, I saw that my query coincidentally evoked a bunch of conspiracy theories that I was not even aware of in connection with my subject, but which I could immediately see were complete bunk (I know a great deal about conspiracy theories in general because of my interest in mass hysteria events).

This was an eye-opener. One of the main problems in evaluating raw intelligence is excessive volume. This is why a document dump is so effective in the legal profession, as I’ve mentioned before. It is also why I have said for some 30 years that the main problem of modernity is “How do we know what is true?”

Now that volume of indiscriminate information has come to internet searches. Yes, the internet has long had too much information that was indiscriminate, but the addition of such automatic AI summaries will multiply the problem by orders of magnitude. It is also a boon to the tactic of manufacturing fake references (disinformation in its original meaning).

The relevance to this thread is that AI-generated comments almost invariably are predicated on a great deal of bunk, and yet, due to the modern reverence of computers, carry an undeserved air of authority.

My personal instinct is to ban AI-generated content entirely except in discussions about that subject. However, that is obviously not going to happen here, so I have voted with the majority to manage it.

15 Likes

I think that is not enough, and sometimes not necessary.
There should be a purpose to using generative AI.

Let’s look at an example. Someone asks for help solving a go problem.

If you just post a screenshot of an AI solution (for example, a KaTrain screenshot) and label it accordingly, that may not be enough in some cases without any personal explanation.

On the other hand, if you use AI as a helper when analyzing the board but do not copy-paste the results, it may not be necessary to label it.

In both cases it depends.

The source of the content is what matters.

I thought it was clear that this thread is not discussing KaTrain-style AI.

In case it wasn’t clear, I was more specific in my own post


Yes, I think there is some discretion involved, but I hope that we can come to some agreement on what, in spirit, is considered okay in the forums.

1 Like

Just an example to understand.

I do not use AI except Google Translate. Unfortunately, AI is not good enough even for solving very simple tasks, so it is hard for me to find a more appropriate concrete example.

In general, I mean that you should not copy and paste an AI answer without a good reason. But if you do, it should definitely be clearly labeled; and labeling is only needed when the AI is the source of the information.

When you refine your own thought with the help of AI, I do not think it is always necessary to mention the use of AI.

3 Likes

Clarification: THIS IS NOT DISCUSSING “GO” AI.

What’s “GO” AI?

AI specifically trained for Go, with a Go interface, not a language-based interface.

(I’m not sure if editing the top post would clear the poll, so I haven’t put the clarification there)

Discussing and sharing results from Go AI in the OGS Forums is not part of this review of opinions about “AI”.

12 Likes