Recently, an increasing amount of AI-generated content has been appearing in posts here in the forums, creating concern about the effect this will have here.
One argument for why this is Not A Good Thing goes like this:
The main driving factor behind OGF is, to my mind, its sense of community; in particular the human-to-human interactions we have with each other here.
AI-generated content is truly capable of destroying healthy communities by essentially removing the humans from them, and we should take care that such harm does not come to OGF. I don't think this will be the case, but we should be vigilant, at least.
The idea that "AI-generated content is not a good thing" is causing any post with identifiable AI-generated content to draw objections and flags.
This won't be sustainable, so we need to get some idea of what would be the right course to steer and have a common understanding about that.
I'd like to see some indication of what people feel right now:
No AI generated content at all please
No AI generated content, except for translation
AI generated content can be allowed, but must be clearly identified as such, and have some relevance to the thread in which it appears
I don't mind if AI is allowed here without any control
If the community is overwhelmingly in favour of either of the first two options, then flagging on any identifiable AI content would be justified, and moderation of that would be clear.
If the community sees value in being able to quote ChatGPT's opinion, or share an experiment with AI here, then "Hey, this is AI" would not be a valid flag. Instead, some argument about why it is "a bad AI example" would be needed. In that scenario I would be allowed to quote ChatGPT's summary of a thread and make an argument based on it. You could respond by pointing out that ChatGPT is hallucinating, but it would not be a "disallowed AI" flag.
I am a non-English speaker. Sometimes I need to use Google Translate to translate an entire sentence into English. I suppose there should be something like a "fair use" of AI. I notice that sometimes my language sounds rude, but that happens when I do not use Google Translate's help.
I do not think that using AI as a helper to state thoughts in English is wrong. That is not generated content; the source is not the AI.
I voted for "must be clearly identified as AI generated", though that option also says "and must have relevance to the thread", which I don't care about nearly as much. I also don't mind when humans go off topic a little bit.
I don't personally care for the other poll options.
I hate the AI summaries of threads that have been posted, as well as the "I've asked ChatGPT to make a response to your post" posts. If someone wanted a reply from AI they would have asked it themselves, and if they feel they need a summary of the conversation they can likewise ask for it themselves. Given that it's free to sign up for an account with things like ChatGPT, and that it's quite easy to copy and paste some text, banning this content would be no real impediment to those who want it. Maybe a line or two you want to quote is fine, but no replies that are almost completely written by AI.
I can understand that it's a tool, but there are very few circumstances where the AI needs to be quoted directly. E.g. if you used it as a tool to find out some information, ask it for a source and verify the info is real and not a hallucination. If you do that, you can just post a quote from the source instead of the AI.
I guess what I'm saying is that I'd prefer a more granular approach, if that's feasible for mods. We don't have to ban it outright, or require users who don't speak English to state on all of their posts that they had the help of AI to translate. We could ban some things, require some things to be marked as AI, and exempt other things.
Do we really think that we can manage AI's seeping into every aspect of life, culture, society, etc.? The AI-ization of the world is an irreversible process. AI is here to stay. Get used to it and learn to live with it.
So far, AI hasn't proven fruitful in the discussions we have, IMHO. The generated content weighs the debates down with new elements that are far from always relevant or logically interesting.
It's not a good tool, to me, in its basic use. I'm sure it could be of some interesting use if the result of the query were reworked: if the poster were more involved than a simple copy/paste and instead studied, checked, and produced something of their own.
But this would be hard to moderate, I dunno. Where are the limits? Can it be enforced? There is a lot of subjectivity.
Mentioning AI use: it may have some good consequences, but at the same time it shouldn't be enough to allow anything whatsoever to be published under that label.
I agree it's here to stay. But just because it's here to stay doesn't mean we can't manage it, and just because we can't manage it perfectly doesn't mean we shouldn't try to manage it at all.
Considering the choices, I am voting for "AI generated content can be allowed, but must be clearly identified as such, and have some relevance to the thread in which it appears", since that includes AI advancements in Go, and moves and joseki made by AI.
Some nuance to it:
a) AI responses to posts shouldn't be included in the above, unless the user states that this is the only way they can post (e.g. due to translation issues or some other hindrance), in which case, it is what it is.
b) AI thread summaries should be outright discouraged as post content. Being called to "debug" the AI summary is literally being asked to argue with yourself. If people enjoy this, fair enough, but I am not doing it. I consider it extremely rude to ignore someone's post, and I always read the whole text even if I disagree with them, out of respect for their time and opinion; however, an AI is not a person and it lacks thoughts and opinions, so I will certainly skip any AI-generated summary of a topic automatically.
c) Similarly, "content dump" posts by AI are also being skipped, as far as I am concerned. I will happily spend hours looking into sources and fact-checking posts made by humans and accompanied by their opinions and reasoning, but I am not going through the "AI info" unless that info dump is accompanied by the opinion and reasoning of the human making the post.
So, as far as I am concerned:
This is fair use of AI.
We cannot manage that, for sure, but in this tiny corner of the internet where humans gather to discuss things, we can at least try.
I wouldn't really bet on that. The hype is certainly huge, but will its use be worth it in all those facets of life?
It is still a gamble.
The current iteration of AI has its merits, but it is overhyped so much in the hope that the next versions/improvements will make all the hype come true (which is why all those billions are being poured into this).
I fear that instead of debating with our own different views, we are going to debate using the knowledge of how to craft a good query to get effective arguments. Will this give us better content in the threads?
On the content itself, I fear the complacency that comes from trusting what an AI produces. That is what has already happened on our forum a few times very recently: I want a summary, and I won't even take the time to select and modify it, add my own view, etc. Just take it as the AI wrote it and deal with it yourself.
My own experience is that this leads to poor content, besides disturbing the discussion involved.
I think this is an example of the kind of AI content I'd rather not see. You can ask the AI, read the answer, and see if it helps you understand something. And if it does, incorporate that into your post. But I don't want to be reading a wall of text from an AI that is parroting some generic points. Basically none of the things the Copilot AI said there were interesting or insightful.
It's like if someone asked a question about joseki and you answered with "I have a book about joseki, here is a screenshot of the introductory chapter". Probably not very helpful, except perhaps for especially textbook questions.
That's exactly why I posted it here: as an example. AI is just a search machine, an improved version.
I just want to reinforce my point. Forums are person-to-person communication.
Anything else is wrong.
May I disagree with you here? I think a screenshot of the introductory chapter may be appropriate as an answer, even if the book is not about joseki. It may be about cooking. But it is still an answer from a person. Not good, but person-to-person communication.
(A screenshot of a book is ok as long as it's clear that it's a screenshot. Passing off the contents of a book as your own answer is not ok.)
Two days ago, something new happened when I was doing an online search using Secure Search. The result at the top of the list said "AI Summary," and it consisted of a ChatGPT summary of sources responsive to my query. It had the expected disclaimer that people should verify the results for themselves, but of course most won't do so. I have never downloaded ChatGPT, nor used it indirectly, but now it is foisted upon me and others very much against my will.
The problem with this is that information vacuum cleaners like ChatGPT present a mountain of trash mixed with the valid information. As I looked over the summary, I saw that my query coincidentally evoked a bunch of conspiracy theories that I was not even aware of in connection with my subject, but which I could immediately see were complete bunk (I know a great deal about conspiracy theories in general because of my interest in mass hysteria events).
This was an eye-opener. One of the main problems in evaluating raw intelligence is excessive volume. This is why a document dump is so effective in the legal profession, as I've mentioned before. It is also why I have said for some 30 years that the main problem of modernity is "How do we know what is true?"
Now that volume of indiscriminate information has come to internet searches. Yes, the internet has long had too much information that was indiscriminate, but the addition of such automatic AI summaries will multiply the problem by orders of magnitude. It is also a boon to the tactic of manufacturing fake references (disinformation in its original meaning).
The relevance to this thread is that AI-generated comments almost invariably are predicated on a great deal of bunk, and yet, due to the modern reverence of computers, carry an undeserved air of authority.
My personal instinct is to ban AI-generated content entirely except in discussions about that subject. However, that is obviously not going to happen here, so I have voted with the majority to manage it.
I think that is not enough, and sometimes not necessary.
There should be a purpose for using generative AI.
Let's see an example: someone asks for help solving a go problem.
If you just post a screenshot of an AI solution (for example, a katrain screenshot) and label it accordingly, that may not be enough in some cases without any personal explanation.
On the other hand, if you use AI as a helper when analyzing the board but do not copy-paste the results, it may not be necessary to label it.
I do not use AI except Google Translate. Unfortunately, AI is not good enough even for solving very simple tasks, so it is hard for me to find a more appropriate example.
In general, I mean that you should not copy and paste an AI answer without a good reason. But if you do, it should definitely be clearly labeled, and only when the source of the information is the AI.
When you refine your own thoughts with the help of AI, I do not think it is always necessary to mention the AI use.