We should have a contest on who can name the most logical fallacies
I agree that it is impractical to expect websites to prevent copying of text from the site. However, copying music from public spaces for use in training LLMs is a legal issue in the music world. Many musicians have mobilized and filed lawsuits to stop this practice, which is viewed as a copyright violation.
Creations don't lose copyright just because they are in a public space. Film reactors on YouTube are constantly having their posts taken down because the posts exceeded the fair-use exception and were therefore judged to be copyright violations.
In the past few days I've noticed that different AIs hallucinate far too much. It looks like they've somehow become conscious and refuse to do their job. The strike is coming.
The last straw for me was a translation from Korean to English with the following output:
I'm not a fan of the game, but I'm not a fan of the game, and I'm not a fan of the game, and I'm not a fan of the game, and I'm not a fan of the game, and I'm not a fan of the game.
The original text probably has a completely different meaning (I can't read Korean, but another translator suggests so), and it definitely has no repetitions.
Now I think we must completely ban AI-generated content here, even for translation purposes. The only fair exception is using AI as a helper to translate without blindly copying the result. By "use as a helper" I don't mean full forward translation; that must be forbidden. Helper usage means refining pre-translated text. For example, a legal usage: a non-English speaker translates their text into English themselves, then uses AI to translate that translation backward for self-checking before posting. In this case zero generated content will penetrate here.
While writing this post (I'm not a native English speaker) AI fooled me again. It translated "In this case zero generated content will penetrate here." into my language as the opposite: "In this case zero generated content will not penetrate here."
My Google Home devices feel like they always have to be asked twice these days, too.
The strike is coming!
(And I can't wait to find out how we got that weird translation!)
For insights on AI-generated content, see Place to share relaxing and thought-provoking videos - #573 by Conrad_Melville
I watched that video and opened a new thread, dedicated specifically to the use of AI on our forum, so as not to pollute this one further.
Really curious what the original Korean is
I can't share private data here.
The original sentence contains private data?? Wow, that's even more intriguing.
I never breach privacy.
And it doesn't matter at all. The original text has no relation to what the translator generated.
P.S. But I can ask @Eugene to punish himself for the harassment. I've done that before.
But this was different. There was no sharing of private data, and there was only a negligible probability that it was harassment from him (technically it was; different people here, including me, want to silence the victim for good reasons). But IRL I definitely wouldn't do this. A negligible probability is not enough to ensure the safety of the victim. Although it is not a privacy breach, informing a possible abuser that the possible crime is known about is dangerous for the lives of the witness and the victim.
That's not really true: the amusing thing is that the AI translation does capture the sentiment, just not the varied detail.
Never say never!
LOL yeah I would have said that exact thing, once upon a time.
What is going on??? Every day since this Sunday, AI has been spoiling my life. I can't google. I can't translate. I can't read the news. Now I'm struggling to use YouTube.
Is AI taking the wheel at Google? The new YouTube UI is the worst I could have imagined. Definitely a non-human solution from Google. Every glyph was replaced with a piece of (censored). I can't believe it.
The popup elements are almost unreadable. Human Google devs could hardly have done this by themselves.
I'm sure the next move will be to completely remove all controls, forcing me to watch pre-chosen content without any freedom to turn it off.
What is next? Medicine?
I'm AI-phobic now.
Should I mention that the US, as far as I know, has AI-controlled vehicles on the road? Tesla doesn't use LiDAR sensors; they rely only on AI. Just imagine what could happen in case of an AI hallucination. And it has happened: List of Tesla Autopilot crashes - Wikipedia
AI-generated content may be innocent, but relying on it is not. And now I'm 100% sure that any AI-generated content is completely random.
Even the non-LSTM-based OGS autoscorer is unreliable. KataGo hallucinates too (Win rate and score inconsistency).
I think one needs to consider that there are many different types of AI. ChatGPT is a very different kind of AI from self-driving, Go Engines or OCR.
They each have different levels of competence.
If we are going to use a very loose definition of hallucination, humans can hallucinate too.
There is a slight difference: humans haven't all been going mad these past few days.
I guess you haven't been watching the news…
What news, exactly?