If moderators can handle (say) 10 cheating reports/day, and if 20 people/day are accused of cheating, leaving tickets open would lead to a pile of 3650 reports after 1 year.
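The backlog arithmetic above can be sketched in a few lines (assuming a constant 20 accusations and 10 resolutions per day, as in the example):

```python
# Back-of-the-envelope backlog model: constant inflow vs. handling capacity.
ACCUSATIONS_PER_DAY = 20
HANDLED_PER_DAY = 10

def open_reports(days: int) -> int:
    """Open tickets after `days`, if unhandled reports are simply left open."""
    net_growth = ACCUSATIONS_PER_DAY - HANDLED_PER_DAY
    return net_growth * days

print(open_reports(365))  # 3650 open reports after one year
```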
I wonder what lichess does to make its automatic cheat detection economical? Presumably there’s some pretty heavy pruning of the number of games slated for automatic review.
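I don’t know lichess’s internals, but a cheap pruning pass might look something like this sketch. Every field name and threshold here is invented for illustration; the only point is that inexpensive signals can gate the expensive engine review:

```python
# Hypothetical pruning filter: only queue games for expensive engine review
# when cheap signals suggest something unusual. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class GameSummary:
    rated: bool
    winner_rank_gain: float   # rank points the winner gained recently
    winner_games_played: int  # rough proxy for account age
    reported: bool            # a human filed a report

def worth_reviewing(g: GameSummary) -> bool:
    if g.reported:
        return True    # human reports always get a look
    if not g.rated:
        return False   # unrated games are low stakes
    # New accounts climbing unusually fast are the classic pattern.
    return g.winner_games_played < 100 and g.winner_rank_gain > 5.0

games = [
    GameSummary(rated=True, winner_rank_gain=7.2, winner_games_played=30, reported=False),
    GameSummary(rated=True, winner_rank_gain=0.5, winner_games_played=900, reported=False),
    GameSummary(rated=False, winner_rank_gain=9.0, winner_games_played=10, reported=False),
]
queue = [g for g in games if worth_reviewing(g)]
print(len(queue))  # 1 — only the fast-climbing new account survives the pruning
```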
So @GreenAsJade, could we get a message listing the most important TOS points (including no AI use) as bullet points upon creation of new accounts, if it doesn’t take too much time, please?
Then we could happily close this thread and move on.
Do we have a draft for what would be included there? Something like…
AI use in one’s own ongoing games is, of course, prohibited, as are all forms of outside help not explicitly endorsed by the interface (for example, you can use the “Personal” tab of the game chat to keep notes to yourself).
Analyzing others’ ongoing games, with or without AI, is fine so long as you don’t provide any information to the players. You can even type in the game chat, and it will be hidden from the players until after the game.
The onboarding flow is the most difficult thing to change.
You have to answer the question “how will you present this to the person in a way that they are at all likely to absorb it while not increasing the number of clicks to get in?”.
Do you have a design for this?
Completely irrelevant. This raises a strawman of someone thinking “Oh, they use AI to determine if the game is over, so I can use it to play”. Utter nonsense.
I don’t think there’s a compelling argument that people who use AI “don’t know that it’s cheating” or “don’t know that it’s not allowed, if only they knew they wouldn’t do it”.
I can’t recall an appeal after a ban where the person asserted that they didn’t know.
Maybe it happens, but I don’t recall such.
People who are banned for AI use either apologise, acknowledging it was wrong, or protest that there is no way they were in fact using it.
If you’re that person who “feels like the world owes you a win” and reaches for the AI, you’re not going to think “Oh, hang on, when I registered they said I shouldn’t do this, I won’t do it after all”. IMO.
I was thinking of a simple pop-up window with a confirm button at the bottom and a bullet list:
Hereby I acknowledge that I will:
* Not harass other players
* Not use AI during my games
* Not pull the plug on the OGS servers
* and so on
If one of these rules is violated, the account may be suspended immediately.
[Confirm]
Very simple, very little text. Bullet points can be taken from the TOS.
Let’s just raise the psychological bar a tiny bit with this. I think it might have some effect, although I cannot prove it.
I’d also advocate being very strict when handling cheaters (I am talking about securely confirmed cases):
If it’s a new account (fewer than 100 games), ban them and automatically annul all their won games.
If it’s an old account with a lot of games, give them a warning the first time; if they do it again within, say, 6 months, ban them and annul their last 50 won games. There should be a script for the mods to do this efficiently.
If this were done consistently, it should have a big effect.
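The “script for the mods” could be as simple as the following sketch. To be clear: the `Account` structure and the function here are entirely hypothetical, not real OGS moderation API, and the 6-month repeat-offence window is omitted for brevity:

```python
# Hypothetical moderation helper for the warn-then-ban policy proposed above.
# Nothing here is a real OGS API call; it is an in-memory simulation.
from dataclasses import dataclass, field

@dataclass
class Account:
    games_played: int
    won_game_ids: list = field(default_factory=list)  # newest last
    warnings: int = 0
    banned: bool = False
    annulled: list = field(default_factory=list)

def handle_confirmed_cheater(acct: Account, new_account_cutoff: int = 100,
                             annul_last_n: int = 50) -> None:
    """Apply the proposed policy to a securely confirmed case."""
    if acct.games_played < new_account_cutoff:
        # New account: ban immediately and annul every won game.
        acct.banned = True
        acct.annulled.extend(acct.won_game_ids)
    elif acct.warnings == 0:
        # Established account: first offence gets a warning only.
        acct.warnings += 1
    else:
        # Repeat offence: ban and annul the last N won games.
        acct.banned = True
        acct.annulled.extend(acct.won_game_ids[-annul_last_n:])

acct = Account(games_played=40, won_game_ids=[101, 102, 103])
handle_confirmed_cheater(acct)
print(acct.banned, acct.annulled)  # True [101, 102, 103]
```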
Thanks for the insights. I never tried to check top 5 match %, only the AI win rate graph, but that sounds like a very reasonable approach.
I can see how that’s very frustrating. I am only mid-SDK and haven’t come across as much obvious AI use around my level. Fundamentally, this might be because OGS has a small number of dan level players, and AI cheaters often get to dan levels very quickly and stay around 1-3 dan to make sure they can still get matched into games fairly quickly.
This is approximately what currently happens, minus the precise number of “100”.
Accounts that are clearly “just AI games” are “banned and annulled”.
Even in the case of warned accounts all recent AI-affected games are annulled. AI-affected means “the rank of this player is wrong due to previous AI use”.
Please no IP-based bans as the first response, only for extreme cases. Some internet service providers use dynamic IP: the same IP address is allocated to different customers on different days. I’ve been banned from other services because someone else in Australia did something stupid.
That’s how it is.
Telling people “Do not use Go engines to cheat” is like hanging a sign saying “Stealing is not allowed”. It is not going to deter anybody.
From my point of view that is not the complete picture:
Yes, I saw that happening to some AI-cheater accounts. But what I also saw is:
- Accounts banned with only about 2 games annulled.
- AI reports vanishing without any visible action and no feedback given, which is really bad.
I am not saying nothing was done; I am just describing how it looks from the reporter’s point of view.
In my opinion there should be short feedback in any case (e.g. a chat message, as in older times), because I am very close to stopping reporting altogether out of frustration that, from the reporter’s point of view, no visible action is taken.
If people do stop reporting it will get worse and worse.
My opinion differs from @xela’s, because I think IP-based bans would do far more good than harm.
If you do not IP-ban AI cheaters, they create accounts over and over again, which increases the workload for the mods.
On the other hand, if you ban a dynamic IP, what are the odds that the new owner of that IP is a Go player?
And if so, they can appeal via email (if they know about it).
There are a lot of prohibition signs in the world - are they all useless?
I respect most of them, and when I was a kid they had even more effect. I am not saying it will solve the problem, but it will help to make OGS’s point of view clear and maybe reduce the problem by a couple of percent. It’s only a pop-up window, nothing big.
Alrighty, how can we come to a conclusion? What is the decision process? Do the devs decide, or is there a poll, or how is it done? I do not want to spend more time on this. It’s only a popup window, a first step. It will not solve all the problems of the world, but maybe it’s a little step in the right direction, or maybe it’s completely useless. We will never know.
First some argumentation, because you’ve made some statements that are not true, and asked some questions that warrant answers …
Actually, we don’t “have to” do that at all.
We are perfectly at liberty to make a “warning” the initial way that we tell users they should not do something … and in fact there are many specific things, such as “abandoning games”, where a warning is the first time you officially hear that we don’t want you to do that.
It’s not cheap.
It is cheap to implement but it is high cost in terms of onboarding flow. That flow is agonised-over and pared down to be as low-impedance as possible.
We haven’t heard much, have we?
There hasn’t been a flood of people jumping in saying “please please do this, anything to just reduce this by a few percent would be wonderful”.
It hints to me that while you’ve experienced a frustrating burst of this problem, perhaps it’s not as dominant for the wider user base?
People who are willing to make accounts again and again are equally willing to use a VPN with a dynamic IPv6 address.
Fortunately, these “determined trolls” are really few, and they provide more like light entertainment than unwelcome load to moderators.
- I’m interested to know what devs and mods think (e.g. what is the current status of automatic detection - what are the problems)
As has been mentioned elsewhere, we have automatic analysis (“detection”) running on all games. It gives more like an “indication of likelihood, contributing factors” than a yes/no answer - it would have too many false positives to warrant investigating each conclusion it reaches, but it is available to support moderators in dealing with reports that come from a person who has a good reason to suspect.
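As a toy illustration of “indication of likelihood” rather than a yes/no verdict: weak signals can be combined into a score, and a game is only surfaced to moderators above a high threshold. The signal names and weights below are invented, not the real detector:

```python
# Toy likelihood score: a weighted sum of weak signals, not a verdict.
# Signal names and weights are invented for illustration only.
SIGNAL_WEIGHTS = {
    "top5_match_pct": 0.5,    # fraction of moves matching an engine's top-5 choices
    "low_blunder_rate": 0.3,  # suspiciously steady play
    "fast_even_timing": 0.2,  # near-constant time per move
}

def suspicion_score(signals: dict) -> float:
    """Each signal is normalised to [0, 1]; returns a weighted score in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

def worth_moderator_time(signals: dict, threshold: float = 0.75) -> bool:
    # A high threshold, because false positives waste scarce moderator attention.
    return suspicion_score(signals) >= threshold

print(worth_moderator_time({"top5_match_pct": 0.95,
                            "low_blunder_rate": 0.9,
                            "fast_even_timing": 0.8}))  # True
```

The point of the high threshold is exactly the trade-off described above: the score supports a human looking at a specific report, rather than triggering action on its own.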
The real trouble is that a concrete decision to take action is simply and inescapably difficult.
Some AI cheaters are like “oh, ha, you got me, OK I will behave”.
Others are desperately wound up in the accusation, and make the life of the moderator very miserable … I’m sure you can imagine the kind of thing that goes on when someone decides that they object.
A moderator only has to be dragged through this once or twice before they decide it’s not worth it.
This means that AI reports do go unanswered. There are more AI reports than the few determined and dedicated moderators can handle.
A huge problem here is that a large slice of them are “sore losers” or “result fishers”.
If people would only report AI when there’s a solid basis, then the queue would be dramatically reduced.
So: that’s what I think that devs and mods think (I sit in both seats from time to time).
At last:
anoek is the ultimate gatekeeper.
The process is this:
- The proposal needs to demonstrate a reasonable level of consensus in the forum.
In the absence of that it’s hard to persuade anoek
I don’t see much discussion here, let alone consensus. The good news is that no-one has said “that will suck” … but the reason is that all the people who read this have already registered, so they care less about what the registration process feels like.
But, in favour of the proposal, at least it’s only me saying “it’s not as cheap as you think”.
- We need an actual design, not just an “idea”
I guess we have a candidate proposed design: you proposed a popup that a new user is required to click OK.
In order for that to get in, anoek would need to approve it.
However, in my opinion it is not worth asking him yet, because last time this came up he said “I don’t want a TOS dialog, because it is too much of an imposition on the registration flow”.
I suspect that this thread has not yet made the case that “actually, it’d be worth it”.
But you could always ask him yourself.
Hey there,
Well, that was more a rhetorical “have to” on my side than a “there’s no other way to do it” kind of “have to”.
I’ve been a software engineer all my life, and I do not understand why a popup window would be high cost in terms of onboarding flow. But maybe I don’t have to; I’ll just accept what you say, as you know the system and I do not.
I do not agree but opinions differ and that’s ok.
I understand the problem. The consequence is that unanswered AI reports lead to frustration for the people reporting. I’d suggest making it clear in the AI-use report form that the reporter has to be, say, 99% sure that the game was AI-cheated. If people file false reports over and over, that ability could be taken from them. And in my opinion, feedback should be given; people only learn when there is feedback.
Ok then, let’s ditch it. Can some mod close the thread, please? I do not see a function to do it myself.
I do not like how you phrased that. Please keep in mind that even if you are right about the “not cheap” thing for OGS, it might not hold in a general sense. Your point of view is not the absolute truth either.
So Long, and Thanks for All the Fish.