PSA: UK users will soon be unable to access the forums or chat

The censorship effect is showing already

4 Likes

I took a stab at understanding what trying to comply with the “Online Safety Act” would look like and put together an assessment: Online-go.com UK Online Safety Act

It seems not impossible but pretty daunting - there are several PDFs with many hundreds of pages of content, most of which you’re supposed to follow or at least consider for applicability. I think following the spirit of it isn’t insurmountable, and overall I haven’t found anything that would require something completely impossible.

I think the main things that eliminate a bunch of requirements/complications are:

  • OGS is not a “large service”
    • that’s 7 million monthly active UK users
  • OGS is not “multi-risk to children”
    • I think this is true, but this is the part where the hundreds of pages of PDF are relevant
    • but in broad strokes, for multi-risk we would have to conclude there is at least a “moderate likelihood” of harm, or a “moderate impact”, on OGS for several categories of risk such as “abuse and incitement of hatred”, “bullying”, “promoting eating disorders”, “promoting suicide”, etc. It might be difficult to determine this, but on the face of it that surely isn’t the case, right?

And with these assumptions, the recommended measures are basically:

  • do content moderation and handle complaints from users
  • make sure your ToS is good (in alignment with the OSA)
  • allow personal blocking/muting/disabling comments/etc

Given the hundreds of pages of PDFs, I’m sure there are many more details.

And I haven’t tried to understand the specific requirements around reporting and providing evidence and so on; they might be annoying and difficult in their own right.

9 Likes

This is basically using bureaucratic cr*p and vague, subjective criteria to cover as far and wide as possible. Only those with in-house legal teams, or the money to hire an expensive firm, can insulate themselves from its risk. It basically boils down to “make Ofcom happy so they don’t come for you”, hence the self-censorship already showing right away. (It’s the same logic as the Chinese GFW: it covers so much that the only way is to self-censor, and they can still come for you under whatever vague, subjective criteria they see fit.)

5 Likes

I think that depends on why you think it isn’t the case. Are you putting your faith in the userbase, or in the moderation team, or something else?

Basically, if someone can send you a PM, I think that is very much multi-risk for pretty much any category of harm.

Sure, you can report it, but I don’t think that’s the point: damage can be done just by receiving the message in the first place.

Similarly with game chats, the site chats, image uploads, etc.

It’s just that if you had to fairly assess a risk, it’s not about how likely it is to happen with ordinary users, but how easy it is for it to happen with bad actors.

3 Likes

That is true, but it is also not a binary determination of “it can theoretically happen with bad actors, therefore the risk is high”. You’re supposed to actually go and find evidence, perhaps (a rough sketch of the first two steps follows the list):

  1. Do some reasonable queries on the database/logs (of chat, of reported games/players) to look for instances of the categories of harm
  2. Extrapolate how frequent these are
  3. Quantify how “harmful” they are and what the impact would be (e.g. for “eating disorders”: a tiny profile picture of a skinny person is very different from a group chat where people share tips for how to discreetly throw up after dinner)
  4. Then genuinely draw a conclusion about both the likelihood of children encountering this content and the impact it would have, and place each on the scale: negligible, low, moderate, high.
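
To make steps 1–2 concrete, here is a minimal Python sketch. The keyword lists, the sampling scheme, and the scaling factor are all invented for illustration (nothing here is OGS’s actual schema or tooling), and keyword matching is of course only a crude first pass that a real assessment would refine:

```python
# Rough sketch of steps 1-2: scan a sample of chat messages for matches
# against per-category keyword lists, then extrapolate to a monthly rate.
# Keyword lists and the sampling scheme are hypothetical, not OGS's.
import re

HARM_KEYWORDS = {
    "bullying": ["loser", "kill yourself"],
    "hatred": [],  # placeholder; a real list would need careful curation
}

def count_matches(messages: list[str]) -> dict[str, int]:
    """Count messages containing at least one keyword per category."""
    counts = {category: 0 for category in HARM_KEYWORDS}
    for text in messages:
        lowered = text.lower()
        for category, words in HARM_KEYWORDS.items():
            if any(re.search(rf"\b{re.escape(w)}\b", lowered) for w in words):
                counts[category] += 1
    return counts

# Step 2: if the sample covers one week of chat, scale counts to a month.
week_sample = ["good game!", "nice move", "you absolute loser"]
weekly = count_matches(week_sample)
monthly_estimate = {c: n * 4 for c, n in weekly.items()}
print(weekly, monthly_estimate)
```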

I don’t think there is a moderate likelihood of children being moderately harmed by incitement to hatred on OGS. And I genuinely expect that if we went through the exercise, that’s what we would conclude based on the evidence as well.

So I still feel like the problem is going through all the motions required to assess all the categories of harm and documenting all the factors the law requires you to document. The problem probably isn’t that the law requires OGS to do anything substantially different with respect to children in the UK.

Although of course, in doing all the documentation and assessment work, we could discover that there are a whole lot of racial slurs in game chats that we’re not weeding out because people are not reporting them, etc. So it’s possible that there would be some actions to take to make the low-risk determinations defensible.

3 Likes

Another determination that might actually be possible is that OGS isn’t a service “likely to be accessed by children” after all. I know the quick assessment tool lights up because of the user content.

But the guidelines for assessing this require you to actually estimate the number of UK children using the service and then determine whether they are “material to the service”, which I’m inclined to guess they are not. Even if some of us might, in general, be working towards getting kids to play go :slight_smile:

But again, this requires going through the numerical estimation and a kind of cultural and user-interface analysis (e.g.: are children likely to be drawn to OGS because of Hikaru no Go, or does the website look enticing to kids?). But if that determination were a “no”, it would cut out all the other required justification. A rough sketch of the numerical part is below.
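
As a back-of-the-envelope illustration of that estimation step, a small Python sketch where every input figure is a made-up placeholder rather than real OGS data:

```python
# Back-of-the-envelope estimate of UK child users for the
# "likely to be accessed by children" determination.
# Every number below is a HYPOTHETICAL placeholder, not real OGS data.

monthly_active_users = 50_000  # assumed total monthly actives
uk_fraction = 0.08             # assumed UK share (e.g. from IP geolocation)
child_fraction = 0.05          # assumed under-18 share among UK users

uk_children = monthly_active_users * uk_fraction * child_fraction
print(f"Estimated UK children per month: {uk_children:.0f}")  # 200

# "Material to the service" is ultimately a judgement call, but one
# crude proxy is whether UK children are a non-trivial share of users.
share = uk_fraction * child_fraction
print(f"UK children as a share of all users: {share:.2%}")  # 0.40%
```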

2 Likes

Ah, that is reasonable, but not exactly correct.
You see, you are thinking like a normal person and not a “bad actor”.
This means that only after something becomes an issue do you begin to exploit it; before that, you act perfectly normal.
This happens a lot in my country where the laws (and their loopholes) change all the time.
People adapt to the situation after the law is passed, but before that you can never tell whether they are keen to exploit a loophole or issue, because that loophole or issue does not exist yet.

It is “much easier just to press a button” and block them, to quote “Yes, Prime Minister”, and I could argue that it is much fairer, too.

Let’s say that you are having a barbeque party. Everyone is having fun, and then someone declares that the people of a particular house are now suddenly vegan, and that unless your barbeque caters to their newly acquired taste they will sue you for feeding them food that they “cannot eat”.

What do you do?
A) Comply, leave whatever you were doing unfinished, and spend time and money running to the grocery store to buy food and equipment to accommodate those new tastes.
or
B) Gently tell them to go home, because this is a barbeque party and that cannot change; tell them you enjoyed their company and that they can return and eat if they change their minds, but you cannot take the risk, and wish them a nice day preparing their own vegan food.

I’d choose B, every day of the week.

Unless there are consequences for the end-users (who are the only ones that can vote and pressure the lawmakers), compliance with whatever silly rule each country comes up with practically enables and encourages that bad behaviour.

And what if they come up with new rules and amendments? Should the OGS management be on the constant lookout for updates on the legislative results of the UK or whatever other country starts doing similar stuff?

There is no long-term solution or viability in choosing compliance with nonsense like this.
Out they go, till THEY fix THEIR problem. That’s the only reasonable choice in situations like that. Never give in to ultimatums.

1 Like

Wrong road. No need to go further.

1 Like

It’s not just what I think; this is what the guidance from Ofcom requires services to do. They don’t require that you build a service where there is no possibility that a child would ever run into “bullying” content, but they do require that you take steps (such as the ones I outlined above) to assess that risk, and based on the assessment they have a set of recommended measures you should consider to address the risks.

E.g.: for this risk, the guidance requires that large services (7 million monthly active UK users) or multi-risk services (two or more categories of harm at moderate or higher risk, based on your assessment) monitor the rate of problematic interactions. But since OGS (I think) meets neither bar, this wouldn’t be one of the things that Ofcom would “recommend” for OGS (a toy sketch of the rule follows the quoted recommendation):

https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/statement-protecting-children-from-harms-online/main-document/protection-of-children-code-of-practice-for-user-to-user-services-.pdf?v=399756

PCU A5: Tracking evidence of new and increasing harm to children

But recommended for:

Services likely to be accessed by children that are either a large service or multi-risk (children).
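
As a toy illustration of that applicability rule, a short Python sketch; the function and variable names here are my own invention, not anything from the Act or from Ofcom’s tooling:

```python
# Toy sketch of when the PCU A5 measure ("tracking evidence of new and
# increasing harm to children") is recommended, per the rule quoted above:
# services likely to be accessed by children that are either a large
# service or multi-risk (children). All names are invented for illustration.

LARGE_SERVICE_THRESHOLD = 7_000_000  # monthly active UK users

def is_multi_risk(risk_levels: dict[str, str]) -> bool:
    """Two or more harm categories assessed at moderate or higher risk."""
    return sum(level in ("moderate", "high") for level in risk_levels.values()) >= 2

def pcu_a5_recommended(likely_accessed_by_children: bool,
                       monthly_uk_users: int,
                       risk_levels: dict[str, str]) -> bool:
    if not likely_accessed_by_children:
        return False
    return monthly_uk_users >= LARGE_SERVICE_THRESHOLD or is_multi_risk(risk_levels)

# Hypothetical OGS-like inputs: small service, all categories assessed low.
print(pcu_a5_recommended(True, 50_000, {"bullying": "low", "hatred": "low"}))  # False
```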

2 Likes

I stand corrected on that, then. :slight_smile:
The rest of my points stand, though… no one has to care what Ofcom “requires” or what Ofcom even is, if you do not fall within its jurisdiction. And getting beyond its jurisdiction is not only easier and cheaper than following along with its requirements, but also fairer and more effective long term.

A genuine piece of advice to a lot of people on this thread, apparently: the question isn’t whether you can ignore the law, but whether the law is going to ignore you.

4 Likes

There’s been some discussion about this point already:

The likelihood of facing repercussions isn’t the only factor you have to consider; you also need to consider the severity of the punishment if they do come after you.

1 Like

And also how your cloud/payment providers will approach the law.

5 Likes

Man I just read all this. Honestly insane. I never thought this would happen but here we are…

Makes me worried about what’s next…

7 Likes

Good work. I was planning to look at this too. I’ll update if I actually manage to do that.

There is also something about proportionality in the Act. I’ve not looked at how far this applies, but there is, for example, in the section about “safety duties for illegal content”, a bit about most things only being required “if proportionate”, with proportionality factoring in
“the size and capacity of the provider of a service”

I would say it’s pretty easy to argue that, for a small service, OGS already takes disproportionate measures to address various risks.

I suspect the only practical legal impact for OGS is the paperwork around making risk assessments and some ToS tweaks. But I’ll reconsider this when I’ve looked in more detail. And I’m willing to conclude that it’s easier to block UK access than to actually figure out what the requirements even are!

5 Likes

First line of OGS’s ToS:
In consideration of your use of the Service, you represent that you are of legal age to form a binding contract.

Could we argue that, due to our terms of service, we do not have underage users on the site, and thus age verification is not needed?

Wow, does that mean that children shouldn’t use OGS? I didn’t know that.

1 Like

It doesn’t work like that in the Online Safety Act. Otherwise, any service could claim the same thing (even adult entertainment websites). The whole point of Ofcom is to check whether a service has effective enforcement, either with age checks, or by allowing underage access but with Ofcom-approved child protection measures.

5 Likes

Technically speaking, yes.

Though I guess that depends a lot on local laws. For example, in Finland there is no age limit for forming a binding contract; the law states: “A child has the right to make only contracts that are of minor importance and that are customary in the circumstances.”

I would assume that using OGS, and thus accepting the ToS, would fall within that category.

I’m not sure how much that would matter. It’s one thing for someone to enter into a contract, but I’d imagine that, ultimately, contracts must still be subject to the law. Similar to what Counting_Zenist said, but more generally, if this were not the case then companies’ ToS would just be a way to circumvent laws they didn’t like.

3 Likes