If you'd like to learn more see: about.iftas.org/activities/mod

(I tried to leave a comment with the link, but maybe YouTube's policy against external links made it pending review by @j12t)

@thisismissem Comment is live as far as I can tell without review.

@j12t huh, odd, I don't see the one with the link to IFTAS — I left two comments, one which linked to the about.iftas.org site for more information — maybe that's something you can add?

@thisismissem @iftas Could this service also be used for public #XMPP chat rooms, for example?

@uexo @iftas

Not currently, because there's platform-specific integration work necessary, and our focus has been on federated social media.

We've also had asks about Matrix support for it.

Those are quite different use cases from social network services, where you probably want automatic takedowns because a new account can broadcast to many people.

@thisismissem @iftas That seems very similar to public channels though. New accounts broadcast messages by posting to public channels all the time. Is there some kind of API which XMPP service providers could call to scan unencrypted uploads or images posted to public rooms?

@uexo There are several: PhotoDNA, Project Arachnid, and Thorn Safer are the ones off the top of my mind.

But the data you have for chat messages is different from the data you have for social media posts.
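As a rough illustration of what such an integration could look like, here is a minimal sketch of a server-side media scan. The endpoint, key, and JSON fields are all hypothetical, and the services named above each have their own vendor-specific onboarding and APIs; they also typically match perceptual hashes (robust to resizing and re-encoding) rather than a plain SHA-256 digest.

```python
import hashlib

import requests  # third-party: pip install requests

# Hypothetical hash-matching endpoint and credentials; PhotoDNA,
# Project Arachnid, and Thorn Safer all have their own onboarding
# and APIs, which this sketch does not reproduce.
MATCH_ENDPOINT = "https://matcher.example.org/v1/check"
API_KEY = "replace-me"


def scan_upload(data: bytes) -> bool:
    """Return True if the media's hash appears on a known-bad hash list."""
    digest = hashlib.sha256(data).hexdigest()
    resp = requests.post(
        MATCH_ENDPOINT,
        json={"sha256": digest},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("match", False))
```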

@thisismissem @iftas This looks very interesting! Thank you for your work. However, I don't know if I really understand it: does this prototype already make it possible to moderate or delete the content without having to look at it?

Moreover: how does it differ from the "chat control" conducted by Facebook, which is widely criticized in the community? Or do you at IFTAS not have any (political or legal) concerns about chat control?

@resieguen @iftas

Currently no, as we were concerned about potential false positives, and in general believe moderators should be in control of their instances.

@resieguen @iftas "Chat control" as in the EU proposal? That hashes and matches content on-device, before it is sent.

Here we're processing content on behalf of your instance, to reduce its liability.

So yes, we process every status posted directly on your instance; there isn't a way to identify harmful content without doing that. But we take many measures to ensure privacy and limit data retention to only what is necessary to provide the service.
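To illustrate the "moderators stay in control" and minimal-retention points: here is a hypothetical sketch in which a hash match produces a flag for human review rather than an automatic takedown, retaining only the identifiers needed to act on it and never the media itself. The names and data model are illustrative, not the actual IFTAS CCS implementation.

```python
from dataclasses import dataclass


# Illustrative data model, not the actual IFTAS CCS one: a match is
# queued for human review instead of triggering a takedown, and only
# the status ID and hash are retained, never the media itself.
@dataclass
class ModerationFlag:
    status_id: str
    media_hash: str
    source: str  # which hash list produced the match


review_queue: list[ModerationFlag] = []


def handle_match(status_id: str, media_hash: str) -> None:
    """Queue a flag for the instance's moderators; delete nothing automatically."""
    review_queue.append(ModerationFlag(status_id, media_hash, "known-hash-match"))
```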

@resieguen @iftas That is, there is an agreement between the instance and IFTAS for the processing of data, and that should be disclosed in the privacy policy.

Here the instance has liability because it is hosting your content on its servers.

With chat control, it's processing data that the servers involved normally wouldn't have access to, due to E2EE.

@resieguen @iftas

I.e., if your server stores an encrypted blob of data that it cannot decrypt, then there isn't liability, because you don't know what's in that blob; but if you do know what's in it (because it isn't end-to-end encrypted), then you have liability for that content.

It's the knowledge of the unencrypted content that creates liability for hosting, storing & distributing CSAM, as far as I know. This is why Telegram is in hot water: they knew what was in the content because it wasn't E2EE.

@thisismissem @iftas I am referring to the current situation: the "voluntary" chat control based on the temporary derogation in Regulation 2021/1232, which is already criticized.

From a legal point of view, I don't see the need for such a technology. As you mention, providers are not liable without actual knowledge. Storing unencrypted data doesn't make you liable if you don't have knowledge. The legal steps taken against Telegram are symbolic politics, not compatible with the rule of law.

@resieguen Okay, so there's no legal liability until you know about it, but do you, as an instance operator, want the service you provide to be used to distribute CSAM?

Yes, scanning means you create liability for removing the content, and there is no obligation to scan; but as a service provider you may choose to scan, especially when possession of that material *is* a crime.

@resieguen Do you want to be charged with possession of CSAM because someone reported a tonne of it on your servers that you're legally responsible for?

Chat Control is about forcing the detection of content before it is sent, and consequently breaking end-to-end encryption by making device & application creators legally obligated to scan all media leaving your device.

@resieguen Scanning content to detect abuse on a server you host is not legally necessary, but you may wish to do it to remove legal liability for possession or distribution of that content.

Do you want to be proactive in preventing your service being used for abuse and illegal activity, or do you want to be reactive?

@resieguen E.g., moving away from CSAM: in Germany you have a legal obligation not to distribute adult content to children (youth protection law), and age verification is the suggested way to separate adult and child users.

If your privacy policy and other measures make your service inaccessible to children, then adult content isn't a problem, since you aren't deliberately distributing it to children.

At least, that's my understanding of the youth protection act.

@thisismissem I don't want to judge it. I am just a lawyer, not a moralist.

I just want to point out that Facebook is criticized for its voluntary chat control, also because of legal concerns in terms of human rights, and it seems to me that your technology does more or less the same. You are even using the same arguments.

If this technology were used by instance providers, they would be doing much more than Meta, because Meta has rolled out E2EE for at least part of its private conversations.

@resieguen @thisismissem Except for those who want zero legal restrictions on public speech, I don't think many people criticise Facebook for actively looking for content that breaches the law. There are of course many who criticise the company for doing it badly, inconsistently, or even with malicious selectiveness, and who fight any attempt by governments to mandate specific moderation tech, in particular tech that requires breaking E2EE (-> chat control).

@ilumium It is not about zero legal restrictions on public speech; it is about the voluntary chat control that is already happening at Facebook. Many people criticize it, for example Patrick Breyer and some organisations that fight for digital rights... @thisismissem

@resieguen
The ChatControl proposal, and the critique of it, is primarily about scanning private communication and breaking end-to-end encryption. Facebook's voluntary scanning includes Facebook Messenger. In this context it's important to distinguish between private communications and public-facing content.
@ilumium @thisismissem

@pneutig Of course, you could just claim that there is no private communication in the Fediverse, because posts addressed only to mentioned people are actually public. But then you are applying double standards. @ilumium @thisismissem

@resieguen @ilumium @thisismissem OK, I think I get your point now. It's about whether IFTAS CCS also applies to DMs? I thought it was meant for public-facing content (not DMs).

@resieguen @pneutig @ilumium

Remember: Mastodon does NOT have DMs; there is no security or privacy here. If you want to say something privately, you should probably be using E2EE.

@pneutig @resieguen @ilumium

We do process mentioned-only and followers-only posts created on your server, since Mastodon's webhooks make no distinction between them; they're all just posts.

If Mastodon had actual DMs separate from regular posts, then it'd be at the server operator's discretion whether we see and process that content.

Even in mentioned-only posts, the media content is still publicly available; you just need to know the URL to access it.
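As a rough sketch of the pipeline being described, assuming a Mastodon admin webhook configured for status.created events: the field names below follow Mastodon's status entity (visibility, media_attachments) but should be verified against the documentation for your Mastodon version, and fetch_and_scan is a hypothetical helper.

```python
from flask import Flask, request  # third-party: pip install flask

app = Flask(__name__)


def fetch_and_scan(url: str) -> None:
    """Hypothetical helper: download the media at `url` and scan it,
    e.g. with a hash-matching service as sketched earlier."""
    ...


@app.post("/webhooks/mastodon")
def on_webhook():
    """Receive a Mastodon admin webhook and hand media off for scanning.

    status.created fires for every visibility level: "public",
    "unlisted", "private" (followers-only), and "direct"
    (mentioned-only); the payload does not single out "DMs".
    """
    payload = request.get_json(force=True)
    if payload.get("event") != "status.created":
        return "", 204
    status = payload.get("object", {})
    for media in status.get("media_attachments", []):
        # Attachment URLs are served without authentication, so anyone
        # who knows the URL can fetch the file, even for mentioned-only
        # posts.
        fetch_and_scan(media.get("url"))
    return "", 204
```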

@thisismissem I see it differently, and it is just not true that you only need the URL to access a DM. The missing E2EE is not a good argument; telephone and SMS also don't have E2EE.

Sorry, I don't have the time to continue this discussion for the whole day. I don't want to say that what you are doing is wrong. I just worked a lot on both of these topics recently (legal questions in the Fediverse and chat control), and I notice contradictions and double standards being applied. @pneutig @ilumium

@resieguen @pneutig @ilumium

You're really not, though. Proactive scanning for CSAM, not mandated by law, is just the same as proactive scanning for spam & phishing, hate speech, or other forms of abuse.

We have already seen attacks against fediverse servers where someone uploads CSAM & then reports that content's presence on your server to your host & to law enforcement.

@pneutig @resieguen @ilumium

Yeah, whereas on the fediverse there are currently no private communications and no E2EE, so your server (and potentially others) is always involved in that communication.

The creation of IFTAS CCS was driven by our Moderator Needs Assessment, where help detecting & dealing with CSAM was our highest-ranked issue:

about.iftas.org/moderator-need
