Over the past few months I’ve been putting my UX interviewing hat on and helping @jenniferplusplus’s Letterbook project by talking to a variety of people doing moderation work, largely in the Fedi space
This is the first step towards designing interfaces specifically to tackle moderation challenges. We learned a *lot*, and since I typically summarize my interviews anyway, we think those learnings are worth sharing:
https://letterbook.com/blog/post/2024/11/18/moderation-tooling-research.html
while writeups like this are sort of a byproduct of my process (writing is a way of honing thoughts), I’ve repeatedly found they accomplish two things:
1. illustrating to stakeholders just how bad a particular problem is
2. helping a bunch of people who *care* and feel neglected feel like they’ve been heard (and this gives me great personal satisfaction)
This is a form of social work, but it exists nearly everywhere software gets built
@mattly @jenniferplusplus something that might interest you: a recent change I'm working on for Mastodon expands the moderation notes capabilities to more places. For starters, Instances: https://github.com/mastodon/mastodon/pull/31529
But also Appeals, and maybe blocked email domains, hashtags, and IP rules. All TBD though.
@mattly man now you really got me thinking about dusting off that plugin system I sketched out
but of course that's only a small part of it; the fedi has a lot of work to do building out the trust and safety tooling the community needs, but having it summarized like this is super helpful I think
@technomancy @mattly plugin system for what?
@jenniferplusplus @mattly a while back I hacked a bit on gotosocial to add a lua-based plugin system that could do MRF-adjacent type stuff:
https://github.com/technomancy/gotosocial/tree/plugin
it's all fairly bitrotten by now and I've forgotten everything I knew about golang and how its dependency system works
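(For anyone curious what that kind of hook can look like, here is a rough Go sketch of an MRF-style filter interface. It is not the actual API from that branch; all the names are hypothetical. The idea is that each plugin gets a chance to reject or rewrite an incoming activity before the server processes it.)

```go
// Rough sketch of an MRF-style filter hook (hypothetical names, not the
// API from the branch above). Each plugin gets to reject or rewrite an
// incoming activity before the server processes it; a Lua-backed plugin
// would marshal the activity into the script and map its result back.
package plugin

import "context"

// Activity is a minimal stand-in for a parsed ActivityPub activity.
type Activity struct {
	Type    string
	ActorID string
	Object  map[string]any
}

// Decision is what a filter returns for an incoming activity.
type Decision int

const (
	Accept  Decision = iota // let the activity through unchanged
	Reject                  // drop the activity entirely
	Rewrite                 // replace it with the modified copy
)

// Filter is the hook a plugin implements.
type Filter interface {
	Name() string
	Apply(ctx context.Context, a *Activity) (Decision, *Activity, error)
}

// Run applies each filter in order, stopping at the first rejection or error.
func Run(ctx context.Context, filters []Filter, a *Activity) (*Activity, bool) {
	for _, f := range filters {
		decision, modified, err := f.Apply(ctx, a)
		if err != nil || decision == Reject {
			return nil, false
		}
		if decision == Rewrite && modified != nil {
			a = modified
		}
	}
	return a, true
}
```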
@mattly for one, I really like the idea of moving moderator actions into ActivityPub where possible instead of bespoke admin UIs, but I guess this mostly relies on a lot of standardization work to define the necessary vocabulary
but a stopgap measure could perhaps be, for example, a bot that plugs into a read-only view of the DB and responds to queries to provide context about a particular user or conversation
of course in this model your visualization and interaction options are severely constrained so maybe that's not much of an improvement; I dunno
but it does have the advantage that it can be developed, deployed, and upgraded independently of the server implementation
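(A rough sketch of what such a context bot could look like, assuming a Postgres database and a hypothetical reports table; the schema and query here are made up purely for illustration.)

```go
// Rough sketch of a read-only "context bot": given an account, print the
// most recent reports that mention it. The DSN, table, and column names
// are hypothetical; a real bot would match the server's actual schema.
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"
	"time"

	_ "github.com/lib/pq" // Postgres driver, assumed for illustration
)

func main() {
	// READONLY_DSN should point at a read-only replica or a role with
	// SELECT-only permissions, so the bot can never modify anything.
	db, err := sql.Open("postgres", os.Getenv("READONLY_DSN"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	acct := os.Args[1] // e.g. "someone@example.social"

	// Hypothetical query: recent reports targeting this account.
	rows, err := db.Query(
		`SELECT created_at, comment FROM reports
		 WHERE target_account = $1
		 ORDER BY created_at DESC LIMIT 10`, acct)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var createdAt time.Time
		var comment string
		if err := rows.Scan(&createdAt, &comment); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s  %s\n", createdAt.Format(time.RFC3339), comment)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```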
@mattly@hachyderm.io @jenniferplusplus@hachyderm.io This sums up a lot of frustration in a nutshell. Not sure if we particularly talked about post visibility but I've made complaints on this account publicly before.
Sometimes a post that is part of a report is a response to a post that's been deleted, and since the server software deletes those posts immediately, the full thread is gone. It's entirely possible for a report to become "un-solveable" by any reasonable standard, but the model around ActivityPub flags doesn't allow for this.

It's really unfortunate that software doesn't save an "archive" of a reported note. That's just a bandaid over the underlying problem, but it would reduce the number of times I've had to close a report because there is no longer anything "actionable".

I am hesitant to follow the reporter's words 100% (something you also touch on) because I cannot be sure that we both agree on what constitutes a violation of rules. I've found that most instances report things that violate *their* rules to us, instead of forwarding reports for violations of our own rules. This is fine, but it means that what may be a rule violation to another instance is completely acceptable here. Having to read through a remote instance's rules and make that determination is not fun.
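(A minimal sketch of what "archiving a reported note" could mean in practice, with hypothetical types rather than any existing server's model: the report keeps a frozen copy of the post content taken at report time, so a later deletion doesn't leave it unactionable.)

```go
// Sketch of "archive the reported note": when a report is filed, copy the
// relevant post content into the report record so it survives deletion.
// Types and fields here are hypothetical, not any existing server's model.
package reports

import "time"

// StatusSnapshot is a frozen copy of a post, taken when the report is filed.
type StatusSnapshot struct {
	StatusID string
	AuthorID string
	Content  string
	TakenAt  time.Time
}

// Report keeps snapshots alongside the usual status references, so a
// moderator still has something to evaluate if the originals vanish.
type Report struct {
	ID        string
	StatusIDs []string
	Snapshots []StatusSnapshot
	Comment   string
}

// Snapshot records a copy of a reported post so that a later deletion of
// the underlying status doesn't make the report unactionable.
func (r *Report) Snapshot(statusID, authorID, content string) {
	r.Snapshots = append(r.Snapshots, StatusSnapshot{
		StatusID: statusID,
		AuthorID: authorID,
		Content:  content,
		TakenAt:  time.Now(),
	})
}
```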
@puppygirlhornypost2 @mattly multiple people shared concerns in this area
@jenniferplusplus@hachyderm.io @mattly@hachyderm.io yup, I know. Just wanted to get some more information out there
@jenniferplusplus@hachyderm.io @mattly@hachyderm.io it's genuinely so annoying when I get a report and don't know what to do with it. I constantly think "is the remote instance going to be mad I didn't take action?" and other questions
@mattly@hachyderm.io @jenniferplusplus@hachyderm.io
And sometimes it's a permissions issue, where posts not visible to the public (either "mentioned people only" or "followers only") are either the subject of a report or part of the evaluation context for one. However, some server software applies the standard visibility rules to admins, so the moderator can't see these non-public posts that are part of the larger evaluation context.

This is another thorn. I am lucky in the sense that I follow a lot of users around us, but even I do not have access to some followers-only posts. If I get a report involving a direct message or a hidden-visibility note, I am kinda fucked.

Sometimes our users report content to us and we do not forward the report to the remote instance (this ties back into how what's allowed on this instance isn't always allowed on others, and vice versa). If I find an instance that allows hate speech/transphobia, I am not going to forward the report over to them. Instead I'll probably defederate at worst (depending on the severity and whether they often allow that sort of conduct), or remote-suspend the user at best.
@mattly this is great. Thank you.