
I see people on here still defending their use of Twitter.

Stop frequenting the Nazi bar, especially when the majority of patrons want you to die.

How long do you think it's gonna be before they doxx you?

I don't want to shame anyone but can we please be serious about the Nazi problem?

@Jorsh

As I've said before:
* At peak, 20% of Twitter daily active users were Black.
* The Nazi experience is *worse* on the Fediverse than Twitter. Much worse. It's not even close.

Many Fediverse users want people to "stop using Twitter" more than they want to make the Fediverse "welcoming to all people."

People are still on Twitter not because of Elon, or their own personal failings. They're still there (for now) because of our failure to create a better Fedi.

Yes, so far I've failed too. 🤷🏿‍♂️

@mekkaokereke @Jorsh

"The Nazi experience is *worse* on the Fediverse than Twitter. Much worse. It's not even close."

Either the mods on my instance have done a bang-up job, or my experience here is far too limited to have even gotten a whiff of this. I *might* have encountered one possible Nazi in the last several months.

Circa 2017, it was almost a daily experience on Twitter for me.

@staidwinnow @Jorsh
Yep. There was controversy over Eugen's (correct!) decision a few months ago to route all new joinmastodon.org users to mastodon.social, because the experience of new users varies wildly based on which instance they join. If they join a poorly moderated instance, the experience is much worse than on Twitter.

95% of moderation:
1) Block "the worst of the worst." 80%
2) Have an active and engaged moderation team. 10%
3) Have at least a basic understanding of anti-Blackness. 5%

@staidwinnow @Jorsh

Also, reply visibility means that you won't even see most of the anti-Black racism on the Fediverse. 🤷🏿‍♂️

@mekkaokereke@hachyderm.io @staidwinnow@mastodon.social @Jorsh@beige.party help me understand what you mean about reply visibility. A difference in design means that I won't see engagement between people I don't follow?

@mekkaokereke@hachyderm.io @staidwinnow@mastodon.social @Jorsh@beige.party I don't disagree with you, I like this place but it is no magical kingdom where all racism has been banished. Just trying to understand the mechanism
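A minimal sketch of the mechanism being asked about, under the simplified assumption (not the actual federation code) that a server can only display replies it has received, and that it generally only receives posts authored by accounts someone on that server follows:

```python
# Simplified model of Mastodon reply visibility: a server has no copy of
# replies authored by accounts it has never been sent posts from, so those
# replies simply don't appear in local views of a thread. This is an
# assumption-level sketch; real delivery rules have more cases.

def visible_replies(all_replies: list[dict], known_authors: set[str]) -> list[dict]:
    """Return only the replies this server would have a copy of."""
    return [r for r in all_replies if r["author"] in known_authors]
```

Under this model, abusive replies from unconnected servers exist in the thread but are invisible to most observers, which is the point being made above.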

@mekkaokereke @wolleysegap @staidwinnow @Jorsh It's on the road map as MAS-37; I hope they get to it sooner rather than later. It's also been discussed in the context of quote posting. I know the team wants to do it.

@mekkaokereke @wolleysegap @staidwinnow @Jorsh is there a maintained list of such servers that allow abuse? Can server admins use this list to defederate these servers?

@MarcBrillault @mekkaokereke @wolleysegap @staidwinnow @Jorsh yes and yes, but this doesn’t solve the problem of new users not knowing how to find good servers.

@MarcBrillault @thatandromeda @wolleysegap @staidwinnow @Jorsh

2 things can make this better.

1) Imagine if as an admin, during Mastodon server setup time (or any time after!), you could just click a checkbox and subscribe to an auto-updating denylist that blocks all the CSAM, violent racism and transphobia, etc. The worst of the worst.

2) Imagine if Joinmastodon listed multiple instances again. But it only listed instances that it *programmatically verified* blocked the worst of the worst.
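A rough sketch of what parts 1 and 2 could look like, assuming a hypothetical CSV denylist format and made-up domains (the real FSEP list, its format, and its hosting are not specified here):

```python
import csv
import io

# Hypothetical "worst of the worst" reference denylist (part 1). The real
# curated list is not public in this thread; these domains are made up.
REFERENCE_DENYLIST = {"nazi.example", "csam.example", "hate.example"}

def parse_domain_blocks(csv_text: str) -> set[str]:
    """Parse a Mastodon-style domain-block CSV export.

    Assumes a 'domain' or '#domain' column; real exports may differ.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    blocked = set()
    for row in reader:
        domain = row.get("domain") or row.get("#domain")
        if domain:
            blocked.add(domain.strip().lower())
    return blocked

def verify_instance(blocked: set[str], reference: set[str]) -> tuple[bool, set[str]]:
    """Part 2: check whether an instance blocks every reference domain.

    Returns (passes, missing_domains) so a directory like joinmastodon
    could list only instances where passes is True.
    """
    missing = reference - blocked
    return (not missing, missing)
```

A directory site could fetch each candidate instance's published block list, run `verify_instance`, and list only the instances that pass.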

@MarcBrillault @thatandromeda @wolleysegap @staidwinnow @Jorsh

Part 1 (give us admins a simple setting to subscribe to an auto-updating blocklist of the "worst of the worst") is the goal of FSEP:

nivenly.org/docs/papers/fsep/

Part 2 (Programmatically verify that instances for new users have done at least this minimum level for ensuring user safety):

This one isn't at proposal stage yet. Soon...

nivenly.org — Federation Safety Enhancement Project (FSEP): With the surging popularity of federating tools, how do we make it easier to make safety the default?

@mekkaokereke @thatandromeda @wolleysegap @staidwinnow @Jorsh Thanks a lot! These two steps seem necessary for a nicer experience with Mastodon for everybody.

@philippe @mekkaokereke @thatandromeda @wolleysegap @staidwinnow @Jorsh you mean, by deciding to not block a certain instance because it's not nazi enough?

@MarcBrillault @mekkaokereke @thatandromeda @wolleysegap @staidwinnow @Jorsh I wanted to make a (bad) joke about that, then I thought it would be misunderstood. What I had in mind: if at a later stage there are overly commercial instances, they could go on a specific block list. But I guess it's too early. Sorry for the lack of precision in my post.

@philippe @MarcBrillault @thatandromeda @wolleysegap @staidwinnow @Jorsh

Yes, that's "composable moderation," planned for later. I wouldn't necessarily call that a denylist, and I absolutely, 100%, do *not* think that any of those should be recommended to all new admins.

It's very different from "worst of the worst." If you don't block "worst of the worst" you could be putting users in danger. (I'm not a lawyer but...) In some jurisdictions, you may even be exposing yourself to legal risk.

@mekkaokereke @MarcBrillault @thatandromeda @wolleysegap @staidwinnow @Jorsh
the problem with so many of these blocklists is that they're full of instances that shouldn't be there and are more reflective of vendettas between certain fedi cliques than anything else

haven't heard of this "FSEP" though so i wish them the bes—

> Contributors: The Bad Space, IFTAS, Nivenly Foundation, Oliphant.social

four entities, two of which i've never heard of, one of which is a guy who is notorious among experienced fedi users as an unhinged crank with an axe to grind, and the last one's recently been in an ongoing drama where he defamed and threw one of his own users under the bus on the advice of another person who's notorious among, etc..

yeah no i wouldn't consider myself safe with these people
Mastodon hosted on oliphant.social — Oliphant Social: A small but mighty band of warrior-poet oliphants. Safe harbor and fair speech zone.

@apophis @Jorsh @MarcBrillault @wolleysegap @mekkaokereke @staidwinnow @thatandromeda

I saw part of that throwing under the bus, but much of it I couldn't see, presumably because the servers weren't connected?

That was a prime situation where the user should have the right to open the thread for any of their followers to see everything from all instances. (Not sure if that's possible?)

The user had to build a long thread the next day to explain & expose the bad actor.

@mekkaokereke @MarcBrillault @thatandromeda @wolleysegap @staidwinnow @Jorsh This is a longish thread and I haven't reached the end of it yet. But I want to extend sincere thanks to @mekkaokereke and thread contributors for enlightening me!

@mekkaokereke@hachyderm.io @MarcBrillault@eldritch.cafe @thatandromeda@ohai.social @staidwinnow@mastodon.social @Jorsh@beige.party Sounds like a good fix, for Mastodon. It also sounds like there are some broader things in the works, just not entirely implemented yet, that could apply to the Fedi in general, with, I'm guessing, an opt-in choice to be made by the admins of each server or set of servers. I'm sure some won't want to opt in, but they would have to realize that making that choice would almost certainly put their server on a blocklist. If there's a default method in place to keep out the worst offender servers, then you would just have the responsibility of admins on each server to take care of individual bad actors on their server. Still quite a job, I'm sure, but easier than it is now, I should think.

@marnanel I kind of think so as well, but IIUC the two proposals do not need to be exclusive.

I put it here because @mekkaokereke seems invested in helping do something about the problem and, well, it seemed appropriate.

@wolleysegap @staidwinnow @Jorsh @jwz

@mekkaokereke @wolleysegap @staidwinnow @Jorsh

Thank you. This helps to explain why the problem is hidden from me.

The right of reply idea seems to have merit, though I can't say I comprehend it all.

It seems that there should also be a protocol that shows your followers all replies to your posts as well, so the Nazis can't hide? Call it a right to expose, at least on post threads that the user originates?

@mekkaokereke @wolleysegap @staidwinnow @Jorsh Hmmm... not sure this proposal supports a workflow in which replies are permitted by default, but can be removed by the original poster later (for people who get noxious ones, but rarely). Also, in case of a reply to a reply, it's not obvious who has to approve -- only the author of the post being replied to directly? Everyone else all the way up the chain?

@mekkaokereke @wolleysegap @staidwinnow @Jorsh Seems like a super-solid solution to a defined and clear problem. My only concern is it becoming a vector for DDoS attacks against an instance -- but honestly, ActivityPub is rife enough with DDoS vectors that it's hardly worth considering unless it involved *considerably* more work on the server side than for the untrusted client (which this does not).

@mekkaokereke @wolleysegap @staidwinnow @Jorsh Possible solution to the DDoS issue: don't have every server ask the parent server(s) for permission for each reply. Instead, the originating server can ask the replied-to servers for cryptographic signatures to attach to the message. Then it gets passed around with the sign-off(s), and any server that cares about these things can verify it against the signing server's public key, so it's one round trip per reply.
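A minimal sketch of that sign-off idea, using HMAC as a stand-in for the public-key signatures described (Python's standard library has no asymmetric crypto; a real implementation would use something like Ed25519). All names here are hypothetical:

```python
import hashlib
import hmac

# Stand-in for the sign-off flow: the parent server issues a signature over
# the reply's URI once (one round trip), the reply carries it, and any server
# can later verify it without contacting the parent again. HMAC means both
# sides share a secret here; real federation would use public-key signatures.

def approve_reply(parent_server_key: bytes, reply_uri: str) -> str:
    """Parent server issues a sign-off token for one reply."""
    return hmac.new(parent_server_key, reply_uri.encode(), hashlib.sha256).hexdigest()

def verify_reply(parent_server_key: bytes, reply_uri: str, sign_off: str) -> bool:
    """Any server that cares can check the sign-off before showing the reply."""
    expected = hmac.new(parent_server_key, reply_uri.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sign_off)
```

Because verification is local, receiving servers do no extra network work per reply, which is what addresses the DDoS concern raised above.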

@mekkaokereke @wolleysegap @staidwinnow @Jorsh yet again, something that was fixed 20 years ago in Usenet but everyone forgot 🫤