@dysfun @TechConnectify @JustinH if I understand the problem, imagine every time you made a post you got thousands of replies, some of which you'd like to interact with, but a significant number of them are from accounts you've never interacted with on random other servers that are borderline abuse.
No amount of moderation on your server instance can address this problem, because your server is doing nothing wrong.
@dysfun @TechConnectify @JustinH what is being asked for (I think) is to first recognise the problem. This thread shows that this is far from happening.
Second, you need some tools and policies to filter out the poor behaviour. Less than a ban, but more than "don't look at messages that are addressed to you, that might upset you."
At least I think that's what is being asked for.
@cbehopkins @dysfun @JustinH exactly. One thing that I've just realized is that, when we compare this platform to email, subject lines are effectively content warnings. I don't go to my inbox and see every email that's been sent to me, I see a list of content warnings.
Here, though, it's as if I open my inbox and am reading every single email.
Social media is a different beast from email. A feed full of content warnings is tedious and boring. But there has to be filtering.
@TechConnectify @cbehopkins @dysfun @JustinH I want this platform to work for folks like you, so I'm trying to wrap my head around this. I assume the issue is that the volume of block requests you'd have to submit is unworkably high, not that your instance is ignoring your block requests? For instance, we're quick to block accounts that are aggressively annoying our users, but we're only about 5 users, with our most-followed account having under 5k followers, so we only get about 1 request/wk.
@holly @cbehopkins @dysfun @JustinH On the bird site, this sort of stuff was just... not a thing I ever had to do.
I blocked maybe half a dozen people and muted perhaps a dozen.
The sort of behavior that's bothering me here simply didn't cross my feed on twitter /because they had automated systems to detect it/ and it was hidden.
I'm really asking for a jerkwad detector - not for a means of recourse when I encounter jerkwads. Because, frankly, not much of what they do merits real moderation.
@holly @cbehopkins @dysfun @JustinH There is absolutely no means of separating signal from noise here. And the common response to complaints about noise is to play whack-a-mole and stomp it out, or else move to an instance with more thorough instance blocking.
But that's not the problem - it's not individuals. It's behavioral norms in aggregate. Some really shitty behavior is tolerated here in no small part because there's no means for the crowd to signal that it's bad behavior.
@TechConnectify @holly @cbehopkins @dysfun Not trying to deny your lived experience or anything but "some really shitty behavior is tolerated here" more aptly describes my experience with Twitter than anything on my Mastodon instance.
@JustinH @TechConnectify @holly @cbehopkins @dysfun Are you familiar with Twitter's quality filter? I always had it turned on and it kept a lot of the low quality chaff out of my mentions. I have no idea how it worked or how to replicate it but it's my understanding that for big accounts it was even more powerful.
@AGTMADCAT @JustinH I think this is the thing that made all the difference. Plus, I only started using Twitter in 2018 so lots of positive changes were already made, and I only ever used the first-party client
(unrelated, I still can't get my head around why so many people tagged the thread reader app all the damn time - I had no issue reading threads, so I can only assume their weird clients parsed them weirdly)
@TechConnectify @JustinH I tried turning off the quality filter once for about a week and it made the site an absolute nightmare.
I got on Twitter in 2008 so I followed pretty much its whole arc - this feels very 2008-2010 Twitter in a lot of ways: mostly weird, friendly nerds, no significant moderation or trust-and-safety tooling, etc. The first part of that I put down to being on an excellent instance, because a lot of my interaction is with other people on infosec.exchange who are generally very well-behaved grown-ups, but for big accounts with a lot of inter-instance interaction like yours that probably wouldn't be the case.
I'm not sure if maybe @jerry has any opinions on the moderation issue for large accounts, but he's our instance boss and very good about these sorts of things.
@TechConnectify @AGTMADCAT @JustinH literally the least important thing in this thread, but while I never triggered the thread reader app myself, I used its output when someone else triggered it to save long threads for later in my read-later app (@wallabag).
Long threads are basically blog posts and I like to read them as such.