
Alright, we appear to have uncovered something with the help of y'all.

Twitter had a feature apparently called "Quality Filter" - this was probably turned on at some point unbeknownst to me once I crossed X-number of followers, but was opt-in otherwise.

This was undoubtedly the thing that hid nasty replies and prevented them from showing up in my notifs. And if this wasn't on for you (or you didn't have access) that may explain why my lived experience Over There has been easier than here.

Technology Connections

And my complaints are really just about that - quality and volume of replies.

I have received some ugly stuff here, but nothing that is even actionable by moderation. I don't think it's crossed that line. So people saying "just do this!" largely don't understand the scope of the problem. At least, I think that's where we're at now.

What I'm yearning for is a filter to separate signal from noise. Because now everything is signal - which means it's all noise.

@TechConnectify I appreciate all the reflection and investigation you've been doing on this topic, and hope you'll stick around and make this place better! It's been annoying to see a vocal minority make this place less welcoming for some.

@TechConnectify one of my joys on Mastodon has been chatting to people who on twitter I'd have been invisible to - probably because of said filters. You had to play the system there and sort of hustle for engagement, in order to get noticed at all.

That won't have been a thing for you, because you're making quality content and people clearly like it, so of course you wouldn't have been filtered out and left unnoticed.

But I get why it evolved that way, and can now see how it saves overwhelm.

@sarajw I want to stress that, at least as far as how my experience went, I would see most replies. It was only people being dicks that were getting filtered out.

Of course there will be some collateral damage when the filter makes a mistake, but I wouldn't go so far as to say people were invisible to me. Almost all people weren't!

@TechConnectify That may be true, but I'm still going to celebrate that I actually get counter-replies here when I didn't over there!

Guess it's just a numbers game, in that my account here has grown alongside people I admired there, and was essentially invisible to (they had thousands of followers and I was a small drop in a huge ocean) - even if I wasn't filtered, I was probably one of a whole deluge of replies.

You obvs can't pay attention to every reply when there are hundreds.

@sarajw Oh, I've noticed that the quality of engagement here is — usually — much better than over there.

I think that's the plus side of the no-filters, all-chronological approach.

The downside of it is that with a larger following, you're exposed to a lot of stuff that's not healthy for anybody.

@TechConnectify yeah, I'm sorry you're having to deal with that :(

People can really have such different experiences here.

@TechConnectify @sarajw I really want the chronological to be the default, but I wouldn't mind if people made front ends that rearranged it. I mean, that's the whole point of it being open source, so people can alter it. Of course, most youtube creators wouldn't do that themselves, so someone needs to make it for them, either as an instance designed for high volume replies, or a front end client for those same people. Like Tweetdeck, before Twitter killed it.

@TechConnectify Also on twitter when a post had umptybillion replies, I wouldn't bother adding another one. Such a thread wouldn't need yet another 2c from me.

But here on Mastodon, there is a visibility issue with replies between instances because of weird federation effects. I suspect you get a lot more doubling of similar answers, or people piping up who wouldn't have bothered if they could see how many replies were already under a post.

@TechConnectify @sarajw I was as teeny tiny on Twitter as I am here, and I had the same thing, except clicking the button actually revealed more posts, and they were generally either highly objectionable, or spam bots.

That's not to say that there wasn't anything objectionable or spammy outside of that section, but I had a pretty small circle of people I regularly interacted with, so our experiences are not really an apples to apples comparison.

@TechConnectify I wonder if this could be implemented on the client side? (Or maybe there’s a client that already implements it? Idk)

@TechConnectify I’m sorry you have had such a wretched experience. You don’t deserve that at all. Moderation is so messy here :-(

@TechConnectify I think that simple *prioritization* and sorting of replies and feeds would go a long way towards making it more manageable. Consolidating favourites/etc would also be a good and necessary step, of course, but following that, being able to sort your feed by "most favourited/boosted" would at least let some of the less "shitposty" items float to the top and help focus your attention.

There are a couple issues on the github. Here's one:

github.com/mastodon/mastodon/i

GitHub: Alternative sorting/aggregation methods · Issue #3782 · mastodon/mastodon (opened by ticki)
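As a rough illustration of the sorting idea above, here is a client-side sketch that ranks replies by engagement. The `favourites_count` and `reblogs_count` field names are real fields on Mastodon API status objects, but the ranking policy itself is hypothetical, not an existing Mastodon feature:

```python
# Sketch: rank replies by engagement so "less shitposty" items float to the top.
# favourites_count and reblogs_count are real Mastodon API status fields;
# the ranking itself is a hypothetical client-side feature.

def engagement_score(status: dict) -> int:
    """Simple popularity score: favourites plus boosts."""
    return status.get("favourites_count", 0) + status.get("reblogs_count", 0)

def sort_replies(replies: list[dict]) -> list[dict]:
    """Return replies ordered most-engaged first; ties keep original order."""
    return sorted(replies, key=engagement_score, reverse=True)

replies = [
    {"id": "1", "favourites_count": 2, "reblogs_count": 0},
    {"id": "2", "favourites_count": 10, "reblogs_count": 3},
    {"id": "3", "favourites_count": 0, "reblogs_count": 1},
]
ranked = sort_replies(replies)
```

Since `sorted` is stable, replies with equal scores would keep their chronological order, which seems like the least surprising behaviour for a timeline.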

@rbos why does this need to be in the core product when it could just as well be in a client

@jason Sure. Know of any that do it, offhand? Mostly I use the default advanced web UI, but I'm not married to it.

@rbos I mean it’s a feature I’m decidedly against, or at best indifferent to, so no, but I think a world where we have a basic, stable core with vibrant clients is generally going to make people happier than the web UI having to provide for people with varying wants, so just raising the question.

@rbos There is a problem with popularity metrics being implemented as The Solution, though: the long-term effect is that once enough people start using it as such (after all, that is The Solution), it will give everyone else reason to start optimizing *their posts* for popularity to remain visible, and so you get all of the problems that the popularity contest on other platforms introduces.

It's kind of like a cobra effect; initially it will make the situation better for the early adopters, but then in the long term it will make things worse for *everyone*.

@joepie91 Trading a known actual current problem for a hypothetical future problem, though. If you're concerned about that, I don't think the answer is "throw up hands and do nothing", I think it's "find a way to mitigate".

I'm not convinced that that'll actually happen, anyway.

@rbos People have already been finding ways to mitigate this for a long time, that do *not* have this problem, that's the point.

@joepie91 Still, not convinced that there's such a straight line between "sorting a feed" and "everything going to shit". :)

Regardless, mastodon *badly* needs some kind of solution to the cognitive overload problem. I'm open to ideas, but I simply can't keep up with every post by the people that I want to follow, and replies are much too demanding.

Something is going to break, and either Mastodon is permanently a backwater or some solution is found.

@TechConnectify That's interesting and also makes sense. I wonder if it would be possible to construct a filter using Mastodon's filtering system and maybe build that in as a filter that is available by default - both because it would be useful in itself and also as an example of how to build a filter.

That approach might be rudimentary by comparison. And as with all proposed changes on Mastodon, it might be controversial.
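For context on how rudimentary that approach would be: Mastodon's built-in filters do case-insensitive keyword matching, optionally on whole words only. A simplified sketch of that matching logic, as a client might reimplement it (this is an approximation for illustration, not the server's exact algorithm):

```python
import re

def matches_filter(text: str, keywords: list[str], whole_word: bool = True) -> bool:
    """Case-insensitive keyword check, roughly like Mastodon's keyword filters.

    With whole_word=True, "cat" matches "a cat sat" but not "category".
    Simplified sketch for illustration; not the server's exact implementation.
    """
    for kw in keywords:
        pattern = re.escape(kw)
        if whole_word:
            pattern = r"\b" + pattern + r"\b"
        if re.search(pattern, text, re.IGNORECASE):
            return True
    return False
```

The limitation is obvious from the sketch: it can only match surface strings, so it catches slurs and stock phrases but not tone, which is most of what a "quality filter" would need.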

@TechConnectify I think if you want to have 30k followers, you have to find someone or some system that will run an algorithm to do the filtering you want.

I for my part very much want to be in a network with no algorithm, and I don't want to follow or be followed by more than one or two hundred people. If somebody inserts an algorithm between me and my ppl, I'm gone.

@twobiscuits @TechConnectify

I agree. I'd be gone too. But for -

"If somebody inserts an algorithm between me and my ppl *involuntarily*, I'm gone."

@Madagascar_Sky @TechConnectify I really don't want any remote ppl who I don't know deciding what I see. I am on a small local instance and the ppl who started it have formed a registered association and a regular donation is in my plans and I know where to go to meet them in person. That's exactly what I want. >

@Madagascar_Sky @TechConnectify I used to have a birdsite account with follower/following nos. around 1000. It was fun and gave me an educational peep into lots of milieus I otherwise wouldn't have experienced, but it ate up my time and attention so I downsized with a new account. And since the Space Karen Putsch it has dawned on me that all the fun and nice stuff may be just a side product of something that is costing us a price we can't afford – Musk, Zuck & others running the world. >

@Madagascar_Sky @TechConnectify These fuckers are literally more powerful than elected governments, they are complicit in crimes against humanity and I want to withdraw as far as possible from anything they are profiting from. I also wonder if the whole concept of social media has been flawed from the start because it was developed by techbros in silicon valley who quite likely have never experienced living in real-world local communities. >

@Madagascar_Sky @TechConnectify So happy that I can start again but based in a real place in Europe where we have a European consciousness of place and community and can reach out carefully from our little node into the wider world. Peace and love to Mr 30k Followers, but I feel we're in different games or different leagues and I don't want his game to be the norm where I am. I'm not looking for a replacement for X or Insta, I want to reinvent what might have been if they had never happened. <>

@TechConnectify I wonder if this is a problem that third party clients can help address. And/or just a feature they have to build yet. Comparatively, it is still a newer platform and probably hasn’t had to deal much with The Big Accounts Problem so far.

@TechConnectify

Hm~ what's your benchmark for what moderation should care about? It seems clear that you (and others) would benefit from way stricter moderation, so it sounds like there's a raison d'être for an instance with such stricter moderation (not only of its own users, but also of remote users, by way of instance-wide mutes and blocks targeting individual remote users). (It might be impractical to actually run such an instance due to having to police ~all of fedi, but that sounds like something where "moderation subscriptions" might actually work.)

@TechConnectify I don't have anything helpful to add to this ongoing topic, but I do hate the shit you're getting here. We can and should be building better things here.

@TechConnectify

Yeah, people making suggestions lack understanding of how interaction changes when your posts increase the probability that they "break containment."

Individually taking action every time doesn't scale well past a regular reach of maybe a few hundred people.

@TechConnectify [mildly edited repost from an instance you don't see, my bad] It didn't just filter out nasty replies, unfortunately. It also created a "second class" of posters who were shadowbanned for talking to those with it on for daring to stand up for themselves when attacked.

@TechConnectify Attacked by people who were not just trolls but genuinely advocating atrocities, for having "extreme" political opinions that might just be our own right to not be discriminated against, or for having a friend circle where people swear or joke about piss. Regardless of what we were going to say to you.

The Twitter where people could report calls for their genocide and get told those were fine, actually? That was mostly where the second class lived.

@TechConnectify [additional] I also can't remember if the quality filter in practice hid the replies outright rather than just hiding them from your own notifications: I would be highly unsurprised if both behaviours happened at different points, because "curating your replies" became a thing.

@TechConnectify You seem like you want a timeline and a filter curated by algorithms. That’s Threads. Mastodon is always going to be social networking to the bare metal.

@TechConnectify
> What I'm yearning for is a filter to separate signal from noise. Because now everything is signal - which means it's all noise.

And unfortunately, that would require some sort of automated analysis of the post content to gauge things like sentiment or quality, which many here would consider an "algorithm" and rule it out on principle. Maybe the best bet will end up being a third party client that implements those sorts of things locally...
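As a toy illustration of what such a local, user-controlled version could look like, here is a sketch that scores replies with a tiny word list and hides anything below a threshold. The word lists, scoring, and threshold are all invented for illustration; a real client would run an actual sentiment model on-device:

```python
import re

# Toy client-side "quality filter". The word lists and scoring are invented
# for illustration; a real client would use a proper local sentiment model.
NEGATIVE = {"idiot", "stupid", "garbage", "pathetic"}
POSITIVE = {"thanks", "great", "interesting", "love"}

def sentiment_score(text: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def visible_replies(replies: list[str], threshold: int = 0) -> list[str]:
    """Keep replies scoring at or above the threshold; hide the rest locally."""
    return [r for r in replies if sentiment_score(r) >= threshold]
```

Note that a purely client-side gate only protects the reader; unlike Twitter's approach, it does nothing to reduce the reply's visibility for anyone else, which is the trade-off discussed further down the thread.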

@TechConnectify you could do as some people do and spend hours crafting the perfect keyword filter to remove 99% of nasty comments. (That was a joke. For the sake of your sanity, please don’t do that 😅)

@TechConnectify I hope that someone will integrate such a filter in Mastodon eventually, but I wouldn't hold my breath waiting for it. It's a user-centric feature, and Mastodon development is more ideology-centric ("federation" and "no algorithms" above all else).

@TechConnectify I was thinking it might be something like that...

Earlier I was thinking it would be really hard to add an LLM/sentiment analysis layer that beats notifications... But maybe it could be done in the client instead?

Especially given how all the phones have linear algebra coprocessors these days.

@kilpatds It's possible that it could be done client-side, but to be honest I don't just want to be protected from seeing that crap. I also want the mechanism to disincentivize people from sharing it or saying it.

Maybe hiding it is enough, but Twitter's methods kill engagement by hiding that reply for everybody - and honestly I think that's valuable (even if it sometimes causes collateral damage)

@TechConnectify

FWIW I agree that we need front-end algorithms.

But where should they run, and who should decide what gets boosted/muted?

My answer: there should be a client, or a client plugin, run on the front end at the user's choice. Like everything else Free, it will require some coders who know how to do that.

I think such a thing would also help to solve a lot of the static between POC and the prevailing whiteness of admins round here.

@TechConnectify

To me this sounds like something that should be made available in Mastodon apps/readers. Let users tweak the display and do filtering with tools they can control rather than something that happens secretly. I don't know of anything that does this though.

@TechConnectify I'm sorry this is happening to you. I enjoy your stuff, and will miss you if you leave - but also understand.

@TechConnectify I think you nailed it here.

I've been an active participant in the fediverse for 7ish years and, while I can't speak to what it's like to have 34k followers (or, at least, that's how many it looks like you have from here), I have started bumping up against the limitations of signal-to-noise in this environment.

I love the fediverse, I love what it represents, I love what it could be, but Context Collapse is a huge problem, and there just simply aren't tools for dealing with being popular or producing something popular.

... That was a lot of words to say: I hope this problem does not overwhelm you, and I hope that solutions arise eventually.

@irenes thanks again! it's a great problem description, it's very similar to the "reply-guy" problem: unwanted responses that don't violate any rules.

Twitter's approach of "use a discriminatory 'quality content' algo to filter" works to some extent (although at the cost of filtering out some legit stuff), but generally works better for some (cis, white, male) than others, who also face more of the costs (more likely to be filtered out).

@irenes Letting users choose, customize, and design filtering algorithms could potentially be a good path forward if there are anti-oppressive algorithms, transparency about biases, easy customization tools (that don't manipulate people or favor biased algorithms), well-chosen defaults, etc etc. Easier said than done of course.

@jdp23 governance structure would be the core thing in that. however the project is run you can expect the contradictions in the governance model to translate directly and immediately into contradictions in the algorithmic decisions and the social effects they have

@jdp23 compare and contrast "block together" btw, which solves the identical problem but for the benefit of people from marginalized backgrounds, who fundamentally are never going to be well served by any algorithmic approach because nobody has yet designed a governance structure that meets their needs for this sort of thing

@irenes great point about governance structure, and also very much agreed about designing from the perspective of people from marginalized backgrounds. Have you seen Afsaneh Rigot's work on "Design from the Margins"?
belfercenter.org/publication/d

Block Party's another good example of a tool designed from that perspective: the algorithms for lockout filters are simple and understandable, and "Helper view" complements it with a focus on people (not algorithms)

Belfer Center for Science and International Affairs: "Design From the Margins" (a report on a design process that centers the most impacted and marginalized users from ideation to production, rather than retrofitting technology for them post-deployment)

@jdp23 we've been pointed to it before but haven't managed to dig in, thank you for reminding us about it

@irenes It's interesting to think about:

- what would a system look like that's designed around Block Party-like functionality from the beginning?

- how easy would it be to add it to current and emerging fedi software?

Filter Buddy is another interesting project along these lines - homes.cs.washington.edu/~axz/p

homes.cs.washington.edu: Amy X. Zhang, UW CSE (social computing and HCI research)

@jdp23 those are good questions that we do like

we note that the data model for these things is essentially the same "moderation decision list" that we've talked about elsewhere in the context of shared block lists, and this is one of those other use cases that makes us think it's an important concept to decouple from specific conceptions of how it's used or generated

@irenes not sure if i've seen the other discussions but it makes sense, blocklists (as csv files) are recommendations and applying them makes moderation decisions. filtering is a moderation decision too, so is accepting a follower request, allowing/hiding a reply, opening a CW'ed post ... so yes, a good general framework!

@jdp23 well, rather, just a point of terminology - the way we see it is the people who curate the list itself are the ones making moderation decisions

decisions such as "this is rude, this is an expression of hatred, this is a racial dogwhistle", etc

the people applying the list to their own timelines aren't making moderation decisions because moderation is the thing you do when you engage with the content to understand it, and the whole point is to spare them from having to do that