A few days ago, Beehaw posted an announcement in their Chat community about the challenges of content moderation and the possibility of leaving Lemmy. That post was eventually locked.
Then, about two days ago, Beehaw posted an announcement in their support community saying that they aren’t confident about using Lemmy long-term, citing their concerns with the platform.
If you currently use Beehaw and want to stay on the federated Lemmy network, consider migrating your account to another instance like lemm.ee.
This was an implementation choice for Lemmy (and Mastodon) but isn’t required by the protocol. It’s a trade-off: caching data locally for rendering vs. network traffic. For other systems built on ActivityPub, think Foursquare-style check-ins ( https://joinmobilizon.org/en/ ), or “a new video has been posted,” or “a new blog entry has been posted” ( https://wordpress.org/plugins/activitypub/ ), it works fine.
Layering a microblogging system on top of it, where you want faster rendering times (and lower network traffic, unless you’re hosting a popular site), is awkward. Try to layer a link aggregator with votes and comments on top of it, with local caching of data… and you get some of the problems that Lemmy is demonstrating.
Building Reddit on top of ActivityPub is much more awkward than “link sharing and commenting on top of ActivityPub,” because the latter doesn’t have to be Reddit.
Caching data everywhere is part of the developers’ original design intention ( https://join-lemmy.org/docs/users/05-censorship-resistance.html ). So while this functionality isn’t a mistake (from the standpoint of the developers), its implications weren’t fully considered back when the total number of posts over several years was a tenth of what was posted in the past two months.
That’s interesting. Does Mobilizon actually not do any mirroring between instances? How does it work when a Mobilizon user accesses a group/community that isn’t in their home instance and posts some content there?
About the WordPress plugin: my impression is that it only works as a broadcasting ActivityPub feed, and the blog authors registered on that WordPress instance have no way to use that account to subscribe to any other ActivityPub feed, correct? If so, that piece of the puzzle would still be missing, and it’s there that we typically find mirroring.
As far as I understand it (and I could be wrong), the ActivityPub protocol offers no way for a user from one instance to publish content (e.g. a reply or a comment) directly into a different instance, that is, without hosting it on their own instance first. So at the moment, the way it works in services like Mastodon/Lemmy is that the user posts content on their own instance referencing the content on the second instance that they are replying to, and the second instance then mirrors it and displays it as a reply to the original post.
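To make that flow concrete, here’s a minimal sketch of a cross-instance reply in ActivityPub terms. All URLs, actor names, and the `deliver` helper are hypothetical, purely for illustration; only the `Create`/`Note` shape, `inReplyTo`, and inbox delivery come from the protocol:

```python
# Sketch of how a cross-instance reply travels in ActivityPub.
# URLs and actor names are made up for illustration.

# 1. The user's HOME instance hosts the reply as a new object,
#    referencing the remote post via "inReplyTo".
reply_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://home.example/u/alice",
    "object": {
        "type": "Note",
        "id": "https://home.example/objects/123",        # lives on alice's instance
        "attributedTo": "https://home.example/u/alice",
        "inReplyTo": "https://remote.example/post/456",  # the post being replied to
        "content": "Great point!",
    },
}

# 2. The home instance delivers the activity to the remote instance's
#    inbox (a server-to-server POST). The remote instance then stores
#    its own copy -- the "mirroring" described above -- and renders it
#    as a reply under the original post.
def deliver(activity, inbox_url):
    # A real server would make a signed HTTP POST; here we only show
    # which fields the receiving side relies on when it mirrors.
    obj = activity["object"]
    return {"mirror_of": obj["id"], "thread": obj["inReplyTo"]}

stored = deliver(reply_activity, "https://remote.example/inbox")
print(stored["mirror_of"])  # https://home.example/objects/123
```

The key point is that the reply’s canonical `id` stays on the author’s home instance; the remote instance only ever holds a copy, which is exactly where the duplication comes from.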
This, as far as I understand it, is the origin of the need for mirroring, not any thirst for “censorship resistance” or “faster rendering time.” I feel the problem still originates from limits in ActivityPub. Or am I wrong? Is there a way to do this in the current protocol without mirroring?
I don’t think the need for lower latency justifies the mirroring. You could still get a fast response time by sending requests directly to the original host, without proxying/mirroring them at all through the service offering the frontend. Just allow cross-domain requests so the client can call the API directly, without needing server-to-server requests for that. Of course, if the host is slow then the request will be slow, but if it’s fast the request will be fast; the responsibility for performance when providing content should fall on the content host. The instance where the user has an account could provide a token as proof that the user belongs to it, and third-party content providers would validate that proof and decide on their end whether the user is allowed to access/post content there directly (subject to the moderation of the content provider, who is the one hosting the content).
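The identity-proof idea could be sketched roughly like this. To be clear, none of this exists in ActivityPub; the key, function names, and claim format are all invented, and a real design would use asymmetric signatures rather than the shared secret used here for brevity:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical sketch of the "identity proof" idea above -- NOT part
# of ActivityPub. A shared secret stands in for a real signing key.
HOME_INSTANCE_KEY = b"home-instance-signing-key"  # made-up secret

def issue_proof(username, home_instance):
    """Home instance signs a claim that this user belongs to it."""
    claim = json.dumps({"user": username, "home": home_instance}).encode()
    sig = hmac.new(HOME_INSTANCE_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode(), sig

def verify_proof(claim_b64, sig):
    """Content host checks the proof before accepting a direct post."""
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(HOME_INSTANCE_KEY, claim, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, sig):
        return json.loads(claim)  # trusted identity; apply local moderation
    return None  # forged or tampered proof: reject the request

claim, sig = issue_proof("alice", "home.example")
identity = verify_proof(claim, sig)
print(identity["user"])  # alice
```

The point of the sketch is the division of responsibility: the home instance only vouches for identity, while the content host keeps full control over whether that identity may post, so no content ever needs to be copied between servers.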
The more troublesome part of this approach would be having to rely on client-side aggregation of the content coming from different providers in order to build a feed. But I think this could still be viable. Or it could be handled by a different type of instance that acts as an indexer but doesn’t really mirror the content, just references it. This would also only be necessary if the user really wants an aggregated feed, which might not always be the case; sometimes you just want to directly browse the feed of a particular community, or your subscriptions from a particular instance.
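Client-side aggregation itself is cheap if each host already returns its feed newest-first: the client just merges the sorted lists locally. A minimal sketch, with entirely made-up feed data standing in for responses fetched from two different instances:

```python
from heapq import merge
from operator import itemgetter

# Sketch of client-side feed aggregation: the client fetches each
# community's feed directly from its host instance and merges them
# locally, newest first -- no server-side mirroring involved.
feed_a = [  # imagine this came from instance A's API, newest first
    {"ts": 300, "title": "post A3"},
    {"ts": 100, "title": "post A1"},
]
feed_b = [  # imagine this came from instance B's API, newest first
    {"ts": 250, "title": "post B2"},
    {"ts": 50, "title": "post B1"},
]

def aggregate(*feeds):
    """Merge already-sorted (newest-first) feeds into one timeline."""
    return list(merge(*feeds, key=itemgetter("ts"), reverse=True))

timeline = aggregate(feed_a, feed_b)
print([p["title"] for p in timeline])
# ['post A3', 'post B2', 'post A1', 'post B1']
```

Since each input is already sorted, the merge is linear in the number of posts shown, which is why this can reasonably run in a client or in a lightweight indexer that stores references rather than copies.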
I mean, I get that for some use cases mirroring would be a good thing, but that could be an entirely separate layer without being required as part of the communication. Making it mandatory places a huge responsibility on the instance host without it being necessarily something that every user needs or even wants. I don’t want to be dependent on what other instances my particular instance decides to mirror in order to access them. What’s the point of the fediverse if, in order to access content from two instances, I have to create separate accounts just because they don’t like each other’s content policy?
If that’s the case, I’ll probably be sticking around! I’m not fond of having the possibility of CSAM, but unlike a lot of lawmakers I see the logical difference between photos and videos (disgusting abuse) versus artificial images (artwork with no basis in reality). I know there are developed and functional nations where the former is absolutely illegal but the latter is not (just FYI, the best examples are Japan and Colombia, and I’ve been to the latter personally), and the instances of both filmed CSAM and IRL sexual assault have gone down in both since the relevant laws were passed.
I get the squick; I’m asexual and very protective of children. It’s just that someone directed me to the evidence (multiple websites with research articles about the psychology of pedophilia, including Wikipedia pages), and while filming real kids being forced into that is evil, I cannot ignore the psychological evidence: even regular, consenting adult porn is illegal in India and China, yet rape and sexual assault in those countries are higher per capita than in the US, Australia, Germany, etc. If a fictional portrayal keeps real people safe, we can’t afford to unquestioningly believe big media when news networks provide information. I’ve seen cops mention “it encourages the sickos” on live TV news, but the evidence isn’t completely backing that up, so I decided the only moral solution is to allow people to exist in privacy instead of witch-hunting.
Aside from that, sickos aren’t the main worry for parents when it comes to kidnapping; divorces where one parent can’t handle losing visitation are common, and, sad and horrifying as it is, a lot of kidnappers are looking to sell their targets into slavery in another country, not make videos that could be traced back to the criminals.
By all means try to remove CSAM of real children, because how would you feel if you were the child who had been abused to make it? How would you feel if that child was someone you know? If someone is being directly hurt then they’re being directly hurt. My problem with almost the entire internet is that if someone is being indirectly hurt, that’s like an SJW saying an Irish immigrant is guilty of racism against African-Americans just for being white-skinned, or an Alt-Right Conservative politician saying a transvestite or homosexual is automatically a sexual offender.
I’ll leave it on this note. I tried to look up an episode of an old kids’ cartoon from the 90s called Bobby’s World on Startpage. Safe Search was absolutely on. Apparently the made-up word I misremembered as the episode title, “Kidzilla”, somehow triggers this: stopitnow.org
No search results; it just loads up that site whenever a search contains that word. That is some serious 1984 thoughtcrime bullshit. If having uncomfortable content is the price I pay for having an uncensored internet, I’ll fight in a court to say “enough is enough, you (politicians) don’t have the right to e-police everyone 24/7/365 just to pretend you aren’t terrible people, and peer-to-peer services NEED to be designed as uncensorable platforms; fix this oversight, because if I have to run 12 commands to delete one image then I’m only running it to protect REAL children, not a collection of pixels or words created entirely from scratch by some poor artist with a disturbing but not necessarily harmful kink”.
Now, if you DO put actual photos, video or audio of CSAM on my planned Lemmy server, I won’t just report that shit to the police, I’ll tell the cops “better find this bastard before I do, because I’ll f-ing murder him; what he’s directly and purposefully supporting - despite my disagreement with the current boundaries of legality - is on the wrong side of even the line I feel should be drawn”.
Seriously, people, we’re talking about protecting free speech here because we’re trying to protect kids from harm, not to escape justice; Big Brother is a creep who watches your children with a predatory gaze too, after all.