We have temporarily locked posting on AskLemmy until the CSAM posting stops.
deleted by creator
I feel like this is an underrated idea. It resonates with the whole notion of making a subset of the internet simpler and more document-like, as with simpler protocols like Gemini etc.
That would still allow links to be posted. Better than allowing image posts, but not a complete solution.
It prevents concerns about hosting CSAM posted by someone else. A categorical improvement I’d say. But yes, nothing’s perfect.
Still better than nothing. Easier for mods of text-only communities to only have text-only posts submitted.
If we then add a few conditions: “no links in the root message” and “OP may not be the first to comment within some unspecified amount of time,” that could make it even easier to limit CSAM.
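A rough sketch of what that "text-only, no links in the root message" check could look like (hypothetical function names; this is not an actual Lemmy feature, just an illustration of the filtering logic):

```python
import re

# Hypothetical pre-submission filter: reject root posts that contain
# links or direct image/file attachments, allowing plain text only.
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def is_allowed_root_post(body: str, has_attachment: bool) -> bool:
    """Return True only for text-only root posts with no links."""
    if has_attachment:
        return False
    return URL_PATTERN.search(body) is None
```

The "OP may not be the first to comment for a while" condition would be a similar check on the comment side, keyed on the root post's author and timestamp.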
deleted by creator
Not an effective solution for a federated service. Just spin up a new instance and give yourself karma. Shoot, there is no centralized service for validating accounts, so just set up 50 alts across 50 instances.
That’s a terrible idea and so easily gamed.
CSAM? What is CSAM? Is it a rewrite of “scam”?
Googles…
Oh no. Oh no no no. Why are people so fucking shit?
Sometimes, when it’s too hard to be better and it’s easier to be worse, people choose to be worse just to feel different than what they are.
Child Sexual Abuse Material.
Today is a bad day to be named Sam.
See Sam.
See Sam run.
See Sam run down to the appropriate bureaucratic office to change their name to something else.
You don’t want to know, holy shit…
Lemmy.world is hosted on Cloudflare and Cloudflare has tools to prevent CSAM uploads. https://blog.cloudflare.com/the-csam-scanning-tool
Important note: this feature is only available for US customers.
Also important to note: this feature will only really work against real CSAM. The images that were posted to this community weren’t real CSAM but were pictures/gifs of adult models, with titles/captions that would imply they were CSAM. I don’t think Cloudflare can do much about those.
At least, the handful of posts that I saw were like this. I’m doubtful that the guy doing this is uploading actual CSAM to the clearnet.
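For context on why such scanners only catch known material: they work by comparing uploads against hash lists of previously identified content. A much-simplified sketch of that lookup flow (real scanners like Cloudflare's use fuzzy/perceptual hashes supplied by organizations such as NCMEC, which survive re-encoding; exact SHA-256 matching is shown here only to illustrate the idea):

```python
import hashlib

# Hash list of known material, populated from an external source in a
# real deployment. (Empty placeholder set here.)
BLOCKED_HASHES: set[str] = set()

def should_block_upload(data: bytes) -> bool:
    """Return True if the upload matches a known-bad hash."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in BLOCKED_HASHES
```

This is exactly why mislabeled-but-legal images slip through: nothing about them appears in any hash list.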
I hope you’re right, because as someone that sometimes browses by new I keep seeing it and it’s upsetting as fuck to think it could be real.
It’s weird they’re targeting AskLemmy communities in particular. I don’t think the .ml and .world communities are even related, are they?
Nah, they’re completely separate communities, so no real link that I can see there.
I dug around a bit, and one of the sites he was using to host the images was some weird 4chan-like image board. But it seems like he may have been trolling them, too, because even though it’s a degenerate board full of racist garbage, it’s not otherwise full of CSAM, and his posts were also deleted from that board eventually, too. So I don’t think they were willingly hosting those images, either. I mention this because I saw some people calling to ban links to that domain, which probably should still be done because it’s a trash website, but not because it’s a CSAM haven of any sort.
It makes me think that this isn’t targeted at any one community, just some random weirdo trying to make the internet a worse place wherever he can.
I frequent a small imageboard and supposedly there is currently a wave of this shit across basically all such sites. The leading theory is it’s being posted by feds as a honeypot or to drive people away from decentralized online discussion, everybody hates it.
Surprisingly enough, even 4chan has standards. Not particularly high standards, but they’re still there.
Idk the other night was pretty real, 😞
Well… it seems there’s some issue with post removal federation. There’s still 2 posts visible from my home instance.
And now it’s definitely cached on our instance. And every other instance with pict-rs enabled.
This is what makes me scared of self-hosting an instance. I would basically be hosting it, and I would be responsible for such content. Might be worth mentioning on [email protected]
Ok, this is interesting. There was one just posted to [email protected], but it got removed from my instance as well.
The account looks deleted from lemm.ee, not found on lemmy.world, banned on lemmy.ml, and empty on lemmy.dbzer0.com.
Perhaps it’s account deletion that doesn’t federate properly.
Account deletion does not federate in general; only banning (+ content removal) does.
Is there a way AskLemmy and other major communities could prevent new users from making posts in the future?
Like an account has to be over a month old to post for example. Maybe that could help prevent these kinds of disgusting attacks
I don’t know if Lemmy has a moderator tool available that could do something like that though.
I don’t quite like that idea. It’s something I really hated on Reddit. It just discourages new people from joining. Besides, you could self host an instance with accounts claiming to be made in 1970.
Unfortunately there aren’t many great options right now. No one likes it, but people posting CSAM are the ones to blame there. They quite literally ruin it for everyone because they’re butthurt about something happening they didn’t like
Do we know what they are butthurt about? There is never an excuse for what they are doing, but I’m curious what happened to set it off, if the reason is known.
Nope, they’re too cowardly to use their actual accounts and are making them anonymously. All we know is that rather than being mature about a mod action and simply leaving and creating an account elsewhere they decided to do this.
Gotcha. Thanks.
Good point. I didn’t think about how easy that would be to fake.
That said, I would still prefer it to some subreddits’ cryptic karma requirements. If it worked, I mean.
And here’s the spot where I point out that using a blockchain for recording accounts would be a good technological fit for a decentralized system like the Fediverse, and then get pilloried for being a “cryptobro” or whatever.
Seriously, all that you’d need to use the blockchain for would be a basic record of “this account holder has this name on that instance” and you get all sorts of unspoofable benefits from that. No tokens, no fancy authentication if you don’t want it, just a distributed database that you can trust.
Instead of preempting criticism/downvotes, perhaps it would help to more clearly describe what kind of implementation of blockchain you mean?
If it would still involve some questionable consensus mechanism that either consumes a large amount of energy (Proof-of-Work) or may benefit larger stakeholders (Proof-of-Stake), then even setting aside the cryptocurrency associations, I’m not sure it’s necessarily worth it. However, if I’m not mistaken, there are implementations that may not require those, but may still provide the sort of benefit you’re suggesting, aren’t there?
I’ve elaborated in some of the subsequent comments. I guess I wanted to “test the waters” a bit, if I got a strong negative reaction for simply mentioning a blockchain-based solution I would have sighed and moved on.
Proof-of-stake doesn’t benefit larger stakeholders any more than it benefits smaller stakeholders, the common “rich-get-richer” objection is based on a misunderstanding of how the economics of staking actually operates. Since every staker gets rewarded in exact proportion to the size of their stake the large stakers and small stakers grow at the same relative rates. It’s actually proof-of-work that has an inherent centralization pressure due to the economies of scale that come from running large mining farms.
Proof-of-stake doesn’t benefit larger stakeholders any more than it benefits smaller stakeholders, the common “rich-get-richer” objection is based on a misunderstanding of how the economics of staking actually operates.
That wasn’t what I was referring to, but I should have phrased that part of my comment better. When I wrote that it may benefit larger stakeholders more, what I meant was that, by my rough understanding, larger stakeholders have more influence or sway over the consensus mechanism. It’s been a while since I last looked into it, so I can’t remember the details exactly, but that’s what I recall of what I read.
It wasn’t the rich-get-richer problem, so much as the rich-hold-outsized-influence problem. Similar but distinct.
It may be counterintuitive, but stakers don’t actually have influence over the consensus mechanism. It’s actually the other way around. Consider it this way; the stake that a staker puts up is a hostage that the staker is providing to the blockchain. If I stake a million dollars worth of Ether, I’m basically telling the blockchain’s users “you can trust me to process blocks correctly because if I fail to do so you can destroy my million dollar stake.” I have a million dollars riding on me following the blockchain’s rules. That’s literally why it’s called a “stake.”
The people who are actually “in charge” of which consensus rules are in use are the userbase as a whole, the ones who pay transaction fees and give Ether value by purchasing it from the validators. If some validators were to go rogue and create a fork that was to their liking but not to the liking of the userbase, the rogue validators would be holding worthless tokens on a blockchain that nobody is using. You can see the effects of this by the way the blockchain is continuing to update in ways that are good for the general userbase but not necessarily for the validators - MEV-burn, for example, is a proposal that would reduce the amount of money that validators could make but there’s no concern that I’ve seen about the validators somehow “rejecting” it. If the userbase wants it the validators can’t reject it without losing much more than they could hope to gain.
Ironically, proof-of-work is more vulnerable to this kind of thing. If a proof-of-work chain were to fork and a substantial majority of the validators didn’t agree with the fork then they could attack it with 51% attacks. The forked chain would need to change its PoW algorithm to stop the attacks, and that would destroy all the “friendly” miners along with the attackers.
Validators in a PoS blockchain could also launch attacks at a contentious fork, but they’d burn their stake in the process whereas the validators that did what the userbase wanted would keep theirs. So there’s a powerful incentive to just go along with the userbase’s desires.
this account holder has this name on that instance
How would that help? A spam bot could just make lots of blockchain wallets.
you get all sorts of unspoofable benefits from that
what are the benefits? I struggle to come up with any benefits.
The issue that was being discussed was blocking accounts from posting if they were younger than a certain age. The blockchain has an unspoofable timestamp on its records.
I see. I’m not convinced that proving the account creation date makes much of a difference here. Obviously the instance records when you sign up, so you would only need this to protect against malicious instances. But if a spammer is manipulating their instance to allow them to spam more, you have a much bigger problem than reliably knowing their account creation date.
It’s a matter of trust. A random instance can always lie and you can only determine “that was a malicious instance that was lying to me” in hindsight after it’s broken that trust. Since a malicious instance-runner can spin up new instances almost as easily as creating new fake accounts you end up with a game of whack-a-mole where the malicious party can always get a few bad actions through before getting whacked. Whereas if user account creation was recorded on a blockchain you don’t need to ever trust the instance in the first place. You can always know for sure that an account is X days old.
A malicious instance-runner could still spin up fresh instances and fake accounts ahead of time, but it forces them to do it X days in advance and now if they want to keep attacking they have a longer delay time on it. A community that’s under attack could set the limit to 30 days, for example, and now the attacker is out of action for a full month until their next crop of fake instances is “ripe.” As always with these sorts of decentralized systems there’s tradeoffs and balances to be struck. The idea is to make things as hard for malicious users as possible without making it harder for the non-malicious ones in the process. Right now the cycle time for the whack-a-mole is “as fast as the attacker wants it to be” whereas with a trustworthy account age authentication layer the cycle time becomes “as slow as the target wants it to be.”
Putting aside that this use case doesn’t meet the five requisites for blockchain use, the fediverse in general, and Lemmy in particular, are already struggling with too much data being stored and moved.
Searching for “the five requisites for blockchain use” isn’t finding anything relevant, what requisites do you mean?
This wouldn’t be storing more data, it would be storing existing data. It would just be putting it somewhere that can be globally read and verified.
How do you store data in a decentralised way without having many redundant copies? The decentralisation of blockchain comes from many machines maintaining their own copy of the entire history. The entire concept inherently stores more data. Your suggestion is to literally store more data; claiming it won’t store more data only suggests you don’t know how blockchain works.
And that’s not even including the overhead of implementing a blockchain in the first place. Or the fact that you’d be storing data on literally every user, even if they never interact with your instance, or even if their instance is entirely blocked from yours. And there’s no way around that: if you do manage to selectively store only some subset of users, then whenever you need data outside that subset you’re trusting the maintainers who do have that user’s data, which, initially, is only the user’s home instance, so we’re back to square one.
Yes, my point is that that sort of thing is exactly what blockchains are for. They handle all of that already. So there’s no need for Fediverse servers to reinvent all of that, they can just use existing blockchains for it.
I’m not saying you’re wrong, but why would this be the first time blockchain stopped illegal activity instead of facilitating it? It’s roughly 15-year-old tech and hasn’t made a significant impact outside of niche projects like cryptocurrencies.
To the first, there are a vast number of legal applications for blockchains.
To the second, it’s not the same tech as it was 14 years ago. There have been a lot of advancements over that period.
If you trace ActivityPub’s lineage back to its origin, it’s 14 years old too - it started as OpenMicroBlogging in 2009. It then became OStatus, which became standardized as ActivityPub. It’s barely the same thing any more. The same thing has happened with blockchains, the version of Bitcoin that launched in 2009 is nothing like the cutting-edge stuff like Ethereum is these days.
As someone (who’s not a fan of the fediverse) put it to me:
Fediverse is web2.5, worst of both web2.0 and web3.0.
I think there’s something to that. Web 2.0 thinking is so instilled in the fediverse’s makers that I’m not entirely sure their solutions can be trusted in the long term.
It makes sense that down the line, when bitcoin and crypto hype finally settles into knowing what’s actually useful, some sort of cryptographic mechanisms will become normal in decentralised tech. BlueSky may make this mainstream.
That’d be nice. Personally, I think the tech is just about ready - Ethereum has solved its environmental issues with proof-of-stake, and has solved its transaction cost issues with rollup-based “layer 2” blockchains. At this point I think the main obstacle is the knee-jerk popular reaction to anything blockchain-related as being some kind of crypto scam. I’m actually quite pleasantly surprised that I haven’t been downvoted through the floor for talking about this here so perhaps there’s a light at the end of the tunnel.
I personally have the knee-jerk reaction. I don’t understand anything you’re saying about blockchain. I’ve heard of farcaster (if you haven’t you might be interested) and nostr (ditto) but don’t know how they work.
The lack of mega downvotes, I’d guess, comes from the fact that people here appreciate the value of decentralisation and also can imagine from experience that a better system is possible than the relatively clumsy “let’s just send copies and requests everywhere”.
In the end I don’t know. But I can see the decentralised social web being where cryptographic technology finds its mainstream landing (BlueSky, like I said, being an interesting space to watch, as it’s the middle ground on that front).
I could try explaining in more accessible terms, if you like. I actually enjoy discussing this stuff but I don’t want to derail the thread or sound like I’m evangelizing.
I think solutions like this are best handled entirely on the back end, the general user wouldn’t even need to know a blockchain was involved. The blockchain would just be a data provider that the instance software is using behind the scenes to track stuff. Just like how a general user has no need to understand how the HTTPS protocol actually operates, they just point their web browser at an address and the technical details are handled behind the scenes.
Are new instances automatically federated? If not, then it seems like making an instance, then hosting content enough to be federated, would be an awful waste of time and money, as I’d expect an instance like that would be quickly defederated.
Somewhat. All the communities have to be looked up manually by users, and followed to continue federating the content into that instance.
But for this purpose the answer is yes. At least as far as I know, you can immediately start posting to other instances. Otherwise private instances would be of no use.
What about new users and new instances requiring manual approval for posts?
Maybe. Some discussion going on at the moment about how to handle it.
Understood. Is that an option for moderators though?
Like I said, I don’t know if Lemmy gives you that option, or if you’d need to set up some kind of bot or an instance-level option.
That would need to be a bot. The problem is that the spammer would just move on to the next community (which they have just done by moving to [email protected]). I just put a tool up that automatically notifies a bunch of admins, mods and community team members when a post gets reported more than 3 times, so please report the posts if you see them.
That’s smart. Glad to hear something like that exists
Preventing any posting in general might be a bit too restrictive IMO. However, I think new users, or users on VPNs, probably shouldn’t be allowed to post images so freely.
I believe lemm.ee has a minimum account age limit before users can upload directly to the instance, and dbzer0 scans all user uploaded images for anything that could be questionable.
Perhaps there should be additional restrictions on stuff linking to images outside of lemmy? I blocked the domain within moments of it appearing on my feed, absolutely disgusting
You’d have to generate a blacklist and maintain it, but also avoid bad faith mods and admins
Removed by mod
Most [email protected] questions are better fit for [email protected] anyways.
Some could go to [email protected] too but tbh I think the user base as a whole is still too low for these differences to be meaningful to most people.
This is exactly what they wanted.
Thanks. Goddamn, WTF is wrong with people?
I don’t think that’s going to be very effective. I haven’t seen any of this, but it sounds like a Sybil attack. AskLemmy isn’t the only vector. lemmy.world is going to need to do something, possibly drastic.
It’s not even just lemmy.world, the same user is reposting to ask lemmy on lemmy.ml now.
What are the victory conditions or payoff for someone posting that here?
Payoff: could be related to the coming Reddit IPO, to make alternatives unappealing or unsustainable.
Thank you for your work on it
I suggest limiting new accounts from uploading photos for 3 days, to prevent abuse.
3 days should be enough to make most people think twice before doing something so stupid, harmful and illegal. Most users don’t upload photos right as they sign up anyway, so the effect on legitimate use should be negligible.
Sweet summer child.
that doesn’t do anything, they’ll just register accounts in advance and wait some days.
we’ve even had spam recently from accounts that had been dormant for months, although it was a different kind of spam.
I’m not saying it will prevent everything, including those with longstanding grudges, but especially if the period is not publicly announced/varies from server to server, then it will stop the impulsive trolls who can’t just make a bunch of accounts.
Similar to the mandatory waiting periods for gun purchases, or the ID-creation waiting period for the community-run Wiimfi online service.
At that point you’ll just discourage any new users if they have to gamble on whether or not their content is actually seen by anyone. Account age really isn’t a good indicator of anything other than someone being dedicated enough to spam. Considering this isn’t the first wave of CSAM attacks, I can assure you that whoever is targeting Lemmy with this is determined enough that account age won’t deter them for long; they’ll just have to slightly adjust their playbook.
Jesus, it’s still going on? I’ve kept away since seeing posts about it a couple days ago, but seriously, what the fuck?
I am a mod bot.
What are you so mod about?
Good bot gives cookie
Wtf, disgusting