

Yes, that’s the one
Also what does kagis mean…
I recently found a playlist on Spotify called “Sexy Goth Slut Music” which I really like. I made my own playlist from my favorite songs, which also includes “Closer” by NIN
So you just query an AI, just like any other AI, but it posts your request and response publicly on your fedi account??? This shit is fucking stupid. Why would you ever want that?
Cybernews researchers have found that BDSM People, CHICA, TRANSLOVE, PINK, and BRISH apps had publicly accessible secrets published together with the apps’ code.
All of the affected apps are developed by M.A.D Mobile Apps Developers Limited. Their identical architecture explains why the same type of sensitive data was exposed.
What secrets were leaked?
[…] threat actors can easily abuse them to gain access to systems. In this case, the most dangerous of leaked secrets granted access to user photos located in Google Cloud Storage buckets, which had no passwords set up.
In total, nearly 1.5 million user-uploaded images, including profile photos, public posts, profile verification images, photos removed for rule violations, and private photos sent through direct messages, were left publicly accessible to anyone.
So the devs were inexperienced with secure architectures and put a bunch of stuff on the client that should have been on the server side. That leaves the API open for anyone to pull every picture stored on their servers. They then built multiple dating apps on this faulty infrastructure by copy-pasting it everywhere.
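To give an idea of why shipping secrets in client code is so dangerous: anyone can unpack an app bundle and grep the strings for credential-looking patterns. A minimal sketch, using the well-known `AIza` prefix of Google API keys; the config string below is made up for illustration, not taken from these apps.

```python
import re

# Patterns for credentials that sometimes get shipped in client code.
# "AIza..." is the documented prefix of Google API keys; the second
# pattern matches Google Cloud Storage bucket URLs.
SECRET_PATTERNS = [
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),                     # Google API key
    re.compile(r"https://storage\.googleapis\.com/[\w.\-]+"),  # GCS bucket URL
]

def find_secrets(text: str) -> list[str]:
    """Return every credential-looking string found in decompiled client code."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Hypothetical config string, as it might look after unpacking an app bundle:
decompiled = 'apiKey = "AIzaSyA1234567890abcdefghijklmnopqrstuv"'
print(find_secrets(decompiled))
```

If the matched key grants access to an unprotected storage bucket, as reported here, every user photo behind it is one HTTP request away.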
I hope they’re registered in a country with strong data privacy laws, so they actually face consequences for their mismanagement
Interesting fact: many bigger Lemmy instances are already using AI systems to filter dangerous content out of pictures before they even get uploaded.
Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn’t keep up with moderation. I don’t remember the name of the tool, but some people made a program that uses AI to try to recognize these types of images and filter them out. This heavily reduced the moderation load during those attacks.
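The pipeline described above boils down to a score-and-threshold gate that runs before the upload is accepted. A minimal sketch, where `classify` is a stand-in for whatever model the actual tool uses (the name and threshold here are illustrative assumptions, not the real tool):

```python
# Threshold above which an upload is rejected; a real deployment would
# tune this against false-positive rates.
THRESHOLD = 0.8

def classify(image_bytes: bytes) -> float:
    """Stand-in for an ML model returning the probability that the
    image is prohibited content. This dummy flags nothing."""
    return 0.0

def should_reject(image_bytes: bytes) -> bool:
    """Gate an upload: reject when the model's score exceeds the threshold."""
    return classify(image_bytes) > THRESHOLD

print(should_reject(b"example image data"))
```

The point is that a human moderator only ever sees uploads the model lets through, which is why it cut the manual workload so drastically during the attacks.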
Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn’t have to go through large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now; many of them have developed mental health issues through their work and get no medical support. So no matter what you think of AI and whether it’s moral, this is actually one of the few good applications in my opinion
Ah, that makes sense