• Supermariofan67@programming.dev
    1 year ago

    Given that Facebook and Meta’s other platforms are among the largest distributors of that material, if they scrape Facebook for data this is unfortunately not exactly a surprise…

    • Sapphire Velvet@lemmynsfw.comOP
      1 year ago

      How are you supposed to train the damn thing to detect something without using that thing, though?

      There are various organizations with clearances to handle child abuse images, where the data is handled like the plutonium it is and everybody is vetted. I’m sure they’ve already experimented with developing a bot to detect such images.

      They even make their traditional hash databases available to server admins who want to check their hosted images against them.
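      Conceptually, such a hash check is simple. A minimal sketch (plain SHA-256 against a placeholder blocklist — names and data here are illustrative, not any real clearinghouse's API; production scanners typically use vetted databases and perceptual hashes like PhotoDNA so matches survive resizing and re-encoding):

```python
import hashlib

# Placeholder blocklist of hex digests. In practice these would be
# supplied by a vetted clearinghouse, not hardcoded. (This entry is
# just the SHA-256 of the empty byte string, for demonstration.)
known_bad_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(data: bytes) -> bool:
    """Return True if the content's SHA-256 digest is in the blocklist."""
    return hashlib.sha256(data).hexdigest() in known_bad_hashes
```

      The obvious limitation of cryptographic hashing is that flipping a single pixel changes the digest entirely, which is exactly why real systems favor perceptual hashing.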

      The issue, as the report states, is that nobody will willingly link said bot/database to the training data, because either they don’t want a copyright fight or they don’t want to acknowledge the issue.

  • AutoTL;DR@lemmings.worldB
    1 year ago

    This is the best summary I could come up with:


    More than 1,000 known child sexual abuse materials (CSAM) were found in a large open dataset—known as LAION-5B—that was used to train popular text-to-image generators such as Stable Diffusion, Stanford Internet Observatory (SIO) researcher David Thiel revealed on Wednesday.

    His goal was to find out what role CSAM may play in the training process of AI models powering the image generators spouting this illicit content.

    “Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B,” Thiel’s report said.

    But because users were dissatisfied with these later, more filtered versions, Stable Diffusion 1.5 remains “the most popular model for generating explicit imagery,” Thiel’s report said.

    While a YCombinator thread linking to a blog—titled “Why we chose not to release Stable Diffusion 1.5 as quickly”—from Stability AI’s former chief information officer, Daniel Jeffries, may have provided some clarity on this, it has since been deleted.

    Thiel’s report warned that both figures are “inherently a significant undercount” due to researchers’ limited ability to detect and flag all the CSAM in the datasets.


    The original article contains 837 words, the summary contains 182 words. Saved 78%. I’m a bot and I’m open source!

      • KoboldCoterie@pawb.social
        1 year ago

        While I agree with the sentiment, that’s 2–6 in 10,000,000 images; even if someone were personally reviewing all of the images that went into these datasets (which I strongly doubt), that’s a pretty easy mistake to make when looking at that many images.

        • RecallMadness@lemmy.nz
          1 year ago

          “Known CSAM” suggests researchers ran it through automated detection tools which the dataset authors could have used.

        • Sapphire Velvet@lemmynsfw.comOP
          1 year ago

          They’re not looking at the images, though. They’re scraping. And their own legal defenses rely on them not looking too carefully, lest they cede their position to the copyright holders.