Reddit has a new AI training deal to sell user content

Reddit has reportedly made a deal with an unnamed AI company to allow access to its platform’s content for the purposes of AI model training.

    • NoRodent@lemmy.world · 10 months ago

      I mean, there’s /r/SubSimulatorGPT2 that’s been running for years… Although that one was at least hilarious to read, because at that stage the AI was in the sweet spot of being coherent while simultaneously making total lapses in logic.

      • fuckwit_mcbumcrumble@lemmy.world · 10 months ago

        Yeah, I thought that was pretty well the established consensus on the thing. People questioning it confuses me, honestly.

    • NeatNit@discuss.tchncs.de · 10 months ago

      It’s all but guaranteed. Reminds me of this Computerphile video: https://youtu.be/WO2X3oZEJOA?t=874 TL;DW: there were “glitch tokens” in GPT (and therefore ChatGPT) which undeniably came from Reddit usernames.

      Note, there’s no proof that these Reddit usernames were in the training data (and there are even reasons to assume that they weren’t, watch the video for context), but there’s no doubt that OpenAI had already scraped Reddit data at some point prior to training, probably mixed in with all the rest of their text data. I see no reason to assume they completely removed all Reddit text before training. The video suggests reasons and evidence that they removed certain subreddits, not all of Reddit.
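      If you want to poke at this yourself, here’s a rough sketch (mine, not from the video) using the tiktoken library, assuming it’s installed, to check a couple of the username-derived glitch tokens such as SolidGoldMagikarp against the old GPT-2/GPT-3 vocabulary:

        # Rough sketch (assumes the `tiktoken` package is installed): check whether
        # some of the username-derived glitch tokens exist as single tokens in the
        # GPT-2/GPT-3 byte-pair-encoding vocabulary.
        import tiktoken

        enc = tiktoken.get_encoding("r50k_base")  # the GPT-2/GPT-3 BPE vocabulary

        # Note the leading space: BPE tokens usually include the preceding space.
        for name in [" SolidGoldMagikarp", " TheNitromeFan"]:
            ids = enc.encode(name)
            print(f"{name!r} -> {ids} ({len(ids)} token(s))")
            # If a whole username comes back as a single token id, that string was
            # common enough in the tokenizer's training text to earn its own token.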

  • kingthrillgore@lemmy.ml · 10 months ago

    When spez took away API access, he basically shit on the social contract that offered a fair exchange: free access for the content we fed into Reddit. After the API change, the new terms are simple: there is no contract. There are no terms. If you use Reddit now, you are giving away everything you are to be indexed and mangled by statistics. You exist as free labor for statisticians and machines.

    You are more than a few cents of bad memes.

    I’m going to make the request in the AM that Lemmy should add robots.txt rules to disallow AI crawlers, to at least indicate we’re not interested. We need legislation that tells scrapers what they can access.
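    Something along these lines is what I have in mind. It’s just a sketch using user-agent tokens that a few AI crawlers have published (OpenAI’s GPTBot, Common Crawl’s CCBot, Google’s Google-Extended for AI training), and of course compliance with robots.txt is entirely voluntary:

      User-agent: GPTBot
      Disallow: /

      User-agent: CCBot
      Disallow: /

      User-agent: Google-Extended
      Disallow: /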

    • General_Effort@lemmy.world · 10 months ago

      “We need legislation that tells scrapers what they can access.”

      What do you hope that would achieve?

      Because I can only see this as benefitting Reddit, Facebook, and the like, while screwing over smaller players.

    • Crack0n7uesday@lemmy.world · 10 months ago

      They can and do, but they want the training data to come from highly moderated sources; otherwise every AI chatbot would be spewing the most racist parts of 4chan, because people would train it that way as a joke.

      If you let AI roam freely across the internet, it would only learn porn, Sailor Moon, Dragon Ball Z, and Nazi Germany.

    • Verserk@lemmy.dbzer0.com · 10 months ago

      Anything can; the difference is that Reddit holds the exclusive rights to user comments on their site, and they’ve chosen to sell them.

    • Steak@lemmy.ca · 10 months ago

      Dick dick pussy cunt cock dick pussy ass shit cunt shit motherfucker shit motherfucker ass tits cunt cock motherfucker shit ass tits motherfucker shit c’mon. Scrape that🔥

  • General_Effort@lemmy.world · 10 months ago

    They say it’s $60 million on an annualized basis. I wonder who’d pay that, given that you can probably scrape it for free.

    Maybe it’s the AI Act in the EU; that might cause trouble in that regard. The US is seeing a lot of rent-seeker PR too, of course. That might cause some to hedge their bets.

    Maybe some people have not realized it yet, but limiting fair use does not just benefit the traditional media corporations; it also benefits the likes of Reddit, Facebook, Apple, etc. Making “robots.txt” legally binding would only benefit the tech companies.

  • AutoTL;DR@lemmings.world (bot) · 10 months ago

    This is the best summary I could come up with:


    Reddit will let “an unnamed large AI company” have access to its user-generated content platform in a new licensing deal, according to Bloomberg yesterday.

    The deal, “worth about $60 million on an annualized basis,” the outlet writes, could still change as the company’s plans to go public are still in the works.

    The news also follows an October story that Reddit had threatened to cut off Google and Bing’s search crawlers if it couldn’t make a training data deal with AI companies.

    Last year, it successfully stonewalled its way out of the biggest protest in its history after changes to its third-party API access pricing caused developers of the most popular Reddit apps to shut down.

    As Bloomberg writes, Reddit’s year-over-year revenue was up by 20 percent by the end of 2023, but it was still $200 million shy of a $1 billion target it had set two years prior.

    The company was reportedly advised to seek a $5 billion valuation when it opens up for public investment, which is expected to happen in March.


    The original article contains 346 words, the summary contains 175 words. Saved 49%. I’m a bot and I’m open source!

  • Lvxferre@mander.xyz · 10 months ago

    For anyone looking for a gibberish generator to replace their Reddit content with, here’s one. This shit is like poison for those large models.

    For automated editing I’m not sure what people can use nowadays; back then, just before the APIcalypse, I used Power Delete Suite. I’m not sure if it still works, and I’m not creating a Reddit account just to test it out.
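    If you just want the flavor of how something like that works, here’s a toy sketch of my own (not the tool linked above): a tiny word-level Markov chain that turns any text into plausible-looking nonsense:

      # Toy gibberish generator: a word-level Markov chain. Feed it any corpus
      # (e.g. your own comment history) and it spits out locally plausible nonsense.
      import random
      from collections import defaultdict

      def build_chain(text):
          """Map each word to the list of words that follow it in the text."""
          words = text.split()
          chain = defaultdict(list)
          for a, b in zip(words, words[1:]):
              chain[a].append(b)
          return chain

      def gibberish(chain, length=40):
          """Random-walk the chain to produce grammatical-ish word salad."""
          word = random.choice(list(chain))
          out = [word]
          for _ in range(length - 1):
              followers = chain.get(word)
              word = random.choice(followers) if followers else random.choice(list(chain))
              out.append(word)
          return " ".join(out)

      corpus = "the quick brown fox jumps over the lazy dog and the lazy dog naps"
      print(gibberish(build_chain(corpus), length=20))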

      • greaprr@sh.itjust.works · 10 months ago

      Not that I’m against telling Reddit to fuck off in no uncertain terms, but won’t providing this kind of poisoning to AI training just make it more resilient to exactly this kind of thing?