• disguy_ovahea@lemmy.world

      COPPA is 100% a threat to online privacy. AI, although just a tool, is absolutely empowering those whose goal is social manipulation through disinformation. Fabricated information can no longer be debunked at the rate it's created, and too many “news” sites rely on trending web scrapers for content. By the time retractions and corrections are published, the masses are already reading the next headline.

    • BearOfaTime@lemm.ee

      ¿Por qué no los dos? (Why not both?)

      Given the massive increase in search-result garbage, it’s pretty clear “AI” is a serious problem.

      And while I find things like COPPA massively invasive, offensive, and nothing more than power brokers reaching for yet more control, I can sidestep it, and it will have the unintended consequence of increasing encryption and public awareness of their bullshit.

      • Womble@lemmy.world

        Honestly, I just haven’t seen this increase. Search results were clogged with blogspam three years ago, and they’re clogged with it now. LLMs seem to have had little effect either way.

    • rottingleaf@lemmy.zip

      They can all be parts of the threat.

      The threat itself is that governments and big corporations have a comprehensive strategy for censoring and controlling the Web. Since the Web is nowadays the only media space that has preserved some appearance of freedom, that’s bad. The end goal is that nobody will hear you scream. I mean, they’ve already succeeded for the most part.

      Parts of that strategy are (I tried to separate them, but they intersect):

      Attracting people to centralized, controlled recommendation systems, which opaquely determine what you’ll see and what you won’t. Since your ability to process information is limited, no outright censorship is even needed. You just won’t ever see the “wrong” information, discourse, or even emotion about something unless you search for it intentionally. That’s Facebook, Reddit, Twitter, search engines, and now also LLM chatbots when used in place of a search engine. (The toy sketch at the end of this comment illustrates the mechanism.)

      Confusing and demoralizing people out of organizing outside those systems. It’s a softer version of the first point: “maybe we won’t decide what you think about, but we will slow you down.” There are actions, laws, and propaganda aimed squarely in that direction. Apathy is death.

      Market pressure - businesses use the Web in particular ways, so small nudges are made to keep that culture on a track that is convenient for control.

      Fake progress and a complexity race - yes, maybe enterprise software has to be complex, but Web technologies and Web browsers don’t. Most of the “new” things apparently exist just to cut off competitors with fewer resources. Also oligopolization.

      Legal climate endorsing oligopolization.

      Then there is outright censorship, prosecution, and bullying.

      And then there are likely cases of mafia-style assassin sh*t, which we wouldn’t know about anyway. I think Aaron Swartz and Ian Murdock may fit here.
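
      To make the recommendation-system point above concrete, here is a deliberately simplified Python sketch. The topic labels, weights, and field names are hypothetical, not any real platform’s ranking code; it only illustrates how silent demotion in a ranked feed can hide content without deleting anything.

      ```python
      # Toy illustration: a ranking step that quietly down-weights disfavored topics.
      # Nothing is deleted, so there is no visible "censorship" -- the demoted item
      # simply never reaches the top of a finite feed.

      DOWNWEIGHTED_TOPICS = {"protest", "strike", "leak"}  # hypothetical labels

      def rank_feed(items, feed_size=20):
          """items: list of dicts with 'engagement' and 'topics' keys (assumed schema)."""
          def score(item):
              s = item["engagement"]
              if DOWNWEIGHTED_TOPICS & set(item["topics"]):
                  s *= 0.05  # silent demotion instead of removal
              return s
          return sorted(items, key=score, reverse=True)[:feed_size]

      posts = [
          {"id": 1, "engagement": 900, "topics": ["sports"]},
          {"id": 2, "engagement": 950, "topics": ["protest"]},  # most engaging, never surfaces
          {"id": 3, "engagement": 400, "topics": ["cooking"]},
      ]
      print([p["id"] for p in rank_feed(posts, feed_size=2)])  # -> [1, 3]
      ```

      Because the feed is finite and ranked, demoting an item is functionally equivalent to removing it for most users, yet nothing ever appears to be censored.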