• TheReanuKeeves@lemmy.world
    18 days ago

    Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago

    • Strider@lemmy.world
      18 days ago

      I get your point, but yes, I think being actively told something by a seemingly sentient consciousness (which, unfortunately, it appears to be) is a different thing.

      (disclaimer: I know the true nature of LLMs and neural networks and would never want the word AI associated with them)

      Edit: fixed translation error

      • Tracaine@lemmy.world
        18 days ago

        No, you don’t know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god gpt.

        • Strider@lemmy.world
          18 days ago

          I was just starting to read and getting angry, but then… I… I have seen it. I will follow. Bless you and gpt!!

      • Perspectivist@feddit.uk
        18 days ago

        AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it’s the correct term nevertheless.

        • RobotZap10000@feddit.nl
          18 days ago

          If I can call the code that drives the boss’s weapon up my character’s ass “AI”, then I think I can call an LLM AI too.

          • Perspectivist@feddit.uk
            17 days ago

            A linear regression model isn’t an AI system.

            The term AI didn’t lose its value; people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.

        • richmondez@lemdro.id
          17 days ago

          I guess so, but then that is kind of lumping it in with the FPS bot behaviour from the 90s, which was also “AI”. It’s the AI hype that is pushing people to think of it as “intelligent but not organic” instead of “algorithms that give the facade of intelligence”, which is how 90s kids would have understood it.

          • Perspectivist@feddit.uk
            17 days ago

            The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM, while intelligent, is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.

        • Strider@lemmy.world
          18 days ago

          I am aware, but still I don’t agree.

          History will tell later who was ‘correct’, if we make it that far.

    • morto@piefed.social
      17 days ago

      Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have emotional links with is much worse.

      • TheMonk@lemmings.world
        15 days ago

        I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”

        ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online: pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”

    • JohnnyFlapHoleSeed@lemmy.world
      17 days ago

      Yeah… But in order to make bubble hash you need a shitload of weed trimmings. It’s not like you’re just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created… Unless you already had the drugs in the first place.

      Also, Google search results and YouTube videos aren’t personalized for every user, and they don’t try to pretend that they are a person having a conversation with you.