• Melvin_Ferd@lemmy.world · 1 day ago

    Search AI in Lemmy and check out every article on it. It definitely is the media spreading all the hate. And articles like this one are often just money-driven yellow journalism.

    • TimewornTraveler@lemmy.dbzer0.com · 22 hours ago

      all that proves is that lemmy users post those articles. you’re skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.

      if you want to be objective and rigorous about it, you’d have to start by looking at all media publications and comparing their relative bias.

      then you’d have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)

      this is all way more complicated than media brainwashing.

    • Log in | Sign up@lemmy.world · 23 hours ago

      I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.

      In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.

      • someacnt@sh.itjust.works · 17 hours ago

        Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.

        • Log in | Sign up@lemmy.world · 17 hours ago

          Verify every single bloody line of output. Top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.

          People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.

          • someacnt@sh.itjust.works · 17 hours ago

            It’s not that bad; the output isn’t random. From time to time it can produce novel stuff, like new equations for engineering. Also, verification doesn’t take that much effort. At least according to my colleagues, it’s great. It works well for coding well-known stuff, too!

            • Log in | Sign up@lemmy.world · 16 hours ago

              It’s not completely random, but I’m telling you it fucked up, it fucked up badly, time after time, and I had to check every single thing manually. Its correct runs never lasted beyond a handful of lines. If you build something using some equation it invented, you’re insane and should quit engineering before you hurt someone.

      • Melvin_Ferd@lemmy.world · 20 hours ago (edited)

        😆 I can’t believe how absolutely silly a lot of you sound with this.

        An LLM is a tool. Its output is dependent on the input. If that’s the quality of answer you’re getting, then it’s a user error. I guarantee you that LLM answers for many problems are definitely adequate.

        It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.

        Also, another person commented that seeing the pattern you also see means we’re psychotic.

        All I’m trying to suggest is that Lemmy is getting seriously manipulated by the media’s attitude towards LLMs, and I feel these comments really highlight that.

        • Log in | Sign up@lemmy.world · 19 hours ago (edited)

          If that’s the quality of answer you’re getting, then it’s a user error

          No, I know the data I gave it and I know how hard I tried to get it to use it truthfully.

          You have an irrational and wildly inaccurate belief in the infallibility of LLMs.

          You’re also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?

          • Melvin_Ferd@lemmy.world · 15 hours ago (edited)

            Why are you giving it data? It’s a chat and language tool; it’s not data-based. You need something trained for that specific use. I think Wolfram Alpha has better tools for that.

            I wouldn’t trust it to calculate how many patio stones I need for a project. But I trust it to tell me where to find a good source on a topic, or whether a quote was really said by whoever, or when I need to remember something but only have vague pieces, like an old-timey historical factoid about witch burnings and villagers who pulled people through a hole in the church wall, or which princess was a skeptic and sent her scientists to villages to try to calm superstitious panic.

            Other uses are things like digging around my computer and seeing what processes do what, or learning how concepts work in whatever I’m currently studying. So many excellent uses. But I fucking wouldn’t trust it to do any kind of calculation.

            • Log in | Sign up@lemmy.world · 11 hours ago

              Why are you giving it data

              Because there’s a button for that.

              Its output is dependent on the input

              This thing that you said… It’s false.

              • Melvin_Ferd@lemmy.world · 5 hours ago (edited)

                There’s a sleep button on my laptop. Doesn’t mean I would use it.

                I’m just trying to say that you’re describing a feature everyone kind of knows doesn’t work. ChatGPT is not trained to do calculations well.

                I just like technology, and I think and fully believe the left’s hatred of it is not logical. I believe it stems from a lot of media bias and headlines. Why there’s this push from the media is a question I’d like to know more about. But overall, I see a lot of the same makers of bullshit yellow journalism for this stuff on the left as I do for similar bullshit in right-wing spaces towards other things.