• yesman@lemmy.world · 29 points · 16 days ago

    The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they’re discussing information they’re familiar with.

    But that same person will somehow trust an LLM as an authority on subjects they’re not familiar with. Especially on subjects that are at the edges of, or even outside, human knowledge.

    Sure, I don’t listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.

    • palordrolap@fedia.io · +6/−1 · 16 days ago

      The same used to be said of newspapers (and still ought to be). That is, it’s funny how accurate and informative they appear to be until the topic changes to something about which you have intimate knowledge.

      Far too many people can’t make the logical leap to generalise from that, and even for those who can, it’s an easy trap to fall back into.

    • MysteriousSophon21@lemmy.world · 4 points · 15 days ago

      This is literally the Dunning-Kruger effect in action - people can’t evaluate the quality of AI responses in domains where they lack the knowledge to spot the bs.

    • dylanmorgan@slrpnk.net · +1/−1 · 16 days ago

      They don’t realize that the chatbot’s “ideas” about Hawking radiation were also just posted by a crank on Reddit.