• Match!!@pawb.social · 17 hours ago

    Isn’t this just paranoid schizophrenia? I don’t think ChatGPT can cause that.

    • leftzero@lemmy.dbzer0.com · 1 hour ago

      LLMs are obligate yes-men.

      They’ll support and reinforce whatever rambling or delusion you talk to them about, and provide “evidence” to support it (made up evidence, of course, but if you’re already down the rabbit hole you’ll buy it).

      And they’ll keep doing that as long as you let them, since they’re designed to keep you engaged (and paying).

      They’re extremely dangerous for anyone with the slightest addictive, delusional, suggestible, or paranoid tendencies, and should be regulated as such (but won’t).

    • Skydancer@pawb.social · 8 hours ago

      Could be. I’ve also seen similar delusions in people with syphilis that went un- or under-treated.

    • Alphane Moon@lemmy.world · 17 hours ago

      I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (this can happen with other things too, like psychedelic drugs).

      • SkaveRat@discuss.tchncs.de · 15 hours ago

        I’d say it was either triggered on its own or potentially triggered by drugs, and then he started using an LLM and found all the patterns to feed that schizophrenic paranoia. It’s a very self-reinforcing loop.

      • zzx@lemmy.world · 16 hours ago

        Yup. LLMs aren’t making people crazy, but they are making crazy people worse

    • nimble@lemmy.blahaj.zone · 15 hours ago

      LLMs hallucinate and are generally willing to go down rabbit holes, so if you have some crazy theory you’re more likely to get a false positive from ChatGPT.

      So I think it just exacerbates things more than the alternatives would.