• 7eter@feddit.org · 21 hours ago

      This! Plus opening up the possibility for Google to use private user data with even less concern. So not a privacy win at all.

  • tal@olio.cafe · 21 hours ago

    LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say.

    I mean…they can have non-deterministic outputs. There’s no requirement for that to be the case.

    Randomness might be desirable in some situations, since it helps provide variety in a conversation. But it can be very undesirable in others: no matter how many times I ask “What is 1+1?”, I usually want the same answer.
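
    The distinction above can be sketched with a toy decoder (this is an illustration, not any particular LLM's implementation): at temperature 0 the model greedily picks the highest-scoring token every time, while at higher temperatures it samples from the softmax distribution, so repeated runs can differ.

    ```python
    import math
    import random

    def softmax(logits):
        # Numerically stable softmax over a list of scores.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def pick_token(logits, temperature=1.0):
        """Greedy when temperature == 0: always the highest-scoring token.
        Otherwise, sample from the temperature-scaled softmax distribution."""
        if temperature == 0:
            return max(range(len(logits)), key=lambda i: logits[i])
        probs = softmax([x / temperature for x in logits])
        r = random.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens

    # Greedy decoding: the same token on every call.
    greedy = {pick_token(logits, temperature=0) for _ in range(100)}
    print(greedy)  # {0}

    # Sampling: the chosen token varies across calls.
    sampled = {pick_token(logits, temperature=1.0) for _ in range(100)}
    print(len(sampled))
    ```

    Same inputs, same algorithm; only the sampling step introduces randomness, and it is optional.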

    • kassiopaea@lemmy.blahaj.zone · 8 hours ago

      In theory, it’s just an algorithm that will always produce the same output given the exact same inputs. In practice it’s nearly impossible to get fully deterministic outputs, because floating-point arithmetic on GPUs isn’t reliably repeatable: parallel reductions can accumulate partial sums in different orders from run to run, and floating-point addition isn’t associative, so the results can differ slightly.
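
      A minimal illustration of that non-associativity (plain Python doubles, but the same effect applies to GPU reductions):

      ```python
      vals = [1e16, 1.0, -1e16]

      # Two evaluation orders over the same three numbers:
      a = (vals[0] + vals[1]) + vals[2]  # 1.0 is absorbed into 1e16 first
      b = (vals[0] + vals[2]) + vals[1]  # the large terms cancel first
      print(a, b)  # 0.0 1.0
      ```

      When two candidate tokens have nearly tied scores, a tiny difference like this in an accumulated sum is enough to flip the argmax, so even “deterministic” greedy decoding can vary across runs on parallel hardware.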