• DahGangalang@infosec.pub · 1 year ago

    I have to say no, I can’t.

    The best decision I can make is a guess, based on logic I’ve derived from my own experiences, which I then compare and contrast against the current input.

    I will say that “current input” for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read as: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be an operation comparable to what is happening in the current iterations of LLMs/AI.

    Ninjaedit: spelling

    • KᑌᔕᕼIᗩ@lemmy.ml · 1 year ago

      If you can’t make logical decisions then how are you a comp sci major?

      Seriously though, the point is that when making decisions you as a human understand a lot of the ramifications of them and can use your own logic to make the best decision you can. You are able to make much more flexible decisions and exercise caution when you’re unsure. This is actual intelligence at work.

      A language processing system has to have its prompt framed in the right way, it has to have knowledge about the topic in its database, and it only responds in the ways it’s programmed to. It doesn’t understand the ramifications of what it puts out.

      The two “systems” are vastly different in both their capabilities and output. Even with image processing, AI absolutely sucks at driving a car, for instance, whereas most humans can do it safely with little thought.

      • DahGangalang@infosec.pub · 1 year ago

        “and exercise caution when you’re unsure”

        I don’t think that fully encapsulates a counterpoint, but I think it has the beginnings of a solid counterpoint to the argument I’ve laid out above (again, it’s not one I actually devised, just one that really put me on my heels).

        The ability to recognize when it’s out of its depth does not appear to be something modern “AI” can handle.

        As I chew on it, I can’t help but wonder what it would take to have AI recognize that. It doesn’t feel like it should be difficult to have a series of nodes along the information-processing matrix that track “confidence levels”. Though, I suppose that’s kind of what is happening when the creators of these projects try to keep them from processing controversial topics. It’s my understanding that those instances act as something of a short circuit: when confidence “that I’m allowed to talk about this” drops below a certain level, the AI spits out a canned response instead of actually attempting to process the input against the model.
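
        To make that short-circuit idea concrete, here’s a minimal sketch in Python. Everything in it is an assumption for illustration: `topic_confidence` is a hypothetical stand-in for whatever safety classifier a real system might use, and the threshold and blocked terms are made up.

        ```python
        # Minimal sketch of a confidence-gated "short circuit" in front of a model.
        # All names, terms, and values here are hypothetical, for illustration only.

        CANNED_RESPONSE = "I'm not able to discuss that topic."
        CONFIDENCE_THRESHOLD = 0.75  # made-up cutoff

        def topic_confidence(prompt: str) -> float:
            """Hypothetical stand-in for a safety/moderation classifier.

            Returns a score in [0, 1] for how confident the system is that
            it's allowed to talk about this prompt.
            """
            blocked_terms = {"example_blocked_term_a", "example_blocked_term_b"}
            hits = sum(term in prompt.lower() for term in blocked_terms)
            return max(0.0, 1.0 - 0.5 * hits)

        def run_model(prompt: str) -> str:
            """Placeholder for actually processing the input against the model."""
            return f"(model output for: {prompt})"

        def respond(prompt: str) -> str:
            # The short circuit: if confidence "that I'm allowed to talk about
            # this" drops below the cutoff, return the canned response instead
            # of ever running the prompt through the model.
            if topic_confidence(prompt) < CONFIDENCE_THRESHOLD:
                return CANNED_RESPONSE
            return run_model(prompt)

        print(respond("tell me about example_blocked_term_a"))  # canned response
        print(respond("tell me about something ordinary"))      # goes to the model
        ```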

        The above is intended as more of a brain dump than a coherent argument. You’ve given me something to chew on, and for that I thank you!

        • KᑌᔕᕼIᗩ@lemmy.ml · 1 year ago

          Well, it’s an online forum and I’m responding while getting dressed and traveling to an appointment, so concise responses are what you’re gonna get. In a way it’s interesting that I can juggle all of these complex tasks reasonably effortlessly, something else an existing AI cannot do.