• GoodEye8@lemm.ee · 7 months ago

    It doesn’t need to verify reality; it needs to be internally consistent, and it’s not.

    For example, I was setting up a logging pipeline and one of the filters didn’t work. There was seemingly nothing wrong with the configuration itself, and after some more tests with dummy data I was able to get it working, but it still didn’t work with the actual input data. So I gave the working dummy example and the actual configuration to ChatGPT and asked why the actual configuration didn’t work. After some prompts going over what I had already tried, it ended up giving me back the exact same configuration I had presented as the problem. Humans wouldn’t (or at least shouldn’t) make that error, because it would be internally inconsistent: the problem statement can’t be the solution.
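    (The comment doesn’t name the logging stack or the filter, so the Python sketch below is only a hypothetical stand-in for the failure mode described: a pattern that passes hand-written dummy data but silently rejects the actual input.)

        import re

        # Hypothetical filter pattern: it accepts the dummy line but not the
        # real one, because the real logs carry fractional seconds that the
        # hand-written dummy data never had.
        LINE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (\w+) (.*)$")

        dummy = "2024-01-15 12:00:00 ERROR disk full"       # matches
        actual = "2024-01-15 12:00:00,123 ERROR disk full"  # real input: no match

        print(bool(LINE.match(dummy)), bool(LINE.match(actual)))  # True False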

    But the AI doesn’t have internal consistency, because it doesn’t really think. It’s not making sure that what it says is logical given the information it knows; it’s not trying to make assumptions in order to solve a problem; it can’t even deduce that something true is actually true. All it can do is predict what we would perceive as the answer.
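    (A toy sketch of that last point, nothing like a real LLM’s scale or architecture: a bigram model that only continues statistical patterns. Nothing in it checks whether the output is true or self-consistent.)

        import random
        from collections import Counter, defaultdict

        # Toy next-word predictor: it learns which word tends to follow which,
        # then samples continuations. There is no truth or consistency check
        # anywhere; it only predicts what "looks like" the next word.
        corpus = "the filter works the filter fails the filter works".split()

        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def predict(word):
            counts = following[word]
            return random.choices(list(counts), weights=list(counts.values()))[0]

        word = "the"
        print(word, end=" ")
        for _ in range(6):
            word = predict(word)
            print(word, end=" ")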

    • bastion@feddit.nl · 7 months ago

      Indeed. It doesn’t even trend towards consistency.

      It’s much like the pattern-matching layer of human consciousness. Its function isn’t to filter for truth; it’s to match knowns and potentials to patterns in its environment.

      AI has no notion of critical thinking. It is purely positive “thinking”, in a technical sense: it posits based on what it “knows”, but there is no genuine concept of self, nor of critical thinking, nor even a non-conceptual logic or consistency filter.