• Voroxpete@sh.itjust.works · 7 months ago

    We not only have to stop ignoring the problem, we need to be absolutely clear about what the problem is.

    LLMs don’t hallucinate wrong answers. They hallucinate all answers. Some of those answers will happen to be right.

    If this sounds like nitpicking or quibbling over verbiage, it’s not. This is really, really important to understand. LLMs exist within a hallucinatory false reality. They do not have any comprehension of the truth or untruth of what they are saying, and this means that when they say things that are true, they do not understand why those things are true.

    That is the part that’s crucial to understand. A really simple test of this problem is to ask ChatGPT to back up an answer with sources. It fundamentally cannot do it, because it has no ability to actually comprehend and correlate factual information in that way. This means, for example, that AI is incapable of assessing the potential veracity of the information it gives you. A human can say “That’s a little outside of my area of expertise,” but an LLM cannot. It can only be coded with hard blocks in response to certain keywords to stop it from answering and insert a stock response instead.
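
    To be concrete about what I mean by a hard block: it’s basically a keyword filter bolted on outside the model. A toy sketch of the idea (the keyword list and stock response here are made up for illustration, not anything a real vendor uses):

    ```python
    # Toy illustration of a keyword "hard block" sitting in front of a model.
    # The keywords and stock response are invented placeholders.
    BLOCKED_KEYWORDS = {"medical diagnosis", "legal advice"}
    STOCK_RESPONSE = "I can't help with that. Please consult a qualified professional."

    def guarded_reply(prompt: str, model_reply) -> str:
        lowered = prompt.lower()
        if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
            return STOCK_RESPONSE          # canned text, no model involved at all
        return model_reply(prompt)         # otherwise, whatever the model generates
    ```

    The point being that the “I can’t answer that” behaviour lives outside the model, not inside it.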

    This distinction, that AI is always hallucinating, is important because of stuff like this:

    But notice how Reid said there was a balance? That’s because a lot of AI researchers don’t actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. **Just as no person is 100 percent right all the time, neither are these computers.**

    That is some fucking toxic shit right there. Treating the fallibility of LLMs as analogous to the fallibility of humans is a huge, huge false equivalence. Humans can be wrong, but we’re wrong in ways that allow us the capacity to grow and learn. Even when we are wrong about things, we can often learn from how we are wrong. There’s a structure to how humans learn and process information that allows us to interrogate our failures and adjust for them.

    When an LLM is wrong, we just have to force it to keep rolling the dice until it’s right. It cannot explain its reasoning. It cannot provide proof of work. I work in a field where I often have to direct the efforts of people who know more about specific subjects than I do, and part of how you do that is you get people to explain their reasoning, and you go back and forth testing propositions and arguments with them. You say “I want this, what are the specific challenges involved in doing it?” They tell you it’s really hard, you ask them why. They break things down for you, and together you find solutions. With an LLM, if you ask it why something works the way it does, it will commit to the bit and proceed to hallucinate false facts and false premises to support its false answer, because it’s not operating in the same reality you are, nor does it have any conception of reality in the first place.

    • dustyData@lemmy.world · 7 months ago

      This right here is also the reason why AI fanboys get angry when they are told that LLMs are not intelligent or even thinking at all. They don’t understand that for rational intelligence to exist, the LLM would need an internal, referential inner world of symbols against which to contrast external input (training data), one that is also capable of changing and molding itself to reality and truth criteria. No, tokens are not what I’m talking about. I’m talking about an internally consistent and persistent representation of the world. An identity, which is currently antithetical to the information model used to train LLMs. Let me try to illustrate.

      I don’t remember the details or technical terms, but essentially, animal intelligence needs to experience a lot of things first hand in order to create an individualized model of the world, which is then used to direct behavior (language is just one form of behavior, after all). This is very slow and labor intensive, but it means that animals, when they get good at a skill, are extremely good at adapting it to a messy reality. LLMs are transactional; they rely entirely on correlating patterns in their input. As a result they don’t need years of experience, like humans for example, to develop skilled intelligent responses. They can do it in hours of ingesting training input instead. But at the same time, they can never be certain of their results, and when faced with reality they crumble, because it’s harder for them to adapt intelligently and effectively to the mess of reality.

      LLMs are a solipsism experiment. A child is locked in a dark cave with nothing but a dim light and millions of pages of text; assume immortality and no need for food or water. As there is nothing else to do but look at the text, they eventually develop the ability to understand how the symbols marked on the text relate to each other, how they are usually and typically assembled one next to the other. One day, a slit in the wall opens and the person receives a piece of paper with a prompt, a pencil and a blank page. Out of boredom, the person looks at the prompt, recognizes the symbols and the pattern, and starts assembling symbols on the blank page with the pencil. They are just trying to continue from the prompt with whatever they think would typically follow. The slit in the wall opens again, and the person intuitively pushes the paper they just wrote through the slit.

      For the people outside the cave, leaving prompts and receiving the novel piece of paper, it would look like an intelligent linguistic construction: it is grammatically correct, the sentences are correctly punctuated and structured, the words even make sense, and it says intelligent things in accordance with the training text left inside and the prompt given. But once in a while it seems to hallucinate weird passages. They miss the point that it is not hallucinating; it just has no sense of reality. Its reality is just the text. If the cave were opened and the person trapped inside let out into the light of the world, they would still be profoundly ignorant about it. Given the word sun, written on a piece of paper, they would have no idea that the word refers to the bright burning ball of gas above them. They would know the word, they would know how it is usually used to assemble text next to other words. But they wouldn’t know what it is.

      LLMs are just like that: like the person in this thought experiment, they aren’t actually intelligent. There is no way, currently, for these LLMs to actually sense the real world, or to correlate several sources of sensory input into a mentalese internal model. That is the crux, and the biggest problem in the field of AI as I understand it.

      • UnpluggedFridge@lemmy.world · 7 months ago

        How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?

        I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.

        • dustyData@lemmy.world · 7 months ago

          Not really. Reality is mostly a social construction. If there is no other to check against and bring about meaning, there is no reality, and therefore no hallucinations; more precisely, everything is a hallucination. Since we cannot cross-reference reality with an LLM, and it cannot correct itself to conform to our reality, it will always hallucinate, and it will only coincide with our reality by chance.

          I’m not conflating tokens with anything, I explicitly said they aren’t an internal representation. They’re state and nothing else. LLMs don’t have an internal representation of reality. And they probably can’t given their current way of working.

          • UnpluggedFridge@lemmy.world · 7 months ago

            You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain and have no problem asserting that humans are capable of internal representation. LLMs adhere to grammar rules, present information with a logical flow, express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?

            We take in external stimuli and perform billions of operations on them. This is internal representation. An LLM takes in external stimuli and performs billions of operations on them. But the latter is incapable of internal representation?

            And I don’t buy the idea that hallucinations are evidence that there is no internal representation. We hallucinate. An internal representation does not need to be “correct” to exist.

            • dustyData@lemmy.world · 7 months ago

              Yet we have the same fundamental problem with the human brain

              And LLMs aren’t human brains; they don’t even work remotely similarly. An LLM has more in common with an Excel spreadsheet than with a neuron. Read up on the learning models and pattern recognition theories behind LLMs: they are explicitly designed not to function like humans. So we cannot assume that the same emergent properties exist in an LLM.

                • dustyData@lemmy.world · 7 months ago

                  That’s not how science works. You are the one claiming it does, you have the burden of proof to prove they have the same properties. Thus far, assuming they don’t as they aren’t human is the sensible rational route.

                  • UnpluggedFridge@lemmy.world · 7 months ago

                    Read again. I have made no such claim, I simply scrutinized your assertions that LLMs lack any internal representations, and challenged that assertion with alternative hypotheses. You are the one that made the claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.

    • ???@lemmy.world · 7 months ago

      I fucking hate how OpenAI and other such companies claim their models “understand” language or are “fluent” in French. These are human attributes. Unless they made a synthetic brain, they can take these claims and shove them up their square tight corporate behinds.

      • mamotromico@lemmy.ml · 7 months ago

        I thought I would have an aneurysm reading their presentation page on Sora.

        They are saying Sora can understand and simulate complex physics in 3D space to render a video.

        How can such bullshit go unchallenged? It drives me crazy.

      • EatATaco@lemm.ee · 7 months ago

        This is circular logic: only humans can be fluent, so the models can’t be fluent because they aren’t human.

        And it’s universally upvoted… in response to AIs getting things wrong, as though that proves they can’t be doing anything but hallucinating.

        And will you learn from this? Nope. I’ll just be downvoted and shouted at.

        • Danksy@lemmy.world · 7 months ago

          It’s not circular. LLMs cannot be fluent because fluency comes from an understanding of the language. An LLM is incapable of understanding so it is incapable of being fluent. It may be able to mimic it but that is a different thing. (In my opinion)

          • EatATaco@lemm.ee · 7 months ago

            You might agree with the conclusion, and the conclusion might even be correct, but the poster effectively argued ‘only humans can be fluent, and it’s not a human so it isn’t fluent’ and that is absolutely circular logic.

            • Danksy@lemmy.world · 7 months ago

              If we ignore the other poster, do you think the logic in my previous comment is circular?

              • EatATaco@lemm.ee · 7 months ago

                Hard to say. You claim they are incapable of understanding, which is why they can’t be fluent. However, really, the whole argument boils down to whether they are capable of understanding. You just state that as if it’s established fact, and I believe that’s an open question at this point.

                So whether it is circular depends on why you think they are incapable of understanding. If it’s like the other poster, and it’s because that’s a human(ish)-only trait and they aren’t human… then yes.

        • ???@lemmy.world · 7 months ago

          This is not at all what I said. If a machine was complex enough to reason, all power to it. But these LLMs cannot.

    • el_bhm@lemm.ee · 7 months ago

      They do not have any comprehension of the truth or untruth of what they are saying, and this means that when they say things that are true, they do not understand why those things are true.

      Which can be beautifully exploited with sponsored content.

      See Google I/O '24.

        • el_bhm@lemm.ee · 7 months ago

          Alternative title for this year’s Google I/O: AI vomit. You can watch The Verge’s TL;DW video on Google I/O. There is no panel that did not mention AI. Most of it is “user centric”.

          AI can deliver and gather ad data. The bread and butter for Google.

          As to how it relates to the quote: it is up to Google to make the AI as truthful as they want it to be, and ads are their money driver.

    • nucleative@lemmy.world · 7 months ago

      Well stated and explained. I’m not an AI researcher but I develop with LLMs quite a lot right now.

      Hallucination is a huge problem we face when we’re trying to use LLMs for non-fiction. It’s a little bit like having a friend who can lie straight-faced and convincingly. You cannot tell whether they are telling you the truth or lying until you act on the output.

      I think one of the nearest-term solutions may be the addition of extra layers or observer engines that are very deterministic and trained only on extremely reputable sources, perhaps only peer-reviewed trade journals, for example, or other sources we deem trustworthy. Unfortunately this could only serve to improve our confidence in the facts, not remove hallucination entirely.

      It’s even feasible that we could have multiple observers with different domains of expertise (i.e. different training sources) and voting capability to fact-check and subjectively rate the trustworthiness of the LLM’s output.
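
      To make that a bit more concrete, here’s a minimal sketch of what I mean by voting observers. Everything in it is a placeholder: the observer names, the scoring lambdas, and the simple averaging rule; a real system would plug in separate fact-checking models or retrieval against curated sources.

      ```python
      # Rough sketch of the "multiple observers with voting" idea.
      # The observer checks are stand-ins for separate verifier models.
      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Observer:
          name: str
          domain: str
          check: Callable[[str], float]  # returns a confidence score in [0, 1]

      def rate_output(llm_output: str, observers: List[Observer], threshold: float = 0.5) -> dict:
          votes = {obs.name: obs.check(llm_output) for obs in observers}
          trust = sum(votes.values()) / len(votes)   # simple average; could be weighted by domain relevance
          return {"votes": votes, "trust": trust, "accept": trust >= threshold}

      # Dummy observers that always return fixed scores, just to show the flow.
      observers = [
          Observer("medical", "peer-reviewed medical journals", lambda text: 0.8),
          Observer("legal", "case law database", lambda text: 0.4),
      ]
      print(rate_output("Some generated claim.", observers))
      ```

      Even in a design like this, the voters can only raise or lower our confidence; nothing in it stops the underlying generator from hallucinating in the first place.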

      But all this will accomplish short term is to perhaps roll the dice in our favor a bit more often.

      The perceived results from the end users however may significantly improve. Consider some human examples: sometimes people disagree with their doctor so they go see another doctor and another until they get the answer they want. Sometimes two very experienced lawyers both look at the facts and disagree.

      The thing that prevents me from knowingly stating something as true, despite not actually knowing it, without any ability to back up my claims, is my reputation and my personal values and ethics. LLMs can only pretend to have those traits when we tell them to.

      • Voroxpete@sh.itjust.works · 7 months ago

        Consider some human examples: sometimes people disagree with their doctor so they go see another doctor and another until they get the answer they want. Sometimes two very experienced lawyers both look at the facts and disagree.

        This actually illustrates my point really well. Because the reason those people disagree might be:

        1. Different awareness of the facts (lawyer A knows an important piece of information lawyer B doesn’t)
        2. Different understanding of the facts (lawyer A might have context lawyer B doesn’t)
        3. Different interpretation of the facts (this is the hardest to quantify, as it’s a complex outcome of everything that makes us human, including personality traits such as our biases).

        Whereas you can ask the same question to the same LLM equipped with the same data set and get two different answers because it’s just rolling dice at the end of the day.
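
        That “rolling dice” part is fairly literal, by the way. Generation samples from the model’s next-word probabilities, so repeat runs of the same prompt on the same model can diverge. A toy sketch of that sampling step (the vocabulary and probabilities here are made up):

        ```python
        import random

        # Toy next-token distribution for a made-up prompt; a real model produces
        # a probability for every token in a vocabulary of tens of thousands.
        candidates = ["guilty", "liable", "negligent", "innocent"]
        probabilities = [0.40, 0.30, 0.20, 0.10]

        def sample_next_word(seed=None):
            rng = random.Random(seed)
            return rng.choices(candidates, weights=probabilities, k=1)[0]

        # Same prompt, same model, same data -- different runs, different answers.
        print([sample_next_word() for _ in range(5)])
        ```

        Greedy decoding (always picking the highest-probability word) removes the randomness, but not the underlying problem: the distribution itself encodes no notion of which answer is true.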

        If I sit those two lawyers down at a bar, with no case on the line, no motivation other than just friendly discussion, they could debate the subject and likely eventually come to a consensus, because they are sentient beings capable of reason. That’s what LLMs can only fake through smoke and mirrors.

    • Hello Hotel@lemmy.world · 7 months ago

      Usually, what I see is that the REPL they are using is never introspective enough. The AI can’t, on its own, revert to a previous state or give notes to itself, because a fast, linear-time response matters for a chatbot. ChatGPT can make really cool stuff when you ask it to break its thought process into steps, on tasks it usually fails at spectacularly. But it was like pulling teeth to get it to actually do the steps and not just give the bad answer anyway.
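
      For what it’s worth, the kind of prompt I mean is roughly the following; the task, the wording, and the model name are just illustrative, and this uses the OpenAI Python SDK (v1.x) purely as an example client:

      ```python
      # Example of asking for explicit steps before the final answer,
      # via the OpenAI Python SDK. Model name and task are placeholders.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      prompt = (
          "Before giving a final answer, list the steps of your reasoning as a "
          "numbered list, check each step, and only then state the answer.\n\n"
          "Task: How many times do the hour and minute hands of a clock overlap in 24 hours?"
      )

      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )
      print(resp.choices[0].message.content)
      ```

      Even then, nothing forces the listed “steps” to be the process that actually produced the answer, which is the pulling-teeth part.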

    • 5gruel@lemmy.world · 7 months ago

      I’m not convinced by the “a human can say ‘that’s a little outside my area of expertise’, but an LLM cannot” part. I’m sure there are a lot of examples in the training data set that contain qualified answers and expressions of uncertainty, so why would the model not be able to generate that output? I don’t see why it would require “understanding” for that specifically. I would suspect that better human reinforcement would make such answers possible.

      • dustyData@lemmy.world · 7 months ago

        Because humans can do introspection, and can think and reflect about our own knowledge against the perceived expertise and knowledge of other humans. There’s nothing in LLM models capable of doing this. An LLM cannot assess its own state, and even if it could, it has nothing to contrast it with. You cannot develop the concept of ignorance without an other to interact and compare with.

    • UnpluggedFridge@lemmy.world · 7 months ago

      I think where you are going wrong here is assuming that our internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, thus we are able to check these hallucinations against some outside stimulus. Your gripe that current LLMs are unable to do that is really a criticism of the current implementations of AI, which are trained on some data, frozen, then restricted from further learning by design. Imagine if your mind was removed from all stimulus and then tested. That is what current LLMs are, and I doubt we could expect a human mind to behave much better in such a scenario. Just look at what happens to people cut off from social stimulus; their mental capacities degrade rapidly, and that is just one type of stimulus.

      Another problem with your analysis is that you expect the AI to do something that humans cannot do: cite sources without an external reference. Go ahead right now and from memory cite some source for something you know. Do not Google search, just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment. Current LLMs do not have that by design. Once again, this is a gripe with implementation of a very new technology.

      The main problem I have with so many of these “AI isn’t really able to…” arguments is that no one is offering a rigorous definition of knowledge, understanding, introspection, etc in a way that can be measured and tested. Further, we just assume that humans are able to do all these things without any tests to see if we can. Don’t even get me started on the free will vs illusory free will debate that remains unsettled after centuries. But the crux of many of these arguments is the assumption that humans can do it and are somehow uniquely able to do it. We had these same debates about levels of intelligence in animals long ago, and we found that there really isn’t any intelligent capability that is uniquely human.

      • mindlesscrollyparrot@discuss.tchncs.de · 7 months ago

        This seems to be a really long way of saying that you agree that current LLMs hallucinate all the time.

        I’m not sure that the ability to change in response to new data would necessarily be enough. They cannot form hypotheses and, even if they could, they have no way to test them.

        • UnpluggedFridge@lemmy.world · 7 months ago

          My thesis is that we are asserting the lack of human-like qualities in AIs that we cannot define or measure. Assertions should be made on data, not uneasy feelings arising when an LLM falls into the uncanny valley.

          • mindlesscrollyparrot@discuss.tchncs.de · 7 months ago

            But we do know how they operate. I saw a post a while back where somebody asked the LLM how it was calculating (incorrectly) the date of Easter. It answered with the formula for the date of Easter. The only problem is that that was a lie. It doesn’t calculate. You or I can perform long multiplication if asked to, but the LLM can’t (ironically, since the hardware it runs on is far better at multiplication than we are).
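
            To underline the difference between reciting a formula and computing with it: calculating Easter really is a mechanical procedure. This is the standard anonymous Gregorian (Meeus/Jones/Butcher) Computus, and the hardware executes it trivially when you run it as code instead of asking a language model to pattern-match its way to a date:

            ```python
            def gregorian_easter(year: int) -> tuple[int, int]:
                """Anonymous Gregorian (Meeus/Jones/Butcher) Computus: returns (month, day)."""
                a = year % 19
                b, c = divmod(year, 100)
                d, e = divmod(b, 4)
                f = (b + 8) // 25
                g = (b - f + 1) // 3
                h = (19 * a + b - d - g + 15) % 30
                i, k = divmod(c, 4)
                l = (32 + 2 * e + 2 * i - h - k) % 7
                m = (a + 11 * h + 22 * l) // 451
                month, day = divmod(h + l - 7 * m + 114, 31)
                return month, day + 1

            print(gregorian_easter(2024))  # (3, 31) -> 31 March 2024
            ```

            The point stands: an LLM can reproduce text describing this procedure, but nothing in next-word prediction actually executes it.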

            • UnpluggedFridge@lemmy.world · 7 months ago

              We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomenon emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.

    • KeenFlame@feddit.nu · 7 months ago

      Very long layman take. Why is your guesstimation so incredibly crucial to understand, then the next thing important to understand, then really, really important to understand, over and over, when you are not an expert?

    • EatATaco@lemm.ee · 7 months ago

      they do not understand why those things are true.

      Some researchers compared the results of questions between ChatGPT 3 and 4. One of the questions was about stacking items in a stable way. ChatGPT 3, in line with what you are saying about “without understanding”, just listed the items, saying to place them one on top of the other. No way it would have worked.

      ChatGPT 4, however, said that you should put the book down first, put the eggs in a 3 x 3 grid on top of the book, trap them in place with a laptop so they don’t roll around, and then put the bottle on top of the laptop standing up, and then balance the nail on top of it… even noting you have to put the flat end of the nail down. This sounds a lot like understanding to me, and not just rolling the dice hoping to be correct.

      Yes, AI confidently gets stuff wrong. But let’s all note that there is a whole subreddit dedicated to people being confidently wrong. One doesn’t need to go any further than Lemmy to see people confidently claiming to know the truth about shit they should know is outside of their actual knowledge. We’re all guilty of this, including refusing to learn when we are wrong. Additionally, the argument that they can’t learn doesn’t make sense, because models have definitely become better.

      Now I’m not saying AI is conscious, I really don’t know, but all of the shortcomings you’ve listed humans are guilty of too. So using them as examples of why it’s always just hallucination, or of why our thoughts are not, doesn’t seem to hold much water to me.

      • insaan@leftopia.org · 7 months ago

        the argument that they can’t learn doesn’t make sense because models have definitely become better.

        They have to be either trained with new data or their internal structure has to be improved. It’s an offline process, meaning they don’t learn through chat sessions we have with them (if you open a new session it will have forgotten what you told it in a previous session), and they can’t learn through any kind of self-directed research process like a human can.

        all of your shortcomings you’ve listed humans are guilty of too.

        LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.

        • EatATaco@lemm.ee · 7 months ago

          They have to be either trained with new data or their internal structure has to be improved. It’s an offline process, meaning they don’t learn through chat sessions we have with them (if you open a new session it will have forgotten what you told it in a previous session), and they can’t learn through any kind of self-directed research process like a human can.

          Most human training is done through the guidance of another. Additionally, most of that training is done through an automated process where some computer is just churning through data. And while you are correct that the context does not persist from one session to the next, you can in fact teach it something and it will maintain it during the session. Moving to a new session is just like talking to a completely different person, so you’re basically arguing “well, I explained this one thing to another human, and this human doesn’t know it… so how can you claim it’s thinking?” And just imagine the disaster that would happen if you allowed it to be trained by anyone on the web. It would be spitting out memes, racism, and right-wing propaganda within days. lol

          They don’t think or understand in any way, full stop.

          I just gave you an example where this appears to be untrue. There is something that looks like understanding going on. Maybe it’s not, I’m not claiming to know, but I have not seen a convincing argument as to why. Saying “full stop” instead of an actual argument as to why just indicates to me that you are really saying “stop thinking.” And I apologize but that’s not how I roll.

          • insaan@leftopia.org · 7 months ago

            Most human training is done through the guidance of another

            Let’s take a step back and not talk about training at all, but about spontaneous learning. A baby learns about the world around it by experiencing things with its senses. They learn a language, for example, simply by hearing it and making connections - getting corrected when they’re wrong, yes, but they are not trained in language until they’ve already learned to speak it. And once they are taught how to read, they can then explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.

            An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

            you can in fact teach it something and it will maintain it during the session

            It’s still not learning anything. LLMs have what’s known as a context window that is used to augment the model for a given session. It’s still just text that is used as part of the response process.
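
            Mechanically, “maintaining it during the session” is nothing more than replaying the conversation so far as input on every turn; once the text falls out of the window, it’s gone. A bare-bones sketch, where the window size and the character-based trimming rule are arbitrary stand-ins for a real token limit:

            ```python
            # The "memory" of a chat session is just accumulated text fed back in each turn.
            MAX_CONTEXT_CHARS = 2000  # arbitrary stand-in for a real token limit

            def build_prompt(history: list[str], new_message: str) -> str:
                history.append("User: " + new_message)
                context = "\n".join(history)
                # When the window overflows, the oldest text is simply dropped -- "forgotten".
                if len(context) > MAX_CONTEXT_CHARS:
                    context = context[-MAX_CONTEXT_CHARS:]
                return context + "\nAssistant:"

            history: list[str] = []
            print(build_prompt(history, "My name is Ada."))
            print(build_prompt(history, "What is my name?"))  # only works while the earlier line is still in the window
            ```

            Nothing about the model’s weights changes in that loop; it’s the same frozen model rereading a longer prompt.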

            They don’t think or understand in any way, full stop.

            I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.

            You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.” This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it’s a part of. There is no thinking or understanding whatsoever.
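
            In code terms, that word-by-word process looks roughly like the loop below. `next_word_distribution` is a stand-in for the actual network, which is the part doing the heavy linear algebra; the tiny fixed vocabulary is invented for illustration:

            ```python
            import random

            def next_word_distribution(text_so_far: str) -> dict[str, float]:
                # Stand-in for the real model: maps the text so far to probabilities
                # over possible next words. A real LLM computes this with a neural network.
                return {"the": 0.5, "a": 0.3, "Paris": 0.2}

            def generate(prompt: str, n_words: int = 10) -> str:
                text = prompt
                for _ in range(n_words):
                    dist = next_word_distribution(text)
                    words, probs = zip(*dist.items())
                    text += " " + random.choices(words, weights=probs, k=1)[0]
                return text

            print(generate("The capital of France is"))
            ```

            At no point in that loop is there a check of whether the chosen word makes the sentence true; “likely next word” is the only criterion.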

            This is why @Voroxpete@sh.itjust.works said in the original post to this thread, “They hallucinate all answers. Some of those answers will happen to be right.” LLMs have no way of knowing if any of the text they generate is accurate, for the simple fact that they don’t know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us. But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

            • EatATaco@lemm.ee · 7 months ago

              An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

              But this is a deliberate decision, not an inherent limitation. The model could get feedback from the outside world; in fact, this is how it’s trained (well, data is fed back into the model to update it). Of course we are limiting it to words, rather than the whole slew of inputs that a human gets. But keep in mind we have things like music and image generation AI as well, so it’s not like it can’t also be trained on those things. Again, a deliberate decision rather than an inherent limitation.

              We both even agree it’s true that it can learn from interacting with the world; you just insist that because that learning isn’t persisted, it doesn’t actually count. But it does persist, just not the new inputs from users. And this is done deliberately to protect the models from what would inevitably happen. That being said, it’s also been fed arguably more input than a human would get in their whole life, just condensed into a much smaller period of time. So if it’s about “total input”, then the AI is going to win, hands down.

              You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.”

              I’m not ignoring this. I understand that it’s the whole argument, it gets repeated around here enough. Just saying it doesn’t make it true, however. It may be true, again I’m not sure, but simply stating and saying “full stop” doesn’t amount to a convincing argument.

              They simply do not think, much less understand.

              It’s not as open and shut as you wish it to be. If anyone is ignoring anything here, it’s you ignoring the fact that it went from basically just, as you said, randomly stacking the objects it was told to stack stably, to actually doing so in a way that could work and describing why you would do it that way. Additionally, there is another case where they asked GPT-4 to draw a unicorn using an obscure programming language. And you know what? It did it. It was rudimentary, but it was clearly a unicorn, and this is a model that wasn’t trained on images at all. They even messed with the code, turning the unicorn around, removing the horn, fed it back in, and then asked it to replace the horn, and it put it back on correctly. It seemed to understand not only what a unicorn looked like, but what the horn was and where it should go when it was removed.

              So to say it can just “generate more words” is something you could accuse us of as well, or it’s possibly just overly reductive of what it’s capable of even now.

              But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

              There are all kinds of problems with human memory, where we imagine things all of the time. Have you ever taken acid? If so, you would see how unreliable our brains are at interpreting reality. And you want to really trip? Eyewitness testimony is basically garbage. I exaggerate a bit, but there are so many flaws with it, with people remembering things that didn’t happen, and it’s so easy to create false memories, that it’s not as convincing as it should be. Hell, it can even be harmful by convicting an innocent person.

              Every shortcoming you’ve used to claim AI isn’t really thinking is something shared with us. It might just be inherent to intelligence to be wrong sometimes.

              • feedum_sneedson@lemmy.world · 7 months ago

                It’s exciting either way. Maybe it’s equivalent to a certain lobe of the brain, and we’re judging it for not being integrated with all the other parts.