• VintageGenious@sh.itjust.works · 18 days ago

    Because you’re using it wrong. It’s good for generative text and chains of thought, not symbolic calculations, including math or linguistics.

    • Grandwolf319@sh.itjust.works · 18 days ago

      Because you’re using it wrong.

      No, I think you mean to say it’s because you’re using it for the wrong use case.

      Well this tool has been marketed as if it would handle such use cases.

      I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

      I personally think image recognition is the best use case as it pretty much does what it promises.

      • L3s@lemmy.world · 18 days ago

        Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”

        Dumbing down technical information: “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”

        Another is feeding it an article and asking for a summary, https://hackingne.ws/ does that for its Bsky posts.

        Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”
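
        A minimal sketch of the kind of script that prompt should produce (the /mydir and /newdir paths are just the placeholders from the prompt):

        import shutil
        from pathlib import Path

        src = Path("/mydir")    # placeholder source directory from the prompt
        dst = Path("/newdir")   # placeholder destination directory
        dst.mkdir(parents=True, exist_ok=True)  # make sure the destination exists

        # Move every regular file (not subdirectories) from /mydir into /newdir
        for item in src.iterdir():
            if item.is_file():
                shutil.move(str(item), str(dst / item.name))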

        Asking for it to summarize a theory or protocol, “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”

        • lurch (he/him)@sh.itjust.works · 18 days ago

          It’s not good for summaries. It often gets important bits wrong, like embedded instructions that can’t be summarized.

          • L3s@lemmy.world · 18 days ago

            My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.

        • snooggums@lemmy.world · 18 days ago

          The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write “outrage” instead of “outage”, just like if you wrote it yourself.

          How do you know the answer about why RIP was replaced with RIPv2 is accurate, and not just a load of bullshit like putting glue on pizza?

          Are you really saving time?

          • L3s@lemmy.world · 18 days ago

            Yes, I’m saving time. As I mentioned in my other comment:

            Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was trying to keep my examples short.

            And

            and helps correct my shitty grammar at times.

            And

            Hallucinations are a thing, so validating what it spits out is definitely needed.

            • snooggums@lemmy.world · 18 days ago

              How do you validate the accuracy of what it spits out?

              Why don’t you skip the AI and just use the thing you use to validate the AI output?

              • L3s@lemmy.world · 18 days ago

                Most of what I’m asking it about are things I have a general idea of, and AI is good at making short explanations of complex things. So it’s typically easy to spot a hallucination, and the pieces I don’t already know are easy to verify with a quick Google search.

                Basically, I can get a shorter response with the same outcome and validate those small pieces, which saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify the small bits).

            • snooggums@lemmy.world · 18 days ago

              If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed-down text, then the only time you saved was by not learning how to write dumbed-down text. Plus, you need to know what dumbed-down text should look like to judge whether the output is dumbed down but still accurate.

      • The Hobbyist@lemmy.zip · 18 days ago

        One thing I find useful is turning installation/setup instructions into Ansible roles and tasks. If you’re unfamiliar, Ansible is a tool for automated configuration of large-scale server infrastructure. In my case I only manage two servers, but it is still useful to parse instructions and convert them to Ansible, and it helps me learn and understand Ansible at the same time.

        Here is an example of instructions I find interesting: how to set up Docker on Alpine Linux: https://wiki.alpinelinux.org/wiki/Docker

        Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I’m sure there are other usable models too.

        I find it helpful for programming too, both for getting things done and for learning.

        I would not rely on it for factual information, but it usually does a decent job of pointing in the right direction. Another use I have is help with spell-checking in a foreign language.

      • chiisana@lemmy.chiisana.net · 18 days ago

        Ask it for a second opinion on medical conditions.

        Sounds insane, but they are leaps and bounds better than blindly Googling and self-prescribing every condition under the sun when the symptoms only vaguely match.

        Once the LLM helps you narrow down to a couple of possible conditions based on the symptoms, you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.

        They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with your actual doctor.

        • Sippy Cup@lemmy.world · 18 days ago

          So can WebMD. We didn’t need AI for that. Googling symptoms is a great way to just be dehydrated and suddenly think you’re in kidney failure.

          • chiisana@lemmy.chiisana.net · 18 days ago

            We didn’t stop trying to make faster, safer and more fuel-efficient cars after the Model T, even though it could get us from place A to place B just fine. We didn’t stop pushing for digital access to published content, even though we have physical libraries. Just because something satisfies a use case doesn’t mean we should stop advancing technology.

            • snooggums@lemmy.world · 18 days ago

              AI is slower, less efficient, and less accurate than the older search algorithms.

            • Sippy Cup@lemmy.world · 18 days ago

              We also didn’t make the Model T suggest replacing the engine when the oil light comes on. Cars, as it happens, aren’t that great at self-diagnosis, despite that technology being far simpler and further along than generative models are. I don’t trust the model to tell me what temperature to bake a cake at; I’m sure as hell not going to trust it with medical information. Googling symptoms was risky at best before. It’s a horror show now.

      • chaosCruiser@futurology.today · 18 days ago

        Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.

        Works reasonably well as long as you have some idea of how to write the code yourself. GPT can do it in a few seconds; debugging its output takes like 5-10 minutes, but that’s still faster than my best. Besides, GPT is also fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.
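
        As a toy illustration of that loop, here’s a hypothetical snippet of the sort you might paste in together with its error message, plus the kind of fix that usually comes back:

        from collections import defaultdict

        # The broken version used a plain dict (counts = {}) and crashed with
        # "KeyError: 'apple'" on the first increment; pasting that error along
        # with the snippet is usually enough context for a suggested fix.
        counts = defaultdict(int)   # every missing key now defaults to 0
        for word in ["apple", "banana", "apple"]:
            counts[word] += 1

        print(dict(counts))         # {'apple': 2, 'banana': 1}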

        If you’re already very familiar with the code you’re working on, GPT’s code will look convoluted by comparison. In that case, you can ask GPT for just a rough alpha version and do the debugging and refining yourself in a few minutes.

        • Windex007@lemmy.world · 18 days ago

          That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.

          • TimeSquirrel@kbin.melroy.org · 18 days ago

            It can do that just fine, because it has seen enough examples of working code. It can’t directly count correctly, sure, but it can write “i++;”, incrementing a variable by one in a loop and returning the result. The computer running the generated program is going to be doing the counting.
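
            For example, a counting loop like the sketch below is well within what these models can reliably reproduce, and it’s the running program, not the model, that does the counting:

            def count_letter(word: str, letter: str) -> int:
                count = 0
                for ch in word:      # the program walks the characters one by one
                    if ch == letter:
                        count += 1   # the "i++;" step
                return count

            print(count_letter("strawberry", "r"))  # prints 3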

      • slaacaa@lemmy.world · 17 days ago

        I have it write emails for me in German. I moved there not too long ago, and it works wonders for getting doctor’s appointments, car service, etc. I also have it explain the text, so I’m learning the language.

        I also use it as an alternative to internet search, which is now terrible. It’s not going to help you find something super location-specific, but I can ask it to tell me something about a game/movie without spoilers, or to list Metacritic scores in a table, etc.

        It also works great at summarizing long texts.

        An LLM is a tool; what matters is how you use it. It is stupid, it doesn’t think, and it’s mostly hype to call it AI. But it definitely has its benefits.

    • Prandom_returns@lemm.ee · 17 days ago

      So for something you can’t objectively evaluate? Looking at Apple’s garbage generator, LLMs aren’t even good at summarising.

      • Balder@lemmy.world · 11 days ago

        For reference:

        AI chatbots unable to accurately summarise news, BBC finds

        the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. […] It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. […] 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

        It reminds me that I basically stopped using LLMs for any summarization after this exact thing happened to me. I realized that without reading the original text, I wouldn’t be able to tell whether the output had all the relevant info or whether some of it was made up.

  • whotookkarl@lemmy.world · 18 days ago

    I’ve already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it’s not a primary source for anything, I just get that blank stare, like I have two heads.

  • Grandwolf319@sh.itjust.works · 18 days ago

    There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.

    In that world, tech-savvy people would embrace this tech instead of having to constantly educate people that it is in fact not intelligence.

    • Static_Rocket@lemmy.world · 17 days ago

      That was this reality, very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last two years?

  • whynot_1@lemmy.world · 18 days ago

    I think I have seen this exact post word for word fifty times in the last year.

  • eggymachus@sh.itjust.works · 18 days ago

    A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: ‘Talking Dog For Sale.’

    He rings the bell and the owner appears and tells him the dog is in the backyard.

    The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.

    “You talk?” he asks.

    “Yep” the Lab replies.

    After the guy recovers from the shock of hearing a dog talk, he says, “So, what’s your story?”

    The Lab looks up and says, “Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping, I was one of their most valuable spies for eight years running… but the jetting around really tired me out, and I knew I wasn’t getting any younger so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I’m just retired.”

    The guy is amazed. He goes back in and asks the owner what he wants for the dog.

    “Ten dollars,” the owner says.

    “Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?”

    “Because he’s a liar. He’s never been out of the yard.”

  • gerryflap@feddit.nl · 17 days ago

    These models don’t see single characters but rather tokens representing multiple characters. While I also don’t like the “AI” hype, this image is very one-dimensional hate and misrepresents the usefulness of these models by picking one adversarial example.

    Today ChatGPT saved me a fuckton of time by linking me to the exact GitLab issue that discussed the problem I was having (full system freezes using Bottles installed with Flatpak on Arch). This was the URL it came up with after I explained the problem and gave it the first error I found in dmesg: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/110

    This issue is one day old. When I looked this shit up myself I found exactly nothing useful on either DDG or Google. After this, ChatGPT also provided me with the information that the LTS kernel exists and how to install it. Obviously I verified that stuff before using it, because these LLMs have their limits. Now my system works again, and figuring this out myself would’ve cost me hours because I had no idea what broke. Was it Flatpak, Nvidia, the kernel, Wayland, Bottles, some random shit I changed in a config file two years ago? Well, thanks to ChatGPT, I know.

    They’re tools, and they can provide new insights that can be very useful. Just don’t expect them to always tell the truth, or to actually be human-like

    • lennivelkant@discuss.tchncs.de · 17 days ago

      Just don’t expect them to always tell the truth, or to actually be human-like

      I think the point of the post is to call out exactly that: people preaching AI as replacing humans

    • TeamAssimilation@infosec.pub · 18 days ago

      Still, it’s kinda insane how two years ago we didn’t imagine we would be instructing programs with things like “be helpful but avoid sensitive topics”.

      That was definitely a big step in AI.

  • ClusterBomb@lemmy.blahaj.zone · 17 days ago

    “My hammer is not well suited to cut vegetables” 🤷

    There is so much to say about AI; can we move on from “it can’t count letters and do math”?

    • ReallyActuallyFrankenstein@lemmynsfw.com · 17 days ago

      I get that it’s usually just a dunk on AI, but it is also still a valid demonstration that AI has pretty severe and unpredictable gaps in functionality, in addition to failing to properly indicate confidence (or lack thereof).

      People who understand that it’s a glorified autocomplete will know how to disregard or prompt around some of these gaps, but this remains a litmus test because it succinctly shows you cannot trust an LLM response even in many “easy” cases.

  • Tgo_up@lemm.ee · 18 days ago

    This is a bad example… If I ask a friend “is strawberry spelled with one or two r’s”, they would think I’m asking about the last part of the word.

    The question seems to be specifically made to trip up LLMs. I’ve never heard anyone ask how many of a certain letter are in a word. I’ve heard people ask how you spell a word and whether it’s with one or two of a specific letter, though.

    If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

    • renegadespork@lemmy.jelliefrontier.net · 18 days ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

      This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

      • sugar_in_your_tea@sh.itjust.works · 18 days ago

        Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving actual problems, you’ll see very different applications. If you go into it looking for problems, you’ll find them; likewise, if you go into it looking for use cases, you’ll find them.

        • renegadespork@lemmy.jelliefrontier.net · 17 days ago

          Personally I have yet to find a use case. Every single time I try to use an LLM for a task (even ones they are supposedly good at), I find the results so lacking that I spend more time fixing its mistakes than I would have just doing it myself.

          • Scubus@sh.itjust.works · 17 days ago

            So you’ve never used it as a starting point to learn about a new topic? You’ve never used it to look up a song when you can only remember a small section of lyrics? What about when you want to code a block of code that is simple but monotonous to code yourself? Or to suggest plans for how to create simple structures/inventions?

            Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums and there’s a decent chance the training data included someone asking the question you are currently asking.

            Hell, ask ChatGPT what use cases it would recommend for itself; I’m sure it’ll have something interesting.

            • renegadespork@lemmy.jelliefrontier.net · 17 days ago

              as a starting point to learn about a new topic

              No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get many facts wrong. Why would I then trust it to teach me about something I don’t know about?

              to look up a song when you can only remember a small section of lyrics

              No, because traditional search engines do that just fine.

              when you want to code a block of code that is simple but monotonous to code yourself

              See this comment.

              suggest plans for how to create simple structures/inventions

              I guess I’ve never tried this.

              Anything with a verifyable answer that youd ask on a forum can generally be answered by an llm, because theyre largely trained on forums and theres a decent section the training data included someone asking the question you are currently asking.

              Kind of, but here’s the thing: it’s rarely faster than just using a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as just not using an LLM in the first place, so I default to skipping the middleman.

              Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.

      • Tgo_up@lemm.ee · 17 days ago

        What situations are you thinking of that require reasoning?

        I’ve used LLMs to create software I needed but couldn’t find online.

        • renegadespork@lemmy.jelliefrontier.net · 17 days ago

          Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.

          It rarely makes syntax errors, but it will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. It also constantly misunderstands the assignment and creates something that technically works but doesn’t accomplish the intended task.

    • Grandwolf319@sh.itjust.works · 18 days ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed

      Artificial sugar is still sugar.

      Artificial intelligence implies there is intelligence in some shape or form.

      • JohnEdwa@sopuli.xyz · 18 days ago

        Something that pretends to be or looks like intelligence, but actually isn’t at all, is a perfectly valid interpretation of the word artificial: fake intelligence.

      • corsicanguppy@lemmy.ca · 18 days ago

        Artificial sugar is still sugar.

        Because it contains sucrose, fructose or glucose? Because it metabolises the same way and matches the glycemic index of sugar?

        Because those are all wrong. What are your criteria?

        • Grandwolf319@sh.itjust.works · 17 days ago

          In this example a sugar is something that is sweet.

          Another example is artificial flavours still being a flavour.

          Or like artificial light being in fact light.

      • Tgo_up@lemm.ee · 17 days ago

        Exactly. The naming of the technology would make you assume it’s intelligent. It’s not.

      • Scubus@sh.itjust.works · 17 days ago

        That’s because it wasn’t originally called AI. It was called an LLM. Tech bros trying to sell it and articles wanting to fan the flames started calling it AI, and eventually it became common parlance. No one in the field seriously calls it AI; they generally save that term for general AI, or at least narrow AI, of which an LLM is neither.

  • Allero@lemmy.today · 17 days ago

    Here’s my guess, aside from highlighted token issues:

    We all know LLMs train on human-generated data. And when we ask something like “how many R’s” or “how many L’s” are in a given word, we don’t mean to count them all; we normally mean something like “how many consecutive letters there are, so I can spell it right”.

    Yes, the word “strawberry” has 3 R’s. But what most people are interested in is whether it is “strawberry” or “strawbery”, and their “how many R’s” refers to exactly this, not the entire word.

    • Opisek@lemmy.world · 17 days ago

      But to be fair, as people we would not ask “how many Rs does strawberry have”, but “with how many Rs do you spell strawberry” or “do you spell strawberry with 1 R or 2 Rs”

  • rumba@lemmy.zip · 17 days ago

    Yeah, and you know, I always hated this: screwdrivers make really bad hammers.

  • dan1101@lemm.ee · 18 days ago

    It’s like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.

  • zipzoopaboop@lemmynsfw.com · 17 days ago

    I asked Gemini if the Quest has an SD slot. It doesn’t, but Gemini said it did. Checking the source, it was pulling info from the Vive user manual.

  • artificialfish@programming.dev · 18 days ago

    This is literally just a tokenization artifact. If I asked you how many r’s are in /0x5273/0x7183, you’d be confused too.

  • Fubarberry@sopuli.xyz · 18 days ago

    I asked Mistral/Brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
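
    To see the token split described above for yourself, here’s a minimal sketch using OpenAI’s tiktoken package (assuming the cl100k_base encoding used by GPT-4-era models; other tokenizers split text differently):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")

    # Print each token id alongside the text fragment it stands for,
    # showing that the model never sees the word letter by letter.
    for tok in tokens:
        print(tok, repr(enc.decode([tok])))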