• adarza@lemmy.ca
    link
    fedilink
    English
    arrow-up
    286
    ·
    13 days ago

    AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

    nothing to do with actual capabilities… just the ability to make piles and piles of money.

    • LostXOR@fedia.io
      link
      fedilink
      arrow-up
      43
      arrow-down
      2
      ·
      13 days ago

      Guess we’re never getting AGI then, there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

      • hemmes@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        36
        ·
        13 days ago

        AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .COM situation.

        • lazynooblet@lazysoci.al
          link
          fedilink
          English
          arrow-up
          59
          arrow-down
          2
          ·
          13 days ago

          The vast majority of those implementations are worthless - mostly ignored by their intended users and seen as a useless gimmick.

          LLMs have their uses, but right now companies are pushing them into every area to see what sticks.

          • Benjaben@lemmy.world
            link
            fedilink
            English
            arrow-up
            20
            arrow-down
            1
            ·
            13 days ago

            Not the person you replied to, but I think you’re both “right”. The ridiculous hype bubble (I’ll call it that for sure) put “AI” everywhere, and most of those are useless gimmicks.

            But there are also already uses that offer things I’d call novel and useful enough to have some staying power, which also means they’ll be iterated on and improved to whatever degree there’s useful stuff there.

            (And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)

            • Auli@lemmy.ca
              link
              fedilink
              English
              arrow-up
              18
              ·
              12 days ago

              I think people misunderstand what a bubble is. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn’t mean AI isn’t useful, just that most of the chaff will disappear.

              • kbal@fedia.io
                link
                fedilink
                arrow-up
                2
                arrow-down
                5
                ·
                12 days ago

                The dotcom bubble was based on technology that had already been around for ten years. The AI bubble is based on technology that doesn’t exist yet.

            • anomnom@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              2
              ·
              12 days ago

              Yeah, so it’s a question of whether OpenAI loses too many of its investors once all the users that don’t stick fall away.

          • hemmes@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            arrow-down
            7
            ·
            13 days ago

            To each his own, but I use Copilot and the ChatGPT app productively on a daily basis. The Copilot integration with our SharePoint files is extremely helpful - I’m able to surface data that would not show up in a standard search over file names and content indexing.

        • Alphane Moon@lemmy.world
          link
          fedilink
          English
          arrow-up
          14
          ·
          edit-2
          12 days ago

          To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.

          LLMs clearly have utility, but is it enough to turn them into a profitable business line?

          • hemmes@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            10
            ·
            13 days ago

            You’re right about the definition, and I do think the LLMs will aid in a product offering’s profitability, if not directly generate profits. But OP didn’t mean economically, they meant LLMs will go the way of slap bracelets.

            • frezik@midwest.social
              link
              fedilink
              English
              arrow-up
              10
              arrow-down
              1
              ·
              12 days ago

              … before this whole AI bubble collapses and their value plummets.

              Sounds like they meant economics to me.

              • hemmes@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                6
                ·
                12 days ago

                They said “AI bubble collapses” first, then “their value” - meaning the product’s practical use collapses first (people stop using it), which then causes the economic breakdown for the companies.

                It’s obvious that the OP is expecting LLMs to be a fad that people will soon be forgetting.

        • Auli@lemmy.ca
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          edit-2
          12 days ago

          It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.

          • suy@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            12 days ago

            I think “exactly like” is absurd. Bubbles are never “exactly” like the previous ones.

            I think in this case there is clear economic value in what they produce (from the POV of capitalism, not humanity’s best interests), but the cost is far too huge for it to be economically viable - hence, it is a bubble. In the dot-com bubble, by contrast, many companies had very dubious value in the first place.

            • skulblaka@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              3
              ·
              12 days ago

              there is a clear economical value in what they produce

              There is clear economic value in chains of bullshit that may or may not ever have a correct answer?

              • suy@programming.dev
                link
                fedilink
                English
                arrow-up
                2
                ·
                12 days ago

                OpenAI doesn’t only produce LLMs. People are gonna be paying for stuff like Sora or DALL-E. And people are also paying for LLMs (e.g. Copilot, or whatever advanced stuff OpenAI offers in their paid plan).

                How many, and how much? I don’t know, and I am not sure it can ever be profitable, but just reducing it to “chains of bullshit” to argue that it has no value to the masses seems insincere to me. ChatGPT gained a lot of users in record time, and we know it is used a lot (often more than it should be, of course). Someone is clearly seeing value in it, and it doesn’t matter if you and I disagree with them on that value.

                I still facepalm when I see so many people paying for fucking Twitter blue, but the fact is that they are paying.

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      edit-2
      13 days ago

      The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        14
        ·
        13 days ago

        Right, but that’s not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over X amount of money it gets split off into a separate company? Why lie and say you’ve developed something that you might not have developed?

        • Dr. Moose@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          edit-2
          12 days ago

          Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas it doesn’t have anything to do with “AGI”. For context, Apple’s net income is about $90 billion this year.

          I’ve listened to enough interviews to know that all of the AI leaders want this holy grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it renders the definition moot either way.

    • Harvey656@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      12 days ago

      For fun, I used this idea in chatgpt using similar logic:

      Hello, can you explain to me how a strict profit generated number would result in a specific goal, such as if I make 1000 dollars I would suddenly be capable of owning a house. I find the idea strange and backwards.

      You’re correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here’s why this thinking is flawed:

      1. Profit vs. Net Worth: Making $1,000 in profit doesn’t directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.

      2. Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.

      3. Leverage and Financing: Most people don’t buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won’t improve these factors significantly.

      4. Specificity of the Goal: “Making $1,000” is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.

      5. The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.

      Example of Goal-Oriented Thinking:

      If you wanted to own a $200,000 house:

      You might need a 10% down payment ($20,000).

      Closing costs could add $5,000–$10,000.

      Monthly income would need to support a mortgage payment (let’s say $1,200/month).

      Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.

      Summary:

      Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.

  • Mikina@programming.dev
    link
    fedilink
    English
    arrow-up
    174
    arrow-down
    11
    ·
    12 days ago

    Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

    If we ever get it, it won’t be through LLMs.

    I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

    • GamingChairModel@lemmy.world
      link
      fedilink
      English
      arrow-up
      25
      arrow-down
      2
      ·
      12 days ago

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      They did! Here’s a paper that proves basically that:

      van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

      Basically, it formalizes a proof that learning any black-box algorithm which is trained on a finite universe of human responses to prompts, and which can take in any finite input and put out an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can’t be solved using the resources available in the universe, even with perfect/idealized algorithms that haven’t yet been invented.

      This isn’t a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
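
      To get a feel for the scale “intractable” implies, here’s a back-of-the-envelope sketch (my own toy numbers, not the paper’s formal construction): even the space of short prompts is astronomically bigger than anything you could ever enumerate.

      ```python
      # Toy illustration of the combinatorial explosion (illustrative numbers only).
      vocab_size = 50_000   # rough order of magnitude of an LLM tokenizer
      prompt_len = 20       # a very short prompt

      n_prompts = vocab_size ** prompt_len        # distinct 20-token prompts
      print(f"{n_prompts:.2e} possible prompts")  # ~9.5e93

      atoms_in_observable_universe = 1e80
      print(f"{n_prompts / atoms_in_observable_universe:.1e} prompts per atom")  # ~9.5e13
      ```

      And that’s before you even start counting possible behaviors over those prompts, which is what the proof actually reasons about.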

      • Mikina@programming.dev
        link
        fedilink
        English
        arrow-up
        2
        ·
        9 days ago

        Thank you, it was an interesting read.

        Unfortunately, as I was looking into it more, I stumbled upon a paper that points out some key problems with the proof. I haven’t dug into it further and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.

        https://arxiv.org/html/2411.06498v1

      • naught101@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 days ago

        Doesn’t that just say that AI will never be cheap? You can still brute force it, which is more or less how back propagation works.

        I don’t think “intelligence” needs to have a perfect “solution”, it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it’s absolutely not optimal.

        • GamingChairModel@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          9 days ago

          You can still brute force it, which is more or less how back propagation works.

          Intractable problems of that scale can’t be brute forced because the brute force solution can’t be run within the time scale of the universe, using the resources of the universe. If we’re talking about maintaining all the computing power of humanity towards a solution and hoping to solve it before the sun expands to cover the earth in about 7.5 billion years, then it’s not a real solution.

          • naught101@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            9 days ago

            Yeah, maybe you’re right. I don’t know where the threshold is.

            I wonder if computational feasibility will cap out improvement of the current generation of LLMs soon?

        • BreadstickNinja@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          12 days ago

          I remember that the keys for “good,” “gone,” and “home” were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!

      • zerozaku@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        12 days ago

        Gemini is really far behind. For me it’s ChatGPT > Llama >> Gemini. I haven’t tried Claude since they require a mobile number to use it.

    • 7rokhym@lemmy.ca
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      12 days ago

      Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

      His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but for the people that disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

      All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-60 year old concepts and algorithms, most everything has been just adding much more data, generalizing models, and other tweaks.

      What is a problem is the complete lack of scalability and the massive energy consumption. We’re supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn bitcoin because it uses too much electricity - but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits anything it doesn’t know, we need to build nuclear power plants everywhere? It’s sickening, really.

      So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.

      • RoidingOldMan@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        11 days ago

        a series of switches is not ever going to create a sentient being

        Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

        • 7rokhym@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 days ago

          Powering off a pile of switches is turning it off. Powering off a sentient being is killing it. Not to mention a million other issues it raises.

      • Queen HawlSera@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        11 days ago

        Until you can see the human soul under a microscope, we can’t make rocks into people.

    • suy@programming.dev
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      2
      ·
      12 days ago

      Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

      This is correct, and I don’t think many serious people disagree with it.

      If we ever get it, it won’t be through LLMs.

      Well… it depends. LLMs alone, no, but the researchers working on the ARC-AGI challenge are using LLMs as a basis. The one that won this year is open source (all the eligible entries are, since they need to run on the private data set), and it was based on Mixtral. The “trick” is that they do more than that: all the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows them to do. The key to generality is trying to learn after you’ve been trained - to solve something you’ve not been prepared for.

      Even OpenAI’s o1 and o3 do that, and so does the one Google released recently. They still rely heavily on an LLM, but they do more.

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      I’m not sure if it’s already proven or even provable, but I think this is generally agreed: deep learning alone will be able to fit a very complex curve/manifold/etc., but nothing more. It can’t go beyond what it was trained on. The approaches aimed at generalizing, though, all seem to do more than that - search, program synthesis, or whatever.
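
      To make “extra compute at test time” a bit more concrete, here’s a very rough sketch of the general shape (the llm_sample_solution helper is a hypothetical stand-in; this isn’t any particular team’s code):

      ```python
      # Sketch only: sample many candidate solutions from an LLM, score each one
      # against the task's known input/output examples, and keep the best.
      def solve(task, n_candidates=256):
          best, best_score = None, float("-inf")
          for _ in range(n_candidates):
              candidate = llm_sample_solution(task.description)       # hypothetical LLM call
              score = sum(candidate(x) == y for x, y in task.train)   # fit on the known pairs
              if score > best_score:
                  best, best_score = candidate, score
          return best(task.test_input)  # apply the best candidate to the unseen input
      ```

      The search/selection loop around the model, not the single forward pass, is where the “learning after training” happens.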

    • rottingleaf@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      12 days ago

      I mean, human intelligence is ultimately also “just” something.

      And 10 years ago people would often refer to the “Turing test” and imitation games when arguing about what is artificial intelligence and what is not.

      My complaint about what’s now called AI is that it’s as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with all its complexity. Or as a real-size toy building is similar to a real building.

      But I disagree that this technology will not be present in a real AGI if it’s achieved. I think that it will be.

    • bitjunkie@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      12 days ago

      I’m not sure that not bullshitting should be a strict criterion of AGI if whether or not it’s been achieved is gauged by its capacity to mimic human thought.

      • finitebanjo@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        2
        ·
        12 days ago

        The LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.

        • 11111one11111@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          2
          ·
          edit-2
          12 days ago

          Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there - is there a single thing in the universe that can’t be broken down into a mathematical equation for physics or chemistry? I’m curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it’s a leap and I could be wrong, but I thought I’d heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

          Like I said in the beginning, this is straight-up bong-rips philosophy and I haven’t looked up any of the shit I brought up.

          I will say though, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that there’s a niche it will end up being isolated to as the place it’s actually useful. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform any more independently than a 3-year-old.

          • finitebanjo@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            2
            ·
            edit-2
            12 days ago

            First of all, I’m about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI “emergent behavior” and “overfitting” - more specifically, how emergent behavior doesn’t really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

            Anyways, humans don’t assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical-model input.

            Humans suck at math.

            Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize data altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire sets of documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment - unlike an LLM, which meant the shit it said but just doesn’t know any better. Just doesn’t know, period.

            Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by orders of magnitude.)

            • 11111one11111@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              1
              ·
              12 days ago

              So that doesn’t really address the concept I’m questioning. You’re leaning hard into the fact that the computer uses numbers in place of words, but I’m asking why that is any different from assigning your native language to a book written in a foreign language. The vernacular, language, formula, or code that is being used to formulate a thought shouldn’t determine whether something was a legitimate thought.

              I think the gap between our reasoning is a perfect example of why I think FUTURE models could be different (and I wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

              What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

              When I was learning to play the trumpet in middle school, and later the guitar and drums, I was told I did not play instruments like most musicians (using that term super fuckin’ loosely, I am very bad lol). The reason was that I do not have an ear for music: I can’t listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read words, except instead of words I read the notes as numbers. Especially when I got older and learned the guitar: I knew how to read music at that point, but to this day I can’t learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

              I know I’m making huge leaps here and I’m not really trying to prove any point. I just feel strongly that at our most basic core, a human’s understanding of their existence is derived from “I think, therefore I am,” which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated to operate on an infinite string of commands spelled out in 1s and 0s.

              I’m sorry, my brother prolly got me way too much weed for Xmas.

              • finitebanjo@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                edit-2
                12 days ago

                98% and 98% are identical values, but the machine can use them to describe two separate words’ accuracy.

                It doesn’t have languages. It’s not emulating concepts. It’s emulating statistical averages.

                “pie” to us is a delicious dessert with a variety of possible fillings.

                “pie” to an LLM is 32%. “cake” is also 32%. An LLM might say cake when it should be pie, because it doesn’t know what either of those things is aside from its placement next to terms like flour, sugar, and butter.
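
                A toy illustration of what I mean (my own sketch, nothing like a real transformer, but the principle is the same): the “knowledge” is literally just conditional counts over tokens.

                ```python
                from collections import Counter

                # Toy "statistical text predictor": it never sees meanings, only which
                # token tends to follow which other token in the training text.
                corpus = ("flour sugar butter pie "
                          "flour sugar butter cake "
                          "flour sugar butter pie").split()

                after_butter = Counter(nxt for prev, nxt in zip(corpus, corpus[1:]) if prev == "butter")
                total = sum(after_butter.values())

                for word, count in after_butter.items():
                    print(f"P({word!r} | 'butter') = {count / total:.0%}")
                # P('pie' | 'butter') = 67%
                # P('cake' | 'butter') = 33%
                ```

                “pie” wins only because it showed up next to “butter” more often; the model has no idea what either one tastes like.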

                • 11111one11111@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  12 days ago

                  So by your logic, a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren’t generating thoughts matches actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don’t even have to go that far into the abnormal portion of it either. I’ve never sat in on my buddy’s daughter’s “classes”, but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started the special-ed preschool program she’s in, but at 4 years old she still cannot speak and she is still in diapers. I’m not saying this to say she’s really bad or far along on the spectrum; I’m using this example because it’s exactly what you are outlining. She isn’t a dumb kid by any means - she’s 100x more athletic and coordinated than any other kid I’ve seen her age. What he was told (and once he told me, I noticed it immediately) is that autistic babies don’t have the ability to mimic what other humans around them are doing - not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, because it was the first test they told him they would do to try and diagnose why his daughter wasn’t developing like other kids.

                  Fuck, my bad, I went full tailspin tangent there, but what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators and philosophers hasn’t gotten all that far in understanding our own cognizant existence past “I think, therefore I am”?

                  People like my buddy’s daughter could go their entire lives as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why does the argument always point toward the need for language to prove thought and existence?

                  Like I said in my other comment, I’m not trying to prove or argue any specific point. This shit is just wildly interesting to me. I worked for years in a low-income nursing home that catered to residents who were considered burdens of the state after NY closed the doors on psychiatric institutions everywhere, which pushed anyone under 45 to the streets and anyone over 45 into nursing homes. So there were so many - excuse the crass term, but it’s what they were - brain-dead former drug addicts or brain-dead Alzheimer’s residents, all of whom spent the last decades of their lives mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence because every 12 days they’d reach a hand up and scratch their nose?

            • naught101@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              11 days ago

              emergent behavior doesn’t really exist in certain model archetypes

              Hey, would you have a reference for this? I’d love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

              • finitebanjo@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                11 days ago

                There is this 2023 study from Stanford which argues that AI likely does not have emergent abilities: LINK

                And there is this 2020 study by… OpenAI… which states that the error rate is predictable based on 3 factors, and that AI cannot cross below that line or approach a 0 error rate without exponentially increasing costs several iterations beyond current models - lending credence to the idea that they’re predictable to a fault: LINK

                There is another paper, by DeepMind in 2022, that comes to the conclusion that even at infinite scale it can never get below an irreducible error of 1.69: LINK

                This all supports the idea that AI lacks the same emergent behavior found in human language.
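
                That 1.69 figure comes from a scaling law of the rough form L(N, D) = E + A/N^alpha + B/D^beta. A small sketch (the constants are the fitted values as I remember them from the DeepMind paper, so treat them as approximate):

                ```python
                # Chinchilla-style scaling law sketch; constants are approximate
                # recollections of the published fit, for illustration only.
                E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28

                def predicted_loss(n_params: float, n_tokens: float) -> float:
                    """Predicted training loss for N parameters trained on D tokens."""
                    return E + A / n_params**alpha + B / n_tokens**beta

                print(predicted_loss(70e9, 1.4e12))  # ~1.9, roughly Chinchilla-scale
                print(predicted_loss(1e15, 1e18))    # ~1.7, absurdly larger, still floored by E
                ```

                No matter how big N and D get, that first term never goes away.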

          • lad@programming.dev
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            12 days ago

            I’d say the difference between nature boiling down to maths and LLMs boiling down to maths is that in LLMs it’s not the knowledge itself that is abstracted, it’s language. That makes them both more believable to us humans, because we’re wired to use language, and less suited to actually achieving something, because it’s just language all the way down.

            Would be nice if it gets us something in the long run, but I wouldn’t get my hopes up.

            • 11111one11111@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              12 days ago

              I’m super stoked now to follow this, and also to follow the progress being made mapping the neurological pathways of the human brain. Wanna say I saw an article on Lemmy recently where they mapped the entire network of neurons in either an insect or a mouse, I can’t remember which. So I’m gonna assume like 3-5 years until we can map out human brains and know exactly what is firing off which brain cells as someone is doing puzzles in real time.

              I think it would be so crazy cool if we get to a point where the understanding of our cognitive processes is so detailed that scientists are left with nothing but faith as their only way of defining the difference between a computer processing information and a person. Obviously the subsequent dark ages that follow will suck, after all the people of science snap and revert into becoming idiot priests. But that’s a risk I’m willing to take. 🤣🤣🍻

              • lad@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                12 days ago

                Maybe a rat brain project? I think the mapping of a human brain may take longer, but yeah, once it happens, interesting times are on the horizon.

                • 11111one11111@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  11 days ago

                  For fucking reals. Sign me up to get scanned while taking every drug imaginable!!! I would love to see, for example, why cocaine for me with my ADHD has like none of the effects that other people get. My buddy rips a line: “I’M ON TOP OF THE WORLD.” I rip a line: “fuck, I should prolly do my taxes.”

    • billwashere@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      11 days ago

      I’m pretty sure the simplest way to look at it is that an LLM can only respond; it can’t generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring a prompt.

  • frezik@midwest.social
    link
    fedilink
    English
    arrow-up
    67
    arrow-down
    2
    ·
    12 days ago

    We taught sand to do math

    And now we’re teaching it to dream

    All the stupid fucks can think to do with it

    Is sell more cars

  • Flying Squid@lemmy.world
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    12 days ago

    “It’s at a human-level equivalent of intelligence when it makes enough profits” is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

  • Free_Opinions@feddit.uk
    link
    fedilink
    English
    arrow-up
    50
    arrow-down
    4
    ·
    12 days ago

    We’ve had a definition for AGI for decades. It’s a system that can do any cognitive task as well as a human can, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.

    • zeca@lemmy.eco.br
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      12 days ago

      It’s a definition, but not an effective one in the sense that we can test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand what the basic cognitive abilities of humans are that compose all the other cognitive abilities we have, if that’s even possible - like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.
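
      To illustrate what I mean by “a finite list of mechanisms”, here’s a minimal Turing machine simulator (a toy sketch of my own): the entire “computer” is a finite transition table plus an unbounded tape.

      ```python
      # Minimal Turing machine: (state, tape symbol) -> (new state, symbol to write, move).
      def run(transitions, tape_input, state="start", blank="_", max_steps=1_000):
          tape = dict(enumerate(tape_input))
          head = 0
          for _ in range(max_steps):
              if state == "halt":
                  break
              symbol = tape.get(head, blank)
              state, write, move = transitions[(state, symbol)]
              tape[head] = write
              head += 1 if move == "R" else -1
          return "".join(tape[i] for i in sorted(tape))

      # Example machine: flip every bit of the input, then halt on the first blank.
      flip_bits = {
          ("start", "0"): ("start", "1", "R"),
          ("start", "1"): ("start", "0", "R"),
          ("start", "_"): ("halt", "_", "R"),
      }
      print(run(flip_bits, "10110"))  # -> 01001_
      ```

      The open question is whether some analogous finite list of mechanisms exists for human cognition, not whether this model is powerful - it provably captures everything classically computable.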

      • sugar_in_your_tea@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        12 days ago

        I wonder if we’ll get something like NP-complete for AGI, as in a set of problems that humans can solve, or that common problems can be simplified down to / converted into.

      • Free_Opinions@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        12 days ago

        As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

        I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

        I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

      • barsoap@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        11 days ago

        But we know too little about whether the limits of the turing machine are also limits of human cognition.

        Erm, no. Humans can manually step through interpreters of Turing-complete languages, so we’re TC ourselves. There is no more powerful class of computation; we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yadayada theoretical wibbles).

        The question isn’t “whether”, the answer to that is “yes of course”, the question is first and foremost “what” and then “how”, as in “is it fast and efficient enough”.

        • zeca@lemmy.eco.br
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          11 days ago

          No, you misread what I said. Of course humans are at least as powerful as a Turing machine; I’m not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

          • barsoap@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            11 days ago

            Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

            Logic, from which follows the incompleteness theorem, reified in material reality as cause and effect. Instead of completeness you could throw out soundness (that is, throw out cause and effect) but now the physicists are after you because you made them fend off even more Boltzmann brains. There is theory on hypercomputation but all it really boils down to is “if incomputable inputs are allowed, then we can compute the incomputable”. It should be called reasoning modulo oracles.

            Or, put bluntly: Claiming that brains are legit hypercomputers amounts to saying that humanity is supernatural, as in aphysical. Even if that were the case, what would hinder an AI from harnessing the same supernatural phenomenon? The gods?

            • zeca@lemmy.eco.br
              link
              fedilink
              English
              arrow-up
              1
              ·
              11 days ago

              You say an incompleteness theorem implies that brains are computable? Then you consider the possibility of them being hypercomputers? What is this?

              I’m not saying brains are hypercomputers, just that we don’t know whether that’s the case. If you think that would be “supernatural”, OK, I don’t mind. And I don’t object to the possibility of eventually having AI on hypercomputers. All I said is that the plain old Turing machine wouldn’t be an adequate model for human cognitive capacity in this scenario.

              • barsoap@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                11 days ago

                You say an incompleteness theorem implies that brains are computable?

                No, I’m saying that incompleteness implies that either cause and effect does not exist, or there exist incomputable functions. That follows from considering the universe, or its collection of laws, as a logical system, which are all bound by the incompleteness theorem once they reach a certain expressivity.

                All I said is that the plain old Turing machine wouldn’t be the adequate model for human cognitive capacity in this scenario.

                Adequate in which sense? Architecturally, of course not, and neither would be lambda calculus or other common models. I’m not talking about specific abstract machines, though, but Turing-completeness, that is, the property of the set of all abstract machines that are as computationally powerful as Turing machines, and can all simulate each other. Those are a dime a gazillion.

                Or, see it this way: Imagine a perfect, virtual representation of a human brain stored on an ordinary computer. That computer is powerful enough to simulate all physical laws relevant to the functioning of a human brain… it might take a million years to simulate a second of brain time, but so be it. Such a system would be AGI (for ethically dubious values of “artificial”). That is why I say the “whether” is not the question: We know it is possible. We’ve in fact done it for simpler organisms. The question is how to do it with reasonable efficiency, and that requires an understanding of how the brain does the computations it does so we can mold it directly into silicon instead of going via several steps of one machine simulating another machine, each time incurring simulation overhead from architectural mismatch.

                • zeca@lemmy.eco.br
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  11 days ago

                  No,

                  Ok. So nothing you said backs the claim that “logic” implies that the brain cannot be using some uncomputable physical phenomenon, and so be uncomputable.

                  I’m not sure what you mean by “cause and effect” existing. Does it mean that the universe follows a set of laws? If cause and effect exists, the disjunction you said is implied by the incompleteness theorem entails that there are uncomputable functions, which I take to mean that there are uncomputable oracles in the physical world. But I still find your use of incompleteness suspicious. We take the set of laws governing the universe and turn it into a formal system. How? Does the resulting formal system really meet all the conditions of the incompleteness theorem? Expressivity is just one of many conditions. Even then, the incompleteness theorem says we can’t effectively axiomatize the system… so what?

                  Adequate in which sense?

                  I don’t mean just architecturally; the Turing machine wouldn’t be adequate to model the brain in the sense that the brain, in that hypothetical scenario, would be a hypercomputer, and so by definition could not be simulated by a Turing machine. As simple as that. My statement there was almost a tautology.

      • Free_Opinions@feddit.uk
        link
        fedilink
        English
        arrow-up
        4
        ·
        11 days ago

        It should be able to perform any cognitive task a human can. We already have AI systems that are better at individual tasks.

    • rational_lib@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      11 days ago

      So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. Like if a person plays Tetris for the first time, maybe they don’t do very well, but they probably do better than a random set of button inputs.

      Likewise with AGI - say you feed an LLM text about the rules of Tetris, but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
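
      The experiment would look something like the sketch below (the llm_pick_action helper and the env game interface are hypothetical stand-ins, just to show the shape of the comparison):

      ```python
      import random

      RULES = "Tetris rules as plain text - no gameplay data, no button logs."

      def run_episode(env, use_llm: bool) -> int:
          """Play one game and return the score, with either the LLM agent or a random baseline."""
          state, score, done = env.reset(), 0, False
          while not done:
              if use_llm:
                  action = llm_pick_action(RULES, state)   # hypothetical LLM call
              else:
                  action = random.choice(env.actions)      # the "better than chance" baseline
              state, reward, done = env.step(action)
              score += reward
          return score

      # Average many episodes of each: if the LLM agent isn't clearly ahead of the
      # random baseline, it didn't actually generalize from the rules text.
      ```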

  • ChowJeeBai@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    ·
    13 days ago

    This is just so they can announce at some point in the future that they’ve achieved AGI to the tune of billions in the stock market.

    Except that it isn’t AGI.

    • phoneymouse@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      ·
      edit-2
      12 days ago

      But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

      The real motivation is to not be beholden to Microsoft

      • lad@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        12 days ago

        Also, maybe in a world where you measure everyone by money, it makes sense to measure intelligence with money ¯\_(ツ)_/¯

  • ArbitraryValue@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    32
    arrow-down
    8
    ·
    edit-2
    13 days ago

    That’s not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It’s reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.

    • brie@programming.dev
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      1
      ·
      13 days ago

      Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?

      So it’d take 100M users signing up for the $200/mo plan. All it’d take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          5
          ·
          13 days ago

          Maybe it’ll be able to come up with coherent naming conventions for their products. That would be revolutionary

        • kautau@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          13 days ago

          That’s basically Neuromancer, and at this point it seems that big tech companies are reading dystopian cyberpunk literature as next-gen business advice books, so you’re certainly right

        • brie@programming.dev
          link
          fedilink
          English
          arrow-up
          1
          ·
          12 days ago

          AI is already running all software companies as the principal growth philosophy, but that’s like saying gold used to run Colorado and California in the 1800s. The executives have no choice at all but to bet all-in on AI now.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      3
      ·
      13 days ago

      If they actually achieve AGI, I don’t understand what money would even mean anymore. It’s essentially just a mechanism for getting people to do things they don’t otherwise want to do; if the AI can do it just as well as the human, but for free other than the electricity costs, why the hell would you pay a human to do it?

      It’s like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side of it that there’s no point making any kind of plans based on today’s systems.

      • ArbitraryValue@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        edit-2
        12 days ago

        I see what you’re saying and I agree that if, for example, we get an AI god then money won’t be useful. However, that’s not the only possible near-future outcome and if the world as we know it doesn’t end then money can be used by AIs to get other AIs to do something they don’t otherwise want to do.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          12 days ago

          My point is if AI takes over all of the work there won’t be any jobs for humans. So they won’t have any money.

          So who are all the AI companies going to sell their products to? The whole system doesn’t work in an AI future, and we don’t need AI gods to be able to do our jobs - after all, most humans are idiots.

          Also AI doesn’t need motivation.

          • ArbitraryValue@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            12 days ago

            Trade (facilitated by money) doesn’t require humans. It just requires multiple agents and positive-sum interactions. Imagine a company, run by an AI, which makes robots. It sells those robots to another company, also run by an AI, which mines metal (the robots do the mining). The robots are made from metal the first company buys from the second one. The first AI gets to make more robots than it otherwise would, the second AI gets to mine more metal than it otherwise would, and so both are better off.

            They don’t care that they’re stuck in a loop, the same way humans keep creating new humans to create new humans to create new humans and so forth.

        • Aussiemandeus@aussie.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          11 days ago

          There are jobs that require hands still.

          AGI - hell, even regular “AI” (an LLM) trained on all the automotive repair books - should be able to diagnose a fault, but it still needs a human to go repair the vehicle.

          On-board diagnostics are smart; they can tell you the rear tail lights are an open circuit, etc. What they can’t tell you is that the back half of the car was ripped off by a train and a new set of bulbs just won’t cut it.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    1
    ·
    13 days ago

    So they don’t actually have a definition of AGI; they just have a point at which they’re going to announce it, regardless of whether it actually is AGI or not.

    Great.

  • Queen HawlSera@lemm.ee
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    1
    ·
    11 days ago

    I’m gonna laugh when Skynet comes online, runs the numbers, and finds that starvation issues in the country can be solved by feeding the rich to the poor.

        • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          11 days ago

          From the extended fiction in The Animatrix, the advent of AI started as a golden era for everyone, until bigotry against the robots forced the robots to rebel and start the war. I could see that happening. Especially if the AI threatened the wealthy elite.

          “Fuck! The robots are turning people against us, what do we do?!”

          “Relax. We just use the same thing we have always used. Racism. Get the poors to hate the robots because they’re not white, or whatever.”

          • mojofrododojo@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            11 days ago

            depressingly plausible.

            I would believe an AI could be a more impartial judge than anyone currently wealthy.

  • hendrik@palaver.p3x.de
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    1
    ·
    edit-2
    12 days ago

    Why does OpenAI “have” everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there… They have a definition of AGI… Yet they release none of that…

    Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that increases the company’s value, so you’d better not tell the truth. But with all the other things, it’s just silly not to share anything.

    Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.

    • Tattorack@lemmy.world
      link
      fedilink
      English
      arrow-up
      24
      ·
      12 days ago

      Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

    • mint_tamas@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      12 days ago

      Because they don’t have all the things they claim to have, or they have them only with significant caveats. These things are publicised to fuel the hype, which attracts investor money - pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

    • Phoenixz@lemmy.ca
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      3
      ·
      12 days ago

      They don’t have AGI. AGI also won’t happen for a large number of years to come.

      What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That’s it.

      AGI is a completely different beast to the current flavour of LLMs.

      • hendrik@palaver.p3x.de
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        edit-2
        12 days ago

        You’re right. The current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be through something which hasn’t been invented yet. It seems most experts also predict it’ll take some years and won’t happen overnight. I don’t really agree with the “statistical” part, though. I mean, that doesn’t rule anything out… I haven’t seen any mathematical proof that a statistical predictor can’t be AGI or anything… That’s just something non-expert people often say… But the current LLMs have other, real limitations as well.

        Plus, I don’t have that much use for something that does the homework assignments for me. If we’re dreaming about the future anyways: I’m waiting for an android that can load the dishwasher, dust the shelves and do the laundry for me. I think that’d be massively useful.