• silverhand@reddthat.com · 13 days ago

    Misleading title. From the article,

    Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.

    In no way does this imply that the “industry is pouring billions into a dead end”. AGI isn’t even needed for industry applications, just implementing current-level agentic systems will be more than enough to have massive industrial impact.

  • brucethemoose@lemmy.world · 15 days ago

    It’s ironic how conservative the spending actually is.

    Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

    No.

    Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

    Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
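    A minimal sketch of the bitnet idea mentioned above: weights collapsed to {-1, 0, +1} with one shared scale, loosely following the absmean recipe (a simplified illustration; the function names and constants are my own, not any library’s API).

```python
import numpy as np

def absmean_ternary_quantize(w, eps=1e-8):
    """Quantize a weight matrix to {-1, 0, +1} plus one per-tensor scale."""
    scale = np.mean(np.abs(w)) + eps           # absmean scaling factor
    q = np.clip(np.round(w / scale), -1, 1)    # ternary weights
    return q, scale

def ternary_matmul(x, q, scale):
    # Multiplies against {-1, 0, +1}: in hardware these reduce to
    # adds/subtracts, which is where the power and memory savings come from.
    return (x @ q) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(64, 64))       # a toy dense layer
q, s = absmean_ternary_quantize(w)
x = rng.normal(size=(4, 64))
# approximation error of the ternary layer vs. the full-precision one
err = np.abs(x @ w - ternary_matmul(x, q, s)).mean()
```

    The point is that ideas like this are cheap to try at small scale; whether the big labs adopt them is another matter.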

    • bearboiblake@pawb.social · 13 days ago

      wait so the people doing the work don’t get paid and the people who get paid steal from others?

      that is just so uncharacteristic of capitalism, what a surprise

      • brucethemoose@lemmy.world · 13 days ago

        It’s also cultish.

        Everyone was trying to ape ChatGPT. Now they’re rushing to ape Deepseek R1, since that’s what is trending on social media.

        It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There’s a lot of groupthink, and an urgency to “catch up and ship” and look good quickly rather than focus on experimentation, sane applications, and so on. When I think of shitty capitalism, I think of stagnant entities: shitty publishers, dysfunctional departments, consumer abuse, things like that.

        This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.

  • Nemean_lion@lemmy.ca · 13 days ago

    I went to CES this year and sat on a few AI panels. This is actually not far off. Some said, yeah, this is right, but multiple panels I went to said that this is a dead end, and that while useful, they are starting down different paths.

    It’s not bad; we’re just finding it’s not great.

  • deegeese@sopuli.xyz · 15 days ago

    Optimizing AI performance by “scaling” is lazy and wasteful.

    Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.

    • NoiseColor@lemmy.world · 15 days ago

      Thing is, same as with GHz, you have to push it as far as you can until the gains get too small, and then you move on to the next optimization. AI has done that, and is now optimizing test-time compute, token quality, and other areas.
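      The diminishing-returns dynamic can be shown with a toy power-law curve in the spirit of published scaling laws (all constants here are invented for illustration, not fitted to any real model):

```python
# Toy scaling curve: loss falls as a power law in compute, toward a floor.
A, ALPHA, FLOOR = 10.0, 0.05, 1.7   # made-up illustrative constants

def loss(compute):
    return A * compute ** -ALPHA + FLOOR

# Improvement bought by each successive doubling of compute.
gains = [loss(1e20 * 2 ** k) - loss(1e20 * 2 ** (k + 1)) for k in range(5)]
# Every doubling still helps, but each one helps less than the last,
# which is when you move on to the next optimization.
```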

  • Tony Bark@pawb.social · 15 days ago

    They’re throwing billions upon billions into a technology that has extremely limited use cases and is a novelty at best. My god, even drones fared better in the long run.

  • PeteZa@lemm.ee · 13 days ago

    I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.

    • Nemean_lion@lemmy.ca · 13 days ago

      I use ChatGPT daily in my business, but more as a guide than a real replacement.

  • TommySoda@lemmy.world · 15 days ago

    Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they claim to be. These days we’re in the “bells and whistles” phase, where companies add unnecessary bullshit to make a product seem new, like adding 5 cameras to a phone or touchscreens to cars: things that seem fancy because of buzzwords and features nobody needs, bumping up the price without actually changing anything.

    • Balder@lemmy.world · 13 days ago

      I remember listening to a podcast about scientific explanations. The guy hosting it is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t himself an expert in.

      There was an episode where he got into how technology only evolves with science (because you need to understand what you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: although the machine is new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.

      So his point in the episode is that real innovation can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand our knowledge. Sometimes those insights are completely random; often they require a whole career in the field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

      Even the current wave of LLMs is simply a product of Google’s paper showing that language models could be parallelized, which led to the creation of “large language models”. That was Google doing science. But you can’t control when a new breakthrough will be discovered, and LLMs are subject to that constraint.
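      The parallelization that paper enabled can be sketched in a few lines: self-attention mixes every token with every other token using plain matrix products, so the whole sequence is processed at once, where an RNN would have to step through it token by token (a bare single-head version with no learned projections, for illustration only):

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over a (tokens, dim) array.
    All pairwise token interactions happen in one (T, T) matmul,
    which is what makes the computation parallelizable."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # (T, T) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ x                              # each token: weighted mix of all tokens

rng = np.random.default_rng(0)
seq = rng.normal(size=(6, 8))   # 6 tokens, 8-dim embeddings
out = self_attention(seq)       # every output row computed in parallel
```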

      In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.

    • Rin@lemm.ee · 13 days ago

      Yes, and maybe finding information right in front of them, and nothing more

  • Teknikal@eviltoast.org · 13 days ago

    I think the first LLM that introduces a good personality will be the winner. I don’t care if the AI seems deranged and seems to hate all humans; to me, that’s more approachable than a boring AI that constantly insists it’s right and ends the conversation.

    I want an AI that argues with me and calls me a useless bag of meat when I disagree with it. Basically, I want a personality.