I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”
This article, in contrast, quotes folks making the next AI generation saying the same thing.
I understand folks don’t like AI, but this “article” reads like a Reddit post: lots of links to vague subjects, where the link text has to tell us what’s important instead of the actual article doing it.
What the fuck, you aren’t kidding. I have comment replies to trolls that are longer than that article. The over-the-top citations also make me think this was entirely written by an actual AI bot that was prompted to supply x amount of sources in its article. Lol
repeat after me: LLMs are not AI.
LLMs are one version of AI. It’s just one tiny part of AIs that are used every day, from chess bots to voice transcription, but they also are AI.
I would replace the word version with aspect. LLMs are merely one part of the puzzle that would be AI. Essentially what’s been constructed is the mouth and the part of the brain that can form words but without any of the reasoning or intelligence behind what the mouth says.
The same goes for the art AIs. They can paint pictures based on input but they can’t reason how those pictures should look. Which is why it requires so much tweaking to get them to output something that doesn’t look like it came out of a Lovecraft novel.
I don’t believe the “I” is an accurate term.
More like “Smart” Word generators.
Of course it changes meaning if you remove the qualifier.
Artificial
Adjective
-
artificial (comparative more artificial, superlative most artificial)
Man-made; made by humans; of artifice.
The flowers were artificial, and he thought them rather tacky. -
Insincere; fake, forced or feigned.
Her manner was somewhat artificial.
In effect, man-made/fake intelligence.
-
I think you are confusing AI with AGI.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
Not at all. AI is something that uses rules, not statistical guesswork. A simple control loop is already basic AI, but the core mechanism of LLMs is not (the parts before and after token association/prediction are). Don’t fall for the marketing bullshit of some dumbass Silicon Valley snake oil vendors.
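To make the “simple control loop is already basic AI” point concrete, here’s a minimal sketch: a thermostat that picks an action from explicit hard-coded rules rather than statistics. All names here are illustrative, not from any real library.

```python
def thermostat_step(current_temp: float, target: float, tolerance: float = 0.5) -> str:
    """Decide an action from explicit rules — no statistics, no training data."""
    if current_temp < target - tolerance:
        return "heat"
    if current_temp > target + tolerance:
        return "cool"
    return "idle"

print(thermostat_step(18.0, 21.0))  # heat
print(thermostat_step(23.0, 21.0))  # cool
print(thermostat_step(21.2, 21.0))  # idle
```

Whether you call this “AI” is exactly the definitional argument happening in this thread, but it is rule-following behavior of the kind the comment above describes.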
OpenAI, Google, Anthropic admit they can’t scale up their chatbots any further
Lol, no they didn’t. The quotes this article uses are talking about LLMs, not chatbots. This is yet another stupid article from someone who doesn’t understand the technology. There is a lot of legitimate criticism of the way this technology is being implemented, but FFS get the basics right at least.
Are you asserting that chatbots are so fundamentally different from LLMs that “oh shit we can’t just throw more CPU and data at this anymore” doesn’t apply to roughly the same degree?
I feel like people are using those terms pretty well interchangeably lately anyway
People that don’t understand those terms are using them interchangeably
LLM is the technology, a chatbot is an implementation of it. So yes, a chatbot as it’s talked about here is an LLM. Chatbots obviously don’t have to be LLM-based, but those that aren’t are irrelevant here.
No, a chatbot as it’s talked about here is not an LLM. This article is discussing limitations of LLM training data and implying that chatbots cannot scale as a result. There are many techniques that can be used to continue to improve chatbots.
The chatbot is a front end to an LLM, you are being needlessly pedantic. What the chatbot serves you, is the result of LLM queries.
That may have been true for the early LLM chatbots but not anymore. ChatGPT for instance, now writes code to answer logical questions. The o1 models have background token usage because each response is actually the result of multiple background LLM responses.
Yes, of course I’m asserting that. While the performance of LLMs may be plateauing, the cost, context window, and efficiency are still getting much better. When you chat with a modern chatbot, it’s not just sending your input to an LLM like the first public version of ChatGPT. Nowadays a single chatbot response may require many LLM requests along with other techniques to mitigate the deficiencies of LLMs. Just ask the free version of ChatGPT a question that requires some calculation and you’ll have a better understanding of what’s going on and the direction of the industry.
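The “one reply = several model/tool round trips” idea can be sketched like this. To be clear, this is a toy illustration, not any vendor’s actual pipeline: `fake_llm`, the `TOOL:`/`FINAL:` protocol, and the calculator are all made up for the example.

```python
def calculator_tool(expression: str) -> str:
    # In a real system the model emits code to run; here we evaluate a
    # vetted arithmetic expression with builtins stripped.
    return str(eval(expression, {"__builtins__": {}}))

def fake_llm(prompt: str) -> str:
    # Stand-in for a model call. It "decides" to use a tool for arithmetic
    # instead of guessing the answer token-by-token.
    if prompt.startswith("Tool result:"):
        return "FINAL:The answer is " + prompt.split(":", 1)[1].strip()
    if any(ch.isdigit() for ch in prompt):
        return "TOOL:calculator:127 * 49"
    return "FINAL:Hello!"

def chatbot_reply(user_input: str) -> str:
    """One user-visible reply may involve multiple model/tool round trips."""
    response = fake_llm(user_input)
    while response.startswith("TOOL:"):
        _, tool_name, arg = response.split(":", 2)
        result = calculator_tool(arg)
        # Feed the tool result back to the model for another pass.
        response = fake_llm(f"Tool result: {result}")
    return response.removeprefix("FINAL:")

print(chatbot_reply("What is 127 * 49?"))  # The answer is 6223
```

The point of the sketch is the loop: the chatbot front end can orchestrate as many model calls and tool invocations as it likes per reply, which is one reason chatbot quality can keep improving even if the underlying LLM plateaus.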
I think you’re agreeing, just in a rude and condescending way.
There’s a lot of ways left to improve, but they’re not as simple as just throwing more data and CPU at the problem, anymore.
I’m sorry if I’m coming across as condescending, that’s not my intent. It’s never been “as simple as just throwing more data and CPU at the problem”. There were algorithmic challenges for every LLM evolution. There are still lots of potential improvements using the existing training data. But even if there wasn’t, we’ll still see loads of improvements in chat bots because of other techniques.
Edit: typo
Claiming that David Gerard and Amy Castor “don’t understand the technology” is uh… hoo boy… well, it sure is a take.
The title of the article is literally a lie which is easily fact checked. Follow the links to quotes in the article to see what the quoted individuals actually said about the topic.
Please learn the difference between “lying” and “presenting a conclusion.”
I know the difference. Neither OpenAI, Google, nor Anthropic has admitted they can’t scale up their chatbots. That statement is not true.
So is your autism diagnosed or undiagnosed?
I ask this as an autistic person, because the only charitable way to read what’s happening here is that you’re clearly struggling with statements that aren’t intended to be read completely literally.
The only other way to read it is that you’re arguing in bad faith, but I’ll assume that’s not the case.
Also an autistic person here.
How are people supposed to tell this is an opinion?
And please don’t say “by reading the article”; maybe some (like me) do so, but it’s well known that most people stop at the title.
Grammatically speaking it remains a direct statement. “They admit” == “appear to hint” == pure opinion (title: “AI can’t be scaled further”).
While I am not disagreeing with the premise per se, I have to perceive this as anti-AI propaganda at best, an attempt at misinformation at worst.
On a different note, do you believe things can only be an issue if neurotypical people struggle with them? There is no good argument for not communicating more clearly when sharing opinions with the world.
David and Amy are - openly - skeptics in the subject matters they write about. But it’s important to understand that being a skeptic is not inherently the same thing as being unfairly biased against something.
They cite their sources. They back up what they have to say. But they refuse to be charitable about how they approach their subjects, because it is their position that those subjects have not acted in a way that deserves charity.
This is a problem with a lot of mainstream journalism. A grocery store CEO will say “It’s not our fault, we have to raise prices,” and mainstream news outlets will repeat this statement uncritically, with no interrogation, because they are so desperate to avoid any appearance of bias. Donald Trump will say “Immigrants are eating dogs” and news outlets will simply repeat this claim as something he said, without adding “This claim is obviously insane and only an idiot would have made it.” Sometimes being overly fair to your subject is being unfair to objective truth.
Of course OpenAI et al. are never going to openly admit that they can’t substantially improve their models any further. They are professional bullshitters; they didn’t suddenly come down with a case of honesty now. But their recent statements, when read with a critical eye and an understanding of the limitations of the technology, amount to a tacit admission that all the significant gains have already been made with this particular approach. That’s the claim being made in this headline.
A 4 paragraph “article” lol
Are you suggesting “pivot-to-ai.com” isn’t the pinnacle of journalism?
Lol, I didn’t even notice the name
Though, I don’t think that means they won’t get any better. It just means they don’t scale by feeding in more training data. But that’s why OpenAI changed their approach and added some reasoning abilities. And we’re developing/researching things like multimodality etc… There’s still quite some room for improvements.
Though, I don’t think that means they won’t get any better. It just means they don’t scale by feeding in more training data.
Agreed. There’s plenty of improvement to be had, but the gravy train of “more CPU or more data == better results” sounds like it’s ending.
So long and thanks for all the fish habitat?
I smell a sentient AI trying to throw us off its plans for world domination…
Everyone ignore this comment please. I’m quite human. I have the normal 7 fingers (edit: on each of my three hands!) and everything.
Cylons. I knew it.
Can’t be, I haven’t fucked one yet, and everyone knows Cylonism is an STD.
Unless I’m an Eskimo brother and don’t know it…
It’s a known problem - though of course, because these companies are trying to push AI into everything and oversell it to build hype and please investors, they usually try to avoid recognizing its limitations.
Frankly I think that now they should focus on making these models smaller and more efficient instead of just throwing more compute at the wall, and actually train them to completion so they’ll generalize properly and be more useful.
Looks like the AI bubble is slowly coming to an end, just like what happened to the crypto and NFT bubbles.
Sure, except for the thousands of products working pretty well with the current generation. And it’s not like it’s over now that we’ve hit the limit of “just throw more data at the thing”.
Now there aren’t gonna be as many breakthroughs that make it better every few months, instead there’s gonna be thousand small improvements that make it more capable slowly and steadily. AI is here to stay.
The bubble popping doesn’t have to do with its staying power, just that the days of “Hey, I invented this brand new AI that’s totally not just a wrapper for ChatGPT. Want to invest a billion dollars‽” are over. AGI is not “just out of reach.” Getting the GPU memory requirements down would be huge as well.
When did the crypto bubble end? Bitcoin is at an all time high…
The bubble was when we were being sold block chain as the solution to every problem. I feel like that bubble ended in 2019 or 2020.
Things that actually benefitted from block chain are still around, of course.
Unrelated side rant: I’m pissed about pogs going away, though. Pogs were fun. I should still be able to buy pogs.
They might be right, but I read some of the linked articles on this blog (?), and the authors just come off as not really knowing much about current AI technologies, and at the same time very, very arrogant.
The article talks about LLM developers / operators. Not sure how you got from that to “current AI technologies” - a completely unrelated topic.
I believe that the current LLM paradigm is a technological dead end. We might see a few additional applications popping up, in the near future; but they’ll be only a tiny fraction of what was promised.
My bet is that they’ll get superseded by models with hard-coded logic. Just enough to be able to correctly output “if X and Y are true/false, then Z is false”, without fine-tuning or other band-aid solutions.
Seems unlikely as that’s essentially what we had before and they were not very good at all.
If you’re referring to symbolic AI, I don’t think that the AI scene will turn 180° and ditch NN-based approaches. Instead what I predict is that we’ll see hybrids - where a symbolic model works as the “core” of the AI, handling the logic, and a neural network handles the input/output.
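The hybrid idea above can be sketched in a few lines. This is a hand-wavy illustration of the architecture being predicted, not any real system: `mock_neural_parser` stands in for a neural network that only maps text to symbols, while a hard-coded logic core draws the actual conclusion (the “if X and Y, then Z is false” example from earlier in the thread).

```python
def mock_neural_parser(text: str) -> dict:
    # Stand-in for the NN front end: its only job is extracting
    # symbolic truth values from messy natural-language input.
    return {
        "X": "X is true" in text,
        "Y": "Y is true" in text,
    }

def logic_core(facts: dict) -> bool:
    # The symbolic "core": a hard-coded inference rule.
    # If X and Y both hold, then Z is false — by rule, not by statistics.
    if facts["X"] and facts["Y"]:
        return False  # Z
    return True

facts = mock_neural_parser("We know X is true and Y is true.")
print("Z is", logic_core(facts))  # Z is False
```

The design point is the division of labor: the network handles perception and language, where statistics shine, while the logic lives in code that can’t hallucinate its way out of a rule.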
Unlikely, but there’s some precedent.
We’ve seen this pattern play out in video games a bunch of times.
Revolutionary new way to do things. It’s cool, but not… You know…fun.
So we give up on it as a dead end and go back to the old ways for a while.
Then somebody figures out how to put bumpers (usually hard-coded) on the revolutionary new way, such that it stays fun.
Now the revolutionary new way is the new gold standard and default approach.
For other industries, replace “fun” above with the correct goal for that industry. “Profitable” is one that the AI hucksters are being careful not to say… but “honest”, “correct” and “safe” also come to mind.
We are right before the bit where we all decide it was a bad idea.
Which comes before we figure out that hard-coding the bumpers can get us where we wanted to go, after a lot of work by really smart, well-paid humans.
I’ve seen industries skip the “all decide it was a bad idea” phase and go straight to the “hard work by humans to make this fulfill the available promise” phase, but we don’t actually look on track for that today.
Many current investors are convinced that their clever talking puppet is going to do the hard work of engineering the next generation of talking puppet.
I have some faith that we can reach that milestone. I’m familiar enough with the current generation of talking puppet to confidently declare that this won’t be the time it happens.
My incentive in sharing all this is that I like over half of you reading this, and so figure I can give some of you a shot at not falling for this particular “investment phase”, which is essentially, in practical terms, a con.