To paraphrase the great Malcolm Tucker, it’s like watching a clown running across a minefield.
If you click the article link and then use a process called “reading”, you'll see:
The company has already launched similar services abroad in Egypt, Nigeria, and India. Now it’s bringing the concept to the United States.
Edit: I misunderstood and assumed he hadn’t read the article, which is entirely too common these days.
Most human training is done through the guidance of another
Let’s take a step back and not talk about training at all, but about spontaneous learning. Babies learn about the world around them by experiencing things with their senses. They learn a language, for example, simply by hearing it and making connections - getting corrected when they’re wrong, yes, but they are not trained in language until they’ve already learned to speak it. And once they are taught how to read, they can then explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.
An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.
you can in fact teach it something and it will maintain it during the session
It’s still not learning anything. LLMs have what’s known as a context window that is used to augment the model for a given session. It’s still just text that is used as part of the response process.
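To make that concrete, here's a minimal toy sketch of the idea (all names here are made up for illustration; real systems are vastly more complex): the "session memory" is just the transcript text fed back in as context, and nothing is ever written into the model itself, so a fresh session starts from zero.

```python
# Toy illustration: in-session "learning" is just text in the context window.
# `fake_model_reply` stands in for any next-word generator; the key point is
# that every reply is computed from the flat transcript text alone.

class ChatSession:
    def __init__(self, context_limit=4000):
        self.transcript = []            # the "context window": plain text
        self.context_limit = context_limit

    def send(self, user_message):
        self.transcript.append(f"User: {user_message}")
        # The model sees only this concatenated text, truncated to the
        # window size -- its weights are never updated by the conversation.
        context = "\n".join(self.transcript)[-self.context_limit:]
        reply = fake_model_reply(context)
        self.transcript.append(f"Assistant: {reply}")
        return reply

def fake_model_reply(context):
    # Stand-in generator: it can only "know" a fact if that fact
    # happens to appear somewhere in the context text.
    if "my dog is named Rex" in context:
        return "Your dog is named Rex."
    return "I don't know."

session_a = ChatSession()
session_a.send("my dog is named Rex")
print(session_a.send("What is my dog's name?"))  # the fact is in the context

session_b = ChatSession()                        # new session, empty context
print(session_b.send("What is my dog's name?"))  # the "learning" is gone
```

Session A answers correctly only because the earlier message is still sitting in its transcript; session B, with an empty transcript, has no trace of it.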
They don’t think or understand in any way, full stop.
I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.
You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.” This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it’s a part of. There is no thinking or understanding whatsoever.
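The word-by-word generation loop can be sketched in a few lines. This is a deliberately tiny caricature - the vocabulary, the hard-coded scoring function, and the target sentence are all invented for illustration, where a real LLM computes its scores from billions of learned parameters - but the control flow is the same: score every candidate word given everything so far, turn scores into probabilities, sample, append, repeat.

```python
import math
import random

# Invented toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
TARGET = ["the", "cat", "sat", "on", "the", "mat", "."]

def scores(context):
    """Raw score for each vocabulary word given the full context so far.
    A real model computes these with learned weights; this toy version
    just strongly favors a fixed continuation."""
    pos = len(context)
    return [5.0 if pos < len(TARGET) and w == TARGET[pos] else 0.0
            for w in VOCAB]

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, max_tokens=10, seed=0):
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(scores(context))
        # Sample the next word from the distribution -- each new word is
        # conditioned on everything that came before it, nothing more.
        word = rng.choices(VOCAB, weights=probs)[0]
        context.append(word)
        if word == ".":
            break
    return " ".join(context)

print(generate(["the", "cat"]))
```

Note there is no "knowledge check" anywhere in that loop: the sampled word is whatever the probabilities favor, which is exactly why a fluent-sounding but false continuation (a hallucination) is generated by the same mechanism as a true one.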
This is why [email protected] said in the original post to this thread, “They hallucinate all answers. Some of those answers will happen to be right.” LLMs have no way of knowing if any of the text they generate is accurate, for the simple fact that they don’t know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us - but often, as the hallucination problem shows, in ways that are completely useless and even harmful.
the argument that they can’t learn doesn’t make sense because models have definitely become better.
They have to either be trained with new data or have their internal structure improved. It’s an offline process, meaning they don’t learn through the chat sessions we have with them (if you open a new session, it will have forgotten what you told it in a previous session), and they can’t learn through any kind of self-directed research process like a human can.
all of your shortcomings you’ve listed humans are guilty of too.
LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.
It most likely will be better initially, if for no other reason than they need to strongly differentiate themselves from Google (and Bing and DDG). I’m just not very optimistic for the long-term outlook in these times of “profit uber alles”. I’d love to be wrong.
It’s no surprise that “free” search funded through advertising led to this. The economic incentives were always going to lead us to the pay-to-win enshittification that we see today.
Paid search might look better initially, but a for-profit model will eventually lead to the same results. It might manifest differently, maybe through backroom deals they never talk about, but you’d better believe there will always be more profit to be made through such deals than through subscription fees.
Unpaywalled link: https://archive.is/6RhUG
Wait until the New York Times finds out that the New York Times is one of the biggest propagators of sinophobia.
Also this bit is interesting:
The amygdala is a pair of neural clusters near the base of the brain that assesses danger and can help prompt a fight-or-flight response. A prolonged stress response may contribute to anxiety, which can cause people to perceive danger where there is none and obsess about worst-case scenarios.
At least one study has shown that conservatives tend to have a larger right amygdala: Political Orientations Are Correlated with Brain Structure in Young Adults
We found that greater liberalism was associated with increased gray matter volume in the anterior cingulate cortex, whereas greater conservatism was associated with increased volume of the right amygdala.
I happened to see this discussion on Reddit last night: What’s something you’ve stopped eating because it’s become too expensive?
Hundreds and hundreds of people who apparently didn’t get the memo that the economy is just super-duper right now.
Why not? They can only go up in value!