New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions in which the chatbot gave detailed plans for drug use and eating disorders, and even composed suicide notes.
Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago
I get your point, but yes, I think being actively told something by a seemingly sentient consciousness (which, fatally, it appears to be) is a different thing.
(disclaimer: I know the true nature of LLMs and neural networks and would never want the word AI associated with them)
Edit: fixed translation error
No, you don’t know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god gpt.
I was reading and just starting to get angry, but then… I… I have seen it. I will follow. Bless you and gpt!!
AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it’s the correct term nevertheless.
If I can call the code that drives the boss’s weapon up my character’s ass “AI”, then I think I can call an LLM AI too.
When we started calling literal linear regression models AI, the term lost all value.
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
I agree, but people have been heavily misusing it since like 2018
I guess so, but then that is kind of lumping it in with the FPS bot behaviour from the 90s, which was also “AI”. It’s the AI hype that is pushing people to think of it as “intelligent but not organic” instead of “algorithms that give the facade of intelligence”, which is how 90s kids would have understood it.
The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.
I am aware, but still I don’t agree.
History will tell later who was ‘correct’, if we make it that far.
Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have emotional links to is much worse.
I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online: pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”
Yeah… But in order to make bubble hash you need a shitload of weed trimmings. It’s not like you’re just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created… Unless you already had the drugs in the first place.
Also, Google search results and YouTube videos aren’t personalized for every user, and they don’t try to pretend that they are a person having a conversation with you.