The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects they're not familiar with. Especially on subjects that are at the edges of, or even outside, human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
The same used to be said of newspapers (and still ought to be). That is, it’s funny how accurate and informative they appear to be until the topic changes to something about which you have intimate knowledge.
Far too many people are incapable of making the logical leap to generalise from that, and it's an easy trap even for those who can.
This is literally the Dunning-Kruger effect in action - people can’t evaluate the quality of AI responses in domains where they lack the knowledge to spot the bs.
They don’t realize that the chatbot’s “ideas” about Hawking radiation were also just posted by a crank on Reddit.