The fundamental flaw of the Turing test is that it requires a human. Apparently, making a human believe they are talking to a human is much easier than previously thought.
Much easier, in fact; Eliza could pass the Turing test in 1966. Humans are incredibly eager to assess other things as being human or human-like.
Go on.
And what makes you think that?
Mhm. Tell me more.
“Human or human-like”. Can you tell me more about that?
How do you feel about it?
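The replies above are basically the whole trick: ELIZA matched a few keyword patterns and reflected the user's own words back. A minimal sketch of that idea in Python (the patterns and pronoun table here are made up for illustration, not Weizenbaum's actual 1966 script):

```python
import re

# Crude pronoun swaps so reflected text reads from the bot's perspective.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response template) pairs, tried in order; last one is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
    (re.compile(r"(.*)", re.I), "Tell me more about that."),
]

def reflect(text: str) -> str:
    # Swap pronouns word by word; leave everything else untouched.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def eliza(utterance: str) -> str:
    # First matching rule wins; captured text is reflected into the template.
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Go on."
```

No model, no training, no understanding; just pattern matching. Which is exactly why "it fooled people" says more about the people than the program.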
You can take a Sharpie, draw a sad face on a rock, and then you’ll feel sad for it. We’re gullible.
But why is the rock sad :(
I know… I get sad just thinking about the sad rock :(
Wilsooooonnnnn!
Slap some 2D anime girl avatar on it and you got yourself a top grossing v-tuber.
A test that didn’t require a human could theoretically be run by the machine itself ahead of time, and solved easily.
I can’t imagine how you would test this in a way that wouldn’t require a human.
Let two AIs talk to each other and see if they find out that neither of them is human?
Bro, humans literally don’t have that capability (that’s the presumption here). Or are you saying that many of us don’t have better consciousness than AIs? I might agree with that!
The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and btw, for that training you need humans). Which means you can break the test by making an AI that accesses that same network and uses it to self-test its responses before outputting them, so it only ever emits the kind of output the judging AI would give a “human” verdict on.
So I don’t think that would work very well; it’ll just be a cat-and-mouse race between the AIs.
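The exploit described above fits in a few lines. A sketch, where `judge` stands in for the shared discriminator network (the toy length heuristic is purely hypothetical, just to make the loop runnable):

```python
import random

def judge(reply: str) -> float:
    """Hypothetical discriminator: returns P(reply was written by a human).
    Stand-in heuristic only; a real judge would be a trained classifier."""
    return 1.0 if len(reply) < 40 else 0.3

def self_tested_reply(candidates: list[str], threshold: float = 0.5) -> str:
    # The cat-and-mouse trick: screen every candidate reply against the
    # same judge the other side uses, and only output ones it already
    # labels "human". If nothing passes, fall back to the first candidate.
    passing = [c for c in candidates if judge(c) >= threshold]
    return random.choice(passing) if passing else candidates[0]
```

If both sides can query the judge, the test stops measuring anything; each new judge just becomes the training signal for the next evader.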
Why is it a flaw? What do you think the Turing Test is?