The Turing test isn't about whether an AI sounds human some of the time; it's about whether you can tell an AI is an AI just by speaking with it.
The answer is definitely yes, though not through casual conversation: ask it weird logic problems and it has tremendous trouble solving them, giving totally nonsensical, inhuman answers.
I'm not convinced. OpenAI specifically trained its models not to try to pass the Turing test. I suspect current models are more than capable of passing one. For example, I suspect most humans would also give nonsense answers to many logic problems!
It's pretty inhuman in the ways it messes up. For example, try asking GPT-4 to write you a non-rhyming poem, and it gives you a rhyming poem instead. Complain about the rhyming and ask it to try again, and it apologizes for the inadvertent rhymes, then gives you another rhyming poem. It clearly understands what rhyming is, and its apologies sound sincere, yet it's incapable of writing a poem that doesn't rhyme. That's pretty inhuman.
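If you want to try this yourself, here's a minimal sketch, assuming the official `openai` Python client (>=1.0) with an `OPENAI_API_KEY` in the environment. The rhyme check is my own crude suffix heuristic, not real phonetics, so treat it as a rough probe rather than a proper rhyme detector:

```python
# Probe: ask GPT-4 for a non-rhyming poem, then crudely check
# whether consecutive line endings share a suffix anyway.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_non_rhyming_poem() -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Write me a poem that does not rhyme."}],
    )
    return response.choices[0].message.content

def looks_rhymed(poem: str, suffix_len: int = 3) -> bool:
    # Crude heuristic: compare the trailing letters of each line's
    # last word against the next line's last word.
    last_words = [line.split()[-1].strip(".,!?;:").lower()
                  for line in poem.splitlines() if line.split()]
    pairs = zip(last_words, last_words[1:])
    return any(a[-suffix_len:] == b[-suffix_len:] for a, b in pairs)

poem = ask_for_non_rhyming_poem()
print(poem)
print("rhymes anyway?", looks_rhymed(poem))
```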
The way and the context in which it gets logic puzzles wrong are also pretty inhuman. It's capable of solving some hard puzzles that would stump most people, yet if you change a puzzle's wording a bit so that it no longer matches the training data, it's suddenly wrong. Humans are frequently wrong too, of course, but when they're wrong they give vague solutions and muddle through an answer while forgetting important pieces. GPT-4 is the opposite: it walks you through the solution step by step while confidently saying things that make no sense.
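A sketch of how one might run that rewording probe, again assuming the `openai` client. The two puzzle wordings below are my own illustrative pair (a textbook river-crossing puzzle and a structurally identical reskin), not examples from anyone's eval set:

```python
# Probe: send a well-known puzzle and a superficially reworded variant
# with the same logical structure, then eyeball whether the answers
# diverge. The "drone" version maps the goat's role onto the drone.
from openai import OpenAI

client = OpenAI()

CANONICAL = ("A farmer must cross a river with a wolf, a goat, and a "
             "cabbage. The boat holds the farmer plus one item. The wolf "
             "eats the goat if they are left alone together; the goat eats "
             "the cabbage. How does everything get across safely?")

REWORDED = ("A courier must move a jammer, a drone, and a battery through "
            "a checkpoint, one item per trip. The jammer disables the "
            "drone if they are left alone together; the drone drains the "
            "battery. How does everything get through safely?")

for label, puzzle in [("canonical", CANONICAL), ("reworded", REWORDED)]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": puzzle}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```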