That's simply not true. I've invented totally novel logical problems in the form of stories for it to solve, and it has done so successfully. It doesn't get it every time on the first phrasing I try, but it absolutely will get it with relatively modest rewording of the prompt. In one case it finally had to ask me why I was asking, and offered a couple of suggestions, including "is this a way to test my reasoning capabilities?". Once it understood the context and intent of my question, it solved it easily.
I think the people who dismiss this are simply the ones who stopped at the first "gotcha" and moved on. They forget that GPT-4 is not human, so it doesn't always understand things the same way a human would. But once it understands the underlying concept, it can indeed solve novel problems.