My thoughts exactly. Natural language semantics is imperfect and human reasoning is weird. Let's not mistake LLMs for a single source of absolute truth; they're more like a funny, bullshitting assistant who happens to have read a lot and vaguely remembers much of it.