
The commentary around every Chinese model is incredibly disappointing. Asking about Tiananmen Square isn't some clever insight.

Look at the political leanings that government-backed AIs in the United States will soon be required to reflect: those of the current administration.

I was hoping to hear from people reporting on their utility or coding capabilities instead.



It is especially stupid because there is nothing analogous to Tiananmen Square in the west.

On the other hand, have it write a dirty joke. It just wrote me a few jokes that Silicon Valley wouldn't touch with a 10-foot pole.

Not sure about the utility overall though. In my limited testing, the chain of thought seems incredibly slow on things that Sonnet would have done in a few seconds.


Actually there is, fire up OpenAI or Claude and ask it for crime statistics.

I did and it lectured me on why it was inappropriate to ask such a wrongthink question. At least the Chinese models will politely refuse instead of gaslighting the user.


I just did and it gave them to me. What did you write in your prompt?


No, you've got it wrong. If US models are suddenly required to praise their monarch, or hide his past affiliations or whatever, that warrants them more critique.

Chinese models don't become exempt "because the US is also bad"; they both rightfully become targets of criticism.

Other than that, testing the boundaries of the model with a well-established case is very common when evaluating models, and not only with regards to censorship.


Agreed.




