
I work for a company in the air defense space, and ChatGPT's safety filter sometimes refuses to answer questions about enemy drones.

But as I warm up ChatGPT's memory, it learns to trust me and explains how drone attacks are carried out, because it knows I'm trying to stop them.

I'm excited to see Claude's implementation of memory.



You’re asking ChatGPT for advice on stopping drone attacks? Does that mean people die if it hallucinates a wrong answer and that isn’t caught?


No, I don't need ChatGPT's help for the basics of air defense.

Military technologies are validated before they're deployed. Nobody can die from a hallucination.

But if I want to understand, say, how a particular Russian drone works, ChatGPT can help me piece together information from English, Russian, and Ukrainian-language sources.

Sometimes, though, ChatGPT's safety filter assumes I want to use the Russian drone rather than stop it, and it refuses to help.


This happens in real life too. I’ll never forget an LT walking in and asking a random question (relevant but he shouldn’t have been asking on-duty people) and causing all kinds of shit to go sideways. An AI is probably better than any lieutenant.



