Hacker News

Which, to be fair—we're talking about the pre-GPT-3.5 era—it kind of was?


Don't you remember all of the scaremongering around how unethical it would be to release a GPT-3 model publicly?

Google personally reached out to someone trying to reproduce GPT-3 and convinced him to abandon his plan to release it to the public.


There was scaremongering about releasing GPT-2.

GPT-2!!


You're right. I was remembering GPT-2, and it was OpenAI that reached out. He was in contact with Google to get the training compute.

https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62...


And here we are after DeepSeek, the Qwen models, and so much more, like GLM-4.6, which are reaching SOTA of sorts.


I mean, the level of scams due to LLMs has increased since then, so it's not exactly wrong.


The unfortunate truth when you're on the cusp of a new technology: it isn't good yet. Keeping a team of people around whose sole job is to tell you your stuff sucks is probably not aligned with producing good stuff.


There's almost an "uncanny valley" type situation with good products. New technologies start out promising, but rough. Then, as they improve and get close to being a "good product," the gap that remains becomes all the more obvious. At that stage it can feel worse than a mediocre product. Until it's done.


There's a world of difference between saying "our stuff sucks" and "here are the specific ways our stuff isn't ready for launch." The former is just whining; the latter is what a good PM does.



