
Their only special sauce is first-mover advantage. That attracted users (data), brand recognition, and talent, and became a positive feedback cycle.


GPT-4 was created before most of that feedback cycle existed. They had GPT-4 ready before the ChatGPT launch.

If I recall right, GPT-4 finished training in October. After that, it was RLHF and safety work (Bing started using GPT-4 publicly in February, a month before the official launch).


If I recall right, before ChatGPT launched, Google already had LaMDA, which one employee believed to be sentient (he was subsequently fired). The foundation model was definitely done, but to launch Bard, Google needed a kick in the ass to do the additional RLHF, safety, and groundedness work.

Ultimately, though, it's futile to argue over which model was finished first while the models were behind closed doors. But ChatGPT launched before Bard did, and that's the pertinent part that gave OpenAI the first-mover advantage.


The "LaMDA is sentient" guy gave me the impression of being a bit nuts. I'm sure Google would throw their weight around and out-compete OpenAI if they could. We all know all this "AI safety" is for show, right?


> We all know all this "AI safety" is for show, right?

No. A lot of people think it really matters.

A lot of other people pretend to care about it because it also enables stifling competition and attempting regulatory capture. But that's not all of them.


I'm personally devoting my career to AI safety, on a volunteer basis, because I think it is legitimately of high importance. (See my blog, e.g. https://amistrongeryet.substack.com/p/implications-of-agi, if you want to understand where I'm coming from.)

What makes you think it is for show?


No, it's for brand safety and reputation. In 2016 Microsoft released Tay [1] without adequate guardrails, and it ended up being a failure that hurt the Microsoft brand.

[1] https://en.wikipedia.org/wiki/Tay_(chatbot)


LaMDA is really far from being sentient.

It outputs nonsensical (i.e., heavily hallucinated) text, or text that is coherent but relatively useless.

It really needs further refinement.

This is one big reason why GPT-4 is still the most popular.


GPT-4 was done training in August 2022.


Thanks!


The RLHF is probably quite important even on top of a good base model.
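
For context: RLHF trains a reward model from pairwise human preferences, then uses it to steer the base model's outputs. Here's a minimal toy sketch of that loop in Python; everything here (the candidate pool, the hand-rolled features, the linear reward model) is a hypothetical stand-in, not how any production system actually does it. Real RLHF updates the policy's weights with PPO rather than doing best-of-n sampling:

    import math
    import random

    # Toy "base model": samples a response from a fixed candidate pool.
    # (Hypothetical stand-in for an actual language model.)
    CANDIDATES = [
        "I don't know.",
        "The capital of France is Paris.",
        "Paris, obviously, like everyone knows.",
    ]

    # Simple hand-rolled features per response (length, a rudeness marker).
    def features(text):
        return [len(text) / 50.0, 1.0 if "obviously" in text else 0.0]

    # Reward model: linear score over features, trained on pairwise
    # human preferences with the Bradley-Terry / logistic loss.
    weights = [0.0, 0.0]

    def reward(text):
        return sum(w * f for w, f in zip(weights, features(text)))

    # Preference data: (preferred, rejected) pairs a human labeled.
    prefs = [(CANDIDATES[1], CANDIDATES[0]), (CANDIDATES[1], CANDIDATES[2])]

    LR = 0.5
    for _ in range(200):  # gradient ascent on preference log-likelihood
        for good, bad in prefs:
            # P(good preferred) = sigmoid(reward(good) - reward(bad))
            p = 1.0 / (1.0 + math.exp(reward(bad) - reward(good)))
            for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
                weights[i] += LR * (1.0 - p) * (fg - fb)

    # "Policy improvement" in miniature: best-of-n sampling against the
    # reward model (real RLHF would instead fine-tune the model itself).
    samples = [random.choice(CANDIDATES) for _ in range(8)]
    print(max(samples, key=reward))

The point being: the reward model, not the base model, is what encodes "helpful and inoffensive," which is why two similar base models can feel very different after this step.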


That’s not it. It’s not just hype. The underlying model is better.




