The new GPT-4 model (GPT-4 Turbo) has a context length of 128k tokens. For consumers this works out to slightly more than $1 per message for input alone.
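Back-of-the-envelope, assuming GPT-4 Turbo's launch pricing of $0.01 per 1k input tokens (that price is my assumption; check OpenAI's current rate card):

    # Rough cost of one maxed-out prompt, input tokens only.
    # Assumes (not confirmed) $0.01 per 1k input tokens, GPT-4 Turbo launch pricing.
    CONTEXT_TOKENS = 128_000
    PRICE_PER_1K_INPUT = 0.01  # USD, assumed

    cost = CONTEXT_TOKENS / 1000 * PRICE_PER_1K_INPUT
    print(f"${cost:.2f} per full-context message, input only")  # -> $1.28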

If ChatGPT is using this model, then the more reasonable assumption is that they are bleeding money and need to cut costs.

People really need to stop asking ChatGPT to write out complete programs in a single prompt.



Interesting. How does writing less code cut costs for them? Does this tie back to the rumor that the board was mad at Altman for prioritizing ChatGPT over money going into research/model training?


Code is very token-dense, from what I understand: it takes more tokens per character than ordinary prose, so long code outputs are expensive to generate.
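For anyone who wants to check, OpenAI's tiktoken library makes the comparison easy. A minimal sketch (the sample strings are mine, just for illustration):

    # pip install tiktoken
    import tiktoken

    # cl100k_base is the tokenizer used by the GPT-4 family
    enc = tiktoken.get_encoding("cl100k_base")

    samples = {
        "prose": "The quick brown fox jumps over the lazy dog.",
        "code": "for (int i = 0; i < n; i++) { total += arr[i]; }",
    }

    # Compare tokens per character: code's punctuation and symbols
    # tend to split into more tokens than ordinary English words.
    for label, text in samples.items():
        n = len(enc.encode(text))
        print(f"{label}: {n} tokens / {len(text)} chars = {n/len(text):.2f} tokens per char")

Roughly the same length on screen, but noticeably more tokens billed for the code.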



