
A 3x price drop almost certainly means Opus 4.5 is a different, smaller base model than Opus 4.1, with more fine-tuning to target the benchmarks.

I'll be curious to see how performance compares to Opus 4.1 on the kind of tasks and metrics they're not explicitly targeting, e.g. eqbench.com



Why? They just closed a $13B funding round. It's entirely possible that they're selling below cost to gain market share; at their current usage the cloud computing costs shouldn't be too bad, while the benefit of showing continued growth on their frontier models is great. Hell, for all we know they may have priced Opus 4.1 above cost to show positive unit economics to investors, and then dropped the price of Opus 4.5 to spur growth so their market position looks better at the next round of funding.


Nobody subsidizes LLM APIs. There is a reason to subsidize free consumer offerings: those users are very sticky, and won't switch unless the alternative is much better.

There might be a reason to subsidize subscriptions, but only if your value is in the app rather than the model.

But for API use, the models are easily substituted, so market share is fleeting. The LLM interface being unstructured plain text makes it simpler to upgrade to a smarter model than it used to be to swap a library or upgrade to a new version of the JVM.
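
To make that concrete, here's a minimal sketch of the point (assuming the Anthropic Python SDK; the model ID strings are illustrative): upgrading to a newer model is often a one-string change, because the interface is plain text rather than a typed library API.

    # Upgrading models is often just a change to one string.
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-opus-4-5",   # illustrative; was e.g. "claude-opus-4-1" before the upgrade
        max_tokens=512,
        messages=[{"role": "user", "content": "Summarize this ticket: ..."}],
    )
    print(response.content[0].text)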

And there is no customer loyalty. Both the users and the middlemen will chase after the best price and performance. The only choice is at the Pareto frontier.

Likewise there is no other long-term gain from getting a short-term API user. You can't train or tune on their inputs, so there is no classic Search network effect either.

And it's not even just about the cost. Any compute they allocate to inference is compute they aren't allocating to training. There is a real opportunity cost there.

I guess your theory of Opus 4.1 having massive margins while Opus 4.5 has slim ones could work. But given how horrible Anthropic's capacity issues have been for much of the year, that seems unlikely as well. Unless the new Opus is actually cheaper to run, where are they getting the compute for the massive usage spike that seems inevitable?


LLM APIs are more sticky than many other computing APIs. Much of the eng work is in the prompt engineering, and the prompt engineering is pretty specific to the particular LLM you're using. If you randomly swap out the API calls, you'll find you get significantly worse results, because you tuned your prompts to the particular LLM you were using.

It's much more akin to a programming language or platform than a typical data-access API, because the choice of LLM vendor means that you build a lot of your future product development on the idiosyncrasies of their platform. When you switch, you have to redo much of that work.


No, LLMs really are not more sticky than traditional APIs. Normal APIs are unforgiving in their inputs and rigid in their outputs. No matter how hard you try, Hyrum's Law will get you over and over again. Every migration is an exercise in pain. LLMs are the ultimate adaptable, malleable tool. It doesn't matter if you'd carefully tuned your prompt against a specific six-month-old model. The new model of today is sufficiently smarter that it'll do a better job despite not having been tuned on those specific prompts.

This isn't even theory, we can observe the swings in practice on Openrouter.

If the value were in prompt engineering, people would stick to specific old versions of models, because a new version of a given model might as well be a totally different model. It will behave differently, and will need to be qualified again. But of course only a few people stick with the obsolete models. How many applications do you think still use a model released a year ago?


A full migration is not always required these days.

It is possible to write adapters to API interfaces. Many proprietary APIs become de facto standards when competitors start shipping those compatibility layers out of the box to convince you their product is a drop-in replacement. The S3 API is a good example: every major (and most minor) provider, with the glaring exception of Azure, supports it out of the box now. The Postgres wire protocol is another; many databases support it these days.

In the LLM inference world, the OpenAI API spec is becoming that kind of de facto standard.
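
As a rough sketch of what that buys you (assuming the OpenAI Python SDK; the base_url and model slug below are just illustrative examples of an OpenAI-compatible provider), switching providers can be mostly a matter of changing two strings:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint
        api_key="...",                            # provider-specific key
    )
    response = client.chat.completions.create(
        model="anthropic/claude-opus-4.5",        # illustrative model slug
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)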

There are always caveats, of course, and switches rarely go without bumps. It depends on what you are using: if you stick to a few popular, widely supported features you'll be mostly fine, but if you rely on some niche feature of the API that a given provider hasn't implemented properly, you will get some bugs.

In most cases, bugs in the API interface world are relatively easy to solve, as they can be reproduced and logged as exceptions.

In the LLM world there are few "right" answers for inference outputs, so it's a lot harder to catch and reproduce bugs, let alone fix them without breaking something else. You end up retuning all your workflows for the new model.


> But for API use, the models are easily substituted, so market share is fleeting. The LLM interface being unstructured plain text makes it simpler to upgrade to a smarter model than it used to be to swap a library or upgrade to a new version of the JVM.

Agree that the plain text interface (which enables extremely fast user adoption) also makes the product less sticky. I wonder if this is part of the incentive to push for specialized tool calling interfaces / MCP stuff: to engineer more lock-in by increasing the model-specific surface area.
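
A hedged illustration of that surface area (the get_weather tool below is hypothetical; the field names follow the publicly documented OpenAI-style and Anthropic-style tool schemas): the same capability has to be declared differently per vendor, which is exactly where switching costs creep back in.

    # OpenAI-style tool definition
    openai_style_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

    # Anthropic-style tool definition for the same capability
    anthropic_style_tool = {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }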


Eh, I'm testing it now and it seems a bit too fast to be the same size: almost 2x the tokens per second and a much lower time to first token.

There are other valid reasons why it might be faster, but being faster even while everyone is rushing to try it at launch, plus a cost decrease, leaves me inclined to believe it's a smaller model than past Opus models.


It could be a combination of over-provisioning for early users, smaller model and more quantisation.


It does seem too fast to be a huge model, but it also is giving me the vibes of the typical Opus level of intelligence. So who knows.


It's double the speed: 60 t/s vs 30. Combined with the price drop, it's a strong signal that this is a smaller model or a more efficient architecture.


Probably more sparse (MoE) than Opus 4.1. Which isn't a performance killer by itself, but is a major concern. Easy to get it wrong.


We already know distillation works pretty well. So it would definitely make sense for Opus 4.5 to be effectively smaller (like someone else said, it could be via MoE or some other technique too).

We know the big labs are chasing efficiency gains where they can.


It seems plausible that it's a similar size model and that the 3x drop is just additional hardware efficiency/lowered margin.


Or just pressure from Gemini 3


Maybe it's AWS Inferentia instead of NVidia GPUs :)



