Hacker News

I think equally impressive is the performance of the OpenAI team at the "AtCoder World Tour Finals 2025" a couple of days ago. There were 12 human participants and only one did better than OpenAI.

Not sure there is a good writeup about it yet, but here is the livestream: https://www.youtube.com/live/TG3ChQH61vE.



And yet when working on production code current LLMs are about as good as a poor intern. Not sure why the disconnect.


Depends. I’ve been using it for some of my workflows, and I’d say it is more like a solid junior developer with weird quirks: sometimes it makes stupid mistakes, and other times it behaves like a 30-year SME veteran.


I really doubt it's like a "solid junior developer". If it could do the work of a solid junior developer, it would be making programming projects 10-100x faster, because it can do things several times faster than a person can. Maybe it can write solid code for certain tasks, but that's not the same thing as being a junior developer.


It can be 10-100x faster for some tasks already. I've had it build prototypes in minutes that would have taken me a few hours to cobble together, especially in domains and using libraries I don't have experience with.


It’s the same reason Leetcode is a bad interview question. Being good at these sorts of problems doesn’t translate directly to being good at writing production code.


Because competitive coding is a narrow, well-described domain (a limited number of concepts: lists, trees, etc.) with a high volume of data available for training and an easy way to set up an RL feedback loop, models can improve well in it. None of that is true of typical bloated enterprise software.


Everything you said is true. Keep in mind this is the "Heuristics" competition rather than the "Algorithms" one.

Instead of the more traditional Leetcode-like problems, it's things like optimizing scheduling/clustering according to some loss function. Think simulated annealing or pruned searches.
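To make the contrast concrete, here is a minimal sketch of the kind of simulated annealing loop these heuristic contests revolve around: minimize some loss over a combinatorial state by accepting worse moves with a probability that shrinks as the "temperature" cools. The toy scheduling instance and all names here are made up for illustration, not from any actual AtCoder problem.

```python
import math
import random

def simulated_anneal(state, loss, neighbor, steps=10_000, t0=1.0, t1=0.001):
    """Generic simulated annealing: accept a worse candidate with
    probability exp(-delta / T), where T cools geometrically t0 -> t1."""
    rng = random.Random(0)
    best = cur = state
    best_loss = cur_loss = loss(cur)
    for i in range(steps):
        t = t0 * (t1 / t0) ** (i / steps)  # geometric cooling schedule
        cand = neighbor(cur, rng)
        cand_loss = loss(cand)
        delta = cand_loss - cur_loss
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = cur, cur_loss
    return best, best_loss

# Toy "scheduling" instance: order jobs to minimize total weighted
# completion time. Each job is (duration, weight).
jobs = [(3, 1), (1, 4), (2, 2), (5, 1), (4, 3)]

def total_weighted_completion(order):
    t = cost = 0
    for i in order:
        t += jobs[i][0]
        cost += jobs[i][1] * t
    return cost

def swap_two(order, rng):
    """Neighbor move: swap two random positions in the schedule."""
    a, b = rng.sample(range(len(order)), 2)
    out = list(order)
    out[a], out[b] = out[b], out[a]
    return out

start = list(range(len(jobs)))
best, best_cost = simulated_anneal(start, total_weighted_completion, swap_two)
```

In a real contest the loss would be the judge's scoring function and the neighbor move would be problem-specific; the annealing skeleton itself barely changes.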


Dude thank you for stating this.

OpenAI's o3 model can solve very standard Codeforces problems it's been trained on, even up to 2700-rated ones, but it is unable to think from first principles to solve problems I've set that are ~1600-rated. Those 2700-rated problems rely on algorithms from obscure pages of the competitive programming wiki, so it can solve them with knowledge alone.

I am still not very impressed with its ability to reason both in codeforces and in software engineering. It's a very good database of information and a great searcher, but not a truly good first-principles reasoner.

I also wish o3 was a bit nicer. Its "reasoning" seems to have made it more arrogant at times, even when it's wildly off, and that kind of annoys me.

Ironically, this workflow has really separated for me what is the core logic I should care about and what I should just google, which is a skill always worth learning when traversing new territory.


Not completely sure how your reply relates to my comment. I was just mentioning that the competition is on Heuristics, which is different from what you find on CF or most coding competitions.

About the performance of AI on competitions, I agree that what's difficult for them is different from what's difficult for us.

Problems that are just applying a couple of obscure techniques may be easier for them. But some problems I've solved required a special kind of visualization/intuition which I can see being hard for AI. But I'd also say that of many Math Olympiad problems and they seem to be doing fine there.

I've almost accepted it's a matter of time before they become better than most/all of the best competitors.

For context, I'm a CF Grandmaster but haven't played much with newer models, so maybe I'm underestimating their weaknesses.



