Hacker News | bhl's comments

gh pr diff [num]

also works if you have the GitHub CLI installed. I'd set up an AGENTS.md or SKILL.md to instruct an agent on how to use gh too.
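An AGENTS.md entry for this can be just a few lines; a hedged sketch (the section wording and the `123` placeholder are illustrative, not a standard):

```
## Reviewing pull requests

Use the GitHub CLI instead of scraping the web UI:

- `gh pr diff 123` prints the full diff for PR 123
- `gh pr diff 123 --name-only` lists only the files it touches
```

The agent still needs `gh auth login` to have been run once in the environment.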


It does not matter what 80-90% of developers do. Code development is heavily tail-skewed: focus on the frontier and on the people who are able to output production-level code at a much higher pace than the rest.

No kidding. Arguing "90% of devs do this" makes it that much more likely that it's something that the bottom 90% of devs do.

Multi-agents.

Not for us to question or answer though.


What’s the solution here? Reward code that works without try/catch, reward code that errors and is caught, but penalize code that wraps everything in a try/catch that never sees an error?


Cursor will pivot to a computer use company.

The gap between coding agents in your terminal and computer agents that work across your entire operating system is just too narrow, and it will be crossed quickly.


Once this tech is eliminating jobs on a massive scale I'll believe the AI hype. Not to say that couldn't be right around the corner - I have no clue. But being able to perform even just data entry tasks with better-than-human accuracy would be a huge deal.


That’s the risk - a lot of people suddenly flipping their beliefs at once, especially if they’re the same people losing the jobs. It’s a civil unrest scenario.


The moat is people, data, and compute in that order.

It’s not just compute. That has mostly plateaued. What matters now is the quality of data, which experiments to run, and which environments to build.


This "moat" is actually constantly shifting (which is why it isn't really a moat to begin with). Originally, it was all about quality data sources. But that saturated quite some time ago (at least for text). Before RLHF/RLAIF it was primarily a race over who could throw more compute at a model and train longer on the same data. Then it was who could come up with the best RL approach. Now we're back to who can throw more compute at it, since everyone is once again doing pretty much the same thing. With reasoning we've also opened a second avenue where it's all about who can throw more compute at it at runtime, not just during training.

So in the end, it's mostly about compute. The last few years have taught us that any significant algorithmic improvement will soon permeate the entire field, no matter who originally invented it. So people are important for finding this stuff, but not for making the most of it. On top of that, I think we are very close to the point where LLMs can compete with humans on algorithmic development itself. Then it will be even more about who can spend more compute, because there will be tons of ideas to evaluate.


To put that into a scientific context: compute is the capacity to run experiments and generate data (about how best to build models).

However, I do think you're missing an important aspect, and that's people who properly understand important, solvable problems.

i.e. I see quite a bit of "we will solve x with AI" from startups that don't fundamentally understand x.


> we will solve x with AI

You usually see this from startup techbro CEOs who understand neither x nor AI. Those people are already replaceable by AI today. The kind of people who think they can query ChatGPT once with "How to create a cutting edge model" and make millions. But when you go in on the deep end, there are very few people who still have enough tech knowledge to compete with your average modern LLM. And even the Math Olympiad gold-medalist high-flyers at DeepSeek are about to get a run for their money from the next generation. Current AI engineers will shift more and more towards senior architecture and PM roles, because those will be the only ones that matter. But PM and architecture work is already something you could replace today.


> Originally, it was all about quality data sources.

It still is! Lots of vertical productivity data that would be expensive to acquire manually via humans will be captured by building vertical AI products. Think lawyers, doctors, engineers.


That's literally what RLAIF has been doing for a while now.


People matter less and less as well.

As more opens up in the OSS and academic space, their knowledge and experience will either be shared, rediscovered, or become obsolete.

Also, many of these people are coasting on one or two key discoveries made by a handful of people years ago. When Zuck figures this out, he's gonna be so mad.


Not all will become OSS. Some will become products, and that requires the best people.


Hiring is always a sh*tshow. The only thing that matters is survival: keep applying, keep grinding, keep growing.

And if there's any opportunity to show off, don't be shy :)


Stacked diffs are a huge one, and also where improving git would improve LLM workflows. The bottleneck after code generation is PR review, and stacked diffs help break large PRs into more digestible pieces.

If you help humans collaborate better, you help LLMs collaborate better.


Well, how about rethinking your workflow instead of stacking branch after branch?


Because I can produce 5 clean, properly sized commits in the time it takes to do one round of review, so they have to be stacked. It's important that CI runs independently on each commit, and that each commit builds on the work of the previous one.
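A minimal local sketch of that stacking (branch names and file contents are invented; in a real setup each `step-N` branch would get its own PR, with CI running per branch):

```shell
set -e
# Scratch repo to demonstrate stacking: each commit lives on its own branch,
# and each branch is based on the previous one rather than on main.
git init -q stack-demo
cd stack-demo
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "base"

git checkout -q -b step-1             # first reviewable unit
echo "refactor" > a.txt
git add a.txt && git commit -q -m "step 1: refactor"

git checkout -q -b step-2             # second unit, stacked on step-1
echo "feature" > b.txt
git add b.txt && git commit -q -m "step 2: feature"

# step-2 contains step-1's work, so CI on step-2 exercises both
git log --oneline step-2
```

When an earlier commit changes after review, the stack is re-based with `git rebase --onto`; tools such as Graphite or git-branchless automate that re-stacking.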


Mobile has really strong offline primitives compared to the web.

But the web is where most productivity and collaboration happens; it's also a more adversarial environment: syncing state between tabs, dealing with storage eviction. That's why local-first is mostly web-based.

