
I'm not saying bugs aren't a problem. I'm saying that if an emerging, fast-improving technology is only slightly behind a human coder now, it seems plausible that we're not far from the point where it reaches parity.




Exactly. I'm sure assembly-language programmers in the 1980s could easily write code that ran 2x faster than the code produced by compilers of the time, but compilers only got better, assembly programming eventually became a rare job, and humans can now rarely outperform compilers across a whole program.

Assembly experts can still write code that runs faster than what compilers produce. But being slower is predictable and is solved with better hardware, or just by waiting. That's fine for most uses, so we switched to easier, more portable languages; the output of the program remains the same.

The impact of having 1.7x more bugs is difficult to assess and is not solved that easily. The comparison would hold if this were about optimization: code that is 1.7x slower or 1.7x more memory-hungry.
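To make the first claim concrete: where hand-written assembly (or assembly-level intrinsics) still wins is in hot kernels, like the ones in video codecs. A minimal sketch, assuming x86-64 with SSE2; the function names are mine, and a good autovectorizer may well match the intrinsics version on a loop this simple:

    /* Horizontal byte sum: portable C vs. hand-vectorized SSE2. */
    #include <emmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* The obvious loop; the compiler may or may not vectorize it well. */
    uint64_t sum_bytes_scalar(const uint8_t *p, size_t n) {
        uint64_t s = 0;
        for (size_t i = 0; i < n; i++) s += p[i];
        return s;
    }

    /* Hand-vectorized: PSADBW against zero sums 8 bytes into each
       64-bit half of the register, so we consume 16 bytes per
       iteration with no risk of early overflow. */
    uint64_t sum_bytes_sse2(const uint8_t *p, size_t n) {
        __m128i acc = _mm_setzero_si128();
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i v = _mm_loadu_si128((const __m128i *)(p + i));
            acc = _mm_add_epi64(acc, _mm_sad_epu8(v, _mm_setzero_si128()));
        }
        uint64_t s = (uint64_t)_mm_cvtsi128_si64(acc)
                   + (uint64_t)_mm_cvtsi128_si64(_mm_srli_si128(acc, 8));
        for (; i < n; i++) s += p[i];  /* scalar tail */
        return s;
    }

The point isn't this toy loop; it's that projects like x264 and dav1d still ship hand-tuned kernels in this style because they measurably beat compiler output.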


> Assembly experts still write code that runs faster than code produced by compilers.

They sometimes can, but it's no longer a guaranteed outcome. Superoptimizers can often put hand-written assembly to shame.
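The classic illustration, going back to Massalin's 1987 paper that coined the term (and carried on by tools like GNU superopt and LLVM's Souper), is discovering branch-free instruction sequences a human wouldn't think to try. A minimal sketch in C; the function names are mine:

    #include <stdint.h>

    /* The obvious, branchy signum. */
    int32_t sign_branchy(int32_t x) {
        if (x > 0) return 1;
        if (x < 0) return -1;
        return 0;
    }

    /* The kind of branch-free sequence a superoptimizer finds:
       (x >> 31) is -1 for negative x (arithmetic shift is
       implementation-defined in C but universal in practice), and
       the unsigned negation has its top bit set for any positive x,
       so the OR yields -1, 0, or 1 as required. */
    int32_t sign_branchless(int32_t x) {
        return (x >> 31) | (int32_t)(((uint32_t)0 - (uint32_t)x) >> 31);
    }

Both return the same result for every 32-bit input, but the second compiles to a few straight-line instructions with no branches to mispredict.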

> Impact of having 1.7x more bugs is difficult to assess and is not solved that easily.

Time will tell. Arguably, the number of bugs produced by AI two years ago was much higher than 1.7x. In two more years it might be only 1.2x; in four, the gap might be barely measurable. The trend over the next couple of years will show whether this is a viable way forward.



