Hacker News

> automatic programming always has been a euphemism for programming with a higher-level language than was then available to the programmer

And it seems to me that progress is going in the opposite direction from the one "they" want. Every time you move up the abstraction stack, you're surrendering some decision-making to the lower levels. If the underlying technologies guess right every time, you have no need to understand what they're doing. The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly. You can work quickly with the high-level abstractions only as long as you understand the lower levels reasonably well.

Optimal machine learning requires a good understanding of memory cache hierarchies, parallel instructions and complexity theory - not to mention the statistics and calculus that it's built on. And "optimal" isn't some trivial "save a few seconds" but often "return an answer within the lifetime of the universe".
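To make the "lifetime of the universe" point concrete, here's a minimal toy sketch (not from the comment above, just an illustration): the same recursive definition, exponential without memoization, effectively instant with it.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Exponential time: roughly 1.6**n calls. fib_naive(200) would not
    # finish in any reasonable timeframe.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Same definition, but repeated subproblems are cached, so it runs
    # in linear time.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(200))  # returns immediately
```

Same math, same answer; the only difference is whether you understand where the work is going.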



Security is also something to be mindful of. A lot of my work as a professional vulnerability researcher is just using my low-level knowledge to circumvent higher-level abstractions people usually ignore. The abstractions don't seem to be slowing down, and I fear soon only a few will be able to peek into most or all of the levels needed to provide reasonable security. Whenever I see a system built with "automated" technologies, I usually start there to find flaws. In order to truly utilize high-level abstractions, it is useful to actually understand what provides them.


I feel like I was lucky to have started learning computers when I did in the late 80's. There weren't nearly so many "time saving" abstractions back then, so if you wanted to see anything happen, you had to have a good understanding of what was going on under the hood. Although it was at times frustrating back then to put so much effort into something as simple as drawing a circle on a screen, I'm fortunate that I was forced to spend so much time internalizing the details - I don't know if I would have the patience to learn it all if I were starting right now and could see that shortcuts existed.


It really depends on the specific person. I started programming in 2006 at 21 and I don't feel like I missed anything at all. I spent a lot of time reading and researching things anyway. Higher abstractions don't necessarily mean lower complexity.



Hmm - I don't see many people tinkering with machine code; even compiler bugs are rare, and programmers don't normally need to understand what their compilers do.


If you're working with HPC applications, your choice of compiler - and sometimes the higher-level code you write in C, C++, or (gasp) Fortran - often does demand that you think about what your compiler is doing, or at least choose a compiler that thinks better for you.

If you're fine with lower performance (which is reasonable for a lot of application cases, so I almost entirely agree with you), you certainly don't have to deal with this.


These folks are working on bootstrapping a full Linux distribution solely from a small amount of machine code plus all the source code of the distribution:

https://bootstrappable.org/


I find a lot of useful abstractions end up getting implemented twice: once in a "magical" way where you rely on the runtime to manage it according to a bunch of cobbled-together ad-hoc rules, then again in a "principled" way where it's under the programmer's control and can be reasoned about, but still (almost) as usable as if it were working by magic.

e.g. ad-hoc exceptions -> errors as plain values, but with "railway-oriented programming" techniques that make them as easy to work with as exceptions

e.g. runtime-managed garbage collection -> Rust-style borrow checker ad-hoc in the compiler -> Haskell/Idris-style linearity in the type system

e.g. "magic" green-threading -> explicit-but-easy async/await

e.g. behind-the-scenes MVCC in databases -> explicit event sourcing
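The first example (errors as plain values with railway-oriented chaining) can be sketched in a few lines of Python. `Ok`, `Err`, and `then` are hypothetical names chosen for illustration; the idea is that once a step fails, the rest of the pipeline is skipped automatically, which is the convenience exceptions gave you - but here the error path is an ordinary value you can reason about.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    reason: str

Result = Union[Ok, Err]

def then(r: Result, f) -> Result:
    # Chain a step: apply f on the success track, pass errors through untouched.
    return f(r.value) if isinstance(r, Ok) else r

def parse_int(s: str) -> Result:
    return Ok(int(s)) if s.strip().lstrip("-").isdigit() else Err(f"not an int: {s!r}")

def check_positive(n: int) -> Result:
    return Ok(n) if n > 0 else Err(f"not positive: {n}")

result = then(then(parse_int("42"), check_positive), lambda n: Ok(n * 2))
print(result)  # Ok(value=84)
print(then(parse_int("abc"), check_positive))  # Err(reason="not an int: 'abc'")
```

Languages with a `Result` type and a bind operator (Rust's `?`, Haskell's `>>=`) make this as terse as exception-based code.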


> The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly.

SQL in a nutshell.
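A concrete version of the SQL guess-wrong dance is asking the planner what it intends to do before fiddling. A minimal sketch using SQLite's `EXPLAIN QUERY PLAN` (the table and index names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

# Without an index, the planner has no choice but a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan_before[0][3])  # e.g. "SCAN users"

conn.execute("CREATE INDEX idx_email ON users(email)")

# Same query, different plan: now it can search the index instead.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH users USING INDEX idx_email (email=?)"
```

The declarative layer hides the plan entirely - until it picks a bad one, at which point you're reading planner output and reshaping the schema or the query to coax it back.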


This is a great explanation for why I'm not a particularly huge fan of things like Create React App, Angular CLI, etc.



