For me, it's less about the abstraction and more about the hard limits on things like memory. Modern software has nearly limitless memory compared to the less than 1 MB typical on these projects. It was definitely a lesson I had to learn.
As far as the abstraction goes, at what point can the compiler no longer undo it? At what point do you run into something that simply cannot be done?
On 6502-based systems the available memory was often less than 64 KiB, the maximum the processor could address directly. Still, you could squeeze a lot into that small amount if you were clever. For example, Steve Wozniak wrote in BYTE magazine about computing e to over 100K places on an Apple II:
> I first calculated e to 47 K bytes of precision in January 1978. The program ran for 4.5 days, and the binary result was saved on cassette tape. Because I had no way of detecting lost-bit errors on the Apple (16 K-byte dynamic memory circuits were new items back then), a second result, matching the first, was required. Only then would I have enough confidence in the binary result to print it in decimal. Before I could rerun the 4.5 day program successfully, other projects at Apple, principally the floppy-disk controller, forced me to deposit the project in the bottom drawer. This article, already begun, was postponed along with it. Two years later, in March 1980, I pulled the e project out of the drawer and reran it, obtaining the same results. As usual (for some of us), writing the magazine article consumed more time than that spent meeting the technical challenges.
> As far as the abstraction goes, at what point can the compiler no longer undo it?
The early 8-bit systems were so constrained in everything from memory to registers, instruction set, and clock speed that using a high-level language wasn't an option if you were trying to optimize performance or squeeze a lot of functionality into the available memory. An 8-bit system would have a 64 KB address space, but maybe only 16-32 KB of RAM, with the rest used by the "OS" and mapped to the display, etc.
The 6502 was especially impoverished, having only three 8-bit registers and a very minimalistic instruction set. Writing performant software for it depended heavily on using "zero-page" memory (a special addressing mode for the first 256 bytes of memory) to hold variables, rather than passing stack-based parameters to functions, etc. It was really a completely different style of programming and mindset from using a high-level language - not about language abstraction, but a painful awareness, all the time, of the bare metal you were running on.
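To make the contrast a bit more concrete, here's a rough sketch of my own in C++ (the names and the $FB-$FE addresses in the comments are purely illustrative, not from the article):

```cpp
// Rough sketch: the two styles contrasted in C++.
#include <stdint.h>

// The style a high-level-language compiler tends to produce: arguments and locals
// live in stack frames (or in registers, on a CPU that has enough of them).
uint16_t add16_stack(uint16_t a, uint16_t b) {
    return a + b;
}

// The style hand-written 6502 code tended toward: operands and results sit at fixed,
// well-known addresses in the zero page ($00-$FF), which has its own shorter and
// faster addressing mode. The rough C++ equivalent is globals at fixed locations,
// with no parameters and no stack frame.
volatile uint16_t op_a;    // imagine these pinned at, say, $FB/$FC and $FD/$FE
volatile uint16_t op_b;
volatile uint16_t result;

void add16_zeropage() {
    result = op_a + op_b;  // callers set op_a/op_b first, then read result afterward
}
```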
When I asked, it was based on my use of the Arduino IDE, where one writes higher-level code that then gets compiled into machine code. I had a project using multiple sensors where I could not store all of their responses in memory and write them to a log in one shot. Instead, I had to write to the log right after reading each sensor and release the memory at the end of each loop. I was originally hoping to do more analysis with the data onboard the Arduino, but in hindsight that was a pretty dumb idea. The sensing platform should just do the sensing; do the analysis in post.
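Something like this, for anyone curious - a minimal sketch of the pattern, with placeholder analogRead() calls standing in for the real sensors and Serial standing in for the actual log:

```cpp
#include <Arduino.h>

const uint8_t SENSOR_PINS[] = {A0, A1, A2, A3};   // placeholder pins, not the real project
const size_t SENSOR_COUNT = sizeof(SENSOR_PINS) / sizeof(SENSOR_PINS[0]);

void setup() {
    Serial.begin(9600);
}

void loop() {
    // Instead of collecting every reading into one big in-RAM buffer (which a small
    // AVR's ~2 KB of SRAM can't sustain for long), log each reading as soon as it's taken.
    for (size_t i = 0; i < SENSOR_COUNT; i++) {
        int reading = analogRead(SENSOR_PINS[i]);  // stand-in for the real sensor read
        Serial.print(millis());
        Serial.print(',');
        Serial.print(i);
        Serial.print(',');
        Serial.println(reading);                   // written out immediately, nothing retained
    }
    delay(1000);
}
```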
I may sound bitter in describing this, but I started to notice about 15 years ago that the then-current crop of developers, trained exclusively on GC'd languages, seemed to have no idea what a memory constraint would look like and thought that a hidden memory allocation was free as long as it occurred a few layers beneath you.
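For anyone who hasn't hit this, here's a contrived C++ example of the kind of thing I mean (names made up): nothing in the calling code says "allocate", yet every call does.

```cpp
#include <string>
#include <vector>

std::string label(int id) {
    return "sensor-" + std::to_string(id);    // heap allocation, one layer down
}

std::vector<std::string> labels(int n) {
    std::vector<std::string> out;             // heap allocation, two layers down
    for (int i = 0; i < n; i++) {
        out.push_back(label(i));              // more allocations, plus occasional reallocation
    }
    return out;
}

int main() {
    // Calling this once per frame or per loop iteration looks "free" from here,
    // but on a constrained target the churn adds up fast.
    auto names = labels(1000);
    return static_cast<int>(names.size());
}
```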
Formal training, such as a CS degree, definitely starts with limited systems and progresses to larger ones. So if you've been through that pain, I can see the bitterness. However, I'd wager the vast majority of coders did not take a CS course and are self-taught or boot camp grads. It's not really their fault they don't know assembly. It's just not something they've ever needed, or most likely will ever need, to deal with in their day job.
In the mid-2000s, intro CS classes started focusing on Java. These days Python occupies a similar niche. It's not until students take an operating systems class, possibly in junior year, that they might be confronted with manual memory management.
Interesting. My CS course started right out of the gate with assembly. My fledgling computer course in high school started with Pascal instead of BASIC. Clearly, this was before the "mid-2000s". Maybe I was taking classes out of order??? It was way too long ago for me to remember the details, but I was well underwater trying to jump right into a low-level language. I remember struggling with pointers in Pascal until one day it finally clicked. Assembly came across to me as a weed-out course right at the beginning.
The painful thing is when someone describes themselves as a “full stack developer” but lacks any mechanical sympathy for what the processor, memory, and I/O buses are actually doing.
This doesn’t require a formal education. I was self-taught long before studying CS institutionally. And per the article I am super grateful to the 6502 for being the platform that I learned from.
I mean, no, that's mostly not what I'd consider a full stack developer myself, and I've been around a long time. You can be full stack by grabbing everyone else's libraries and making an app spit data from the DB to the UI.
Simply put, the vast majority of developers will never need this information, nor will they ever be so resource-constrained that it's worth their time to understand the issue more deeply.