
Systems were simpler, side-effects were better understood and contained.

Now I work on systems that have more complexity in their I/O controllers than the devices I started on, and no one understands the full stack. When something goes wrong, the whimsy is the first thing that goes out the window when trying to find the root cause.



Amusingly, an easter egg helped in the investigation of a tricky bug recently. The launch path of Twitter’s iOS app logs a “just setting up my twttr” message (which is a reference to the first tweet, of course), and it was quite useful when trying to find the root cause of a particular crash, because the system had silently changed the launch process for apps in an OS update and we could use its presence in the logs to figure out how far along we were in the startup code.

(To round out the anecdote, I’m a performance engineer on the younger side, so we’re not all bad :P)
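To illustrate the general technique, here's a minimal sketch of the breadcrumb idea in Python (not Twitter's actual code, which lives in the iOS app; the function names are made up): log a fixed marker at a known, early point in launch, then check whether a crash's log contains it to bracket how far startup got.

  import logging

  logging.basicConfig(level=logging.INFO)
  LAUNCH_MARKER = "just setting up my twttr"  # the easter-egg breadcrumb

  def launch():
      logging.info(LAUNCH_MARKER)  # emitted at a fixed, early point in startup
      # ... the rest of the (hypothetical) startup sequence runs after this ...

  def crashed_before_marker(log_lines):
      """True if a crash's log ends before the marker was ever emitted,
      i.e. the failure happened earlier in launch than the breadcrumb."""
      return not any(LAUNCH_MARKER in line for line in log_lines)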


I think that this is the problem. No one understands what is going on. So instead of trying to find the root cause of anything, they add another tool or another layer. And then we have 45 microservices running on a lot of expensive hardware (somewhere) doing less and with worse performance than what we used to do 15 years ago in a monolith on a single server.


> And then we have 45 microservices running on a lot of expensive hardware (somewhere) doing less and with worse performance than what we used to do 15 years ago in a monolith on a single server.

And yet, the "expensive" hardware of now is cheaper than your single server from over a decade ago if you look at the whole picture... a lot of development trends are driven by capitalist incentives:

- Outsourcing stuff like server hardware management to AWS means 1-3 fewer FTEs on your payroll, plus savings on datacenter-related costs (climate control, UPS maintenance, Internet uplink, redundant equipment/spare parts)

- "DevOps" aka "let people who have never heard of Unix cobble together Docker containers and CI pipelines until it works Good Enough" is yet another saving of specialized expert staff (SysOp / SRE)

- "microservices" got popular because it's easier to onboard developers and treat them like disposable cogs if your work packages can be really small, with clearly defined interfaces


> "DevOps" aka "let people who have never heard of Unix cobble together Docker containers and CI pipelines until it works Good Enough" is yet another saving of specialized expert staff (SysOp / SRE)

There's plenty of us old UNIX greybeards working in DevOps. And frankly in most fields of IT you're going to see your fair share of younger engineers because that's literally how life works: people get old and retire while younger people look for work. Moaning that kids don't know the tech they don't need to know is a little like trying to piss in the wind: you might get some short term relief but ultimately you're only going to soil yourself.

edit: It's also funny that you moan about AWS as outsourcing while saying "people don't remember Unix", yet Unix itself comes from a heritage of time-sharing services, which are built on the same basic principles as cloud computing.

If you're old enough you'll eventually see all trends in computing repeat themselves.


I know people who were running commercial sites from an average PC in a spare bedroom over ADSL in the early 2000s.

A bit of graphic design, a bit of PHP, a bit of email management, MySQL, a "pay now" page imported from a payment provider, and they were making good money for a minimal startup cost. All the code was written by one or two people, often with some external help with the graphics, and sometimes the business concept was someone else's idea.

Obviously the services weren't scalable, they didn't have hundreds of millions of users, and they didn't operate globally.

But they didn't need to. And that's still true of many startups today.

A few of them sold up for large sums.


I agree with everything, just wanted to nitpick that microservices have been a thing at enterprise level since Sun RPC.

The "microservice" name just seems to be the modern way to make all forms of distributed RPC hipster fashionable.


Distributed systems are certainly not new. But the whole notion that we need to use them for absolutely everything is new.


Not really, that was the whole premise behind BPEL (Business Process Execution Language) and SOA (Service-Oriented Architecture) from the early 2000s.


I'm so old that I consider everything this millennium kind of new ;)


CORBA and DCOM, then?


We are currently paying quite a bit more than I'm used to, and our throughput is a small fraction of what it was. I don't mind managed servers; it's just that we didn't use to need so much of it.

Unix isn't that hard. And Docker containers and CI/CD pipelines open other cans of worms, and since no one seems to understand what is going on under the surface (because no one wants to touch Unix), they just add more monitoring tools and scaling.

But we suddenly need five times as many developers because 80% of the code is just dealing with interfaces and communication and handling race conditions and recovering from failed network calls.
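To make that concrete, here is a rough sketch (in Python, with a made-up downstream URL) of the kind of retry-and-backoff boilerplate that ends up wrapping every network call once a system is split across services:

  import time
  import urllib.error
  import urllib.request

  def call_with_retries(url, attempts=4, base_delay=0.5):
      """Fetch a URL, retrying transient failures with exponential backoff."""
      for attempt in range(attempts):
          try:
              with urllib.request.urlopen(url, timeout=2) as resp:
                  return resp.read()
          except (urllib.error.URLError, TimeoutError):
              if attempt == attempts - 1:
                  raise  # out of retries: surface the failure to the caller
              time.sleep(base_delay * 2 ** attempt)  # back off 0.5s, 1s, 2s, ...

  # profile = call_with_retries("http://user-service.internal/profile/42")  # hypothetical service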


> "DevOps" aka "let people who have never heard of Unix cobble together Docker containers

I saw a script written by a devops employee where they zeroed out all the partitions when they added a new web server.


If this web server was running on AWS EC2 with attached EBS volumes then zeroing out a new partition was actually AWS' recommended practice to initialize the disk for performance reasons. EBS no longer requires this.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-init...
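For anyone curious, as I recall the old guidance boiled down to touching every block once so that real I/O later didn't pay the lazy-loading penalty (classically a dd from /dev/zero for new volumes, or a full read for snapshot-restored ones). Roughly the same idea as a Python sketch, with a hypothetical device name:

  # Illustration only: touch every block of a freshly attached volume once
  # so later I/O doesn't pay the first-access penalty.
  DEVICE = "/dev/xvdf"      # hypothetical EBS block device
  CHUNK = 1024 * 1024       # 1 MiB at a time

  with open(DEVICE, "rb", buffering=0) as dev:
      while dev.read(CHUNK):
          pass              # discard the data; the point is just to fault every block in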


These were just plain-Jane Docker images, but it's interesting that this could be the source of it.


Yeah, and outsourcing software development to India is a great idea, until ...


> Systems were simpler,

This is an HN trope that gets trotted out every time a subject like this comes up. It's gone from amusing cliché to just boring and false.

Yes, the hardware was simpler. But the knowledge wasn't.

For example, when I first started with ray tracing in the 1990s, you used something like Turbo Pascal and had to actually know and understand all of the math behind what you were doing. Today it's just #include some random other person's library and you're off to the races.

Today if a developer wants to display a new font, they just add it to the massive tire fire of abstractions they've copied from the internet. Back in the supposedly "simpler" days, you plotted out a font on graph paper, then actually did the math of converting those pixels into bytes for storage and display. And did the extra math to find ways to do it in the most efficient way possible.
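For anyone who never did it, the graph-paper-to-bytes step looked roughly like this (a toy Python sketch, one byte per 8-pixel row):

  # A hand-drawn 8x8 glyph for "A", one string per pixel row.
  GLYPH_A = [
      "..XXX...",
      ".X...X..",
      "X.....X.",
      "X.....X.",
      "XXXXXXX.",
      "X.....X.",
      "X.....X.",
      "........",
  ]

  def rows_to_bytes(rows):
      """Pack each row into one byte, most significant bit = leftmost pixel."""
      out = []
      for row in rows:
          byte = 0
          for pixel in row:
              byte = (byte << 1) | (pixel == "X")
          out.append(byte)
      return bytes(out)

  print(rows_to_bytes(GLYPH_A).hex())  # -> 38448282fe828200 (8 bytes per glyph)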

The knowledge has changed, but the amount of knowledge hasn't.

Things are much simpler now than they were then. That's why programmers today think it's OK to waste so much of their computing resources. With the supercomputer power we have in our pockets, this should be a golden age of computing, but instead we use that power to feed people's vanity and addictions.

> side-effects were better understood and contained.

If that were true, then retro computer enthusiasts wouldn't still be discovering features and capabilities today.


The first web pages I made, I had to type into the OG Notepad. There were no tools to help, and you had to support 3 resolutions + at least 2 browsers. This was before responsive design (or indeed any kind of HTML/CSS design practices) or media queries, so for each page of content I had to write six different pages. There were no guides to help with understanding Netscape vs. IE, and no 24-hour tech news or smartphones to keep you updated (so you'd take a vacation for a week, IE would update, and you'd come back to sad emails about how the site was broken), etc. Maintenance and upgrading was a nightmare.

Luckily, we quickly stopped doing that, but the idea that things were easier back then only seems true because people learned from what we were doing. In 20 years, people will talk about how easy it was to make things in 2022.


You're arguing different points on the same broader topic.

The GP was talking about how much of the stack we don't know. I.e. if something fails in an abstraction underneath what we develop in, then we're often fscked. And there are so many layers to the stack now that debugging the entire stack has gotten harder -- this is a true statement.

However, you are talking about the barrier to entry in software development. It has gotten much lower. This is also a true statement, but it doesn't make the former statement untrue.

By making it easier to write higher level code we end up obfuscating the lower layers. Which makes it harder to inspect the lower layers of the stack. So it's literally both simpler and more complex, depending on the problem.

> > side-effects were better understood and contained.

> If that was true, then retro computer enthusiasts wouldn't still be discovering features and capabilities today.

This is a grossly unfair statement because you make a claim for one side of the argument and, without comparing it to the other side (i.e. are we still discovering features and capabilities of modern systems?), draw the conclusion that the original statement is false.

So let's look at the other side of the argument: in fact, the reality is that people are routinely finding optimizations in modern systems. For example, you often see submissions on HN where hashing algorithms, JSON serialization, and the like have been sped up using ASM tricks on newer processors.

Another example is some of the recent Rust code released that outperforms their GNU counterparts.

It is also worth noting that modern hardware is fast, so people generally optimize code for developer efficiency (a point you made yourself) rather than CPU efficiency. So fewer people are inclined to look for optimizations in the capabilities of the hardware. However, once the current generation becomes "retro", you might start seeing a shift where people are trying to squeeze more out of less.


I think what you say is true in a way, but it also gets to the difference between simple and easy. Pulling in a library that does a lot of heavy lifting is definitely easier. The resulting system most likely isn't gonna be simpler though. You are now relying on a lot of code you don't understand, and there is probably also lots of code that you aren't even using. This has very little downside to it till something goes wrong. Is it going wrong because you are using the library wrong, or because the library has a bug? For the majority of cases this isn't gonna be a big deal and the cause is gonna be obvious, but the weird edge cases are gonna be what wakes you up at night.


> Systems were simpler, side-effects were better understood and contained.

I'm going to disagree, here. Things were as difficult then as they have ever been since. Systems and side-effects are better understood only in hindsight.

Coding in assembly because higher-level languages hadn't been invented yet. Coding in a text editor because IDEs were not a thing. Searching linked lists because what's a database, not to mention SQL. Needing to keep things lean because 32 KB was an ungodly waste.

In that light, things are easier than ever.

One can be both professional and whimsical. In fact, I'd argue that true mastery will only come with a sense of fun and interest in your craft.

Applying whimsy can be an engineering decision. Using 418 when any error code will do, for example, if it affects nothing else.
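For example, a minimal sketch of what that kind of harmless whimsy looks like in practice (Python stdlib only, hypothetical route):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class TeapotHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/coffee":            # hypothetical endpoint
              self.send_response(418)           # "I'm a teapot" - any 4xx would do
              self.end_headers()
              self.wfile.write(b"short and stout\n")
          else:
              self.send_response(404)
              self.end_headers()

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8418), TeapotHandler).serve_forever()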


There's like 4 or 5 extra layers of abstractions/frameworks in most systems these days.


So what? If your tech stack is worth its salt, those layers will be mostly removed at compilation. Reductions in performance are often due to developers mixing up their layers of abstraction and repeating work among several of them, rather than to having several layers of more powerful abstractions.

It's good for a developer to have a cursory knowledge of how those layers work (e.g. to avoid re-adding checks at the upper layers for something that is already guaranteed by the lower ones), but there's no need to be an expert at all of them.


Can you give an example of a tech stack worth its salt by these criteria?


(tl;dr: the problem is not the abstractions themselves, it's the current standard engineering practice of bringing in the full layer for each abstraction instead of just the few parts you need to solve your problem.)

I'd say something like Docker containers, running a wasm engine, targeted by a lean library with a sane high-level programming language - either a standard-looking imperative language like Haxe, or some esoteric opinionated functional thingie like Julia or Elixir. (It could also be something like Vue, Angular or React, but those aren't exactly 'lean', being specialized in working with the full browser DOM and web servers.)

Each layer abstracts away the lower levels (virtualization & compartmentalization to run on any hardware, bytecode to run on any OS or browser engine...), allowing you to potentially use it without being tied to one specific implementation of that layer.

Higher layers provide increasingly more complex and powerful abstractions, with standardized code that's been created by experts in the field with efficiency in mind and debugged through hundreds or thousands of client projects, making them likely more performant and robust than anything a single development team could ever build on their own (except for extremely limited domains run on dedicated hardware).

And ideally they have the plus side of working side-by-side with other applications or libraries, running in the same system without being engineered to work only with a single "official stack", allowing you to mix-and-match the best abstraction for each problem being solved, instead of forcing you into a single solution for all your software. That level of flexibility (plus the simplification of the layers below) is worth the runtime penalty imposed by several piled-up abstraction layers. That's why we don't code everything in assembly anymore.


These were all solved problems in the 90s already - we had plenty of high-level languages, including those with integrated IDEs and even databases (dBase comes to mind!). SQL was already around. 32Kb was something from another era.


> Systems were simpler, side-effects were better understood and contained.

But debuggability was far harder, you could not do it remotely, and there was no Stack Overflow etc. to help you.

Example: installed in another country is an embedded controller talking RS232 to devices. The controller is resetting, probably due to a watchdog timeout. The controller software uses a custom RTOS with no spare hardware timers, and it is hard to get logs back, which are restricted in size to a few kilobytes. You make educated guesses as to what is happening, but they keep failing. Many weeks of work later, you design your own process-profiling technique using a hardware logic analyser, and from that you find the root cause. Easily over a man-month of work.



