
This is true, but the problem is how it scales.

You set up a .NET/JVM project and it takes off, and then you find yourself facing massive memory bloat in the managed heap. What do you do? The answer is basically: try a slightly different allocation pattern by flipping a switch in the runtime, OR do a bunch of manual memory management anyway. You quickly discover the allocation patterns don't help much, so you turn to manual management.
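For the JVM, "flipping a switch" usually means trying a different collector or heap bound first. These are real HotSpot flags, but whether any of them actually helps is entirely workload-dependent:

```shell
# Cap the heap so bloat shows up as GC pressure instead of unbounded RSS
java -Xmx2g -jar app.jar

# Try a different collector (ZGC targets low pause times)
java -XX:+UseZGC -Xmx2g -jar app.jar

# Or tune G1's pause-time goal
java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar app.jar
```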

Often enough, it's simple to understand where the memory allocation comes from, and you could probably fix it with an arena allocator or something equally simple. So you try that, but then you find out that many of the .NET/JVM library functions you need don't allow you to pass in preallocated buffers, leaving your ability to solve the memory problem crippled. You know where the memory is needed, when it expires, and when it can be reused, but you have no tools to apply that knowledge anywhere.
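A small Java sketch of the contrast: some APIs force a fresh heap allocation on every call, while others (here `CharsetEncoder`) let the caller supply a reusable buffer. The complaint above is that the buffer-accepting variant often simply doesn't exist for the function you need.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class BufferReuse {
    public static void main(String[] args) {
        // Allocating API: every call creates a fresh byte[] on the managed heap.
        byte[] fresh = "hello".getBytes(StandardCharsets.UTF_8);

        // Buffer-reusing API: CharsetEncoder writes into a caller-owned
        // ByteBuffer, so one allocation is amortized over many calls.
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer scratch = ByteBuffer.allocate(1024); // the "arena" we control
        for (String s : new String[] {"hello", "world"}) {
            scratch.clear(); // reset position/limit for reuse
            enc.reset();     // reset encoder state between inputs
            enc.encode(CharBuffer.wrap(s), scratch, true);
            scratch.flip();
            System.out.println(scratch.remaining() + " bytes encoded");
        }
    }
}
```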

At that point, you can either leave it be and buy more RAM, or rewrite from scratch in another language. It would be cool to have languages that are more of a hybrid, kind of like .NET's unsafe, but in a non-optional way.



In my experience managed languages work well when:

A) The business needs evolve quickly and/or cutting dev time is more valuable than having the application perform well.

B) The performance of the app isn't that critical, and developers will have a more pleasant experience in a better ecosystem.

C) The managed language is mainly responsible for the control plane of a data path.

Anything beyond that, especially high-performance code, will struggle. That's why you see dev cycles spent on encryption algorithms and common low-level routines, to the point of hand-vectorized assembly: each is a well-defined, fixed problem with an outsized payoff, because those routines are used so often and in places where the performance matters.

There’s probably a 10x to 100x reduction in worldwide compute usage possible if you could wave a magic wand and have everything run as optimally as if the best engineers had built and optimized everything to the level we know how to achieve today (computers have never been faster and never felt slower because of this).

However, it’s just not economical in terms of dev cycles per output, and complaining otherwise is tilting at the windmills of market forces that give the edge in many scenarios to those languages (even when you factor in inefficiencies and/or extra time spent optimizing). That’s what the person you’re replying to is stating, and it’s something I 100% agree with. This is coming from a systems engineer who codes primarily in lower-level languages and is generally a fan of them, especially Rust. I have probably written non-trivial code in every popular language out there at this point, and the managed ones are just faster to get shit done in.

Picking the right language is a mixture of figuring out what kind of talent you can attract, how much you can pay them, and what will satisfy your business needs within those constraints. A lot of people do choose incorrectly or suboptimally, due to ignorance of how to choose, ignorance of the alternatives, picking based on the original author’s personal familiarity, etc. Those choices are often more fatal when you choose a native language and your competitors choose better, whereas the converse is less often true.


My point is not that higher-level languages aren’t more productive; that’s why I started my post by saying "this is true". My point is that there is often no clear path forward to solve performance problems after you’ve invested in these higher-level runtimes. The linked article is a good example: it solves the problem by modifying the runtime. Imagine that, having to modify the runtime to solve a performance problem. It’s impressive, and at the same time not in the realm of achievable engineering for most firms.

However, what is in the realm of achievable engineering is improving the performance of the code you write yourself. But a large part of that performance is opaquely hidden inside the runtime, with no way for you to change anything. If we had a language with a smooth path from "managed runtime" to "low-level freedom", we could adapt our codebases as they become more popular and performance starts to matter. What that would look like, I have no idea.
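Java’s Foreign Function & Memory API (finalized in JDK 22) gestures at that smooth path: deterministic, arena-scoped off-heap allocation inside an otherwise managed language. A minimal sketch, assuming a recent JDK:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class ArenaDemo {
    public static void main(String[] args) {
        // Confined arena: off-heap memory with a deterministic lifetime,
        // freed when the try block exits -- the GC never sees it.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment seg = arena.allocate(ValueLayout.JAVA_LONG, 4);
            for (long i = 0; i < 4; i++) {
                seg.setAtIndex(ValueLayout.JAVA_LONG, i, i * i);
            }
            System.out.println(seg.getAtIndex(ValueLayout.JAVA_LONG, 3)); // prints 9
        } // memory reclaimed here, deterministically
    }
}
```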


Eh. Maybe. And yet, here’s Netflix running a service in Java and explicitly not migrating. If performance were truly so important, surely it would be valuable to start migrating? Like all business decisions it’s a cost tradeoff, and centralizing the performance problems with experienced engineers who understand how to do this kind of work is a valid tradeoff. Sure, not all shops can do it.

Consider however:

* when you’re small, your business bottlenecks are other things

* when you get larger, you have more resources and can make a choice to switch or hire experts to remove the major bottlenecks

* improvements to the runtime improve your scaling for free

I’m not speaking theoretically. I worked at a startup that did indoor positioning. Our stack, end to end, was written in pure Java. We struggled to attain great performance, but we did what we needed to and focused on algorithmic improvements. We managed to get far enough along the way to get acquired by Apple. Then I spent about 4 months porting the entire codebase verbatim to C++. It ran maybe about as fast (maybe slightly faster, but not much). Switching to the native math libraries for the ML hot path gave the biggest speedup, but that’s more a problem with the Android ecosystem lacking those libraries (at least at the time, or us failing to use them if they did exist). Over the course of the next two years we eked out maybe an overall 5-10x CPU efficiency gain vs the equivalent Java version (if I’m remembering correctly; I regret not taking notes now), but towards the end it was definitely diminishing-returns territory (e.g. changing a vector of shared_ptr objects on the critical path of the particle simulator netted something like a 5% speedup).

This work was important because we got to a point where battery life actually started to meaningfully matter for the success of the project, whereas as a startup we had just been trying to survive. But we were always conscious that Java was the better tradeoff for velocity, and writing the localizer in C++ carried logistical challenges in growing it. In fact, starting in Java meant that we had an easier time, because a lot of the initial figuring out of the structure of the code (via many refactorings and optimizations) had already happened in Java, where it’s easier to move fast. Even within the startup we had discussions about migrating to C++, and it never felt like the right time.

My point is: good engineers know when to pick the right tool for the job, and know what their risks and contingencies are. If there’s necessity, you’ll either change tools or fix your existing ones. Of course, not everyone does that, but my hunch is that only those that are going to succeed anyway end up being fine. Kind of like how the invisible hand of the market ends up working.

I think the idea of a smooth transition is a fantasy. Sometimes the architectures are so different that you’d have to fundamentally restructure your application to get that jump. It’s a map with many mountain ranges and valleys; there are plenty of local optima, and you can easily get stuck in one, which forces you to fundamentally rearchitect things. For example, io_uring is very different: if you want to eke out optimal performance from it, you need to build your application around that concept. It’s rare that the native world gets something like Project Loom, and that’s a point in favor of Java managing things for you: a free architectural speedup without you changing anything.


In my experience managed languages scale a lot better than unmanaged ones. A managed language performs within a small constant factor (2 is often quoted) of theoretically optimal allocation. Manual memory management gets incredibly good results on microbenchmarks, but gets worse and worse as a codebase grows.



