I didn't find them useful when I wrote my entries. LLMs get confused by code that "looks like" other code, and that intentional misdirection is half the fun of a good IOCCC entry. Plus, the morality filters get annoying once you obfuscate the code. When I plugged an unsubmitted entry into Gemini, it refused to even explain it because it thought it was malware.
LLMs can help you analyze the code, but not write it. Their ability to obfuscate is quite limited and uninspired. The last IOCCC was in 2020, so we've had plenty of time to work on it.
I would go further and say that the fine-tuning on code (mostly LLMs generating for other LLMs, plus human sweatshops writing example code to train on) actually teaches the LLMs the opposite of clever, obfuscated code. LLMs try to create readable, documented code (with different levels of success). When I make them generate terse/obfuscated code, they cannot help putting too many readable things in there. I asked Claude to do the moon phase one and it had the calculation correct, but it could not figure out how to draw the ASCII moon, so it just printed the values, used emojis next to the ASCII, etc. But when you ask it to do it with normal code, it does figure it out.
Hello. I can confirm, being the person that produced the show's live event and graphics and whatnot, that I had a chance to see whether any of the LLMs available could understand the code, and beyond some very superficial stuff they more or less completely failed to understand any of the entries this year. Hope you enjoyed the presentation. There will be more to come on the Our Favorite Universe channel in the future that should be fun.
The two projects have different use cases, so they can't be directly compared. Sledgehammer bindgen makes calling JavaScript from Rust faster in the browser. Wasmtime is a native runtime for WASM outside of the browser.
I hate to say this, but usually when I hear that people have problems making Erlang/Elixir fast, it comes down to a skill issue. Too often devs coming from another language implement code in Elixir as they would in that other language and then find it's not performant. When we've dug into these issues, we usually find misunderstandings about how to properly architect Elixir apps to avoid blocking and make as much use of distribution as possible.
You'd have to refer to all of the applications running on the BEAM that are distributed across multiple datacenters. Fly.io's entire business model is predicated on globally distributing your application using the BEAM. I'm not sure what that book said exactly; perhaps the original intent was local distribution, but Erlang has been around for over 30 years at this point. What it's evolved into today is architecturally unique compared to any other language stack, and it's built for global distribution with performance at scale.
> Even though Erlang’s asynchronous message-passing model allows it to handle network latency effectively, a process does not need to wait for a response after sending a message, allowing it to continue executing other tasks. It is still discouraged to use Erlang distribution in a geographically distributed system. The Erlang distribution was designed for communication within a data center or preferably within the same rack in a data center. For geographically distributed systems other asynchronous communication patterns are suggested.
Not clear why they make this claim, but I think it refers to how Erlang/OTP handles distribution out of the box. Tools like Partisan seem to provide better defaults: https://github.com/lasp-lang/partisan
I've run dist across datacenters. Dist works, but you need to have excellent networking or you will have exciting times.
It's pretty clear, IMHO, that dist was designed for local networking scenarios. Mnesia in particular was designed for a cluster of two nodes that live in the same chassis. The use case was a telephone switch that could recover from failures and have its software updated while in use.
That said, although OTP was designed for a small use case, it still works in use cases way outside of that. I've run dist clusters with thousands of nodes, spread across the US, with nodes on east coast, west coast and Texas. I've had net_adm:ping() response times measured in minutes ... not because the underlying latency was that high, but because there was congestion between data centers and the mnesia replication backlog was very long (but not beyond the dist and socket buffers) ... everything still worked, but it was pretty weird.
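For anyone who hasn't tried dist, the moving parts are small; here's a minimal sketch (host names hypothetical) of checking cluster reachability from an Elixir shell started with distribution enabled:

```elixir
# Started with: iex --name a@host1.example --cookie secret
# :net_adm.ping/1 is the same call mentioned above; it returns :pong or :pang.
case :net_adm.ping(:"b@host2.example") do
  :pong -> IO.puts("node reachable")
  :pang -> IO.puts("unreachable: down, no route, or cookie mismatch")
end

# Everything currently in the cluster:
IO.inspect(Node.list(), label: "connected nodes")
```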
Re Partisan, I don't know that I'd trust a tool that says things like this in their README:
> Due to this heartbeating and other issues in the way Erlang handles certain internal data structures, Erlang systems present a limit to the number of connected nodes that depending on the application goes between 60 and 200 nodes.
The amount of traffic used by heartbeats is small. If managing connections and heartbeats for 200 other nodes is not a small load for your nodes, your nodes must be very small ... you might ease your operations burden by running fewer but larger nodes.
I had thought I favorited a comment, but I can't find it again; someone had linked to a presentation from WhatsApp after I left, and they have some absurd number of nodes in clusters now. I want to say on the order of hundreds of thousands. While I was at WhatsApp, we were having issues with things like pg2 that used the global module to do cluster wide locking. If those locks weren't acquired very carefully, it was easy to get into livelock when you had a large cluster startup and every node was racing to take the same lock to do something. That sort of thing is dangerous, but after you hit it once, if you hit it again, you know what to hammer on, and it doesn't take too long to fix it.
Either way, someone who says you can't run a 200-node dist cluster is parroting old wives' tales, and I don't trust them to tell you about scalability. Head of line blocking can be an issue in dist, but one has to be very careful to avoid breaking causality if you process messages out of order. Personally, I would focus on making your TCP networking rock solid, and then you don't have to worry about head of line blocking very often.
That said, to answer this from earlier in the thread:
> I have read the erlang/OTP doesn’t work well in high latency environments (for example on a mobile device), is that true? Are there special considerations for running OTP across a WAN?
OTP dist is built upon the expectation that a TCP connection between two nodes can be maintained as long as both nodes are running. If that expectation isn't realistic for your network, you'll probably need to use something else, whether that's a custom dist transport, or some other application protocol.
For mobile ... I've seen TCP connections from mobile devices stay connected upwards of 60 days, but it's not very common; iOS and Android aren't built for it. But that's not really the blocker, because the bigger issue is that dist has no security barriers. If someone is on your dist, they control all of the nodes in your cluster. There is no way that's a good idea for a phone to be connected into, especially if it's a phone you don't control that's running an app you wrote to connect to your service --- there's no way to prevent someone from taking your app, injecting dist messages, and spawning whatever they want on your server... that's what you're inviting if you use dist.
This application is running dist between BEAM on the phone and Swift on the phone, so lack of a security barrier is not a big issue, and there shouldn't be any connectivity issues between the two sides (other than if it's hard to arrange for dist to run on a unix socket or something)
That said, I think Erlang is great, and if you wanted to run OTP on your phone, it could make sense. You'd need to tune runtime/startup, and you'd need to figure out some way to do UX, and you'd need to be OK with figuring out everything yourself, because I don't think there's a lot of people with experience running BEAM on Android. And you'd need to be ok with hiring people and training them on your stack.
I'm involved with this project and wanted to provide some context. This is an extraction from a much larger effort where we're building a web browser that can render native UI. Think instead of:
`<div>Hello, world!</div>`
we can do:
`<Text>Hello, world!</Text>`
I want to be clear: this is not a web renderer. We are not rendering HTML. We're rendering actual native UI. So the above in SwiftUI becomes:
`Text("Hello, world!")`
And yes, we support modifiers via a stylesheet system, events, custom view registration, and really everything that you would normally do if you were writing it all in Swift.
Where this library comes into play: the headless browser is being built in Elixir to run on device. We communicate with the SwiftUI renderer via disterl. We've built a virtual DOM where each node in the vDOM has its own Erlang process. (I can get into process limits for DOMs if people want.) The document connects each node's process directly to the corresponding SwiftUI view.
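To make the process-per-node idea concrete, here's a rough sketch of what one vDOM node process could look like; the module and message names are my invention, not the project's actual API:

```elixir
defmodule VDOM.Node do
  use GenServer

  def start_link(attrs \\ %{}), do: GenServer.start_link(__MODULE__, attrs)

  @impl true
  def init(attrs), do: {:ok, %{attrs: attrs, children: []}}

  # The document (or the SwiftUI renderer, over disterl) can message the
  # node's pid directly to mutate just this one element.
  @impl true
  def handle_call({:set_attr, key, value}, _from, state) do
    {:reply, :ok, put_in(state.attrs[key], value)}
  end
end
```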
We've taken this a step further by actually compiling client-side JS libs to WASM and running them in our headless browser and bridging back to Elixir with WasmEx. If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework. So think of actual native targets for Hotwire, LiveWire, etc...
We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
This originally started as the LiveView Native project but due to some difficulties collaborating with the upstream project we've decided to broaden our scope.
Swift's portability means we should be able to bring this to other languages as well.
We're nearing the point of integration where we can benchmark and validate this effort.
> If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework.
You appear to be saying this with a straight face. I must be missing something here. What is beneficial about the web model that native is lacking?
I hope I’m not being an old curmudgeon, but I’m genuinely confused here. To me, web dev is a lovecraftian horror and I’m thankful everyday I don’t have to deal with that.
Native dev is needlessly fragmented and I've longed for a simple (not Qt) framework for doing cross-platform native app dev with actual native widgets, so thanks for working on that. But I am a bit mystified at the idea of making it purposefully like web dev.
Sounds like things are converging more or less where I thought they would: "websites" turning into live applications, interfacing with the native UI, frameworks, etc. using a standardized API. Mainframes maybe weren't the worst idea, as this sort of sounds like a modern re-imagining of them.
The writing was more or less on the wall with WASM. I don't know if this project is really The Answer that will solve all of the problems but it sounds like a step in that direction and I like it a lot, despite using neither Swift nor Erlang.
Firefox used XUL, not XAML. Still does, for some things that are not available in HTML. (By the way, you can enable devtools for the browser UI itself and take a look!)
XAML will be a target, as we intend to build a WinUI3 client. Of the big three native targets (Apple, Android, Windows), the latter may be the easiest, as from what I've seen nearly everything is in the template already.
It's going to be really hard to resist the urge to put a programming language in there. It always starts innocently: 'let's do some validation'. Before you know it, you're Turing complete.
I believe SwiftUI doesn't give access to the UI tree elements, unlike UIKit. So I assume you're not allowing the XML-like code to be in control of the UI?
It's rather just an alternative way to write SwiftUI code?
How do you handle state? Isomorphically to what is available in SwiftUI?
Is your vDOM in fact an alternate syntax for an (abstract) syntax tree?
Is it to be used as an IR for writing SwiftUI code differently?
How is it different from Lynx? React Native? (It probably is different, besides the XML-like syntax; again, state management?)
That's correct, but we can make changes to the views at runtime, and these merge into the SwiftUI view tree. That part has been working for years. As for how we take the document and convert it to SwiftUI views: there is no reflection in Swift and no runtime eval. The solution is pretty simple: a dictionary. We just have the tag name of an element mapped to the View struct. Same with modifiers.
As for how it is different from React Native: that's a good question, and one where it's worth recognizing the irony that, as I understand it, without React Native our project probably wouldn't exist. From what I understand, RN proved that composable UI was the desired UX even on native. Prior to RN we had UIKit and whatever Android had. RN came along, and now we have SwiftUI and Jetpack Compose, both composable UI frameworks. We can represent any composable UI framework as markup; not so much with the prior UI frameworks on native, at least not without defining our own abstraction above them.
As far as the differentiator: backend. If you're sold on client-side development then I don't think our solution is for you. If, however, you value SSR and want a balance between frontend and backend, that's our market. So for a Hotwire app you could have a Rails app deployed that can accept an "Accept: application/swiftui" header, and we can send the proper template to the client. Just like the browser, we parse and build the DOM and instantiate the Views in the native client. There are already countless examples of SSR native apps in the App Store. As long as we aren't shipping code it's OK, and we're not, just markup that represents UI state. The state would be managed on the server.
Another area where we differ is that we target the native UI framework; we don't have a unified UI framework. So you will need to know HTML for the web, SwiftUI for iOS, Jetpack Compose for Android. This is necessary to establish the primitives that we can hopefully build on top of to create a unified UI framework (or maybe someone solves that for us?).
With our WASM compilation, we may even be able to compile React itself and have it emit native templates. No idea if that would work or not. The limits come when the JS library itself enforces HTML constraints that we don't observe, like case-sensitive tag names and attributes.
What about offline mode? Well, for use cases that don't require it, you're all set. We have lifecycle templates that ship on device for different app states, like being offline. If you want offline, we have a concept that we haven't implemented yet: for Elixir, we can just ship a version of the LV server on device that works locally and then does a data sync.
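For the Hotwire/Rails example above, the Phoenix-side equivalent of that content negotiation is tiny. A hedged sketch (the "swiftui" format name and MIME type are my assumptions, not a published spec):

```elixir
# config/config.exs -- register the new MIME type with Plug:
config :mime, :types, %{"application/swiftui" => ["swiftui"]}

# router.ex -- the pipeline now negotiates on the Accept header:
pipeline :browser do
  plug :accepts, ["html", "swiftui"]
end

# A controller action can then render index.html.heex for browsers
# and an index.swiftui template for the native client.
```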
You don't need JIT to hot load code. That's irrelevant.
And yes you can hot load code to modify the application. As long as you don't alter the purpose or scope of features under review. There is a specific callout as well that you can dynamically load in "casual games" from a community of contributing creators.
You're repeating outdated nonsense from over a decade ago! Understanding current App Store guidelines can be key to finding a competitive edge when there are so many people like yourself scaring devs off things that Apple now allows.
> We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
Could you please elaborate on the statement about Apple Watch? Apple Watch can connect to WiFi directly with Bluetooth off on its paired iPhone. Specific variants also support cellular networks directly without depending on the paired iPhone. So is it something more nuanced than the networking part that’s missing in Apple Watch?
Third-party apps can't use the network though. IIRC there's an async message queue with eventual delivery that each app gets, which it can use to send messages back and forth with a paired phone app.
That was once the case, but no longer. Third-party watchOS apps can work without a phone present, up to being installed directly from the watch's App Store. They can definitely do independent networking, but there are still some restrictions, e.g. they can't do it when backgrounded, and WebSockets are pretty locked down (only for audio streaming, per Apple policy).
I reckon the lack of general-purpose WebSockets is probably the issue for a system based on Phoenix LiveView.
With how complexity-happy webdevs like to get with their DOM structure, would this actually be performant compared to an equivalent webview in practice? Especially since you're using SwiftUI, which has a lot more performance footguns compared to UIKit.
How does elixir_pack work? Is it bundling BEAM to run on iOS devices? Does Apple allow that?
Years ago I worked at Xamarin, and our C# compiler compiled C# to native iOS code but there were some features that we could not support on iOS due to Apple's restrictions. Just curious if Apple still has those restrictions or if you're doing something different?
I haven't been following BeamAsm that closely, because I'm not working in Erlang at work.... But it strikes me that there's not really a reason that the JIT has to run at runtime, although I understand why it is built that way. If performance becomes a big issue, and BeamAsm provides a benefit for your application (it might not!), I think it would be worth trying to figure out how to assemble the beam files into native code you can ship onto restrictive platforms without shipping the JIT assembler.
Not sure, as I haven't done any work with it. At a cursory glance it could have some overlap, but it appears not to target the first-class UI frameworks; it looks to be a UI framework unto itself. So more of a Flutter than what we're doing is my very quick guess. We get major benefits from targeting the first-class UI frameworks, primarily that we let them do the work. Developing a native UI framework is, I think, way, way more effort than what we've done, so we let Apple, Google, and Microsoft decide what the desired user experience is on their devices, and we just allow our composable markup to represent those frameworks. A recent example of this is the new "glass" iOS 26 UI update. We had our client updated for the iOS 26 beta on day 1 of its release. Flutter would have to rewrite their entire UI framework if they want to adapt to this experience.
Hyperview creator here. Yes, it sounds like the difference is that your project is directly rendering platform-native UI widgets, while Hyperview is built on top of React Native for the cross-platform layer.
Curious how you will handle the differences between platforms. For example, Android prefers top tab bars, while on iOS the convention is to put tab bars below the content.
This is one of the fundamental differences for what we're doing. We are not building a write-once-run-everywhere solution. SwiftUI will have its own templates, Jetpack (Android) will have its templates, WinUI3 will have its templates.
We're delivering LVN; I've promised the Elixir community this for years, and from LVN's perspective nothing really changes. We hit real issues when trying to support live components and nested LiveViews: if you were to look at the liveview.js client code, those two features make significant use of the DOM API, as they're doing significant tree manipulation. For the duration of this project we've been circling the drain on building a browser, and about three months ago I decided that we just had to go all the way.
I hope I'm not reading into this too cynically, but your phrasing makes it sound like the project is not going as well as originally hoped.
It's pretty well-established at this time that cross-platform development frameworks are hard for pretty much any team to accomplish... Is work winding down on the LiveView Native project, or do you expect to see an increase in development?
The LVN Elixir libraries are pretty much done, and those really shouldn't change outside of perhaps additional documentation. I have been back and forth on the 2-arity function components that we introduced. I may change that back to 1-arity and move over to annotating the function, similar to what function components already support. That 2-arity change was introduced in the current Release Candidate, so we're not locked in on the API yet.
What is changing is how the client libraries are built. I mentioned in another comment that we're building a headless web browser; if you haven't read it, I'd recommend it, as it gives a lot of detail on what we're attempting to do. Right now we've more or less validated every part with the exception of the overall render performance. This effort replaces LVN Core, which was built in Rust. The Rust effort used UniFFI to message-pass to the SwiftUI client. Boot time was also almost instant. With the Elixir browser we will have more overhead: boot time is slower, and I believe disterl could carry more overhead than UniFFI bindings. However, the question will come down to whether that overhead is significant or not. I know it will be slower, but if the overall render time is still performant then we're good.
The other issue we ran into was when we started implementing more complex LiveView things like Live Components. While LVN Core has worked very well, I believe its implementation was incorrect. It had passed through four developers and was originally intended to be only a template parser. It grew as we figured out what the best path forward should be. And sometimes that path meant backing up and ditching some tech we had built that turned out to be a dead end for us. Refactoring LVN Core into a browser, I felt, was going to take more time than doing it in Elixir. I built the first implementation in about a week, but the past few months have been spent building GenDOM. That may still take over a year, but we're prioritizing the DOM API that LiveView, Hotwire, and Livewire will require. Then the other 99% of the DOM API will be a grind.
But to your original point, going the route of the browser implementation means we are no longer locked into LiveView, as we should be able to support any web client that does similar server/client-side interactivity. This means our focus will no longer be on LiveView Native individually but on ensuring that the browser itself is stable and can run the API necessary for any JS-built client to run on it.
I don't think we'd get to 100% compatibility with LiveView itself without doing this.
The amazing thing about Erlang and the BEAM is its depth of features. To the OP, the Behaviour/Interface of Erlang is the biggest takeaway. For me, it is how you require far, far fewer development resources to build complex systems than you would in any other language (provided comparable experience in both stacks). And for many, it's the lightweight processes and programming model.
OTP itself has so much in it. We've been working on compiling Elixir to run on iOS devices. Not only can we do that through the release process, but using the ei library provided with Erlang we can compile a node in C that will interface with any other Erlang node over a typical distributed network, just as you would from Erlang, Elixir, Gleam, etc. Furthermore, there is an rpc library in Erlang with which we can make function calls from C and interface with our Elixir application. Yes, the encoding/decoding has overhead and FFI would be faster, but we're still way within our latency budget, and we got this stood up in a few days without ever having heard of it before.
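For readers who haven't seen it, the rpc side of this is tiny from the BEAM end. A sketch (node and module names hypothetical) of the same kind of call a C node built on ei can make against a running Elixir app, shown here between two BEAM nodes:

```elixir
# The ei C node speaks the same wire protocol, so the Elixir side needs
# nothing special beyond being a distributed node.
true = Node.connect(:"device@127.0.0.1")

# Invoke a function on the remote node and get the term back.
# (Assumes MyApp.Sensor.read/0 returns {:ok, value}; transport-level
# failures come back as {:badrpc, reason} instead.)
{:ok, reading} = :rpc.call(:"device@127.0.0.1", MyApp.Sensor, :read, [])
```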
The larger point here is that Erlang has been solving many of the problems that modern tech stacks are struggling with; it solved for scale and implementation cost, and it solved these problems decades ago. I know HN has a bit of a click-bait love relationship with Erlang/Elixir but it hasn't translated over to adoption and there are companies that are just burning money trying to do what you get out of the box for free with the Erlang stack.
I went from a company that used Elixir in the backend to one that uses Node.js.
I went in neutral on Node.js, having never really used it much.
The projects I worked on were backend data pipelines that did not even process that much data. And yet somehow, it was incredibly difficult to isolate the main bug. Along the way, I found out all sorts of things about Node.js, and when I compared it with Elixir/Erlang/OTP, I came to the conclusion that Node.js is unreliable by design.
Don't get me wrong. I've done a lot of Ruby work before, and I've messed with Python. Many current-generation language platforms are struggling to build reliable distributed systems, things that the BEAM VM and OTP platform had already figured out.
Elixir never performs all that well in microbenchmarks. Yet in every application where I've seen Elixir/Erlang projects compared to more standard Node, Python, or even C# projects, the Elixir one generally has way better performance and feels much faster even under load.
Personally, I think much of it is due to async being predominant in Node and Python. Async seems much harder to debug for performance issues than actors or even threads. Sure, it feels easier to do async at first. But async leads to small bloat adding up, which makes it very difficult to debug and track down. It makes profiling harder, etc.
In BEAM, every actor has its own queue. It's trivial to inspect and analyze performance blockages. Async by contrast puts everything into one giant processing queue. Plus every function call in async gets extra overhead added. It all adds up.
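Concretely, this is what per-process inspection looks like on the BEAM; a minimal sketch, where `pid` is whichever process you suspect:

```elixir
# Every process carries its own mailbox and stats, queryable at runtime:
Process.info(pid, [:message_queue_len, :current_function, :status])
# Illustrative output:
#=> [message_queue_len: 0, current_function: {:gen_server, :loop, 7}, status: :waiting]
# A growing :message_queue_len points straight at the blocked consumer.
```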
This has to do with how async works without preemption and resource limits.
There's a counter-intuitive thing when trying to balance load across resources: applying resource limits helps the system run better overall.
One example: when scaling a web app, there comes a point where scaling up the database doesn't seem to help. So we're tempted to increase the connection pool because that looks like the bottleneck. But increasing the pool can make the overall system perform worse, because oftentimes it is slow and poorly performing queries that are stopping up the system.
Another example: one of the systems I worked on had over 250 Node runtimes running on a single, large server. It used pm2 and did not apply cgroups to limit CPU resources. The whole system was a hog, and I temporarily fixed it by consolidating things to run on about 50 Node runtimes.
When I moved them over to Kubernetes, I also applied CPU resource limits, each in its own pod. I set the limits based on what I measured when they were all running on pm2 ... but the same code running on Kubernetes ran with 10x less CPU overall. Why? Because the async code was not allowed to just grab as much CPU as it could for as long as it could, and the kernel scheduler was able to schedule fairly. That allowed the entire system to run with fewer resources overall.
There's probably some math that folks who know Operations Research can prove all this.
> When I moved them over to Kubernetes, I also applied CPU resource limits, each in its own pod. I set the limits based on what I measured when they were all running on pm2 ... but the same code running on Kubernetes ran with 10x less CPU overall. Why? Because the async code was not allowed to just grab as much CPU as it could for as long as it could, and the kernel scheduler was able to schedule fairly. That allowed the entire system to run with fewer resources overall.
As someone who has advocated against Kubernetes CPU limits everywhere I've worked, I'm really struggling to see how they helped you here. The code used 10x less CPU with CPU limits, with no adverse effects? Where were all those CPU cycles going before?
> The code used 10x less CPU with CPU limits, with no adverse effects?
The normal situation is that defective requests get a much larger latency, while the correct requests run much faster.
It's a problem in cases where the first set isn't actually defective. But it normally takes a reevaluation of the entire thing to solve those, and the non-limited situation isn't any good either.
> Async by contrast puts everything into one giant processing queue
How can you make performance claims while getting the details completely wrong?
Neither .NET's nor Rust's Tokio async implementations work this way. They use all available cores (unless overridden) and implement work-stealing threadpool. .NET in addition uses hill-climbing and cooperative blocking detection mechanism to quickly adapt to workloads and ensure optimal throughput. All that while spending 0.1x CPU on computation when compared to BEAM, and having much lower memory footprint. You cannot compare Erlang/Elixir with top of the line compiled languages.
That sounds about right for .NET. One of the Elixir projects I worked on lived alongside a C# .NET one, the latter being a game server backend. The guy who architected and implemented it made it so that large numbers of people could interact in realtime without having to shard. It is pretty amazing stuff in my book.
On the other hand, I have yet to need a liveness probe with an Elixir app, and I've had to implement one with .NET because it can and does freeze. That game server also didn't use all the available cores as well as the Elixir app did. We also couldn't attach a REPL directly to the .NET app, though we certainly tried.
I would be curious to see if Rust works out better in production.
> I swear, the affliction of failing to understand the underlying concepts upon which a technology A or B is built is a plague upon our industry. Instead, everything clearly must fit into the concepts limited to whatever “mother tongue” language a particular developer has mastered.
Ironic, since any time you post about a programming language it's to inform that C# does it better.
Not just here; someone with your nick also whined that the creator of C# made a technically deficient decision when choosing Go over C# to implement TypeScript.
It's hard for a rational person to believe that someone would make the argument that the creator of the language must have made a mistake just because he reached for (in his words) a more appropriate language in that context.
You have a blind spot when it comes to C#. You also probably already know it.
> Not just here; someone with your nick also whined that the creator of C# made a technically deficient decision when choosing Go over C# to implement TypeScript.
You know you could have just linked the reply instead? It states "C#, F# or Rust". But that wouldn't sound as nice, would it? I use and enjoy multiple programming languages, and that helps me greatly in day-to-day tasks. It does not prevent me from seeing that .NET has flaws, but holistically it is way less bad than most other options on the market, including Erlang, Go, C or what have you.
> It's hard for a rational person to believe that someone would make the argument that the creator of the language must have made a mistake just because he reached for (in his words) a more appropriate language in that context.
So appeal to authority trumps observable consequences, technical limitations, and arguments made about lackluster technical vision at Microsoft? Interesting. No, I think it is the kind of people who refuse to engage with the subject on its own merits that are the problem, relegating all the argumentation to the powers that be. Even in a team environment, sure, it is easier to say "a team/person X makes a choice Y", but you could also, if the situation warrants it, expand on why you think this way; and if you can't, maybe you shouldn't be making a statement?
So no, "TypeScript, including Anders Hejlsberg, choosing Go as the language to port the TS compiler to" does not suddenly make pigs fly; if anything, being seen as an endorsement from a key C# figure is certainly a bad look.
> So appeal to authority trumps observable consequences, technical limitations, and arguments made about lackluster technical vision at Microsoft?
Your argument is that you have a better grasp of "technical limitations" than Anders Hejlsberg?
You'll forgive the rest of us for not buying that; he has proven his chops, you haven't, especially as the argument (quite a thorough explanation of the context) from the typescript team is a lot more convincing than anything we've seen from you (a few nebulous phrases about technical superiority).
> but being seen as an endorsement from a key C# figure is certainly a bad look.
Yeah, well, the team made their decision with no regard to optics. That lends more weight to their decision, not less.
The issue is not that Anders is incapable. His best argument was that they wanted the new code to look like the old code. Many of the other arguments Anders brought forward were confusing, since some of them were technically incorrect. This raises some questions.
TypeScript is a huge success for Microsoft in terms of recapturing developers without them knowing it. MS is not a charity; look at how little love they give F# compared to TS.
* My personal guess is that the age-old MS instinct came into play: be backwards compatible coûte que coûte (at all costs), port all the bugs, do not disturb anything.
* A second reason might be that TS people might not want to learn .NET because of vibes. Do not underestimate vibes. Almost every day on HN I see Python programs posted where the creator would most often have been better off learning some other programming language. Decisions are seldom made on a technical basis. We as humans decide emotionally, sometimes with rationalizations afterwards.
And so, maybe Anders was rational in acknowledging the dev-social situation as is.
Whatever the reason, this will not be without consequences. The team now has to invest in Go and depends on Google to take TS forward. And yes, this is also typical MS: one department can easily undo the other.
TL;DR: the technical arguments were mostly nonsense; the real reasons likely have more to do with age-old reflexes and dev-cultural issues.
> Neither .NET's nor Rust's Tokio async implementations work this way.
Well, that's great. I didn't mention Rust in that list because it does seem to perform well. Its async is also known to be much more difficult to program.
> and having much lower memory footprint. You cannot compare Erlang/Elixir with top of the line compiled languages.
And yet I do, and have. Despite all the cool tech in C# and .NET, I've seen simple C# web apps struggle to even run on Raspberry Pis for IoT projects, while Elixir ones run very well.
Also note Elixir is a compiled language and BEAM has JIT nowadays too.
I did hesitate to add C# to that list because it is an impressive language and can perform well. I also know the least about its async.
Nothing you said really counters that async as a general paradigm is more likely to lead to worse performance. It’s still more difficult to profile and tune than other techniques even with M:N schedulers. Look at the sibling post talking about resource allocation.
Even for Rust there was an HN post recently where they got a Rust service to run a fair bit faster than their initial Golang implementation. After months of extra work, that is. They mentioned that Golang's programming model made it much easier to write fairly performant networking code. Since Go doesn't use async, it seems reasonable to assume goroutines are easier to profile and track than async tasks, even if I lack knowledge of Go's implementation details on the matter. Now, I am assuming their Rust implementation used async, but I don't know for sure.
> Also note Elixir is a compiled language and BEAM has JIT nowadays too.
Let's see it perform faster than Python first :)
Also, if the target is supported, .NET is going to unconditionally perform faster than Elixir. This is trivially provable.
> Nothing you said really counters that async as a general paradigm is more likely to lead to worse performance. It’s still more difficult to profile and tune than other techniques even with M:N schedulers. Look at the sibling post talking about resource allocation.
Can you provide any reference to support this claim as far as actually good implementations go? Because so far it looks like vibe-based reasoning with zero knowledge to substantiate the opinion presented as fact.
That's not surprising, however. Erlang and Elixir as languages tend to leave their heavy users with big knowledge and understanding gaps, and their communities are rather dogmatic about BEAM being the best thing since sliced bread. Lack of critical thinking leads to such a sorry place.
> Can you provide any reference to support this claim as far as actually good implementations go?
Ah yes now to the No True Scotsman fallacy. Async only works well when it’s “properly implemented” which is only .NET.
Even some .NET folks prefer the actor model for concurrent programming:
> Orleans is the most underrated technology out there. Not only does it power many Azure products and services, it is also the design basis for Microsoft Service Fabric actors, which also power many Azure products. Virtual actors are the perfect solution for today’s distributed systems.
> In my experience Orleans was able to handle insane write load (our storage/persistence provider went to a queue instead of direct, it was eventually consistent) so we were able to process millions of requests without breaking a sweat. Perhaps others would want more durability, we opted for this as the data was also in a time series database before Orleans saw it.
Ironically, what got me into Elixir was learning about Orleans and how successful it was in scaling Xbox services.
> Because so far it looks like vibe-based reasoning with zero knowledge to substantiate the opinion presented as fact.
Aside from personal experience and years of writing and deploying performance sensitive IoT apps?
Well quick googling shows quite a few posts detailing async issues:
> What tools and techniques might be suited for this kind of analysis? I took a quick glance at a flamegraph but it seems like I would need a relatively deep understanding of the async runtime internals since most of what I see looks like implementation details.
> Reading a 1GB file in 100-byte chunks leads to at least 10,000,000 IOs through three async call layers. The problem becomes catastrophic since these functions are essentially language-level abstractions of callbacks, lacking optimizations that come with their async nature. However, we can manually implement optimizations to alleviate this issue.
> I’m not going to say all async frameworks are definitely slower than threads. What I can say confidently is that asyncio isn’t faster, and it’s more efficient only for huge numbers of mostly idle connections. And only for that.
Do you realize that the actor model and virtual/green threads/stackful coroutines vs. stackless coroutines/async-await and similar are orthogonal concepts?
Also picking asyncio from Python. Lol. You can't be serious, can you?
The only impression I get is that most Elixir/Erlang practitioners simply have a very ossified perception and deep biases that prevent them from evaluating implementation/design choices fairly and reaching balanced conclusions about where their capabilities lie. A very far cry from the link salad you posted, which does not answer my question about, e.g., the performance issues with .NET and Rust async implementations.
It's impossible to have a conversation with someone deeply committed to their bias and unwilling to accept that BEAM is not the shining paragon of concurrent and multi-threaded runtimes it once was.
Starting with the most general: Node.js suffers in the same way that other async systems do -- the lack of preemption means that certain async tasks can starve other async tasks. You can see this in GUI desktop apps when the GUI freezes because it wasn't written in a way that takes this into account.
In other words, the runtime feature that Nodejs is the most proud of and markets to the world as its main advantage does not scale well in a reliable way.
The BEAM runtime has preemption and will degrade in performance much more gracefully. In most situations, because of preemption (and hot code reloading) you still have a chance of attaching a REPL to the live runtime while under load. That allows someone to understand the live environment and maybe even hot-patch the live code until the real fix can run through the continuous delivery system.
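The preemption point is easy to demonstrate; a rough sketch from an iex shell:

```elixir
# A pathological CPU-bound process; on Node this kind of loop would freeze
# the event loop, but BEAM suspends it after its reduction budget and
# keeps scheduling everything else.
spawn(fn -> Stream.cycle([:ok]) |> Enum.each(fn _ -> :ok end) end)

# The shell stays responsive, so you can still inspect the system:
:erlang.statistics(:run_queue)
```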
I'm not going to go into the bad JavaScript syntax bloopers that still haunt us, and which are only partially mitigated by TypeScript; that is documented in "JavaScript: The Good Parts". Or how the "async" keyword colors function calls, forcing everything in a call chain to also be async, or forcing you back to the older callbacks. Most people I talk to who love TypeScript don't consider those to be issues.
The _main_ problems are:
1. Async tasks can easily get orphaned in Node.js. This doesn't happen when using OTP on the BEAM, because you typically start a gen_server (or a gen_*) under a supervisor. Even processes that are not supervised can be tracked: because pids (identifiers of processes) are first-class primitives, you can always ask the scheduler for _all_ of the running processes. If you were to attach a Node.js REPL, you can't really tell what's in flight. This is because there is no encapsulation of the async task, no way to track when something went async, and no way to send control messages to it.
2. Because async tasks are easily orphaned, errors that get thrown easily get lost. The response I get from people who love TypeScript on Node.js is that this is what the linter is for. That is, we're going to use an external tool to enforce that all errors get handled, rather than having the design of the language and the runtime handle the error. In the BEAM runtime, an unhandled error within a process crashes that process without crashing anything else; processes that are monitoring the crashed process get notified by the runtime that it has crashed (a minimal sketch follows this list). The engineer can then define the logic for handling that crash (retry? restart? throw an error?).
3. The gen_server behavior in OTP defines ways to send control messages. This allows more nuanced approaches to managing subsystems than just restarting when things crash.
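Here is the minimal sketch of point 2 promised above; everything in it is standard OTP, no library code:

```elixir
# Monitor a process that dies; the failure arrives as an ordinary message
# instead of being silently swallowed.
{pid, ref} = spawn_monitor(fn -> raise "boom" end)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} ->
    # Point 3's control flow lives here: retry, restart, or escalate;
    # nothing else in the VM crashed.
    IO.inspect(reason, label: "worker exited")
end
```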
I'm pretty much at the point where I would not really want to work on deploying Node.js on the backend. I don't see how something like Deno would fix anything. TypeScript is incapable of fixing this, because these are design flaws in the runtime itself.
Just to further hammer home point 2 and how it's a problem in the real world: Express, probably the go-to server library for close to a decade, has only within the last couple of months sorted out not completely swallowing, by default, errors that happen in async middleware. And only because some new people came in to finally fix it! It's absolutely insane how long that took and how easy it was to get stung by that issue.
An invocation of a Node.js async function is automatically tracked within the code as a locally-scoped promise. The runtime will track it, but unless you register that promise elsewhere, it can only be accessed within that local scope. You had better hope that you immediately chain it with the success callback or capture errors from it.
Spawning a lightweight process in BEAM returns a first-class primitive called a pid. That pid is recorded by the scheduler, so even if it gets lost by the code, you can still find out if it has been taking up resources (when debugging problems live in production).
Supervisor behavior is written so that any gen_server-behavior-complying process will be linked. That means any crash of the spawned process will notify the supervisor. That's not something we have with Node.js async: there is no mailbox to notify, just either awaiting completion or making sure you add the error handling ... which is where people write linters to check.
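As a concrete illustration of the "nothing is ever truly orphaned" point, a sketch of hunting down a lost-but-busy process on a live node:

```elixir
# The scheduler knows every pid, so you can rank all processes by work done:
Process.list()
|> Enum.map(fn p -> {p, Process.info(p, [:reductions, :message_queue_len])} end)
|> Enum.reject(fn {_p, info} -> info == nil end)   # skip processes that just exited
|> Enum.sort_by(fn {_p, info} -> -info[:reductions] end)
|> Enum.take(5)
```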
The problem with Node is observability. They've optimized away observability to the point where it's hard to find performance problems, compared to the JVM or the BEAM.
I have been looking for an Erlang thing akin to Apache Airflow or Argo Workflows. Something that allows me to define a DAG of processes, so that they run one after the other. How would you implement something like that?
Adding to this, the primitives Erlang and its descendants give you are very easy to work with, and therefore very easy to test.
Take GenServer, the workhorse of most BEAM systems. Everything it does is basically just calling various functions with simple parameters. So you can test it just by calling those functions, manually passing parameters to them, and asserting on the output. No need to set up complex testing systems capable of dealing with asynchronous code, no need to handle pauses and wait for code to finish running in your tests. It's something a lot of juniors tend to miss, but it's liberating when figured out.
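A minimal illustration of that (the Counter module is hypothetical): a GenServer callback is just a function from message and state to a return tuple, so it can be exercised synchronously, with no running process at all:

```elixir
defmodule Counter do
  use GenServer

  @impl true
  def init(n), do: {:ok, n}

  @impl true
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
end

# In a test: feed it states by hand and assert on plain return values.
{:reply, 1, 1} = Counter.handle_call(:increment, {self(), make_ref()}, 0)
```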
C nodes are underappreciated. We have one (Cgo) for communicating between Go and Elixir services running in the same Kubernetes pod. The docs are also pretty good for Erlang and its C libs.
> I know HN has a bit of a click-bait love relationship with Erlang/Elixir but it hasn't translated over to adoption and there are companies that are just burning money trying to do what you get out of the box for free with the Erlang stack.
Elixir is "bad" because it is not a friendly language for people who want to be architecture astronauts at the code level (you can definitely be an architecture astronaut at the process management level but that's a very advanced concept). And a lot of CTOs are architecture astronauts.
That's the opposite of my experience. I tend to see those "architecture astronauts" on teams using other language platforms, while the folks I work with in Erlang or Elixir tend to be pragmatic and willing to dig down the stack to troubleshoot problems.
> When you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don’t know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don’t actually mean anything at all.
> These are the people I call Architecture Astronauts. It’s very hard to get them to write code or design programs, because they won’t stop thinking about Architecture. They’re astronauts because they are above the oxygen level, I don’t know how they’re breathing. They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
Joel was wrong about one thing: they also work at startups. My roommate worked at a startup where the senior frontend developer was basically rebuilding React in Svelte + zod. Once a week he would see all his work deleted and completely rewritten in a fever-dream PR that the senior produced. It was completely impossible for a grug developer to follow what was going on; his job eventually became "running this guy's code through ChatGPT and adding comments and documentation".
My personal opinion as a fan and adopter of the stack is that the benefit is often seen down the line, with the upfront adoption cost being roughly the same.
E.g. the built in telemetry system is fantastic, but when you are first adopting the stack it still takes a day or two to read the docs and get events flowing into - say - DataDog, which is roughly the same amount of time as basically every other solution.
The benefit of Elixir here is that the telemetry stack is very standardized across Elixir projects and libraries, and there are fewer moving pieces - no extra microservices or Docker containers to ship with everything else. But that benefit comes two years down the line when you need to change the telemetry system.
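For the curious, "standardized" here means one event contract for everything. A hedged sketch (the handler id and forwarding logic are illustrative), using a real Phoenix event name:

```elixir
# Attach one handler; Phoenix, Ecto, and most libraries emit through the
# same :telemetry API, so shipping to DataDog is one function away.
:telemetry.attach(
  "forward-endpoint-metrics",
  [:phoenix, :endpoint, :stop],
  fn _event, measurements, _metadata, _config ->
    # e.g. push measurements.duration (in native time units) to your backend
    IO.inspect(measurements.duration, label: "request duration")
  end,
  nil
)
```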
These incremental benefits don't translate to an order of magnitude more productivity, or stability, or profitability. Given the choice, as a business owner, future proofing is about being able to draw from the most plentiful and cheapest pool of workers. The sausage all looks the same on the outside.
That is not true, especially with Section 174 (for the US). Right now, if you want to hire an Elixir engineer, you're better off finding a generalist willing to learn and use Elixir, and you would probably get someone who is very capable.
With Section 174 in play in the US, companies tend to hire specialists and attempt to use AI for the rest.
My own experience is that ... I don't really want to draw from the most plentiful and cheapest pool of workers. I've seen the kind of tech that produces. You basically have a small handful of software engineers carrying the rest.
Elixir itself is a kind of secret, unfair advantage for the tech startups that use it.
>you're better off finding a generalist willing to learn and use Elixir, and you would probably get someone who is very capable.
This is a thing I really don't get. People are like "but what about the hiring pool". A competent software engineer will learn your stack. It's not that hard to switch languages. Except maybe going from Python to C++.
I'm biased, because I worked at WhatsApp, but it may be one of the most famous users of Erlang... and from its start until when I left (late 2019) I think we only hired three people with Erlang experience. Everyone else who worked in Erlang learned on the job.
We seemed to do pretty well, although some of our code/setup wasn't very idiomatic (for example, I'm pretty sure we didn't use the Erlang release feature properly at all)
We just pushed code, compiled, and hotloaded... Pretty much ignoring the release files; we had them, but I think the contents weren't correct and we never changed the release numbers, etc.
For OTP updates, we would shut down BEAM in an orderly fashion, replace the files, and start again. (Potentially installing the new version before shutting down, I can't remember.)
Post-Facebook, it's more boring OS packages and slow rollouts than hotloading.
There's no killer app, as in a reason to add it to your tech stack.
The closest I've come across was trying to maintain an ejabberd cluster and add some custom extensions.
Between mnesia and the learning curve of the language itself, it was not fun.
There are also no popular syntax-alikes. There is no massive corporation pushing Erlang either directly or indirectly through success. Supposedly Erlang breeds success but it's referred to as a "secret" weapon because no one big is pushing it.
Erlang seems neat, but it feels like you need to take a leap of faith, and businesses are risk-averse.
Well, jayd did the same thing as that small company (which I joined in 2011 when it was small and left in 2019 when it was not so small): ran ejabberd to solve a problem. In our case, Erlang subsumed pretty much the rest of our service over time. When I started, chat was Erlang, but status messages, registration, and contacts were PHP with MySQL, and media was PHP (with no database); those all got sucked into Erlang with mnesia because it was better for us.
But I guess it doesn't always work that way. FB chat was built on ejabberd and then migrated away.
Also, a lot of the power of Erlang is OTP (the Open Telecom Platform), even more than Erlang itself. You have to internalize those architectural decisions (expect crashes; do fast restarts) to get the full power of Erlang.
Elixir seems like it has been finding more traction by looking more like mainstream languages. In addition, languages on the BEAM (like Elixir) made the BEAM much better documented, understood and portable.
Anyway, the options seem to be either summoning transcendent threats by superficial syntax or by well entrenched semantics. There seems to be no other choice.
This is the common misconception about LiveView coming from the JS community. LiveView is JS. It has a whole optimistic UI layer that solves all of the problems you cite. Whether the UI state updates come from the server or the client doesn't matter, because the reactive UIs of SPAs still require data updates from some remote source. So if we're talking about latency, that latency is going to exist for all application types.
Where did I say it's not JS? I said it becomes the most convoluted JS imaginable. I have used the "whole optimistic UI layer", hooks, and commands. What happens is that local diffs have to fight server-sent diffs, at every level, resulting in more tangled logic than jQuery apps. This does not happen when you merely have to merge state and the view is a pure function of state.
Well, we've had a bit of a setback on the Jetpack client. Unfortunately, the Jetpack developer just moved on to another position. The client is nearly ready, but it's kind of a kick in the pants to have to find someone new right now.
Just the theme that we ended up using for the marketing site. We will likely build something less janky post-batch, but right now -- just trying to get the information out there.
So everybody just LLM'd this, right?