Don't get too excited about SharedArrayBuffer. Every major browser disabled it to help mitigate Spectre, and there's no current roadmap for re-enablement.
Speaking of which, while I can see the usefulness of SharedArrayBuffer and Atomics for certain libraries and creating certain functionality, I have had some concerns about these new modules. Specifically, I think it complicates the simple model that ES had going for it for a few reasons:
* There are already many potential spots for side effects and mutability in ES as it is. Now we've introduced another one, but it works differently from the rest of the model.
* Aside from maintaining order with the event loop and async operations, you really didn't need to worry about shared mutable memory in concurrent environments. Now we need to keep that in mind when we work.
* If one needs to work with a SharedArrayBuffer instance, the functions they write need to assume or test that the argument is specifically a SharedArrayBuffer, because they need to deal with that type using a separate module (Atomics).
* A smaller point, but it does add to the API surface of ES. That could be confusing as time goes on, especially if a developer is unsure why SharedArrayBuffer exists.
These issues can be dealt with by being very careful and selective about when to use these tools, and by trying to use them only within libraries or narrow contexts. So they're tolerable, for sure; they're just things that have crossed my mind.
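To illustrate the type-testing concern from the list above, a hypothetical helper might look like this (the function name and layout are illustrative, not from any real library):

```javascript
// Hypothetical helper: code that accepts a buffer has to branch on whether
// it is shared before reaching for Atomics.
function readCounter(buffer, index) {
  const view = new Int32Array(buffer);
  if (buffer instanceof SharedArrayBuffer) {
    // Shared memory: another agent may be writing concurrently,
    // so the read should go through Atomics.
    return Atomics.load(view, index);
  }
  // Plain ArrayBuffer: no other agent can see it, so a normal read is fine.
  return view[index];
}
```

The same function body now carries two mental models, which is exactly the added complexity being described.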
That being said, what future plans are there for SharedArrayBuffer/Atomics? Maybe the future holds some ideas that will make things better for this feature.
Having just watched one of Kevlin Henney's many talks on the subject on YouTube, I'm reminded of his mutability and shared-state diagram. Since I can't find a copy of it outside of an hour-long video, I'll try to replicate it here (sorry for those on mobile):
               Mutable
                  ^
         (Good)   |   (Bad)
                  |
Non-Shared -------+------> Shared
   State          |         State
                  |
         (Good)   |   (Good)
               Immutable
On the top half of the graph you have Mutable state, on the bottom half Immutable. On the left you don't share the state and on the right you do. Everything is fine as long as you don't both mutate the state and share it. Of course, that's what we always feel like we want to do ;-)
Simple example: sometimes it's tremendously easier and more performant to write loops with an increment counter. Test it, stick it in a function, and it works just as well as anything else.
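A minimal sketch of that contained-mutability idea (the function name and computation are just for illustration):

```javascript
// The counter and running total are mutable, but the mutation is contained
// entirely inside the function, so callers still see a pure function.
function sumOfSquares(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i] * numbers[i];
  }
  return total;
}
// Observably equivalent to numbers.reduce((acc, n) => acc + n * n, 0),
// without allocating closures per element.
```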
I agree with you generally. However, using persistent data structures can sometimes be tricky and can complicate the code. As long as you contain the mutability you can get some benefits: increased performance, reduced memory usage, etc.
I once wrote a SIP client for Windows Mobile in .NET. Resources were scarce, and .NET would spin up threads if you so much as looked at it the wrong way. I had a single thread reading from the network into a circular queue -- clearly that had to be mutable, or memory allocation alone would have set the machine on fire. I then used a reactor pattern in the UI thread: basically, when the Windows event loop was idle, I read from the circular queue. Because I could guarantee that there was never more than one thread reading from the queue and one thread writing to the queue, I could get away with no locking (you drop packets that would overrun the queue).
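The original was .NET, but the same single-producer/single-consumer trick maps directly onto SharedArrayBuffer + Atomics. A sketch, with all names and the layout being illustrative: slot 0 is the read index, slot 1 is the write index, and the rest is data. One slot is kept empty to distinguish "full" from "empty".

```javascript
const CAPACITY = 16; // data slots; usable capacity is CAPACITY - 1

function createQueue() {
  // SharedArrayBuffer contents start zeroed, so head = tail = 0 (empty).
  const sab = new SharedArrayBuffer((2 + CAPACITY) * 4);
  return new Int32Array(sab);
}

// Called only from the single writer thread. Drops the value when the
// queue is full rather than blocking (the packet-dropping scheme above).
function enqueue(q, value) {
  const head = Atomics.load(q, 0);
  const tail = Atomics.load(q, 1);
  if ((tail + 1) % CAPACITY === head) return false; // full: drop
  q[2 + tail] = value;
  Atomics.store(q, 1, (tail + 1) % CAPACITY); // publish after the write
  return true;
}

// Called only from the single reader thread.
function dequeue(q) {
  const head = Atomics.load(q, 0);
  const tail = Atomics.load(q, 1);
  if (head === tail) return undefined; // empty
  const value = q[2 + head];
  Atomics.store(q, 0, (head + 1) % CAPACITY);
  return value;
}
```

The correctness argument is the same as in the .NET version: only the writer ever moves the tail and only the reader ever moves the head, so no lock is needed as long as there is exactly one of each.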
For me, that's the kind of thing that this graph is saying: mutability is problematic, but if you are forced to use it you need to be really careful about concurrency.
I just mean I wouldn't refer to non-shared mutability as "good". You should aim for immutability by default and drop to mutability only when it has a clear benefit, which, depending on the domain, can be rare. Most of the time I'm optimising for developer productivity and reduction of bugs first; performance and memory consumption come later if they're an issue.
I hope they scrap it for a better model. Shared memory + locks is not the only way to handle concurrency and can be difficult to reason about. I don't quite understand what problem it's solving that message passing doesn't already.
Parallel scanning of large data sets, like object graphs. You don't want to copy the entire graph to each thread, nor do fine-grained passing around of a node per traversal.
However, if what this exposes is just an array of bytes, that's less useful, as you have to cast your own object model on top of it. Large image data might still be useful as just bytes, with threads running convolutions or NNs on it.
It isn't the only way to handle concurrency, but they are the only primitives that can be used to port existing code (C/C++/Rust/etc) into the browser sandbox without killing performance, or introducing crazier and unsound primitives (like stack manipulation).
I've used SharedArrayBuffers to port things like LaTeX into the browser ( https://browsix.org ), and without it you can't use wasm or asm.js; you need to interpret C code in JavaScript to save/restore the stack on system calls.
> It isn't the only way to handle concurrency, but they are the only primitives that can be used to port existing code (C/C++/Rust/etc) into the browser sandbox without killing performance
I thought a big reason we have WASM is so that this emphatically does not need to be a concern of JS.
Shared memory + locks isn't about concurrency, but probably about parallelism. If you just need concurrency, then messaging between web workers should be good enough.
I think the main use case is to support C++ codebases compiled with Emscripten without having to rewrite the whole codebase to not assume shared mutable state.
I wouldn't be surprised if it stays disabled for a long time; there are plenty of non-Spectre-related timing side-channels that are made possible by shared mutable state.
The browser compatibility section of the MDN article[1] lists the various browsers and has footnotes that provide further info on each browser's handling of the situation.