It says that it's actor-based, and that sending messages to an actor is equivalent to running the actor function under a mutex, which reduces parallelism: you have N threads sending messages, but only 1 thread running the actor code, so it's just as serialized as a mutex. So while it may technically be "fully lock-free", using actors means there is no parallelization improvement.
Not quite; with mutex-based actors, an interruption of the actor thread would leave that mutex (and thus that actor's code) locked until the original thread resumes. No amount of additional parallelism will allow that actor code to be "restarted" or "resumed" while the mutex is owned by the interrupted thread.
This implementation relies heavily on restartable functions to allow another parallel thread to pick up and continue the work of an in-progress but interrupted actor. See page three of the (excellent) design document: https://github.com/eduard-permyakov/peredvizhnikov-engine/bl...
Thus while it might not strictly be "more parallel" (same number of actors), it does seem able to make better use of the available parallelism to complete the same set of work.
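A minimal sketch of the restartable idea (not the engine's actual implementation, which uses hardware atomics and C++20 coroutines; all names here are made up): an actor step is first published as visible shared state, so any thread that observes an in-progress step can complete it instead of waiting on the thread that started it. The CAS below is emulated with a lock purely for illustration.

```python
import threading

class AtomicRef:
    """Emulates an atomic reference with compare-and-swap.
    (The lock only stands in for a hardware CAS instruction;
    the algorithm below never waits on an in-progress step.)"""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

class RestartableCounter:
    """State is (value, pending). A pending increment is visible to
    every thread, so whichever thread runs next can finish the step."""
    def __init__(self):
        self.state = AtomicRef((0, None))

    def add(self, amount):
        while True:
            value, pending = self.state.load()
            if pending is not None:
                # Another thread's step is in flight: help complete it, retry.
                self.state.cas((value, pending), (value + pending, None))
                continue
            # Publish our step; from here on, any thread can complete it.
            if self.state.cas((value, None), (value, amount)):
                self.state.cas((value, amount), (value + amount, None))
                return
```

If the publishing thread is preempted between the two final CAS calls, the next thread through `add` completes the pending step on its behalf, which is the "restartable" property being described: no thread's progress is held hostage by a suspended peer.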
> sending messages to an actor is equivalent to running the actor function under a mutex
Where does it say that? In my understanding, the actor model means message passing with asynchronous execution. So quite the contrary: the actor model allows N threads executing in parallel given N actors.
Whenever I read any press about actors or goroutines, they say the same thing about preventing races (and the need for explicit locking) through share-nothing concurrency.
It's easy to scatter the computations, but they never go on to explain how to gather them back up.
You're going to render one frame to the screen. Did multiple actors have a write-handle into the frame buffer? What about collisions: does each entity compute its own collisions, without sharing position information with the other actors?
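One common answer to the gather problem (a sketch under assumed names, not taken from any particular engine): no worker gets a write-handle into the frame buffer at all. Workers send draw messages to a single render actor's mailbox, and the render actor is the sole writer.

```python
import threading
import queue

def worker(entity_positions, render_mailbox):
    # Each worker owns its own entities and shares nothing; it "gathers"
    # by messaging the render actor rather than writing the buffer itself.
    for pos in entity_positions:
        render_mailbox.put(("draw", pos))
    render_mailbox.put(("frame_done", None))

def render_actor(render_mailbox, n_workers):
    framebuffer = []          # owned exclusively by this actor
    remaining = n_workers
    while remaining:
        kind, payload = render_mailbox.get()
        if kind == "frame_done":
            remaining -= 1
        else:
            framebuffer.append(payload)  # sole writer: no data race
    return framebuffer

mailbox = queue.Queue()
chunks = [[(0, 0), (1, 1)], [(2, 2)], [(3, 3), (4, 4)]]
workers = [threading.Thread(target=worker, args=(chunk, mailbox))
           for chunk in chunks]
for t in workers:
    t.start()
frame = render_actor(mailbox, len(workers))
for t in workers:
    t.join()
```

The scatter is parallel, and the gather is serialized through one mailbox, which is exactly the contention point the press about actors tends to gloss over.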
You're right that there is no parallelization improvement, but it does not reduce parallelism either; it's just a different (IMO easier) way to think about concurrency.
Because it is easier for me to think about, it is easier for me to see where things will contend for the same resource, and that actually helps me improve potential parallelism. Once you recognize a particular opportunity where SMP can speed things up, you can stray away from the actor model a bit and have multiple threads receiving on your message queue, or if that isn't possible, you can just add more actors and split up the data better.
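That last escape hatch can be sketched like so (hypothetical names): several identical handler threads drain one shared mailbox. This is only safe when the messages are independent of each other, which is the "split up the data better" caveat.

```python
import threading
import queue

def handler(mailbox, results):
    # One of several identical consumers on a single message queue.
    while True:
        msg = mailbox.get()
        if msg is None:          # shutdown sentinel
            mailbox.put(None)    # re-post so sibling handlers stop too
            return
        results.put(msg * msg)   # messages must be independent of each other

mailbox, results = queue.Queue(), queue.Queue()
handlers = [threading.Thread(target=handler, args=(mailbox, results))
            for _ in range(4)]
for t in handlers:
    t.start()
for n in range(10):
    mailbox.put(n)
mailbox.put(None)
for t in handlers:
    t.join()
```

Ordering is lost the moment a second receiver joins the queue, so anything that relied on the actor processing messages in sequence has to be redesigned; otherwise splitting the data across more actors is the safer move.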