Had a chance to take a little more of a look, did you investigate querying/indexing within the documents? (Didn't spot any sign in the repository)
Without the ability to index properties, or do ad hoc queries, on the documents there is little reason to do the merging in-DB. It then makes more sense to merge outside the DB and save both the raw CRDT object and a serialised JSON next to it for querying. I assume that if you had found its performance better, indexing+querying would have been the next step?
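To make the "merge outside, store both" idea concrete, here's a minimal sketch. It uses SQLite as a stand-in for Postgres, and a trivial last-writer-wins dict merge as a stand-in for a real CRDT library (Yjs/Automerge etc.) — the table names and helper functions are all hypothetical, just illustrating the shape:

```python
import json
import sqlite3

# Stand-in for a real CRDT merge (e.g. Yjs/Automerge). A trivial
# last-writer-wins key merge keeps the sketch self-contained.
def merge_docs(a: dict, b: dict) -> dict:
    return {**a, **b}

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        guid TEXT PRIMARY KEY,
        crdt BLOB,   -- raw CRDT state, opaque to the DB
        body TEXT    -- serialised JSON mirror, queryable/indexable
    )
""")

def save(guid: str, doc: dict) -> None:
    # The merge happens outside the DB: read current state, merge
    # client-side, then write back both the raw object and its JSON
    # projection in one upsert.
    row = conn.execute("SELECT crdt FROM documents WHERE guid = ?", (guid,)).fetchone()
    merged = merge_docs(json.loads(row[0]) if row else {}, doc)
    blob = json.dumps(merged)  # stand-in for the CRDT binary encoding
    conn.execute(
        "INSERT INTO documents (guid, crdt, body) VALUES (?, ?, ?) "
        "ON CONFLICT(guid) DO UPDATE SET crdt = excluded.crdt, body = excluded.body",
        (guid, blob, blob),
    )

save("doc-1", {"title": "hello"})
save("doc-1", {"tags": ["crdt"]})
# The JSON column is now queryable with ordinary SQL / JSON functions,
# without the DB ever needing to understand the CRDT format.
print(conn.execute(
    "SELECT json_extract(body, '$.title') FROM documents WHERE guid = 'doc-1'"
).fetchone()[0])
```

In Postgres the `body` column would be `jsonb` with a GIN index, which is what would unlock the indexing+querying step.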
However, I'm not surprised you found performance not ideal; I suspect a layer in front of PG throttling writes is the only way forward for a real-time use case.
Personally I find the offline + eventual consistency the most interesting use case. The ability to sync a user's document set to a local db (SQLite or a future mini-pg/supabase) with local querying and then merge them back later. This could be as basic as a three-column table (guid, doc, clock), the clock being a global Lamport-like clock to enable syncing only changed documents.
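A rough sketch of that three-column table and the clock-based sync, using SQLite. The clock here is a single monotonic counter bumped on every local write (a simplified Lamport-like clock; a real one would also merge in clock values received from peers) — all names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The three-column table from the comment: (guid, doc, clock).
conn.execute("CREATE TABLE docs (guid TEXT PRIMARY KEY, doc BLOB, clock INTEGER)")

def tick(conn) -> int:
    # Simplified Lamport-like clock: next value after the highest seen.
    (current,) = conn.execute("SELECT COALESCE(MAX(clock), 0) FROM docs").fetchone()
    return current + 1

def put(conn, guid: str, doc: bytes) -> None:
    # Every local write stamps the row with a fresh clock value.
    conn.execute(
        "INSERT INTO docs VALUES (?, ?, ?) "
        "ON CONFLICT(guid) DO UPDATE SET doc = excluded.doc, clock = excluded.clock",
        (guid, doc, tick(conn)),
    )

def changed_since(conn, clock: int):
    # Sync only documents written after the peer's last-seen clock value.
    return conn.execute(
        "SELECT guid, doc, clock FROM docs WHERE clock > ?", (clock,)
    ).fetchall()

put(conn, "a", b"...")   # clock 1
put(conn, "b", b"...")   # clock 2
seen = 2                 # peer has synced everything up to clock 2
put(conn, "a", b"...")   # local edit bumps "a" to clock 3
print([g for g, _, _ in changed_since(conn, seen)])  # only "a" needs syncing
```

The merge-back step would then feed each changed `doc` blob through the CRDT library's merge before writing it to the remote side.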
> This could be as basic as a three-column table (guid, doc, clock), the clock being a global Lamport
This would be achievable, but also perhaps quite strict - we're dictating a table structure rather than working with existing CRDT libs. That said, we could probably do this today, without any mods/extensions.
> querying/indexing
Yeah, there is nothing in this implementation for that
> Without the ability to index properties, or do ad hoc queries, on the documents there is little reason to do the merging in-DB
One benefit is the ability to send "directly" to the database, but I agree that lack of additional features in the current form outweighs this benefit
> a serialised JSON
This is how we might do it with the "Realtime as an authority" approach (and is the most likely approach tbh).