
If that is your source, then Safari was _way_ behind for all of 2025 up until this month, when it suddenly caught up.

True; I was too fascinated by the big green numbers to pay proper attention to the chart below them. Good for them that they finally caught up.

SBOM may contain similar info to lockfiles, but the purposes are entirely different.

Lockfiles tell the package manager what to install. An SBOM tells the user what your _built_ project contains. In some cases they could be the same, but in most cases they're not.

It's more complicated than just annotating which dependencies are development versus production dependencies. You may be installing dependencies but not actually using them in the build (for example optional transitive dependencies). Some build tools can detect this and omit them from the SBOM, but you can't omit these from your lockfile.

Fundamentally, lockfiles are an input to your development setup process, while an SBOM is an output of the build process.

Now, there is still an argument that you could use the same _format_ for both. But there are no significant advantages to that: the SBOM is more verbose, does not diff well, and will result in worse performance.
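To make the distinction concrete, here's a rough sketch of the same (hypothetical) dependency in both forms. An npm lockfile entry carries the resolution details the package manager needs:

    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }

while a CycloneDX SBOM component identifies what actually ended up in the build:

    {
      "type": "library",
      "name": "left-pad",
      "version": "1.3.0",
      "purl": "pkg:npm/left-pad@1.3.0"
    }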


So the lockfile is a superset, but never a subset?

So it basically is an SBOM then but just sometimes has extra dependencies?


Superset of dependencies, but often a subset of info per dependency.

Ah okay! I knew Rust includes the transitive dependencies, but didn't think/realise all languages might not. Good point!

As mentioned in those threads, there is no SQLite WAL corruption if you have a working disk & file system. If you don't, then all bets are off - SQLite doesn't protect you against that, and most other databases won't either. And nested transactions (SAVEPOINT) won't have any impact on this - all it does in this form is reduce the number of transactions you have.
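As a concrete illustration, a minimal sketch with Python's built-in sqlite3 module (file name hypothetical): the savepoint only groups work inside the one outer transaction, so there's still a single commit hitting the WAL:

    import sqlite3

    conn = sqlite3.connect("app.db", isolation_level=None)  # manage transactions manually
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

    conn.execute("BEGIN")
    conn.execute("INSERT INTO t (v) VALUES ('outer')")
    conn.execute("SAVEPOINT sp1")       # the "nested transaction"
    conn.execute("INSERT INTO t (v) VALUES ('inner')")
    conn.execute("ROLLBACK TO sp1")     # undoes only the inner insert
    conn.execute("RELEASE sp1")
    conn.execute("COMMIT")              # one durable commit to the WAL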


> working disk & file system

And a working ECC or non-ECC RAM bus, and [...].

How bad is recovery from WAL checksum / journal corruption [in SQLite] [with batching at 100k TPS]?

And should WAL checksums be used for distributed replication "bolted onto" SQLite?

>> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?

SQLite would probably still be faster over the network with proper Merkleization


We're relying on logical replication heavily for PowerSync, and I've found it is a great tool, but it is also very low-level and under-documented. This article gives a great overview - I wish I had this when we started with our implementation.

Some examples of difficulties we've run into (a sketch of the keepalive handling from point 6 follows below):

1. LSNs for transactions (commits) are strictly increasing, but not for individual operations across transactions. You may not pick this up during basic testing, but it starts showing up when you have concurrent transactions.
2. You cannot resume logical replication in the middle of a transaction (you have to restart the transaction), which becomes relevant when you have large transactions.
3. In most cases, replication slots cannot be preserved when upgrading Postgres major versions.
4. When you have multiple Postgres clusters in an HA setup, you _can_ use logical replication, but it becomes more tricky (better in recent Postgres versions, but you're still responsible for making sure the slots are synced).
5. Replication slots can break in many different ways, and there's no good way to know all the possible failure modes until you've run into them. Especially fun when your server ran out of disk space at some point. It's a little better with Postgres 17+ exposing wal_status and invalidation_reason on pg_replication_slots.
6. You need to make sure to acknowledge keepalive messages and not only data messages, otherwise the WAL can keep growing indefinitely when you don't have incoming changes (depending on the hosting provider).
7. Common drivers often either don't implement the replication protocol at all, or attempt to abstract away low-level details that you actually need. Here it's great that the article actually explains the low-level protocol details.
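To illustrate point 6, here's a minimal sketch using psycopg2's replication support (slot name and connection string are hypothetical, and the slot is assumed to already exist with a textual output plugin like wal2json). The important part is reporting a flush LSN via send_feedback, so the server can recycle WAL even when no data messages arrive:

    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect(
        "dbname=app",
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()
    cur.start_replication(slot_name="my_slot", decode=True)

    def consume(msg):
        print(msg.payload)
        # Acknowledge what we've processed so the server can recycle WAL.
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    # consume_stream also replies to server keepalives (re-sending the last
    # reported position) at most every keepalive_interval seconds.
    cur.consume_stream(consume, keepalive_interval=10)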


Yeah I was debating heavily between WAL and L/N. Tried to get WAL set up, struggled; tried to learn more about WAL, failed; tried to persevere, shot myself in the foot.

At the end of the day the simplicity of L/N made it well worth the performance degradation. Still making thousands-to-millions of writes per second, so when the original article said they were 'exaggerating' I think they may have gone too far.
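For anyone weighing the same trade-off: assuming L/N here means Postgres LISTEN/NOTIFY, a minimal consumer sketch with psycopg2 (channel name hypothetical) looks like this - no slots, no WAL bookkeeping:

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=app")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN changes;")

    while True:
        # Wait up to 5s for the socket to become readable.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timed out, no notifications
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print(notify.channel, notify.payload)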

I've been hoping WAL gets some more documentation love in the years/decades L/N will serve me should I ever need to upgrade, so please share more! :D


Probably a security feature. If it can access the internet, it can send your private data to the internet. Of course, if you allow it to run arbitrary commands it can do the same.


FoundationDB also has a Mongo-compatible document layer, but it seems like the last release was 6 years ago, so probably doesn't count anymore.


The project looks great! Object storage is often so much better in terms of cost efficiency than a database on EBS. EBS often ends up 10-20x more expensive once you take into account that you need 3x replicas for a typical MongoDB deployment, and need to over-provision the storage. And being able to scale compute independently from storage is great.
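A rough back-of-the-envelope, assuming us-east-1 list prices (gp3 EBS at $0.08/GB-month, S3 Standard at $0.023/GB-month) and a 1.5x over-provisioning factor:

    ebs_gb_month = 0.08    # assumed gp3 list price, USD per GB-month
    s3_gb_month = 0.023    # assumed S3 Standard list price, USD per GB-month
    replicas = 3           # typical MongoDB replica set
    overprovision = 1.5    # headroom factor on each EBS volume

    ratio = (ebs_gb_month * replicas * overprovision) / s3_gb_month
    print(f"EBS is ~{ratio:.0f}x the cost of S3")  # ~16x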

The biggest thing I'm missing from the docs (I checked the GitHub page and the site) is which MongoDB features are and aren't supported. I've worked with Azure CosmosDB before, and even though it claims MongoDB compatibility, it has many compatibility issues as soon as you have more than a basic CRUD application. Some examples include proper ChangeStream support, partial index support, multi-key index support, the set of supported aggregation pipeline operations, tailable cursor support, and snapshot queries.
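For example, a quick compatibility probe for change streams with pymongo (connection string hypothetical) - this is the kind of thing that tends to break on partially-compatible servers:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.testdb.items

    # Change streams with full-document lookup on updates:
    with coll.watch(full_document="updateLookup") as stream:
        for change in stream:
            print(change["operationType"], change.get("fullDocument"))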

Another thing that's not clear: What does multi-master/multi-write mean in practice? What happens if you write to the same data at the same time on different nodes?


That's exactly the reason. S3 is better in almost all aspects compared with EBS, except the performance part, and I am glad that our Data Substrate technology solved this issue gracefully [1].

As for the compatibility, we are leveraging some of the code from the 4.0.3 version (the last AGPL version), and we have very good compatibility (we will show some results in later blog posts). As I mentioned in another reply, the Mongo APIs have been reasonably stable over the last few years, with only very minor changes. Most of the later versions improved performance and transaction support, which we support natively with our underlying data substrate technology. Still, if there is any specific API that you feel is needed, we'd be happy to implement it, and we welcome community contributions.

Multi-master/multi-writer means it is a fully distributed database. Of course you can run it in a single-node configuration and get all the single-node benefits, but if deployed in a cluster, you do not need to worry about which node to write to, or how data are sharded. If your writes can potentially cause conflicts (i.e. writing to the same data at the same time on different nodes), the concurrency control will handle that for you. In fact, you would encounter the same issue even in a single-node configuration, since a single node is still multi-threaded.

[1] https://www.eloqdata.com/blog/2025/07/16/data-substrate-bene...


That is completely wrong for this stage of a company. The ability to make a profit in the future is important. Making a profit while growing is not.


So exactly how are they going to make a profit if each user causes a marginal loss?

Any idiot can sell a dollar’s worth of value for 90 cents.


Let's say I have an open-source project licensed under Apache 2. The grant allows me to include the extension in my project. But it doesn't allow me to relicense it under Apache 2 or any other compatible license. So if I include it, my project can't be Apache 2-licensed anymore.

Apache 2 is just an example here - the same would apply for practically any open source license.

The one place I imagine it could still work is if the open-source project, say a sqlite browser, includes it as an optional plugin. So the project itself stays open-source, but the grant allows using the proprietary plugin with it.


I don't see why this would infect your project, though. You aren't using the code directly, you're using it as a tool dependency, no? Same way as if your OSS project used an Oracle DB to store data.


Unlike Oracle DB, sqlite gets embedded in your program binary. It's a library, not an external service, and this matters for OSS licenses.


Ah true, I forgot because I always use it in Python, where it's built in.
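Even there it's the same embedded library, just compiled into the interpreter rather than your own binary - e.g. you can check the bundled version from the stdlib:

    import sqlite3

    # Version of the SQLite C library linked into this Python build:
    print(sqlite3.sqlite_version)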


It would help to have more info on what you need to run this. The page mentions "without the need for any new hardware", but doesn't say what existing types of hardware it is compatible with. The apps available for download give a hint, but are the apps for displaying the content, or for controlling the content?

For example, I recently set up a dashboard using a Raspberry Pi running Chromium - would this work for my use case?

And does the control work over the local network, or does it require an internet connection?


The controller is a web app (https://admin.signagesync.app), and the app is basically a WebView wrapper. The app has 2 "modes", control (opens the website mentioned earlier) and display.

The app requires internet, but the content can come from the local network. Eventually, if there's enough demand, I'll make everything local.

It doesn't work for RPi yet but soon!

