In my experience, as soon as new data comes into the DB, the .backup command restarts from scratch, and if the writes never stop, the backup never finishes either :D
In Gladys' case, we put a blocking transaction in the application logic to lock out writes during the backup. I haven't found any other way to avoid never-ending backups on write-heavy databases
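A minimal sketch of that idea in Python, assuming the app and the backup share the same database file: a separate connection takes a write lock with BEGIN IMMEDIATE, so the backup API sees no concurrent writes and never restarts. The helper name `backup_with_write_lock` is mine, not from Gladys.

```python
import sqlite3

def backup_with_write_lock(src_path, dest_path):
    # Hold a write lock for the whole backup so concurrent writers
    # block instead of forcing sqlite3_backup to restart.
    lock = sqlite3.connect(src_path)
    lock.execute("BEGIN IMMEDIATE")   # takes the RESERVED lock; writers now wait
    try:
        src = sqlite3.connect(src_path)
        dst = sqlite3.connect(dest_path)
        with dst:
            src.backup(dst)           # sqlite3_backup under the hood (Python 3.7+)
        src.close()
        dst.close()
    finally:
        lock.rollback()               # release the lock; nothing was written
        lock.close()
```

Readers can still run during the backup (RESERVED only excludes other writers), so this blocks writes but not reads.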
> The VACUUM command with an INTO clause is an alternative to the backup API for generating backup copies of a live database. […] The VACUUM INTO command is transactional in the sense that the generated output database is a consistent snapshot of the original database.
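For what it's worth, VACUUM INTO needs SQLite >= 3.27 and the target file must not already exist. A small sketch (the `snapshot` helper name is mine):

```python
import sqlite3

def snapshot(src_path, dest_path):
    # VACUUM INTO writes a consistent, defragmented copy of the live
    # database to dest_path. dest_path must not exist beforehand.
    con = sqlite3.connect(src_path)
    con.execute("VACUUM INTO ?", (dest_path,))
    con.close()
```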
I ran into the case of needing to back up a write-heavy database without blocking anything, and came up with a solution: writing a VFS ( https://www.sqlite.org/vfs.html ) that makes SQLite move the journal files to a directory for processing instead of deleting them. Another process reads them to see which pages were touched and can very quickly pull just the changed pages to update the backup.