To elaborate, rsync chunks in fixed sizes, so inserting or deleting a few bytes makes every chunk from that point onward look different.
If instead you chunk based on local content (conceptually like chunking text into sentences at periods, except that it's a binary criterion with an upper and lower size limit, and I couldn't find the exact algorithm specification), then within a small number of bytes after an insertion or deletion you start getting the same chunks as before.
This drastically reduces the cost of identifying unmodified chunks.
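To make the idea concrete, here's a toy content-defined chunker (not FastCDC or any specific published scheme; the parameters and the crc32-over-a-window boundary test are illustrative choices of mine). A boundary is cut wherever a hash of the last few bytes hits a fixed pattern, subject to min/max chunk sizes, so boundaries depend only on nearby content and re-align shortly after an edit:

```python
import random
import zlib

# Toy parameters; real CDC schemes pick these differently.
WINDOW = 16      # bytes the boundary test looks at
MIN_SIZE = 64    # lower chunk-size limit
MAX_SIZE = 1024  # upper chunk-size limit
MASK = 0x1F      # cut when the window hash has these bits clear

def chunk(data: bytes) -> list:
    """Split data at content-defined boundaries."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        cut = size >= MAX_SIZE
        if not cut and size >= MIN_SIZE:
            h = zlib.crc32(data[i - WINDOW + 1:i + 1])
            cut = (h & MASK) == 0
        if cut:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

# Demo: insert one byte mid-file; chunks before and (soon) after the
# edit still line up, so most chunks are shared between the versions.
random.seed(0)
data = bytes(random.randrange(256) for _ in range(20000))
edited = data[:5000] + b'X' + data[5000:]
shared = set(chunk(data)) & set(chunk(edited))
```

With fixed-size chunking, every chunk after offset 5000 would differ; here only the handful of chunks around the edit change.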
Your description of content defined chunking is exactly right though. There are a number of techniques for doing it. FastCDC is one of them, although not the one used in rsync.
rsync does use fixed-size chunks, but the rolling hash allows them to be identified even at non-integer chunk offsets.
So a change partway through the file doesn't force rsync to actually re-transfer all of the subsequent unmodified chunks, but it does incur a computational cost to find them since it has to search through all possible offsets.
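That search is cheap because rsync's weak checksum is a rolling one: two 16-bit running sums, Adler-32-style, that can be slid forward one byte in O(1) instead of rehashing the whole block. A rough sketch in that spirit (the real implementation differs in details such as the modulus and a per-byte offset; function names are mine):

```python
def weak_sum(block: bytes):
    """Compute the two running sums (a, b) over a block from scratch."""
    L = len(block)
    a = sum(block) % 65536
    b = sum((L - i) * x for i, x in enumerate(block)) % 65536
    return a, b

def roll(a, b, out_byte, in_byte, L):
    """Slide the window one byte: drop out_byte, add in_byte, in O(1)."""
    a = (a - out_byte + in_byte) % 65536
    b = (b - L * out_byte + a) % 65536
    return a, b

# Demo: slide a 64-byte window across the data one byte at a time.
# The invariant is that (a, b) always equals weak_sum of the current
# window, so every byte offset can be checked without re-summing.
data = bytes(range(256)) * 8
L = 64
a, b = weak_sum(data[:L])
for k in range(1, 200):
    a, b = roll(a, b, data[k - 1], data[k + L - 1], L)
```

When the weak sum matches a known chunk's, rsync then confirms with a strong hash, so the per-offset cost stays tiny.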
The gotcha of "inserting or deleting a few bytes" is not in detection, it's in replicating this discovery to the target copy.
Say we have a 1GB file and we detect an extra byte at the head of our local copy. Great, what next? We can't replicate this on the receiving end without recopying the file, which is exactly what happens: rsync recreates the target file from pieces of its old copy plus the differences received from the source. Every byte is copied; it's just that some of them are copied locally.
In that light, sync tools that operate with fixed-size blocks have one very big advantage - they allow updating target files in-place and limiting per-sync IO to writes of modified blocks only. This works exceptionally well for DBs, VMs, VHDs, file system containers, etc. It doesn't work well for archives (tars, zips), compressed images (jpgs, resource packs in games) and huge executables.
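The IO pattern with fixed-size blocks looks roughly like this (a hypothetical sketch; real tools compare block hashes exchanged over the network rather than raw bytes, but the point is that only differing blocks are written to the target):

```python
BLOCK = 4096

def update_in_place(src: bytes, dst: bytearray) -> int:
    """Overwrite only the blocks of dst that differ; return blocks written."""
    writes = 0
    for off in range(0, len(src), BLOCK):
        blk = src[off:off + BLOCK]
        if dst[off:off + BLOCK] != blk:
            dst[off:off + BLOCK] = blk
            writes += 1
    del dst[len(src):]  # truncate if the target was longer
    return writes

# Demo: flip one byte in a 1 MiB image; only one 4 KiB block is rewritten.
src = bytearray(b'\x00' * (1 << 20))
src[12345] = 0xFF
dst = bytearray(b'\x00' * (1 << 20))
n = update_in_place(bytes(src), dst)
```

For a VM image or database file where a few blocks change between syncs, per-sync write IO is proportional to the change, not the file size.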
In other words - know your tools and know your data. Then match them appropriately.
Technically, if you update a zip on the remote machine it'll work fine (the data gets appended in an update, and the central directory record is always at the end of the zip).
I recall that tar has no real end marker (just zero-block padding), so you can append a new entry to it as well, and when unpacked it'll overwrite the file from earlier in the archive. So both would work fine with rsync, unless the tar is also compressed.
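The tar append behaviour is easy to demonstrate with Python's tarfile module: mode "a" appends a second entry with the same name rather than rewriting the archive, and on read the last occurrence is treated as the current version (tarfile's getmember documents this):

```python
import io
import tarfile

def add_entry(tf, name, payload):
    """Append one file entry built from an in-memory payload."""
    info = tarfile.TarInfo(name)
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tf:
    add_entry(tf, 'config.txt', b'old contents')

buf.seek(0)
with tarfile.open(fileobj=buf, mode='a') as tf:  # append, don't rewrite
    add_entry(tf, 'config.txt', b'new contents')

buf.seek(0)
with tarfile.open(fileobj=buf, mode='r') as tf:
    names = [m.name for m in tf.getmembers()]    # both entries are present
    latest = tf.extractfile(tf.getmember('config.txt')).read()
```

Both copies of `config.txt` live in the archive, but readers resolve the name to the newer one, which is why an appended tar still transfers efficiently with rsync-style tools.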
The tradeoff between zip and tar.{gz,xz,z} is that each zip entry is compressed in its own context, whereas in a compressed tar the entire archive is compressed in a single context. The latter can be a slight win for archives with many small files of similar structure.
For reference, here's the paper that describes the in-place update algo in rsync:
https://www.usenix.org/legacy/events/usenix03/tech/freenix03.... I haven't looked into it more deeply, but I think it's possible to apply the same idea to variable sized chunks.
Also, most modern compression tools have an "rsyncable" option that makes the archives play more nicely with rsync.
Still, with modern NVMe SSD speeds, usually the network will be the bottleneck. My system with a budget WD Blue gets a decent 1800MB/s sequential write (which somehow caused Win10 to freeze and my taskbar to disappear for a few seconds :/ ) and 2600MB/s sequential read, so even if everything else is unoptimized and the file has to be copied it will still take <1s for your hypothetical 1GB file. Copying the file over a 1Gb network link will take an order of magnitude longer.
But you could keep the CDC boundaries in memory, and if one chunk is updated, just recompute boundaries from that chunk until they match the old ones again? A bit more code to write, but doable?
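That resynchronization can be sketched directly: keep the old boundary offsets, rescan only the windows that touch the edit, and splice the shifted old boundaries back in once the hash windows no longer overlap the change. A toy version for a pure insertion, ignoring the min/max size limits real chunkers add (all names and parameters are mine):

```python
import random
import zlib

W = 16    # rolling-window size (toy value)
DIV = 64  # cut wherever crc32(window) % DIV == 0

def scan(data: bytes, lo: int = 0, hi: int = None) -> list:
    """All boundary offsets b in [lo, hi]; a cut goes before index b."""
    hi = len(data) if hi is None else hi
    return [b for b in range(max(lo, W), hi + 1)
            if zlib.crc32(data[b - W:b]) % DIV == 0]

def rechunk_after_insert(new: bytes, old_bounds: list,
                         p: int, delta: int) -> list:
    """Boundaries of `new` (= old data with `delta` bytes inserted at `p`),
    reusing old_bounds and rescanning only windows that touch the edit."""
    kept = [b for b in old_bounds if b <= p]                   # before edit
    mid = scan(new, p + 1, min(len(new), p + delta + W - 1))   # touching it
    shifted = [b + delta for b in old_bounds if b >= p + W]    # after it
    return kept + mid + shifted

# Demo: insert 5 bytes; the incremental result matches a full rescan,
# but only ~W + delta windows were actually rehashed.
random.seed(1)
old = bytes(random.randrange(256) for _ in range(5000))
p, ins = 2000, b'hello'
new = old[:p] + ins + old[p:]
bounds = rechunk_after_insert(new, scan(old), p, len(ins))
```

Because a boundary depends only on the W bytes before it, everything outside that small rescan region is either untouched or shifted verbatim, which is exactly why the bookkeeping stays cheap.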
https://github.com/gotvc/got