Not GP, but one disadvantage of updating one huge file is that it's harder to do efficient incremental backups. Theoretically it can still be done if your backup software supports something like content-defined chunking (there was a recent HN thread about Google's rsync-with-fastcdc tool). If you store your assets as separate files instead, though, you get incremental backups trivially with off-the-shelf software like plain old rsync [1].
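To make the chunking idea concrete, here's a toy sketch in Python. It is not FastCDC (the real thing uses a gear table and smarter cut-point logic), and the constants, the chunk_hashes name, and the hash choice are purely illustrative; it just shows why an edit near the start of a huge file only dirties a few chunks instead of invalidating everything after it:

    import hashlib

    # Toy content-defined chunking: cut a chunk wherever the low bits of a
    # rolling hash are all zero. Because boundaries depend on content, an
    # insertion only moves the boundaries near it; later chunks keep their
    # hashes and don't need to be re-uploaded. Constants are arbitrary.
    MASK = (1 << 13) - 1                  # ~8 KiB average chunk size
    MIN_CHUNK, MAX_CHUNK = 2 * 1024, 64 * 1024

    def chunk_hashes(data: bytes) -> list[str]:
        hashes, start, rolling = [], 0, 0
        for i, byte in enumerate(data):
            rolling = ((rolling << 1) + byte) & 0xFFFFFFFF
            size = i + 1 - start
            cut = (size >= MIN_CHUNK and (rolling & MASK) == 0) or size >= MAX_CHUNK
            if cut or i == len(data) - 1:
                hashes.append(hashlib.sha256(data[start:i + 1]).hexdigest())
                start, rolling = i + 1, 0
        return hashes

    # An incremental backup would upload only chunks whose hashes it hasn't
    # stored before; unchanged regions of the huge file cost nothing.

With separate files you get the same effect for free, since the file boundaries act as the chunk boundaries and rsync only transfers the files that changed.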
If you're using 16 PCIe 4.0 lanes you max out at ~32 GB/s, although commercial drives tend to have much lower throughput than that (~7.5 GB/s for a good NVMe drive). Cat6a Ethernet tops out at 10 gigabits per second, and plenty of earlier standards cap lower, e.g. 1 gigabit. My guess is you'll most likely be limited by either disk or network hardware before needing CPU parallelism, if all you're doing is copying bytes from one to the other.
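The back-of-the-envelope version of that comparison, with rough assumed numbers, looks something like this:

    # Back-of-the-envelope bottleneck check; all figures are rough assumptions.
    pcie4_x16 = 16 * 2.0e9    # ~2 GB/s per PCIe 4.0 lane -> ~32 GB/s
    nvme_read = 7.5e9         # good consumer NVMe sequential read
    ten_gbe   = 10e9 / 8      # 10 Gb/s Ethernet = 1.25 GB/s of payload, at best

    bottleneck = min(pcie4_x16, nvme_read, ten_gbe)
    print(f"effective copy rate ~{bottleneck / 1e9:.2f} GB/s")   # 1.25 GB/s
    print(f"1 TB would take ~{1e12 / bottleneck / 60:.0f} min")  # ~13 min

So on a single 10 GbE link the wire is the limit by a wide margin (and plain gigabit is roughly 10x worse); the CPU barely enters into it for a straight copy.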
Oh, sorry — by "copying bytes from one to the other," I meant copying bytes from the disk to the network interface controller on the same physical computer. It's true that beyond that it'll depend on the network topology connecting you to where you want the data to be, and how fast the machines in between and on the other end are!
I don't know enough about custom fiber to know whether that would help you stretch past being network-bottlenecked; most commodity NICs max out at 10 gigabits per second, though faster ones (25, 40, even 100 GbE) exist. Eventually you might be able to make yourself disk-limited (rough crossover math below)... Either way, backing up one file is probably easier than backing up a zillion files scattered around the filesystem.
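Extending the same guesswork: a ~7.5 GB/s drive is about 60 Gb/s on the wire, so (with made-up NIC speeds) the point where you flip from network-limited to disk-limited looks roughly like:

    # Where does the bottleneck flip from network to disk? Rough numbers again.
    nvme_read_gbps = 7.5 * 8                   # ~7.5 GB/s sequential read ~ 60 Gb/s
    for nic_gbps in (1, 10, 25, 40, 100):
        limiter = "disk" if nic_gbps > nvme_read_gbps else "network"
        print(f"{nic_gbps:>3} Gb/s link -> limited by the {limiter}")
    # Only past ~60 Gb/s (e.g. a 100 GbE card) does the drive become the limit.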