Which of the 5 existing kinds of UUID do you consider the One True UUID? No matter which you pick, I guarantee that there's something badly wrong with it for at least one common use-case that can be improved by switching to either v4 (IMHO the only one of the existing types worth using) or to one of the new proposed UUID types.
I'm not worried about collisions and I agree that being able to put metadata in the UUID is a big meh of a feature. The problem with v4 is what happens when people try using them as database keys: the random sort order can really hurt performance. You might argue that the fix for this is to simply not use UUIDs as database keys... but so many people are already doing this, and will continue to do it, that they should probably be given better standard options.
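To make the "better standard options" concrete: here's a hand-rolled sketch of a time-ordered, UUIDv7-style value per the draft layout (48-bit Unix millisecond timestamp up front, version/variant bits, then random bits). This is an illustration only, not a spec-complete implementation; the function name is made up.

```python
import os
import time
import uuid

def uuid7_like() -> uuid.UUID:
    """Sketch of a v7-style UUID: ms timestamp in the top 48 bits,
    then random bits, with version/variant stamped per RFC 4122."""
    ms = time.time_ns() // 1_000_000               # 48-bit millisecond timestamp
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits
    value = ((ms & (2**48 - 1)) << 80) | rand
    value &= ~(0xF << 76); value |= 0x7 << 76      # version = 7
    value &= ~(0x3 << 62); value |= 0x2 << 62      # RFC 4122 variant
    return uuid.UUID(int=value)

a = uuid7_like()
time.sleep(0.005)
b = uuid7_like()
# Because the timestamp occupies the most significant bits, values
# generated later sort later -- new rows land at the end of the index.
```

The point is the sort order: unlike v4, inserts cluster at the right edge of a B-tree instead of splattering across every leaf page.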
It's much less important in Postgres than in some other databases, but it still hurts.
Mainly because the indexes have to be updated and searched all over the place with random UUIDs. The locality of the table data on disk itself is fine, because it's inserted sequentially.
For the cases where one sensibly uses UUIDs, I don't see natural keys as an alternative. What's, e.g., the natural key for an order reference that should be accessible without auth, must not allow enumeration, and has to be allocatable without a single point of coordination?
This is a fairly significant issue for mysql, but does the same issue exist when using postgres with a UUID pk type?
At my last job, this was managed by using two IDs on each row: a serial one that was basically only used for database optimization purposes, and the "real" UUIDv4 ID. It felt gross then, and it feels gross now, but it seemed to do the trick for our needs.
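The two-ID pattern described above can be sketched like this, using SQLite for illustration (table and column names are made up):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,          -- internal serial key: inserts append to the index
        public_id TEXT NOT NULL UNIQUE,  -- external UUIDv4, safe to expose in URLs/APIs
        body TEXT
    )
""")

public_id = str(uuid.uuid4())
conn.execute("INSERT INTO orders (public_id, body) VALUES (?, ?)",
             (public_id, "first order"))
row = conn.execute("SELECT id, public_id FROM orders WHERE public_id = ?",
                   (public_id,)).fetchone()
```

The integer key keeps primary-key inserts append-only; the UUID column pays the random-index cost only on its own unique index, not on every foreign key referencing the row.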
It still increases the write rate a lot compared to more sequential values, and reduces the cache hit ratio.
With sequential-ish values indexed by something like B-trees, the same index pages get modified over and over again, because new table rows all land in a narrow part of the index. As databases and operating systems typically buffer writes for a while, this reduces the number of writes hitting storage substantially.
Conversely, with random values, every leaf page gets modified with the same probability. That's not a problem with a small index, because you'll soon dirty every page anyway. But as soon as the index gets larger, you'll see many more writes hitting storage.
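A toy model of the effect: count how many distinct B-tree leaf "pages" get dirtied by sequential vs random key inserts, assuming a fixed number of keys per page (the numbers are made up, and real B-trees split pages rather than mapping keys this simply).

```python
import random

KEYS_PER_PAGE = 100
N_INSERTS = 1_000
EXISTING_KEYS = 1_000_000  # the index already holds a million entries

def pages_dirtied(keys):
    """Distinct leaf pages touched, modeling page = key // KEYS_PER_PAGE."""
    return len({k // KEYS_PER_PAGE for k in keys})

# Sequential keys append at the right edge of the index.
sequential = range(EXISTING_KEYS, EXISTING_KEYS + N_INSERTS)
# Random keys land anywhere among the existing 10,000 pages.
rand = [random.randrange(EXISTING_KEYS) for _ in range(N_INSERTS)]

seq_pages = pages_dirtied(sequential)   # ~N_INSERTS / KEYS_PER_PAGE pages
rand_pages = pages_dirtied(rand)        # close to N_INSERTS pages
```

With these made-up numbers, sequential inserts dirty on the order of 10 pages while random inserts dirty nearly 1,000, which is the extra write traffic the parent comment describes.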
FYI - for anyone running across this thread later. This is a unique problem with (a kind of globally) consistent storage on small (single node equivalent) systems. If you have a large scale distributed system, you WANT writes to be well distributed across all nodes, or you’ll end up with problematic hot spots.
All new writes ending up on the same node/page/index is a good way to crush your system in a cascading-never-coming-back-up-until-you-drain-traffic-kind-of-way
The other versions all have use cases other than avoiding collisions.
If your access patterns are such that you often access IDs generated around the same time together, v6-v8 will probably be faster due to locality. If you access data generated on the same node together, then the same goes for v1-v2.
If you want a reproducible id based on some other identifier, then only v3 and v5 will work.
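For example, v5 hashes a namespace UUID plus a name (SHA-1 for v5, MD5 for v3), so the same inputs always produce the same UUID; here with the standard DNS namespace from RFC 4122:

```python
import uuid

# Deriving a UUID from a DNS name: reproducible across processes and machines.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
assert a == b          # same namespace + name -> same UUID, always
```

Handy when two systems need to independently derive the same ID for the same underlying entity without coordinating.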