Raidz (or preferably raidz2) is good for archival / media streaming / local backup and has good sequential read/write performance, while striped mirrors (raid10) are better for random-access reads and writes and are a little more redundant (i.e. reliable), but cost more in drives for the same usable space.
Raidz needs to read all of every drive to rebuild after a drive replacement, while a striped mirror only needs to read one drive (the surviving half of the affected pair). However, if you're regularly scrubbing ZFS then you read it all regularly anyway.
Raidz effectively has a single spindle's worth of random or concurrent I/O, since a whole stripe needs to be read or written at a time. Raidz also has a certain amount of wastage owing to how stripes round out (it depends on how many disks are in the array), but you still get a lot more space than striped mirrors.
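As I understand it, that rounding wastage comes from raidz padding each allocation up to a multiple of (parity + 1) sectors so it can't leave unusable sub-stripe holes; a rough sketch of the arithmetic (sector counts are made up for illustration):

```shell
#!/bin/sh
# Sketch of raidz allocation padding, assuming the p+1 rounding rule.
nparity=2                                   # raidz2
alloc=11                                    # data+parity sectors for one block
rem=$(( alloc % (nparity + 1) ))
pad=$(( rem ? (nparity + 1) - rem : 0 ))    # round up to a multiple of p+1
total=$(( alloc + pad ))
echo "$alloc sectors stored as $total ($pad padding)"
```

Small blocks pay this tax proportionally more, which is one reason the overhead depends on recordsize and disk count.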
For a home user on a budget raidz2 usually makes more sense IMO, unless you need more concurrent & random I/O, in which case you should probably build and benchmark different configurations.
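For concreteness, here's a minimal sketch of the two layouts with six 1TB drives (pool and device names are placeholders; the zpool commands are shown as comments since they need root and real disks):

```shell
#!/bin/sh
# Hypothetical layouts for 6 x 1TB disks -- illustrative only:
#   zpool create tank raidz2 sda sdb sdc sdd sde sdf
#   zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
disks=6
raidz2_usable=$(( disks - 2 ))   # 4 TB usable, survives any 2 drive failures
mirror_usable=$(( disks / 2 ))   # 3 TB usable, survives 1 failure per pair
echo "raidz2: ${raidz2_usable}TB  striped mirrors: ${mirror_usable}TB"
```

Same drives, roughly a third more usable space from raidz2, which is the budget trade-off described above.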
I've been using ZFS for over 10 years, starting with Nexenta, a now-defunct oddity with a Solaris kernel and an Ubuntu userland. These days I use ZFS on Linux. I've never lost data since I started.
Side note, since it's probably a niche use-case, but using NVMe drives in raidz1 was both pretty effective for VMs and cheap enough on usable capacity (4×1TB for ~3TB usable).
I feel SSDs are in a totally different category that makes raidz1 an actual option, whereas I don't trust it for spinners. The key difference is that a failed SSD can usually still be read (it tends to fail into read-only mode), whereas a failed spinner is usually as good as bricked.
I use SATA ssds in RAIDZ2, and have 6*1TB for approx 4TB usable. They're fantastic and I have yet to have an issue replacing a drive. Nor do I need to do it very often as my workloads are pretty low, it's mostly an archival box.
It's been great, right up until one of the sticks of RAM started to fail...
> Raidz needs to read all of every drive to rebuild after a drive replacement while a striped mirror only needs to read one. However if you're regularly scrubbing zfs then you read it all regularly anyway.
Not quite. Each vdev rebuilds using only the disks within it. A pool with multiple raidz vdevs does not need to read all disks in the pool to rebuild a single raidz vdev. Your statement compares one vdev vs many vdevs; it just happens that folks assume one large raidz vdev versus multiple mirror vdevs.
If you have a 3- or 4-way mirror, wouldn't ZFS read from all disks in the vdev to rebuild any disks added to the mirror (there can be more than one)?
Sure, you can build striped raidz, 3+ way mirrors, and other more exotic variants, but the two most typical corners of the configuration space are a single raidz2+ vdev over all the drives versus striped mirrors. You lose usable space by not putting all your drives into a single parity array, and maximizing usable space is usually why you go with parity in the first place. Mixing multiple arrays only makes sense from this perspective if you have a mix of drive sizes.
> You lose usable space not putting all your drives into a single parity array
Well, if you're aiming for a specific parity ratio but want greater capacity without upgrading all disks to larger ones, your only option is to add more vdevs configured the same way.
Example: 66% usable capacity means 6 disks in a raidz2 (4 usable + 2 parity) or 9 disks in a raidz3 (6 usable + 3 parity). If you want to add capacity but maintain your parity ratio (for a given fault-tolerance risk), there is no raidzN with N > 3, so you must add vdevs.
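Checking that example's arithmetic (integer percentages):

```shell
#!/bin/sh
# Usable-capacity ratio = data disks / total disks, as an integer percent.
raidz2_pct=$(( 4 * 100 / 6 ))   # 6-wide raidz2: 4 data of 6 disks
raidz3_pct=$(( 6 * 100 / 9 ))   # 9-wide raidz3: 6 data of 9 disks
echo "raidz2: ${raidz2_pct}%  raidz3: ${raidz3_pct}%"   # both 66%
```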
Increasing the size of the raidz vdev means you're reducing your failure tolerance.