Drives in a NAS age at roughly the same rate. If multiple drives are the same age or from the same manufacturing batch, there's a higher chance they fail around the same time.
After one disk in the array fails, you can insert a new drive and rebuild the array, but during the rebuild all the drives are under heavier load than in normal operation. If you only have one disk of redundancy, you're vulnerable until the rebuild completes.
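To put a rough number on that rebuild window, here's a back-of-the-envelope sketch. It assumes independent drives with an exponential failure model and a made-up annual failure rate (AFR), so it actually understates the risk the comment describes: correlated batch aging and rebuild stress make a second failure more likely than this model predicts.

```python
import math

def p_array_loss_during_rebuild(n_drives, afr, rebuild_hours):
    """Chance that a second drive dies during the rebuild, assuming
    single-disk redundancy and independent exponential failures.
    afr is the annual failure rate, e.g. 0.03 for 3%."""
    # Annualized hazard rate implied by the AFR.
    hazard = -math.log(1 - afr)
    # Per-drive probability of failing within the rebuild window.
    p_one = 1 - math.exp(-hazard * rebuild_hours / (24 * 365))
    # With one disk of redundancy, losing ANY of the remaining
    # n_drives - 1 drives during the rebuild loses the array.
    return 1 - (1 - p_one) ** (n_drives - 1)

# Hypothetical example: 4-drive array, 3% AFR, 24-hour rebuild.
print(p_array_loss_during_rebuild(4, 0.03, 24))
```

Even this optimistic model shows why two-disk redundancy (and the longer rebuilds of bigger drives) matters: the risk scales with both the number of surviving drives and the rebuild time.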
“Cascading drive failure”? The what now? How do drives die in a domino effect?
Three locations seems a bit much, but I totally understand it. Safe storage is tedious, huh.