Hey everyone! I've got a NAS that uses mdadm for RAID, and I'm in a bit of a pickle. I lost 2 out of 4 disks, but I managed to clone the most recently failing disk onto a new one and put it back into the RAID array. However, it still shows as faulty when I check with `mdadm --detail`. What I need to know is whether removing the disk from the array and re-adding it will be non-destructive. Is there an alternative approach to resolving this? Here's the output from `mdadm --detail` for reference, where `/dev/sdc3` is the cloned disk I'm trying to add back, and `/dev/sdd4` was removed after an earlier failure.
4 Answers
Listen, RAID5 only needs three disks to run, so your array can keep going with the fourth one missing; right now that drive is marked as a spare, which means the array isn't actually using it. You can't just add it and expect things to work. Do it in the right order: fail the faulty disk first, remove it from the array, then add the new one so the rebuild kicks off (roughly the sequence sketched below).
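A minimal sketch of that sequence; the array name `/dev/md0` is an assumption on my part, and `/dev/sdc3` is taken from your question, so check both against your own `mdadm --detail` output before running anything:

```bash
# assumed names: array /dev/md0, cloned member partition /dev/sdc3
mdadm --manage /dev/md0 --fail /dev/sdc3     # mark the member as failed (skip if already listed as faulty)
mdadm --manage /dev/md0 --remove /dev/sdc3   # take it out of the array
mdadm --manage /dev/md0 --add /dev/sdc3      # add it back; mdadm starts rebuilding onto it
cat /proc/mdstat                             # watch the resync progress
```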
I'd suggest cloning all your drives to new ones and working off those clones. It might be even easier to create disk images and assemble the RAID from those. That way you can test force-starting the array without risking your current setup.
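Something along these lines, assuming `ddrescue` is installed; the image paths and the member partitions other than `/dev/sdc3` are placeholders you'd substitute with your real ones:

```bash
# clone the surviving members to image files; ddrescue keeps going past read errors
ddrescue /dev/sda3 /mnt/backup/sda3.img /mnt/backup/sda3.map
ddrescue /dev/sdb3 /mnt/backup/sdb3.img /mnt/backup/sdb3.map
ddrescue /dev/sdc3 /mnt/backup/sdc3.img /mnt/backup/sdc3.map

# expose the images as block devices (each command prints the /dev/loopN it used)
losetup -f --show /mnt/backup/sda3.img
losetup -f --show /mnt/backup/sdb3.img
losetup -f --show /mnt/backup/sdc3.img

# try to force-assemble a degraded array from the loop devices only;
# the real disks are never written to
mdadm --assemble --force --run /dev/md1 /dev/loop0 /dev/loop1 /dev/loop2
```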
Try setting up a temporary RAID on another Linux machine using files instead of disks. Create four files, attach them as loop devices, and build an mdadm array on them that imitates your current setup. This way, you can experiment without any danger!
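For example, something like this, assuming `/dev/md9` is unused and the file sizes/paths are just placeholders:

```bash
# create four 1 GiB sparse files and attach each one as a loop device
LOOPS=""
for i in 0 1 2 3; do
    truncate -s 1G /tmp/raidtest$i.img
    LOOPS="$LOOPS $(losetup -f --show /tmp/raidtest$i.img)"
done

# build a throwaway 4-disk RAID5 on the loop devices
mdadm --create /dev/md9 --level=5 --raid-devices=4 $LOOPS

# then practise the risky operations here instead of on the NAS, e.g.
# (substitute whichever /dev/loopN devices losetup actually printed)
mdadm --manage /dev/md9 --fail /dev/loop3
mdadm --manage /dev/md9 --remove /dev/loop3
mdadm --manage /dev/md9 --add /dev/loop3
cat /proc/mdstat
```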
Just a heads-up: `--add` treats the disk as a fresh spare and resyncs over it, so anything on that disk, your clone included, gets overwritten; in that sense it's destructive. `--re-add` might be what you actually need: it tries to slot the member back in using its existing metadata, which is safer, but I haven't personally tried it. If you have another non-array drive, you can do a 'dry run' on it that keeps your original drives safe.
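To make the difference concrete, a sketch with the same assumed names as above (`/dev/md0` is my guess for the array; verify against your `--detail` output first):

```bash
# --re-add reuses the disk's existing superblock and event count; with a
# write-intent bitmap only the stale blocks get resynced
mdadm --manage /dev/md0 --re-add /dev/sdc3

# only if the re-add is refused: --add treats the disk as a blank spare and
# rebuilds onto it from the other members, overwriting whatever the clone held
mdadm --manage /dev/md0 --add /dev/sdc3

mdadm --detail /dev/md0   # check how the member was accepted
cat /proc/mdstat
```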

But what if I try to fail/remove/add the drives like you're suggesting? Won't that destroy the array and my data?