Grr.. NAS died

Been running a Synology DS412+ since... shortly after they came out. It's been a nice unit: the software still gets updates from Synology even though it's an older model, it performs well enough for what I use it for, and it's been online pretty much 24/7 since I got it. A couple of weeks ago, a degradation notice popped up on one of the drives. I thought - yeah, probably about time, and we're running out of room anyway (93% of the 10.xx TB capacity used), so I ordered a new set of drives that are a bit bigger (6TB).

Go to start swapping out the drives: the old drives were Seagate 4TB... I think, "Why did I get Seagates? I never get Seagates..." The drives only had about 17k hours, and the failing drive still worked - just a creeping failed-sector count, so it was on borrowed time. Looks like in 2019 I drunkenly ordered some Dell-pulled units, so no warranty. Shame on me for saving a few bucks, I guess. I'll use the other drives as offline storage - my typical modus operandi.

I order 4 new 6TB WD Reds and start swapping them out late last week. It takes about a day per drive for the array to rebuild. This morning I get up - I had put Drive #4 in the oven yesterday afternoon - and when I wake up, it's sitting at 94%. I know once it completes I need to expand the array to take advantage of the extra space. I go get coffee, read a bit of news, come back to check - unit offline.

Hmm, go check it - flashing power light of death. Dammit. According to Google-Fu, that means "Motherboard failure" - but reading a bit deeper, it could really mean just about any hardware failure that prevents the unit from booting up. I have no idea if Disk 4 finished rebuilding or not.

So I go to work, do that for the day, get back home, and tear it apart. Some online searching suggested you might have good luck replacing the CMOS battery... so I tore it open and cleaned it out (it was actually amazingly clean considering how filthy my PCs usually get, but then again, it had just been cleaned out when I started swapping drives). Redid the HSF thermal compound. Put it all back together - same thing. Dammit.

It's old enough now that I'm just going to replace it rather than part it out trying to find what's broken. It wasn't the battery, so anything beyond that is going to take some sleuthing for ancient hardware that I'm just not into. Nothing data-wise on there is irreplaceable - my Plex library was far and away the bulk of it, along with Time Machine backups of my current work computer (that's still working, fortunately, and I have an external drive it also backs up to), and a few odds and ends where I use it as scratch space for transferring junk between PCs.

I'm hoping that I can plug the old drives into a new unit (I ordered a DS420+) and maybe I'll get lucky and it'll pick up the array (RAID5) - I've heard hit-or-miss reports on that. But I'm betting not. And it'll be a summer of rebuilding the Plex library. Maybe it's time to jump on a CPU/mobo upgrade just for the purpose of Handbraking all summer long.

On one hand - RAID is not a backup. Check.

On the other hand - if anyone wants a very used DS412+, as-is, let me know.
 
Do the old drives work?

I have an RS812+ and I had a similar issue a few years ago when I went from 4x3TB to 4x4TB drives. After the last one, the unit would no longer boot.
I removed the drives, but it still wouldn't boot. I did a factory reset and behold, it was alive. Fortunately I had a firmware backup, so I just put the old drives back in and I was back in business. I had to start all over again, but it worked.
 
Old drives do still work. I should try a factory reset - I hit the button once in a half-hearted attempt, but I didn't actually read up on the procedure.

I don't have an HBA card handy, or I would try it. If I have to throw money at it, I figure it may as well go toward the new NAS and give that a shot.
 
Tried the factory reset per the documentation - no bueno. All it does is spin up the fans and blink the power LED; even the reset button doesn't respond the way the Synology documentation says it should. Good idea though.

Gonna ask some neighbors who have techy jobs to see if they happen to have a RAID controller so I can try a set of the disks. Presumably I have the data on two sets of disks now: the original 4TB set still runs, and the newer 6TB set may or may not have completed the rebuild on the 4th drive.
 
It can't be a RAID controller. You need an HBA (or a motherboard with enough free SATA ports to attach all the drives), since the RAID used by Synology units (and QNAP, for that matter) is a software-based RAID implementation.
 
I didn't look any further into it, but apparently Synology isn't using anything exotic; it's just Linux software RAID (mdadm) managed with open-source tools.

In theory, you could attach the drives to a SATA controller, boot something like an Ubuntu installer ISO, pick the 'Try Ubuntu'-ish option, and then access the array and ferry the data somewhere else.
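
Roughly like this, assuming the volume is a standard md RAID5 with LVM on top (the device and volume names below are guesses - check what actually shows up on your disks):

```
# from the Ubuntu live session, with all four drives attached
sudo apt-get install -y mdadm lvm2    # Linux software-RAID and LVM tools

sudo mdadm --assemble --scan          # detect and assemble any md arrays on the disks
cat /proc/mdstat                      # the data array is usually the big one, e.g. /dev/md2

# Synology usually layers LVM on top of the data array
sudo vgscan                           # look for volume groups
sudo vgchange -ay                     # activate whatever it finds

sudo mkdir -p /mnt/nas
sudo mount -o ro /dev/vg1000/lv /mnt/nas   # read-only mount; vg/lv names vary by model
```

Mounting read-only keeps you from making a half-rebuilt array any worse while you copy things off.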

Obviously it'd be better if the drives just worked in a new unit and that's exactly what I'd advise you to do. When I started toying with the idea of building a NAS, one of my still-remaining checklist items was a backup for the NAS itself.

As you said, RAID isn't backup, and that goes for the whole NAS. The data may be replaceable, but the time lost reconstructing it isn't!

[Even with my TrueNAS Scale system, I'm still half looking for something small to serve as a NAS backup, and I might as well get a NAS unit to do it just for reliability's sake]
 
Dual NAS? Or one NAS for your day-to-day access and another, cheaper NAS running slow bulk disks as the backup?
 
Hmmm, a ZFS network share, with replication to a bulk-storage NAS over a dedicated network port and a small 4-port gigabit switch.
 
I mean, that's my plan, because I already have the ZFS network share set up... but I don't recommend that route for most people. I'm also set up to run other stuff on the NAS (it's a 7600K) and it has 10Gbit, which, again, I don't recommend, because it simply doesn't provide a day-to-day benefit for most.

I did it half because I could and half because I wanted to learn more about the related technologies hands-on :)
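
For the curious, the replication side is just zfs send/recv piped over SSH - a minimal sketch (the pool/dataset names and the backup-nas host are made up):

```
# first run: full copy of a snapshot to the backup box
zfs snapshot tank/media@backup-1
zfs send tank/media@backup-1 | ssh backup-nas zfs recv -F bulk/media

# later runs: incrementals only ship the delta between two snapshots
zfs snapshot tank/media@backup-2
zfs send -i tank/media@backup-1 tank/media@backup-2 | ssh backup-nas zfs recv bulk/media
```

Put that on a schedule and you've got the NAS-for-the-NAS backup with very little moving machinery.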
 
Thank you ALL for your responses.

I put the new disks in the new NAS - it picked them up as an upgrade and... upgraded. Easy peasy - no data loss!

I put the old disks back in the old NAS just on a whim and... after letting it sit for about 5 minutes (honestly, I just plugged it in and forgot about it while I was working on the other unit, but it was always set to turn on after a power loss), it finally beeped and turned on! It did mark the array as crashed, but I was able to force it back online by dropping to an SSH shell and forcing mdadm to rebuild the array, even with the failed disk that originally caused all the mayhem. So there's a duplicate copy of all the data there as well.
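
For posterity, the SSH bit was roughly this (from memory - the md device and partition names are what my unit used; yours may differ):

```
cat /proc/mdstat                       # find the crashed data array (md2 on mine)
mdadm --stop /dev/md2                  # stop it so it can be reassembled
mdadm --assemble --force /dev/md2 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3    # force-assemble despite the flaky member
cat /proc/mdstat                       # then watch the resync grind along
```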

So I guess if I had thought to put the old disks back in early on, I could have saved myself the new NAS, but it was probably needed anyway.

Thank you VERY MUCH!
 