Hacker News

I have a 13-year-old NAS with 4x1TB consumer drives, each with over 10 years of head flying hours and 600,000 head unloads. Only 1 drive failed, at around 7 years. The remaining 3 are still spinning and pass the long SMART self-test. I manually set hdparm -B and -S to balance head flying time against head unloads, and I keep the NAS in my basement so everything stays thermally cool. I'm kind of hoping the other drives will fail so I can get a new NAS, but no such luck yet :-(
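For reference, here is what those two knobs look like on a Linux box. A minimal sketch only: the device path and the specific values are illustrative, not the poster's actual settings.

```shell
# -B sets the APM level: 1-127 permit spin-down/head parking (lower = more
# aggressive power saving), 128-254 do not permit spin-down, 255 disables APM.
# 127 allows parking without the very aggressive defaults many drives ship with.
hdparm -B 127 /dev/sdX

# -S sets the standby (spin-down) timeout. Values 1-240 are multiples of
# 5 seconds; 241-251 are multiples of 30 minutes, so 242 = 1 hour.
hdparm -S 242 /dev/sdX
```

Note that these settings do not always survive a power cycle; people typically reapply them from a udev rule or a startup script.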


I admire the "use it until it dies" lifestyle. My NAS is at 7 years and I have no plans to upgrade anytime soon!


The problem with setting up a nearly maintenance-free NAS is that you tend to forget about it just humming away in the background.

Then a drive fails spectacularly.

And that's the story of how I thought I had lost all our home movies. Luckily the home movies and pictures were backed up.


No RAID?


If you are only going to have one of the two, choose backups over RAID: preferably off-site, and better yet soft-offline.

Of course both is best, if you don't consider the cost of doubling up your storage (assuming R1/R10) plus a backup service to be a problem.


RAID isn't a backup; it only handles certain specific failure scenarios.


Yes, it covers exactly the "Then a drive fails spectacularly." case. Unless you were hit by some subtle silent data corruption across the RAID (but that's pretty rare compared to a classic drive failure with buzzing and clicking sounds).


True, it does cover that specific case.

But it doesn't cover your RAID controller dying, your house burning down, burglary, tornado, tsunami, earthquake and other "acts of god", etc.

"A backup is a copy of the information that is not attached to the system where the original information is."

[0] https://www.reddit.com/r/storage/comments/hflzkm/raid_is_not...


> But it doesn't cover the your RAID controller dying

One of the reasons some people ditch the hardware RAID controllers and do everything in software. If you're at the point of pulling the drives from a dead enclosure and sticking them in something new it's really nice to not have to worry about hardware differences.
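As a sketch of what that migration looks like with Linux software RAID (mdadm): the device names and mount point below are examples, but the key point is that mdadm stores the array metadata on the disks themselves, so a new host can rediscover the array without the original controller.

```shell
# Disks pulled from a dead enclosure, attached to a fresh Linux host.
# Inspect the RAID superblocks mdadm wrote on the member disks:
mdadm --examine /dev/sdb /dev/sdc

# Auto-assemble any arrays found on attached disks:
mdadm --assemble --scan

# Confirm the array came up, then mount it (mount point is illustrative):
cat /proc/mdstat
mount /dev/md0 /mnt/restore
```

ZFS and Btrfs pools are similarly portable, for the same reason: the pool layout lives on the disks, not in a controller's NVRAM.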


I agree, RAID is not a backup (and nobody said it is, in this thread). But if you self-host a lot of data, even as a hobby, it will make your life easier in case of disk failure.


I thought it was implied by the "No RAID?" reply to a data-loss story (wherein they mentioned that they had a backup :)

I'm personally very skeptical as I have been using/used RAID for 20+ years, and I have lost data due to:

- crappy/faulty RAID controllers: who actually spends money on a good hardware controller when a cheap version is included in most motherboards built in the last 15+ years? In one case (a build for a friend), the onboard controller was writing corrupt data to BOTH drives in a RAID-0, so when we tried to recover, the data on both drives was corrupt.

- Windows 8 beta which nuked my 8-drive partition during install
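Silent corruption like the RAID case above is detectable long before a recovery attempt if you keep checksum manifests of the data. A minimal sketch, assuming GNU coreutils on Linux; the directory path is an example:

```shell
# Build a checksum manifest for a data directory (path is illustrative).
# The manifest file itself is excluded from its own listing.
cd /srv/photos
find . -type f ! -name '*.sha256' -print0 | xargs -0 sha256sum > manifest.sha256

# Later (e.g. from cron): re-verify. A nonzero exit status means at least
# one file no longer matches its recorded checksum.
sha256sum -c manifest.sha256 --quiet
```

Had a manifest like this existed, the corrupt writes would have shown up at the next verification run rather than at recovery time. Filesystems with built-in checksumming (ZFS, Btrfs) automate the same idea.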


It's actually in the name: R = Redundant, i.e. availability.


Just because it's in the name doesn't mean it should be considered a fact or best practice in accordance with reality. I think this[0] reddit post frames it in the simplest way possible: "A backup is a copy of the information that is not attached to the system where the original information is."

There are many[1], many[2], many[3] articles about why "RAID is not a backup". If you google this phrase, many more people who are considerably more intelligent and wise than myself, can tell you why "RAID is not a backup" and it is a mantra that has saved myself, friends, colleagues and strangers alike a lot of pain.

[0] https://www.reddit.com/r/storage/comments/hflzkm/raid_is_not...

[1] https://www.raidisnotabackup.com/

[2] https://serverfault.com/questions/2888/why-is-raid-not-a-bac...

[3] https://www.diskinternals.com/raid-recovery/raid-is-not-back...

edit: formatting


The I used to stand for "inexpensive" too, until RAID drives turned out to be anything but. It has since been made a backronym, "independent", although the drives really aren't independent either.


Once I built a FreeNAS box and lost all the wedding photos. The significant other was not amused, and I vowed to use a lot of backups. I have a lot of old NASes, from NetGear to QNAP to Synology. Perk of the job.

But these days I use a Synology DS2412 in an SHR RAID6 configuration. Only 1 of the 12 drives has failed thus far, but maybe that's because most of the time it's powered off and activated using Wake-on-LAN. For day-to-day use I have an old laptop with 2 SATA 1TB disks running Debian. Documents and photos get frequently backed up to the big NAS, and the big NAS uses Hyper Backup to a Hetzner storage box that costs me around $5 a month. So now they're on three systems, across two different media, with one copy in a different place. It would be a pain to restore if the house burned down, but it's doable.
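For the off-site leg, a rough sketch of the non-Synology equivalent (Hyper Back­up itself is Synology-specific): Hetzner storage boxes accept rsync over SSH, so a cron job can push the important directories. The user, host, and paths below are placeholders.

```shell
# Push the critical data set to an off-site storage box over SSH.
# -a preserves metadata, -z compresses in transit, --delete mirrors removals.
rsync -az --delete \
    /volume1/photos/ \
    u123456@u123456.example-storagebox.net:backups/photos/
```

One caveat with --delete: an accidental local deletion propagates to the remote on the next run, so people often pair this with snapshots or versioning on the remote side.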

That reminds me.. I should document the restore process somewhere. There is no way the other family members can do this right now.


>and lost all the wedding photos

And you didn't have a backup? Ouch. I'm sorry for you.

>I should document the restore process somewhere. There is no way the other family members can do this right now.

I agree. If I passed away, or something seriously bad happened to me, nobody in my family would be able to recover any memories.

I should document how to recover all the data in a simple way. And probably print the document and store it somewhere easily accessible.


I (and surely others) would love to know the reason(s) for the FreeNAS failure i.e. what kind of configuration did you have and what went wrong?


Oh, nothing fancy I'm afraid. The WD disk it was on just died. There was no RAID or any other redundancy back then, just a single disk. Luckily we got about 80% of the pictures back from family and friends. Rebuilding the collection from various sources was a pain, though. It taught me to use backups the hard way :)


I'm confused - you used FreeNAS on a single disk? I wasn't even aware that was possible.

Thankfully you were able to recover! I think almost everyone has learned to make backups the hard way at some point. I am the local IT guy among a lot of friends (and by extension, their friends), so I was always the go-to when things got bad. At some point 15+ years ago, I bought "Restorer2k", which was able to save a lot of data from not-quite-dead drives; some of them had to go into the freezer overnight in ziplock bags to try to unstick a frozen read head (it rarely helped), for others I was able to replace the controller board via eBay. One friend lost an almost-finished PhD dissertation (months of work). I remember the tears when I was able to recover that :)

At some point I got tired of data recovery and started telling people to buy a Mac + cheap external drive and use the Time Machine feature. Interestingly, I haven't gotten many phone calls the last few years ;)


Err, yeah. There were more disks, but I wasn't that experienced, so I made volumes/SMB shares per disk. FreeNAS itself was on a bootable USB stick. Who needs one big huge volume, eh? :) This was back in 2007 as well.


Modern HDDs should not be stored powered down.

They should be spinning most of the time, idling, to keep things lubricated.

Or so I've heard.

I have my NAS set up that way and have 10-year-old drives with consistent success (they move from main to spare after 5 years). I also aim for a 30W AMD CPU (which draws around 5W at idle).

For drives I spend $300 every 5 years on new ones, so I can keep growing and renewing. That's a pretty low cost compared to cloud alternatives.


I get where you're coming from, but the field can differ greatly from theory. I have had no real errors since I've been using this big NAS.

I do admit I personally have a lucky track record with hard disks. In the more than 25 years I have used spinning hard disks, only about 3 or 4 have ever failed on me. I don't know why, but most technology I use just keeps working for a pretty long time. :)

I still have lots of 500 GB and 1 TB disks around in various old NAS devices I haven't booted up in ages. When electricity got quite expensive I decided to stop using those.

The amount of data I really, really want to protect is less than 1 TB in total, I think. All the other stuff on the big NAS is "nice to have" but not life-crushing should it be gone forever.


My 15TB DS1511+ from 2011 would like a word.

I only recently replaced a failed HDD and power supply, but otherwise going mostly strong. It will stop responding to the network out of the blue on occasion, but a power cycle gets it back in order.

But I’ve had redundancy for a while with S3, then later (and currently) BackBlaze.

I’ve been looking into replacing it, but I’m hearing Synology hardware and software isn’t as great as it used to be, which is unfortunate, because this thing has been a tank.


I built my home NAS in 2017; the two original drives were replaced after developing bad blocks (at 4 and 5 years, respectively). The two expansion drives (2018, 2021) are still fine.

I built a NAS for a client, which currently has 22 drives in it (growing bit by bit over the years, now 270 TB of raw capacity), and since 2018 it has lost only 3 drives.


I’d have thought 2 new drives to replace all that would be worth the investment in power savings alone.


So is that high usage compared to Backblaze?

Is the 10y of head flying time per head? Does it count heads actually reading/writing, or any time the drive is spinning with heads aloft?

I only skimmed the charts; they seemed to measure time in years, but not necessarily drive use over time.


This NAS, a Lenovo ix4-300d, came with Seagate drives (ST1000DM003), so it's whatever the SMART attribute 240 counter ("Head Flying Hours") means to Seagate, I guess. I just interpret it as "not parked", so it could be doing anything, but this NAS is not doing huge amounts of I/O - mostly just music, movies, and some personal files. I think all the heads for all platters are on one assembly, so they are either all parked or all flying.
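For anyone wanting to read the same counters, smartmontools exposes them; a sketch, assuming a Linux host and an example device path (attribute names vary by vendor, and 240 in particular is vendor-specific):

```shell
# Dump the SMART attribute table and pick out the counters discussed here.
# 240 = Head_Flying_Hours (vendor-specific), 193 = Load_Cycle_Count (head
# unloads), 9 = Power_On_Hours for comparison.
smartctl -A /dev/sda | grep -Ei 'head_flying|load_cycle|power_on'

# Kick off the long self-test mentioned upthread (runs in the background;
# check results later with: smartctl -l selftest /dev/sda)
smartctl -t long /dev/sda
```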


Have you powered it down lately? Some of them power down and never come up again.


No, the entire thing is on a UPS and uptime routinely says something like 800 days. I also have a whole-home generator, so I'm hoping it stays on forever ;-) I also back it up online through IDrive and take frequent local backups, so I don't care if the entire thing fails.


That's the most responsible form of sabotage I've ever heard.



