You can get like 32TB of cheap, basically-ready-to-fail HDDs and listen to them click away for dirt cheap. I mean a couple hundred bucks.
Buy some old tower PCs from a school or something. Low specs are fine.
Install Ubuntu Server, set up Samba and miniDLNA, and set up a tunnel with Cloudflare. Boom.
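If you want the gist of that, here's roughly what it boils down to on a fresh Ubuntu Server install. The share path, tunnel name, and hostname are just placeholders; adjust for your own setup:

```
# file sharing + DLNA
sudo apt update
sudo apt install samba minidlna

# minimal Samba share -- append to /etc/samba/smb.conf, then restart
# [media]
#    path = /srv/media
#    read only = no
sudo systemctl restart smbd

# point miniDLNA at the same folder -- set media_dir=V,/srv/media in /etc/minidlna.conf
sudo systemctl restart minidlna

# Cloudflare tunnel (needs cloudflared installed from Cloudflare's repo)
cloudflared tunnel login
cloudflared tunnel create home-nas
cloudflared tunnel route dns home-nas nas.example.com
cloudflared tunnel run home-nas
```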
You could set this up for like $500. I have a setup like this. My HDD has been clicking since install. 2 years strong. I have two backup 16TB HDDs ready to hot swap should either of them fail. Having those backups on hand brings your cost up to about $800, but again, this is for two 16TB HDDs in a RAID config. If you did like 8TB instead, this is all probably $500 with backups.
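For the RAID part, a plain mdadm mirror is the simplest sketch of the idea. Device names below are examples only (check lsblk first, this wipes them), and your level/layout may differ:

```
sudo apt install mdadm

# mirror two blank 16TB drives into /dev/md0 (destroys whatever is on them!)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/media
sudo mount /dev/md0 /srv/media

# make the array and the mount survive a reboot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /srv/media ext4 defaults 0 2' | sudo tee -a /etc/fstab

cat /proc/mdstat   # watch the initial resync
```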
Western Digital has a bargain bin.
P.S. y'all have valid concerns about RAID5; I'm no expert, so I mirror the whole of it to another 16TB USB HDD I got for a few hundred bucks.
It's basically everything but flood/fire proof. The odds of my RAID config failing such that it's irrecoverable are low, but the odds of my USB HDD failing at the same time are inconceivable! This USB HDD is attached to my main workstation hub. When I log in to Windows (shut up, I dual boot for development, ya dinguses) and my machine is idle for 20 mins, it performs a sync between my network-attached storage (my ghetto RAID server) and the USB HDD.
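If you want to replicate the idle sync, the gist on the Windows side is a robocopy mirror plus a scheduled task with an idle trigger. The share path, drive letter, and task name here are placeholders; this is a sketch of the idea, not my exact setup:

```
rem one-off mirror of the NAS share to the USB drive (paths are examples)
robocopy \\nas\media E:\nas-mirror /MIR /R:2 /W:5

rem register it to run after 20 minutes of idle
schtasks /Create /TN "NasIdleSync" /SC ONIDLE /I 20 /TR "robocopy \\nas\media E:\nas-mirror /MIR /R:2 /W:5"
```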
It's low cost and foolproof, and kinda beefy. If I upgrade the tower to something modern...? Add 1TB of SSD to everything? Dude... I'll be rocking a rad setup for under $1,500.
But one disk failing and the array breaking ain't either.
This is about real-time data, not backup, which should at best happen daily (or twice daily for really important data).
After my experience with RAID5 and the WD Green 2TB drives, which were so fragile that the vibration of six drives in the same case was enough to kill them, resulting in two drives dying at the same time and wiping out my entire media collection... yeah, use RAID6, with another server holding a RAID6 array as continuous backup.
Not a reason not to read the spec sheet.
If they don't mention it there, the drive can be expected to run in a single-bay environment and not in an array environment.
These drives had a high failure rate whether they were in a RAID array or just on their own. I get that they weren't designed for RAID, but just using them in an array didn't cause them to fail.
If you're ready to do some reading, I'd recommend ZFS over traditional RAID. ZFS makes more guarantees than traditional RAID about file integrity over time.
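For the curious, a minimal sketch on Ubuntu, assuming six drives in a raidz2 pool (ZFS's rough counterpart to RAID6); the pool name and device names are placeholders:

```
sudo apt install zfsutils-linux

# raidz2 tolerates any two disks failing at once
# (in practice use /dev/disk/by-id/... paths so device names don't shuffle)
sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo zfs create -o compression=lz4 tank/media

# scrubs walk every block, verify checksums, and repair silent corruption from redundancy
sudo zpool scrub tank
sudo zpool status tank
```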