From a quick skim, the author seems not to have noted that these are host-managed, so they're not particularly useful individually.
Instead of holding all of the management logic on the drive itself, it's handled at the appliance level to balance load across disks, so these wouldn't work in standard NAS devices unless Seagate provides a binary or API for Synology or QNAP to implement in their firmware.
I have lost 12 Seagate drives in the last 10 years. I have had good success with Toshiba and WD.
I always check Backblaze drive stats now before buying new drives.
I wonder how they do this. Do the drives even use SAS/NVMe/some other standard interface, or are they fully proprietary? What "logic" is done on the controller/backplane vs. in the drive itself?
If they have moved significant amounts of logic such as bad block management and such to the backplane, it's an interesting further example of "full circle" in the tech industry. (e.g. we started out using terminals, then went to locally running software, and now we're slowly moving back towards hosted software via web apps/VDI.) I see no practical reason to do this other than (theoretically) reducing manufacturing costs and (definitely) pushing vendor lock-in. Not like we haven't seen that sorta stuff done with e.g. NetApp messing with firmware on drives though.
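"Host-managed" usually means SMR-style zoned storage: the drive exposes zones that must be written sequentially, and the host, not the drive firmware, tracks write pointers and handles garbage collection. A toy sketch of that contract (zone size and error messages are illustrative, not Seagate's actual layout):

```python
# Toy model of one host-managed zone. The host must write sequentially at
# the write pointer; anything else is rejected, and space is reclaimed only
# by resetting the whole zone (much like an SSD erase block).

class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0  # next block the host is allowed to write

    def write(self, lba, nblocks):
        # Host-managed rule: writes must start exactly at the write pointer.
        if lba != self.write_pointer:
            raise IOError("unaligned write: host must write at the write pointer")
        if self.write_pointer + nblocks > self.size:
            raise IOError("zone full")
        self.write_pointer += nblocks

    def reset(self):
        self.write_pointer = 0


z = Zone(size_blocks=1000)
z.write(0, 100)      # ok: starts at the write pointer
z.write(100, 50)     # ok: sequential continuation
try:
    z.write(0, 10)   # rejected: not at the write pointer
except IOError as e:
    print("drive rejected:", e)
```

This is why a random-write filesystem can't just be pointed at such a drive: something host-side (the appliance, or a zone-aware filesystem) has to own that bookkeeping.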
However, if they just mean that the 29TB disks are SAS drives, that the enclosure firmware implements some sort of proprietary filesystem, and that the disks are only officially supported in their enclosure, but each disk could operate on its own as just a big 29TB drive, then we could in theory get these drives used and stick them in any NAS running ZFS or similar. (I'm reminded of how Intel originally pitched the small 16/32GB Optanes as "accelerators", and for a short time people weren't sure whether you could just use them as tiny NVMe SSDs; it turned out you could. I have a Linux box that uses a 16GB Optane as a boot/log/cache drive, and it works beautifully. Similarly, those 800GB "Oracle accelerators" are just SSDs; one of them is the VM store in my VM box.)
This is all nice, but when it takes 3 weeks to check the volume or add a drive, that's gonna suck. With spinning media there's a benefit to more, smaller drives, since you can read/write from many at once. I'm not saying I wouldn't want these, just that if I didn't have petabytes of data I'd stick with more drives of smaller size. Unless, of course, their speed increases; spinning media isn't getting faster as quickly as it's getting bigger. When you're scrubbing, I'd expect you could scrub 5x20TB a lot quicker than 1x100TB. So, as I see it, this is niche for me.
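A quick back-of-envelope calculation makes the scrub argument concrete (the 250 MB/s sustained throughput is an assumed round number for a large drive, not a measured figure for these):

```python
# Wall-clock scrub time for the same 100 TB of data spread across N drives,
# assuming every drive sustains the same sequential throughput and scrubs
# run in parallel across spindles.

TB = 1e12       # bytes per TB
SPEED = 250e6   # bytes/sec per drive -- assumed, varies in practice

def scrub_hours(total_tb, drives):
    # Parallel scrub: wall-clock time is one drive's share of the data.
    per_drive_bytes = total_tb * TB / drives
    return per_drive_bytes / SPEED / 3600

print(f"5 x 20TB: {scrub_hours(100, 5):.1f} h")   # roughly a day
print(f"1 x 100TB: {scrub_hours(100, 1):.1f} h")  # roughly five days
```

Per-drive throughput does creep up with areal density, but nowhere near 5x, so the single big drive stays several times slower to scrub or resilver.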