A couple of years ago there were discussions about how stupid 20+TB hard drives were, mainly because spinning disks are so slow that just transferring files onto them takes far too long.
Let's say you have a good 20TB drive and it can transfer files at 200MB/s. Filling that drive (or reading it all back) takes about 28 hours of continuous transfer. If it's failing, and you're trying to get as much off of it as you can, you're screwed.
Now let's think about that micro SD card. It's 4TB, and let's be gracious and give it a V90 speed class, i.e. a sustained 90MB/s. Filling it up still takes about 12 hours and 20 minutes.
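Napkin math, assuming decimal units and that both devices actually sustain their rated sequential speeds (real-world transfers will be slower):

```python
def fill_time_hours(capacity_tb: float, speed_mb_s: float) -> float:
    """Hours to fill a drive at a sustained sequential speed (decimal units)."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB
    return capacity_mb / speed_mb_s / 3600

print(f"20TB HDD @ 200MB/s: {fill_time_hours(20, 200):.1f} h")  # ~27.8 h
print(f"4TB SD   @  90MB/s: {fill_time_hours(4, 90):.1f} h")    # ~12.3 h
```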
Worst part is that SD cards don't have SMART, meaning you don't know when they'll die.
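For contrast, here's roughly what drive monitoring looks like when a device does expose SMART. This is just a sketch: it assumes smartmontools is installed, root access, and a hypothetical /dev/sda; SD cards give you nothing equivalent.

```python
import subprocess

DEVICE = "/dev/sda"  # hypothetical device path, adjust for your system

# Overall health verdict (PASSED/FAILED) from the drive's self-assessment.
subprocess.run(["sudo", "smartctl", "-H", DEVICE], check=False)

# Full attribute dump; reallocated/pending sector counts are the usual early warnings.
subprocess.run(["sudo", "smartctl", "-a", DEVICE], check=False)
```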
In my experience, even good SD cards die in my Raspberry Pi running Pi-hole, and that card sits idle almost all the time.
Also, there's this thing where the higher the capacity of a storage device, the more valuable the data stored on it becomes, not directly because it's high capacity, but because the user trusts it with more.
Guys, gals, and anyone in between: please get a proper storage solution, something that won't fail spontaneously. If you need that kind of capacity, go for a NAS with spare drives, or at least get an SSD.
Not all use-cases require a high speed:capacity ratio.
I mean, I have an 18TB USB hard drive, which sustains transfer at about 50MB/sec in practice. It is nearly full, and its level of performance has never been a show-stopping problem.
It's hard to imagine a use case where a NAS would be a viable alternative to an SD card.
I've had a usage tier for storage that looks like this:
Temporary storage:
SD cards - unreliable storage you use temporarily to store pictures and videos before inevitably moving them to a more reliable and permanent solution.
USB drives (HDD, SSD, etc.) - used when you want to move files faster or more conveniently than over a LAN.
Permanent storage:
NAS, internal drives, tape drives, etc. - for when you want to store a lot of data in configurations that allow for redundancy.
The issue with super-high-capacity SD cards, for me, is that they're still fragile and prone to failure. When you let someone store that much data on one card, it gets used as a more permanent medium, and since it holds so much, you end up with a much bigger data loss when it dies.
Imo having 30 128GB SD cards would be better, because if one dies or breaks you lose 128GB and not 4TB.
I totally get that. Here's the thing though: at least in Norway, a 1TB micro SD card costs 2200kr (~$203). If we extrapolate that price linearly to a 4TB card, that's 8800kr (~$813).
If you or a company have the kind of money to spend almost a grand on a single storage device, doesn't that mean the footage/photos on it are pretty valuable? And if they're that valuable, wouldn't you work hard to use cameras/recording systems that can record to redundant drives?
What I don't get is what market segment this product would even fit in. It's too expensive for regular consumers, and the value is terrible. It's not good enough for professional settings, because it has no drive monitoring and no redundancy.
It isn't fast enough for the kind of footage that would require that much space (unless you're recording a month-long real-time video).
Also imagine how horrible the transfer speeds would be for individual photos, when the OS has to initiate a separate file transfer for each one.
If we say each photo is 20MB, that's about 200k photos. Yikes...
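Rough numbers, assuming 20MB per photo, a sustained 90MB/s, and a made-up 30ms of per-file overhead (the real overhead depends on the OS, filesystem and card, so this is only a sketch):

```python
CARD_TB = 4
PHOTO_MB = 20
PER_FILE_OVERHEAD_S = 0.03  # assumed per-file setup cost, purely illustrative

photos = CARD_TB * 1_000_000 // PHOTO_MB           # 200,000 photos
bulk_transfer_h = photos * PHOTO_MB / 90 / 3600    # ~12.3 h of raw transfer at 90MB/s
overhead_h = photos * PER_FILE_OVERHEAD_S / 3600   # ~1.7 h spent just starting transfers

print(photos, round(bulk_transfer_h, 1), round(overhead_h, 1))
```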
The Raspberry Pi is about the worst-case scenario for SD cards. It may be idle, but the operating system is still making constant reads and writes, which absolutely eat through an SD card.
I've started just booting them from USB. I have Home Assistant running on a Pi with an SSD in an external enclosure and it's been completely issue-free.
And after you spend 14 hours filling it with data, it falls out of your shirt pocket when you lean over to tie your shoe, gets caught by a gust of wind, and is gone forever.
Let's say you have a good 20TB drive and it can transfer files at 200MB/s. Filling that drive (or reading it all back) takes about 28 hours of continuous transfer. If it's failing, and you're trying to get as much off of it as you can, you're screwed.
This is kind of why we have RAID, but arguably you shouldn't be using RAID as a backup anyway. Failing drives should be planned for in advance, rather than dealt with in real time at the 20+TB scale.
The primary advantages of such dense HDDs are price and power efficiency.
Also, there's this thing where the higher the capacity of a storage device, the more valuable the data stored on it becomes, not directly because it's high capacity, but because the user trusts it with more.
Also, I'm not sure I agree with the phrasing here. The drive does become "more important", but that's because it stores more data; there is literally more for you to lose if it gets destroyed. You should trust nothing, ever, yourself included.
I see what you mean. SMART helps predict failure, but not always. It's still a lottery, and the absence of SMART only makes it a little bit more of a lottery.