I'm trying to plan a better backup solution for my home server. Right now I'm using Duplicati to back up my 3 external drives, but the backup is staying on-site and on the same kind of media as the original. So, what does your backup setup and workflow look like? Discs at a friend's house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?
I don't think this meets the definition of 3-2-1. Which isn't a problem if it meets your requirements. Hell, I do something similar for my stuff. I have my primary NAS backed up to a secondary NAS. Both have BTRFS snapshots enabled, but the secondary has a longer retention period for snapshots (one month vs. one week). Then I have my secondary NAS mirrored to a NAS at my friend's house for an offsite backup.
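For anyone curious, the offsite mirror can be done with btrfs send/receive over SSH. This is just a minimal sketch, not my exact setup: the hostnames, subvolume paths, and snapshot names are placeholders, and it assumes yesterday's snapshot already exists on both ends for the incremental send.

```bash
#!/bin/bash
set -euo pipefail

SUBVOL=/mnt/pool/data          # subvolume to mirror (placeholder)
SNAPDIR=/mnt/pool/.snapshots   # where read-only snapshots live (placeholder)
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)

# Take today's read-only snapshot on the secondary NAS
btrfs subvolume snapshot -r "$SUBVOL" "$SNAPDIR/data-$TODAY"

# Send only the delta against yesterday's snapshot to the offsite NAS
btrfs send -p "$SNAPDIR/data-$YESTERDAY" "$SNAPDIR/data-$TODAY" \
  | ssh backup@friends-nas btrfs receive /mnt/pool/.snapshots
```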
This is more of a 4-1-1 format.
But 3-2-1 is supposed to be:
Three total copies of the data. Snapshots don't count here, but the live data does.
On two different types of media, e.g. one backup on HDD and another on optical media or tape.
And at least one of those copies kept off-site.
I've always understood 2 as two physically different media - i.e., copies in different folders or partitions of the same disk are not enough to protect against failure of that disk, but a copy on a different disk is. Ideally two physically different systems, so a failure or fire in the primary system won't corrupt or damage the backup.
Used to be that HDDs were expensive and using them as backup media would have been economically crazy, so most systems evolved backup media to be slower and cheaper. The main thing is that having /home/user/critical, /home/user/critical-backup, and /home/user/critical-backup2 satisfies 3 copies, but not 2 media.
Hm, I wonder why snapshots wouldn't satisfy 3. Copies on the same disk like /file, /backup1/file, /backup2/file should satisfy 3, so why wouldn't snapshots be equivalent, given that 3 doesn't guard against filesystem or hardware failure anyway? Just thinking out loud and curious to hear opinions.
I use Proxmox Backup Server for my backups. Everything backs up to one system at home. I then sync the datastore to a little NAS I have at a family member's house across town and also to a cheap storage VPS on the other side of the country. I also do a manual sync of the datastore to a single external drive that I manually connect and disconnect.
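For the external drive, the manual sync is nothing fancy; roughly this (the datastore path and the disk label are placeholders, not necessarily where PBS keeps yours):

```bash
#!/bin/bash
set -euo pipefail

# Mount the offline disk, mirror the PBS datastore onto it, unmount again
mount /dev/disk/by-label/pbs-offline /mnt/offline
rsync -a --delete /mnt/datastore/main/ /mnt/offline/main/
umount /mnt/offline
```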
None of my data hoarding files are backed up as that would cost way too much. That could change if I ever find a killer deal on an LTO8 or better drive and tapes.
I know that Hetzner has some decently priced Storage Boxes that you can mount using rclone and then back up to. Keep in mind that latency will be a factor, so it could be slow.
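As a rough idea of what that looks like, assuming you've already created an sftp remote called "storagebox" with `rclone config` (Storage Boxes speak SFTP, so the plain sftp backend works; remote and paths here are placeholders):

```bash
# Option 1: mount it and point your backup tool at the mount
rclone mount storagebox:backups /mnt/storagebox --daemon

# Option 2: skip the mount and push directly
rclone sync /srv/backups storagebox:backups --transfers 4
```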
All persistent storage from my Docker containers is in one folder. To back up everything, all I have to do is back up this one folder along with my docker compose files (which are in git).
Locally there are ZFS snapshots (auto-snapshot), and for remote backups I use borgmatic.
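For reference, the borgmatic side is just a small YAML config; a minimal sketch along these lines (paths and the repository URL are placeholders; recent borgmatic versions use this flat layout, while older ones nest the same options under location:/retention:):

```yaml
# /etc/borgmatic/config.yaml (sketch)
source_directories:
    - /srv/docker-data        # the one folder holding all container volumes

repositories:
    - path: ssh://user@backup-host/./borg-repo   # placeholder remote
      label: offsite

keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```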
@lka1988 @Lem453 It's primarily a frontend tool designed to make your life easier (torsion.org/borgmatic), but I tend to avoid macros, frontend scripts, or even GUIs like this. They may obscure Borg-specific configuration details that, hypothetically, could one day hinder your restoration process.
All storage is on a Ceph cluster with 2- or 3-way disk/node replication. Files and databases are backed up using Velero and Barman to S3-compatible storage on the same cluster for versioning. Every night, those S3 buckets are synced and encrypted using rclone to a 10 TB Hetzner Storage Box that keeps weekly snapshots.
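The nightly job itself is essentially one line per bucket, assuming an rclone remote for the in-cluster S3 endpoint and a crypt remote layered on top of the Storage Box (remote and bucket names below are placeholders):

```bash
# Encrypt-and-sync one bucket to the Storage Box; repeat per bucket
rclone sync local-s3:nextcloud box-crypt:nextcloud --fast-list --transfers 8
```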
A bit more than 3 copies, but HDD storage is cheap. The majority of my storage is Jellyfin anyway, which doesn't get backed up.
I'm working on setting up some small NVMe nodes for the Ceph cluster, which will allow me to move my Nextcloud from HDD storage into its own S3 bucket with 4+2 erasure coding (aka RAID 6). That will make it much faster and also cut raw storage usage from 4x to 1.5x of usable capacity.
I've got a nightly cronjob that runs a local backup using rsync, plus an external HDD that I stash in my work locker. I bring it home once a week or so, connect it to the server, run a backup script (more rsync), then take it back to work. It's not super sophisticated, but it works, and I have tested and restored from both the local and offsite backups.
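In practice it's little more than a crontab line and two rsync calls; a sketch with placeholder paths and disk label:

```bash
# crontab: nightly local backup at 02:00
# 0 2 * * * /usr/local/bin/local-backup.sh
rsync -a --delete /srv/data/ /mnt/backup-disk/data/

# weekly offsite script, run by hand when the locker drive is plugged in
mount /dev/disk/by-label/offsite /mnt/offsite
rsync -a --delete /srv/data/ /mnt/offsite/data/
umount /mnt/offsite
```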
I use hard drives; I can't imagine trying to put everything on discs or something like that.
One thing I do recommend: I keep one unencrypted hard drive copy in the safest, most hidden part of my house. This is in case the encryption software disappears, or I just forget my encryption keys or something.
Other than that: one encrypted copy of files on a thumb drive in my wallet (selected files, not everything). One in my car. One in my firesafe. Then daily cloud backup.
I get all my data to my server, then from there I have borgmatic do incremental backups to a backup drive on the same machine (nightly cronjob).
From there I use Rclone to get the encrypted borg backup to Backblaze B2 for cloud storage.
So for 3-2-1, my 3 copies are the original, the local backup, and the cloud backup.
My 2 media are local hard drives and cloud storage (I think it's fair to consider this a different kind of media).
And my 1 offsite is the cloud backup.
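The cloud leg is just rclone pushing the whole borg repository to B2; since borg already encrypts the repo, nothing extra is needed. Remote and bucket names here are placeholders:

```bash
# Mirror the local borg repository to Backblaze B2
rclone sync /mnt/backup/borg-repo b2:my-backup-bucket/borg-repo --transfers 8
```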
Now I'm dumb and have a fear of screwing something up, so I have also started burning M-Discs of my critical data (everything except the TV/movie/music stuff I can redownload). This was a lot more expensive than I was expecting, though: because of the aforementioned dumbness I already screwed up two discs (they are write-once). I'm also doing two copies of each disc.
Also, I have photos/home videos additionally stored in Ente; they are super important to me and I wanted a separate copy that someone else is looking after.
I use Backblaze B2 for one offsite backup in "the cloud" and have two local HDDs. Using restic with rclone as storage interface, the whole thing is pretty easy.
A cronjob makes daily backups to B2, and once per month I copy the most current snapshot from B2 to my two local HDDs.
I have one planned improvement: since my server needs programmatic access to B2, malware on it could wipe both the server and B2, leaving me with local backups that are potentially a month old. Therefore I want to run a Raspberry Pi at my parents' place that mirrors the B2 repository daily but is basically air-gapped from the server. Should the B2 repository be wiped, the Raspberry Pi would still retain its snapshots.
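For anyone wanting to copy the setup, the restic commands boil down to something like this (remote, bucket, and paths are placeholders; restic also needs the repository passwords via RESTIC_PASSWORD or a password file):

```bash
# daily push from the server to B2 via the rclone backend
restic -r rclone:b2:my-restic backup /srv/data

# monthly: pull the latest snapshots into the local HDD repo
# (--from-repo is restic >= 0.14 syntax; older releases used --repo2)
restic -r /mnt/hdd/restic-repo copy --from-repo rclone:b2:my-restic

# the planned Raspberry Pi mirror could run the same kind of pull,
# so the server never holds credentials for the Pi's copy
```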
My NAS is a second copy of all my data; nothing exists only on the NAS. The NAS is also slowly uploading to Backblaze, though data caps are slowing my progress. My photos, which I feel are the least replaceable, are automatically backed up to my NAS, Google Photos, and Amazon Photos, with manual backups to my desktop and to an external hard drive that is stored in a fire-resistant box.
My main server is backed up via Kopia to a 5 TB Hetzner Storage Box and to a second server at my parents-in-law's place. I've got additional M-Disc backups of old photos, Paperless PDFs, and work-related files that don't change, stored at my mother's place as well.
My Linux ISO collection is too big to actually back up, so I regularly create file lists, and in the event of data loss I'll have to spend quite some time rebuilding it. At least my fiber connection will help with that.
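The file lists are nothing fancier than this (the path is a placeholder):

```bash
find /mnt/linux-isos -type f -printf '%P\n' | sort > ~/filelists/isos-$(date +%F).txt
```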
All my video media that's easier to replace than preserve is on my NAS running openmediavault with mergerfs. If I lose a drive I can always just, you know, torrent the tv show again.
My main PC (everything except the Steam game install directory) is backed up through KopiaUI to a folder on that mergerfs array that contains media that's difficult/impossible to replace. Daily incremental backups.
That folder is mounted on my PC through DOKAN, which tells Windows OS that it's a local resource (it does this more thoroughly than just assigning a drive letter to a NAS folder through Windows' built-in system). The PC, including the "sensitive NAS media" folder, is then backed up to Backblaze's personal backup service ($99/yr, unlimited size with one-year versioning). The DOKAN step is required for this, since Backblaze doesn't support mounted NAS drives or non-Windows systems (presumably they don't want to use space on versioned encrypted backups of hundred-terabyte pirate movie collections).
Oh, and my phone does one-way Syncthing to my PC, thus putting its files on the PC for Kopia and Backblaze to do their thing.
I use Immich and Nextcloud for the clients (my wife and my parents know that I only take care of that data). On the server side I use borgmatic, which has a local repository on the second drive inside my NUC and a remote repository hosted by Hetzner, called a "Storage Box", which supports borg natively.
Yes, the remote is outside my physical control, but borg is fully encrypted, and at $4/€3.60 per month for 1 TB I feel good about it.
Before I started with borg and Hetzner I had an rsync-based backup to an ODROID-HC1 hosted by my parents, but that didn't feel safe. Due to the slow network at my parents' place I had to sync my local backup instead of making a second backup from the real data, and the monitoring was also very bad.
From my point of view: you have no backup if it isn't automated and monitored.
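The cheapest way I know to get the monitoring half is a dead man's switch: only ping it when the backup exits cleanly, and let the service alert you when the pings stop. A sketch using a healthchecks.io-style ping URL (the UUID is a placeholder):

```bash
# cron: nightly backup plus success ping
borgmatic --verbosity 1 && curl -fsS --retry 3 https://hc-ping.com/00000000-0000-0000-0000-000000000000 > /dev/null
```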
Currently I only have pictures and documents stored, so everything easily fits in 1 TB. One copy on my home server (unencrypted), one copy on my laptop (LUKS-encrypted), and one copy via rsync on a Raspberry Pi at my parents' place (unencrypted). I might change encryption strategies to all-LUKS.
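Going all-LUKS on the other copies would look roughly like this (device names are placeholders, and luksFormat wipes the target, so double-check the device first):

```bash
cryptsetup luksFormat /dev/sdX1          # one-time: create the LUKS container
cryptsetup open /dev/sdX1 backup         # unlock it as /dev/mapper/backup
mkfs.ext4 /dev/mapper/backup             # one-time: create a filesystem inside
mount /dev/mapper/backup /mnt/backup
rsync -a ~/pictures ~/documents /mnt/backup/
umount /mnt/backup && cryptsetup close backup
```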
3: RAID-1 pair + manual periodic sync to an external HD, roughly monthly. Databases synced to cloud.
2: External HD is unplugged when not syncing.
1: External HD is a rotating pair, swapped in a bank box, roughly quarterly. Bank box costs $45/year.
If the RAID crashes, I lose at most a month. If the house burns down, I lose at most 3 months. With ransomware, unless it's really stealthy, I lose 3 months. If I had ongoing development projects, a month (or 3) would be a lot to lose, and I'd probably switch to weekly syncs and monthly swaps, but for what I actually do - media files, financial and smart-home data - 3 months would not be impossible to recreate.
All of this works because my system is small enough to fit on one HDD. A 3-2-1 system for tens of TB starts to look a lot like an enterprise system.
My day-to-day stuff stays in sync via syncthing on my two laptops, my desktop and my home server. They all run btrfs, so I won't be syncing any flipped bits around.
The home server rsyncs from my VPS once a week. Once that has finished, it rsyncs itself over to Hetzner storage via sshfs+gocryptfs.
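The sshfs+gocryptfs leg looks more or less like this (host, port, and paths are placeholders; the gocryptfs directory has to be initialised once beforehand with `gocryptfs -init`):

```bash
sshfs -p 23 u123456@u123456.your-storagebox.de:backups /mnt/box
gocryptfs /mnt/box/crypt /mnt/box-plain      # decrypted view, only exists locally
rsync -a --delete /srv/ /mnt/box-plain/
fusermount -u /mnt/box-plain
fusermount -u /mnt/box
```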
Daily incremental (and occasionally full) backup to an external HDD - a full image of my PCs, so that I should be able to restore anything back to what it was in the last ~14 days, assuming no ransomware or fire or...
All the data I care about gets synced to my Nextcloud (VPS, not home lab) - somewhat ransomware protected as I could restore VPS backups independently from my PC.
The most precious data (mostly photos) gets backed up regularly to an encrypted zip file and then sent to a Glacier-tier S3 bucket. Some manual retention is done at the zip-file level, so that I can restore a somewhat older backup if needed.
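Roughly the shape of that job, shown here with 7z for AES encryption rather than the zip tool I actually use, and with a placeholder bucket name (7z prompts for the passphrase; -mhe=on also encrypts the file names; GLACIER could be DEEP_ARCHIVE for even colder storage):

```bash
7z a -p -mhe=on photos-$(date +%F).7z ~/photos
aws s3 cp photos-$(date +%F).7z s3://my-archive-bucket/ --storage-class GLACIER
```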
At least monthly a full backup image of my PCs is created on a separate external HDD which is not stored at home, but in a place I could access 24/7 if I really needed to restore something fast.
Phones, etc.? They just sync to the aforementioned Nextcloud; the PC downloads from there, and everything then ends up in the backups described above.
Homeserver? See "PC" above. With the caveat that some VMs/containers are not in the backup cycle, as they do not store any valuable data besides temp files, etc. For these, only things like docker compose files, custom config, ansible playbooks,... are in my backup.
Maintain three (3) copies of your data: This includes the original data and at least two copies.
Use two (2) different types of media for storage: Store your data on two distinct forms of media to enhance redundancy.
Keep at least one (1) copy off-site: To ensure data safety, have one backup copy stored in an off-site location, separate from your primary data and on-site backups.
You have 3 copies: one on your phone and NVMe, one on the backup NVMe, and one in the cloud.
You have 2 media: internal SSD and cloud (your phone would count as a third if it weren't auto-synced).
You have 1 off-site: the cloud copy.
I might be the weird one, but I never consider the phone copy as valid for 3-2-1. I have so many photos that they don't fit, so most of them aren't on the phone anymore anyway.
Server/HTPC + desktop (with a delay, since I only turn it on occasionally) + B2.
Everything backs up to a Synology DiskStation (with disk redundancy). The Syno's Hyper Backup makes weekly backups of the critical stuff to the cloud. In the case of my self-hosted stuff, that's mostly the shared storage where all my docker volumes map to, plus workstation backups, Home Assistant backups, phone photos, etc.
A backup of the stuff that's replaceable given enough time (everything not covered above), which is hosted from the DiskStation, is made to an external drive a few times a year and stored off-site the rest of the time. This isn't quite 3-2-1, but it's close enough for my needs.