Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid and I'm wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights; is this too frequent? What's your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
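For reference, twice a week in cron looks something like this (the script path is just a placeholder for whatever your backup script is):

```
# Run the backup script at 23:00 on Mondays and Fridays
# (minute hour day-of-month month day-of-week)
0 23 * * 1,5 /boot/scripts/backup_appdata.sh
```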
I run Borg nightly, backing up the majority of the data on my boot disk, including docker volumes and config, plus a few extra folders.
Each individual archive is around 550 GB, but thanks to the de-duplication and compression it's only ~800 MB of new data each day, taking around 3 minutes to complete the backup.
Borg's de-duplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98 TB of data raw. After de-duplication and compression they only take up 407.98 GB on disk.
With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
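For anyone wanting to replicate this, the whole thing is basically a nightly borg create followed by a prune; a rough sketch (repo path, source folders, and the yearly count are illustrative, not my exact setup):

```
#!/bin/sh
# Nightly Borg job: create a deduplicated, compressed archive, then prune.
REPO=/mnt/backup/borg-repo    # placeholder repo path

borg create --stats --compression zstd \
    "$REPO::{hostname}-{now:%Y-%m-%d}" \
    /var/lib/docker /etc /home

# Keep 7 daily, 3 weekly, 11 monthly, and yearly archives beyond that
borg prune --keep-daily 7 --keep-weekly 3 --keep-monthly 11 \
    --keep-yearly 100 "$REPO"    # high yearly count ~ "one per year forever"
```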
Yep. Even if the data I'm backing up doesn't really change that often. Perhaps I should start backing up files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.
Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies, and infinite annuals.
Periodic test restores of all backups at various granularities at least monthly or whenever I'm bored or fuck something up.
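For the replication piece, the schedules are just pvesr jobs; roughly like this (guest IDs and node names made up):

```
# Replicate guest 100 to node pve2 every 15 minutes (HA guest)
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# Replicate guest 101 hourly (less critical guest)
pvesr create-local-job 101-0 pve2 --schedule '*:00'
```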
This is very similar to how I run mine, except that I use Ceph instead of ZFS. Nightly backups of the CephFS data with Duplicati, followed by staggered nightly backups for all VMs and containers to a PBS VM on the NAS. File backups from Unraid get sent up to CrashPlan.
Slightly fewer retention points to cut down on overall storage, and a similar test pattern.
I would like to play with Ceph but I don't have a lot of spare equipment anymore, and I understand ZFS pretty well and trust it. Maybe on the next cluster upgrade, if I ever do another one.
And I have an almost unhealthy paranoia after seeing so many shitshows in my career, so having a pile of copies just helps me sleep at night. The day I have to delve into the last layer is the day I build another layer, but that hasn't happened recently. PBS dedup is pretty damn good, so it's not much extra to keep a lot of copies.
Nextcloud data daily, same for the docker configs. Less important/rarely changing data once per week. Automatic sync to NAS and online storage. Irregular and manual sync to an external disk.
7 daily backups, 4 weekly backups, "infinite" monthly backups retained (until I clean them up by hand).
rsync from ZFS to an off-site Unraid every 24 hours, 5 days a week. On the sixth day it does a checksum-based rsync, which obviously means more stress, so that only happens once a week. The seventh day is reserved for ZFS scrubbing, every two weeks.
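In rsync terms, the only real difference between the nightly runs and the weekly one is the -c flag, which forces checksumming instead of trusting size and mtime (paths illustrative):

```
# Nightly: quick delta scan based on size and mtime
rsync -aH --delete /tank/data/ backup-host:/mnt/user/backup/

# Weekly: checksum every file on both sides (much heavier on the disks)
rsync -aHc --delete /tank/data/ backup-host:/mnt/user/backup/
```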
Backup of all my Proxmox LXCs/VMs to a Proxmox Backup Server every night, plus a sync of these backups to another PBS in another town.
A second Proxmox backup to my NAS every day at noon.
(I know, the 3-2-1 rule isn't met...)
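The off-site part is just a PBS sync job pulling from the primary; something like this on the remote PBS (remote, datastore, and job names are placeholders):

```
# Pull the primary's backups over to the off-site datastore every night
proxmox-backup-manager sync-job create nightly-offsite \
    --store offsite-store \
    --remote primary-pbs \
    --remote-store main-store \
    --schedule daily
```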
Most backup software allows you to configure backup retention. I think I went with the pretty standard once per day for a week. After that they get deleted, and it keeps just one per week of the older ones, for one or two months. After that it's down to monthly snapshots. I think that aligns well with what I need. Sometimes I find out something broke the day before yesterday, but I don't think I ever needed a backup from exactly the 12th of December or something like that. So I'm fine if they get more sparse over time. And I don't need full backups more than necessary; an incremental backup will do unless there's some technical reason to do full ones.
But it entirely depends on the use case. Maybe for a server or stuff you work on, you don't want to lose more than a day. Meanwhile it can be perfectly alright to back up a laptop once a week, especially if you save your documents in the cloud anyway. Or you're busy during the week and just mess with your server configuration on weekends; in that case you might be alright with taking a snapshot on Fridays. Idk.
(And there are incremental backups, full backups, filesystem snapshots. On a desktop you could just use something like Time Machine... You can do different filesystems at different intervals...)
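As a concrete example, with restic that kind of tiered retention is a single forget command (counts roughly matching what I described; the repo is assumed to be set via RESTIC_REPOSITORY):

```
# Dailies for a week, weeklies for ~2 months, monthlies for a year
restic forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
```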
Unraid appdata gets backed up weekly by a Community Applications plugin (CA Appdata Backup), and I use rclone to back that up to an old Box account (100GB for life..). I did have it encrypted, but it seems I need to fix that..
Parity drive on my Unraid (8TB)
I am trying to understand how to use rclone to back up my photos to Proton Drive, so that's next.
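From what I've read it should just be a protondrive remote plus a sync; untested sketch (remote and folder names made up):

```
# One-time: add a Proton Drive remote interactively
rclone config    # pick the "protondrive" backend, name it e.g. "proton"

# Then push the photo share up to it
rclone sync /mnt/user/photos proton:photos-backup --progress
```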
Music and media are not too important yet, but I would love some insight.
I classify the data according to its importance (gold, silver, bronze, ephemeral). The frequency of the zfs snapshots (15 minutes to several hours) and their retention time (days to years) on the server depend on this. I then send the more important data that I cannot restore, or can only restore with great effort (gold and silver), to another server once a day. For bronze, the zfs snapshots and a few days of retention on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or that is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.
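Schematically, the tiers just differ in snapshot cadence and whether a daily incremental send happens; something like this (dataset and snapshot names are invented):

```
# Gold: frequent local snapshots (every 15 min via cron) + daily send off-box
zfs snapshot tank/gold@$(date +auto-%F-%H%M)
zfs send -i tank/gold@auto-prev tank/gold@auto-latest \
    | ssh backup-host zfs receive backup/gold   # auto-prev/auto-latest are placeholders

# Bronze: hourly local snapshots only, pruned after a few days, never sent
zfs snapshot tank/bronze@$(date +auto-%F-%H)
```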
Depends on the application. I run a nightly backup of a few VMs because realistically they don't change much. On the other hand, I have containers that run critical (to me) systems like my photo backup, and those are backed up twice a day.
Every hour. Could do it more frequently if needed.
It depends on how resource intensive the backup process is.
Consider an 800GB Immich instance.
Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the difference to the backup. Any backup system that operates on top of the file system will take this long. In addition, unless you're using something that can take snapshots of the filesystem, you have to stop Immich during the backup process to prevent backing up an invalid app state.
Using ZFS send (with syncoid), on the other hand, takes less than 5 seconds to discover the differences, and the rest of the time is spent on the data transfer, at 100MB/s in my case. Since ZFS send is based on snapshots, I don't have to stop the service either.
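For reference, syncoid wraps the whole snapshot-plus-incremental-send dance into a one-liner you can drop in cron (host and dataset names here are illustrative):

```
# Hourly: snapshot tank/immich and send only the increment to the backup box
syncoid tank/immich backup-user@backup-host:backup/immich
```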
When I used Duplicity for backups, I would back up once a week because the backup process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there's almost no visible impact.
I'm now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I'll move all my machines to ZFS on root.
I continuously back up important files/configurations to my NAS. That's about it.
IMO people who mirror/back up their media are insane... It's such an incredible waste of space. Having a robust media library is nice, but there's no reason you can't just start over if you have data corruption or something. I have TBs and TBs of media that I could redownload in a weekend if something happened (if I even wanted to). No reason to waste backup space, IMO.
It becomes a whole different thing when you yourself are a creator of any kind. Sure, you can re-torrent TBs of movies, but you can't retake that video from 3 years ago.
I have about 2 TB of photos I took. I classify that as media.
Maybe for common stuff, but some don't want 720p YTS or YIFY releases.
There are also some releases that don't follow TVDB aired order (which Sonarr requires), and matching 500 episodes with deviating names manually isn't exactly what I call 'fun time'.
And there are also rare releases that just aren't seeded anymore in that specific quality, or present on Usenet.
So yes: backing up some media files may be important.
Data hoarding random bullshit will never make sense to me. You're literally paying to keep media you didn't pay for, because you need the 4K version of Guardians of the Galaxy 3 even though it was a shit movie...
Grab the YIFY; if it's good, then get the 2160p version... No reason to data hoard like that. It's frankly just stupid considering you're paying to store this media.
I honestly don't have too much to back up, so I run one full backup job every Sunday for the different directories I care about. The jobs run a check on each directory and only back up changed or new files. I don't have the space to back up everything, so I only take the smaller, most important stuff. The backup software also allows live monitoring if I enable it, so I have that turned on for some of my jobs since I didn't see any reason not to. To save money, I reuse the NAS drives that reported errors and were replaced with new ones. So far, so good.
The backup software is Bvckup 2; Reddit was a huge fan of it years ago, so I gave it a try. It was super cheap for a lifetime license at the time, and it's super lightweight. Sorry, there is no Linux version.
Using Kopia, backups are made multiple times per day to Google Drive. Only changes are transferred.
Configurations are backed up once per week and manually, and stored for 4 weeks. Website and Nextcloud data is backed up every hour and stored for a year (although I've only been doing this for 7 months now).
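In Kopia that retention is a per-path policy, with the actual snapshots triggered on a schedule; a rough sketch (paths and counts are illustrative):

```
# Configs: weekly snapshots, keep 4 of them
kopia policy set /etc/myconfigs --keep-weekly 4

# Websites + Nextcloud data: hourly snapshots, kept for a year
kopia policy set /srv/nextcloud --keep-hourly 8760

# The snapshots themselves run hourly from cron (or kopia's own scheduling)
kopia snapshot create /srv/nextcloud
```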
I tried Kopia but it was unstable and janky, so now it's whenever I remember to manually run a bunch of rsync jobs. I back up my desktop to cold storage on the first of the month, so I should get in the habit of backing up my server to the NAS then too.
Daily backups.
Currently using restic on my NixOS servers. To avoid data corruption, I take a ZFS snapshot at 2am, and after that restic backs up my mutable data dirs both to my local NAS and to Cloudflare R2.
The NAS backup folder is synced to Backblaze nightly as well, as a colder store.
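The moving parts look roughly like this (dataset, bucket, and host names are placeholders; R2 speaks the S3 API, so restic's s3 backend works, with credentials in the usual AWS_* env vars):

```
# 02:00: freeze the mutable data dirs with a ZFS snapshot
zfs snapshot tank/appdata@restic-$(date +%F)

# Back the snapshotted state up to the NAS and to Cloudflare R2
restic -r sftp:nas:/backups/restic \
    backup /tank/appdata/.zfs/snapshot/restic-$(date +%F)
restic -r s3:https://ACCOUNT_ID.r2.cloudflarestorage.com/restic-bucket \
    backup /tank/appdata/.zfs/snapshot/restic-$(date +%F)
```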
@Sunny Backups are done weekly, using restic (with '--read-data-subset=9%' to verify that the backup data is still valid).
But that's in addition to nightly SnapRAID syncs for the larger media, and Syncthing for photos & documents (which means I have copies on 2+ machines).