The first in a series of articles describing an effective self-hosting homelab configuration using Docker, Tailscale, and Dockge.
Recently, I've found myself walking several friends through what is essentially the same basic setup:
Install Ubuntu server
Install Docker
Configure Tailscale
Configure Dockge
Set up automatic updates on Ubuntu/Apt and Dockge/Docker
Self-host a few web apps, some publicly available, some on the Tailnet.
After realizing that this setup is generally pretty good for relative newcomers to self-hosting and is pretty stable (in the sense that it runs for a while and remains up-to-date without much human interference), I decided to write a few blog posts about how it works so that other people can set it up for themselves.
The introduction covers what's expected of the reader and the initial server setup, and towards the end of the intro I go over the unattended-upgrades configuration.
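As a preview of that last step, here's roughly what the unattended-upgrades setup boils down to (a sketch only; the article's exact configuration may differ):

```sh
# Install and enable unattended-upgrades on Ubuntu.
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# The reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades, which
# only needs these two lines to enable daily runs:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```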
I haven't tried PhotoPrism in a while, but when I did, it wasn't even close.
PhotoPrism seems better suited to a photographer indexing their professional work, whereas Immich aims to be a Google Photos/iCloud alternative.
Immich has native mobile apps that handle syncing and provide a (great) search interface, much better multi-user support (including shared albums), and many more features than I'm willing to type out here.
The only thing missing, for me at least, is better support for local files to eliminate the need for another gallery app/file picker.
Yeah a little xD but FWIW this article series is based on what I personally run (and have set up for several friends), and it's been doing pretty well for at least a year.
But I have backups, which I can use to recover from breaking updates.
This is very cool, but also very dangerous. Many projects release versions that need some sort of manual intervention to be updated, and automatically updating to new versions on docker can lead to data loss in those situations.
It is my humble opinion that teaching newbies to do automatic updates will cause them to lose data and break things, which will probably sour them on ever self-hosting again.
Automatic OS updates are fine, and docker update notifications are fine, but automatic docker updates are just too dangerous.
That's reasonable; however, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I'm planning a "backups with Syncthing" article as well; I'll take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
Been in it since the web was a thing. I agree wholeheartedly. If people don't run auto updates (and newbies will not run manual updates), you're just teaching them how to create vulnerabilities.
Let them learn how to fix an automatic update failure rather than how to recover from ransomware. No contest here.
My experience after 35 years in IT: I've had 10x more outages caused by automatic updates than everything else combined.
Also after 35 years of running my own stuff at home, and practically never updating anything, I've never had an outage caused by a lack of updates.
Let's not act like auto updates are without risk. Just look at how often Microsoft has to roll out a fix for something an update broke. Inexperienced users are going to be clueless when an update breaks something.
We should be teaching new people how to manage systems, this includes proper update checks on a cycle, with appropriate validation that everything works afterwards, and the ability to roll back if there's an issue.
This isn't an Enterprise where you simply can't manually manage updates across hundreds or thousands of servers, and tens of thousands of workstations - this is a single admin, small environment.
I do monthly update checks, update where I feel it's warranted, and verify systems afterwards.
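For what it's worth, here's a hedged sketch of what that kind of monthly cycle can look like for Compose-managed stacks with pinned image tags (the commands are standard Docker Compose; the workflow itself is just one way to do it):

```sh
# Monthly pass over a Compose-managed stack with pinned image tags.
cd /path/to/stack          # wherever the compose file lives
docker compose pull        # fetch newer images for any tags you've bumped
docker compose up -d       # recreate only the containers whose images changed
docker compose ps          # verify everything came back up healthy

# Rolling back is usually just reverting the tag in the compose file and
# running `docker compose up -d` again, unless the update already ran a
# one-way data migration, which is exactly why backups matter.
```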
It'll still cause downtime, and they'll probably have a hard time restoring from backup the first few times it happens, if for no other reason than stress, especially when it updates at the wrong moment or on the wrong day.
"they will leave vulnerable, un-updated containers exposed to the web"
That's the point. Services shouldn't be exposed to the web unless the person really knows what they are doing, has taken precautions, and applies updates soon after release.
Exposing them to the VPN and to the LAN should be plenty for most. There's still a risk, but it's much lower.
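For example, a sketch of keeping a service off the public internet (not from the article; the port and image are placeholders, and `tailscale serve` flags vary between versions, so check `tailscale serve --help`):

```sh
# Publish the container only on loopback so it isn't reachable from
# outside the host, then proxy it over the tailnet.
docker run -d -p 127.0.0.1:8080:80 nginx
sudo tailscale serve --bg http://127.0.0.1:8080
```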
"backups with Syncthing"
Consider warning the reader that it won't be obvious if backups have stopped, or if a sync folder on the backup PC is in an inconsistent state as a result, since errors are only shown in the web interface or through third-party tools.
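One possible mitigation (just a sketch, not from the article): Syncthing exposes its error list over a REST API, so a small cron script can surface problems outside the web UI. The API key comes from the GUI settings; the host, port, and alerting step here are assumptions.

```sh
#!/bin/sh
# Poll Syncthing's REST API for errors so they surface outside the web UI.
API_KEY="your-syncthing-api-key"
ERRORS=$(curl -s -H "X-API-Key: $API_KEY" http://localhost:8384/rest/system/error)
if [ "$(printf '%s' "$ERRORS" | jq '.errors | length')" -gt 0 ]; then
    printf '%s\n' "Syncthing reported errors:"
    printf '%s\n' "$ERRORS" | jq '.errors'
    # hook your own alert in here (mail, ntfy, etc.)
fi
```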
This absolutely can happen to stable projects. This has happened with Mastodon many times, and Mastodon has been stable for years.
It also has happened with Nextcloud many times, and again, Nextcloud has been stable for years.
It’s not a stability thing, it’s an automation thing. We as devs can only automate so much. At a certain point, it becomes up to you, as the administrator, to manually change things: infrastructure changes, database migrations, anything where the potential downtime of automating it is something we have to consider.
I use diun for update notifications. I wish there were something that could send me a notification and, if I gave it an okay or whatever, apply the update. Maybe with release notes for the latest version so I could quickly judge if I need to do anything besides update.
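For anyone curious, a minimal sketch of running Diun in notification-only mode (not from the article; the schedule and watch-by-default settings are my assumptions, and you still need to configure a notifier per the Diun docs for anything to actually reach you):

```sh
# Run Diun as a container, watching everything on the local Docker host
# and checking registries once a day.
docker run -d --name diun \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e TZ=Etc/UTC \
  -e DIUN_WATCH_SCHEDULE="0 6 * * *" \
  -e DIUN_PROVIDERS_DOCKER=true \
  -e DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true \
  crazymax/diun:latest
# A notifier (the DIUN_NOTIF_* settings: mail, ntfy, webhook, ...) still
# has to be configured for the notifications to go anywhere.
```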
Just call me Mr. BuzzKill. LOL I learned that there is a fork at https://watchtower.devcdn.net/. Deployed it yesterday, and for the first round of updates, everything went as it should. No runs, no drips, no errors. Time will tell.
In case it’s of help, a common problem I find with guides in general is that they assume I don’t already use Apache (or some other service) and describe everything as though I’m starting with a clean system. As a newbie, it’s hard to know what damage the instructions will do to existing services, or how to adapt the instructions.
Since docker came along it’s gotten easier, and I’ve learned enough about ports etc to be able to avoid collisions. But it would be great if guides and tutorials in general covered that situation.
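For example, a quick way to handle that situation (generic commands, not from any particular guide; the ports and image are just placeholders):

```sh
# See what's already listening before picking a host port...
sudo ss -tlnp | grep ':80 '
# ...then publish the container on a free host port instead of the default.
docker run -d -p 8081:80 nginx
```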
Something really fun I found out recently, when my friend lost all access to his system except for a single WebDAV share by accidentally turning off all his remote admin access:
If you write “b” to /proc/sysrq-trigger, it will immediately reboot the system (like holding down the reset button, so inherently a bit dangerous).
He was running Nephele with / mounted as the share, so luckily he just uploaded that file with a single “b” in it, and all his remote admin stuff came back up after the reboot.
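For reference, the equivalent from a local root shell (be careful: "b" reboots immediately without syncing or unmounting anything, and writing to the trigger file is always allowed for root regardless of the kernel.sysrq sysctl, which only gates the keyboard shortcut):

```sh
# Immediate reboot via the SysRq trigger. A gentler sequence writes
# "s" (sync) and "u" (remount read-only) before "b".
echo b | sudo tee /proc/sysrq-trigger
```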