I'm having trouble staying on top of updates for my self-hosted applications and infrastructure. Not everything has auto-updates baked in, and some things you may not want to auto-update. How do y'all handle this? How do you keep track of vulnerabilities? Are there, e.g., feeds for specific applications I can subscribe to via RSS or email?
Yeah, hot take, but basically there's no point in me keeping track of all that stuff, excessively worrying about the dangers of modernity, and sacrificing what spare time I have on watching an update counter go brrrr, of all things, when there are entire teams and agencies in charge of it.
I just run unattended-upgrades (on Debian), pin container image tags to only the major version number where available, rebuild the containers twice a week, and go enjoy the data and media I built the containers and installed the software for in the first place.
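If anyone wants to copy that, the whole setup is a couple of commands plus a cron line; a minimal sketch (the compose project path and schedule are made up, adjust to taste):

```bash
# One-time: enable unattended-upgrades on Debian
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Compose files pin the major tag (e.g. postgres:16, not postgres:latest);
# then a crontab entry rebuilds the stacks twice a week (Mon and Thu, 4am):
0 4 * * 1,4  cd /opt/stacks/myapp && docker compose pull && docker compose up -d
```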
Regarding things like Docker images and Flatpaks, I mostly "solve" it by only running official images, or at least images from the same dev as the program, where possible.
But also IMO there's little to no reason to fear when using things like Flatpaks. Most exploits one hears of nowadays are of the kind "your attacker needs to get a shell into your machine in the first place", or in some cases even "your attacker needs to connect to an instance of a specific program you are running, with a specific config", so if you apply any decent opsec that's already a very high barrier to entry.
And speaking of Debian, that does bring to mind the one beef I have with their packaging system: when you install a package, it starts the related services by default, without even giving you time to configure them.
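For what it's worth, you can suppress that: Debian's maintainer scripts consult /usr/sbin/policy-rc.d before starting anything, and exit code 101 means "deny". A rough sketch:

```bash
# Tell maintainer scripts not to start services during install
printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
sudo chmod +x /usr/sbin/policy-rc.d

# ...install and configure the package, then re-enable service starts
sudo rm /usr/sbin/policy-rc.d
```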
95% of things I just don't expose to the net, so I don't worry about them.
Most of what I do expose doesn't really have access to any sensitive info; at most an attacker could delete some replaceable media. Big whoop.
The only thing I expose that has the potential for massive damage is OpenVPN, and there's enough of a community and money invested in that protocol/project that I trust issues will be found and fixed promptly.
Overall I have very little available to attack, and a pretty low public presence. I don't really host any services for public use, so there's very little reason to even find my domain/IP, let alone attack it.
How do I do it? Everything's installed and updated via pacman/the AUR, including Python packages and Nextcloud apps. The only things I don't install that way are Firefox add-ons.
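In practice that whole flow is one command (paru here is just an example AUR helper; yay and friends work the same way):

```bash
# Full system upgrade: official repos plus AUR packages in one go
paru -Syu
```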
Unless you have actual tooling (e.g. Red Hat errata plus some service on top of that), just don't even try.
Stop downloading random shit from Docker Hub and GitHub. Pick a distro that has whatever you need packaged, install from the repositories, and turn on automatic updates. If you need stuff outside the repos, use first-party packages and turn on auto-updates. If there aren't any decent packages, just don't do it. There is a reason people pay Red Hat a shitton of money, and that's because they deal with much of this bullshit for you.
At home, I simply won't install anything unless I can enable automatic updates. NixOS solves much of it. Twice a year I need to bump the distro version, bump the Nextcloud release, and deal with deprecations, and that's it.
I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…
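For reference, the twice-yearly NixOS bump is roughly this (the channel version below is just an example):

```bash
# Point at the new release channel and rebuild
sudo nix-channel --add https://nixos.org/channels/nixos-24.05 nixos
sudo nixos-rebuild switch --upgrade

# The automatic part is built in; in configuration.nix:
#   system.autoUpgrade.enable = true;
#   system.autoUpgrade.allowReboot = true;  # handles the periodic reboots too
```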
I just update every month or two, or whenever I remember. I use Docker/Podman, pin the version to whatever minor release I'm on, and manually bump it after checking the release notes for manual upgrade steps.
It usually takes 5 minutes, and that's doing them one at a time.
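Concretely it's just editing the pinned tag after reading the notes and re-upping (image name and versions are placeholders):

```bash
# Compose file pins the minor release, e.g. image: ghcr.io/example/app:1.4
# After checking the 1.5 release notes for manual steps, bump and redeploy:
sed -i 's|example/app:1.4|example/app:1.5|' docker-compose.yml
docker compose pull && docker compose up -d
```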
I've just started to delve into Wazuh… but I'm super new to vulnerability management at the home-lab level. I don't do it for work, so 🤷🏼‍♂️
Anyways, the best suggestion is to keep all your containers, VMs, and hosts as up to date as you can, to remediate vulnerabilities that others have discovered.
Otherwise, Wazuh is a good place to start, but there’s a learning curve for sure.
That's a lot of FUD. topgrade just runs upgrades through all the package managers you already have; it doesn't do the upgrades itself, bypassing the manager that installed a package or the package authors.
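You can see that for yourself with its dry-run mode (flag from memory, check topgrade --help):

```bash
# Show which underlying package managers (apt, pacman, flatpak, cargo, ...)
# topgrade would invoke, without actually running anything
topgrade --dry-run
```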
I have automatic updates on everything, but if I actually spent time managing updates and vulnerabilities I'd have no time to do anything else in my life.
Third-party software: subscribe to the releases RSS feed (in tt-rss or rss2email), read the release notes, bump the version number in my Ansible playbook, run the playbook, done.
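As an example of the feed part: every GitHub repo exposes a releases Atom feed, so with rss2email it's roughly this (exact r2e syntax varies a bit by version; OWNER/REPO is a placeholder):

```bash
# Subscribe to a repo's release feed and mail new entries
r2e add myapp https://github.com/OWNER/REPO/releases.atom
r2e run
```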
I have stuff in newreleases.io and also GitHub release RSS feeds in Nextcloud. I then sit down once a week and see what needs an update. Reboot when required.
Keep your software/containers up to date.
You can watch a GitHub repo and configure it to notify you about new releases and security alerts.
As a complement, you can use RSS feeds, newreleases.io, and/or WUD (What's up Docker) with labels added to your Docker Compose files.
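For the WUD route, the idea is labels on the container telling it which tags count as an update; a sketch from memory of its label names (double-check against the WUD docs; the image is a placeholder):

```bash
# The same labels go under "labels:" in a compose file
docker run -d \
  --label 'wud.watch=true' \
  --label 'wud.tag.include=^\d+\.\d+\.\d+$' \
  ghcr.io/example/app:1.4.2
```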
Personally, I check the notifications once a week and bump the version for all the minor tools I'm using. If there is a major release (or a new Immich version), I read the changelog and the update instructions (if there are any).
For container security scans you can use Trivy, but the problem is that you don't get a centralized overview of your scan results. For that you can use DefectDojo.
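The Trivy part is a one-liner (image name is a placeholder); feeding the results into DefectDojo is then a matter of importing the report:

```bash
# Scan an image for known CVEs, only reporting the serious ones
trivy image --severity HIGH,CRITICAL ghcr.io/example/app:1.4.2
```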
Depending on the case/threat model, vulnerability management for self-hosted things might be overkill, but it's highly recommended if you want to learn more about this.
It's worth mentioning TruffleHog as a secrets scanner, and sops as a solution for encrypting sensitive data so you can push it to git/SCM.
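Roughly how those two fit into a repo (TruffleHog v3 syntax; the age key is a placeholder):

```bash
# Scan the repo's history for leaked credentials
trufflehog git file://.

# Encrypt secrets (here with an age recipient) before committing them
sops --encrypt --age age1examplepublickey secrets.yaml > secrets.enc.yaml
```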
What happens if they compromise your device in secret and use it as part of a botnet? Lots of state-backed attacks rely on traffic relays provided by compromised devices.
My NAS is behind a firewall and doesn't normally run the types of things you would compromise (no web browser). They'd need to break many things at the same time to compromise it. I'm not saying it would be impossible to compromise my NAS, but it is very unlikely just because of how difficult it is. If I'm the target of a state-level attack, I'm sunk anyway.
Though offline backups are always a good idea. However, by definition they need several days to restore (if they took less than that, they'd be too easy for an attacker to target).
For the most critical infrastructure, like my mail server, I subscribe to the release and blog RSS feeds. My OSes send me update notifications via mail (apticron); those I handle manually. Everything else auto-updates daily.
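The apticron bit is just a package plus one config line (config path from memory; swap in your own address):

```bash
sudo apt install apticron
# Mail pending-upgrade notifications to your inbox
echo 'EMAIL="you@example.com"' | sudo tee -a /etc/apticron/apticron.conf
```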
You still need to check whether the software you use is still maintained and receives security updates. That's mostly done by choosing popular, community-driven options, since those are less likely to be abandoned.