As many others have said, not allowing inbound WAN connections into my LAN is an important step. I also run k3s on my server with Calico as the CNI and make heavy use of network policies to keep anything I’m running from misbehaving. That, along with easy ingress, makes k3s worth it for me over Docker Compose. I use OpenWRT on my router and force certain devices to run through a VPN and block other devices from the internet entirely.
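For anyone curious what that looks like, here's a minimal sketch of a default-deny policy using the standard Kubernetes NetworkPolicy API (which Calico enforces; the `media` namespace is just an example, and the commenter may well be using Calico's own CRDs instead). You'd then add narrow allow rules per workload on top of it:

```
# Deny all ingress and egress for every pod in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: media
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```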
I assume #2 is just to keep containers/stacks able to talk to each other without piercing the firewall for ports that aren't to be exposed to the outside? It wouldn't prevent anything if one of the containers on that host were compromised, afaik.
One thing I do is instead of having an open SSH port, I have an OpenVPN server that I’ll connect to, then SSH to the host from within the network. Then, if someone hacks into the network, they still won’t have SSH access.
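A rough sketch of the firewall side of that, assuming ufw and OpenVPN's common defaults (1194/udp for the endpoint, 10.8.0.0/24 for the tunnel subnet; adjust to your setup):

```
# Only the VPN endpoint is reachable from outside; SSH is only reachable over the tunnel
ufw default deny incoming
ufw allow 1194/udp
ufw allow from 10.8.0.0/24 to any port 22 proto tcp
```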
Enable a firewall and block all ports you're not using (most firewalls do this by default; see the sketch a few lines down)
Switch to an LTS kernel (not security related, but it keeps things running smoothly... Technically it might be safer since it gets updated less often, so it's a bit more battle-tested? I've never investigated whether an LTS kernel is actually safer than a standard one)
Use Caddy to proxy to services instead of exposing them directly
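For example, a minimal Caddyfile that terminates TLS and proxies a single service (the hostname and backend port are placeholders):

```
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}
```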
Dropping packets instead of rejecting them might technically be better because it wastes a bit more bot time, and scanners see "it doesn't exist" rather than an obstacle to try exploits on. Not sure if that is true though.
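Putting the firewall and drop-vs-reject points together, here's a minimal nftables sketch of a default-deny input chain, assuming 80/443 are the only things you actually serve (swap in your own ports):

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;  # silently drop anything not matched below
    ct state established,related accept
    iifname "lo" accept
    tcp dport { 80, 443 } accept                     # only the ports you actually serve
    # swapping the drop policy for explicit "reject" rules would answer scanners
    # with an error instead of silence
  }
}
```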
For me:
ssh server with key auth only (a sketch of the sshd_config side is after this list)
absolutely no ssh forwarding; only available to the local network via firewall rules
docker socket proxy for everything that needs socket access (also sketched after this list)
drop unused ports, limit allowed IPs for local-only services (e.g. paperless)
crowdsec on traefik for the rest (sadly it also blocks my VPN IPs)
Authelia in front of everything that doesn't break the native apps (jellyfin and home assistant are the two it breaks so far; HA was very intermittent, so I made a separate authelia rule and a mobile DNS entry with slightly relaxed rules)
proper umask rules on all docker directories (or as much as possible)
main drive FDE, with a separate boot drive and the FDE keyfile on a dongle that is removed except for updates and booting, to make snatch-and-grabs useless and compromising the bootloader impractical
full disk encryption with password-protected data drives, so even if a smash-and-grab happens when I've left the dongle in, the sensitive data is still encrypted and the keys aren't in memory (this requires a startup script with a password, so no automated startups for me)
And no real need for Tailscale or Cloudflare. If you don't like depending on a third-party service, either port forward with DDNS, or use an external VPS + WireGuard if you're behind CGNAT.
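Sketch of the SSH side of the list above (sshd_config), reading "no ssh forwarding" as TCP forwarding and assuming the firewall handles the LAN-only restriction; the ListenAddress is just an example:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
AllowTcpForwarding no
ListenAddress 192.168.1.10   # bind to the LAN interface only (example address)
```

And the socket proxy, assuming the commonly used tecnativa/docker-socket-proxy image (the network name and which API permissions you allow are up to you):

```
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1        # allow read-only access to container info
      POST: 0              # deny anything that mutates state
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket-proxy
networks:
  socket-proxy:
    internal: true         # consumers reach it at tcp://socket-proxy:2375, never the real socket
```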
Fail2ban config can get fairly involved in my experience. I'm probably not doing it the right way, as I wrote a bunch of web server ban rules: anyone trying to access wp-admin gets banned, for instance (I don't use WordPress, and if I did, it wouldn't be accessible from my public-facing reverse proxy).
I just skimmed my nginx logs and looked for anything funky and put that in a ban rule, basically.
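In case it helps anyone, here's roughly what that looks like as a custom fail2ban filter plus jail for nginx access logs. The filter name, paths, and the probes matched are examples, not a recommendation:

```
# /etc/fail2ban/filter.d/nginx-probes.conf
[Definition]
failregex = ^<HOST> .* "(GET|POST) /wp-login\.php
            ^<HOST> .* "(GET|POST) /wp-admin

# /etc/fail2ban/jail.d/nginx-probes.local
[nginx-probes]
enabled  = true
filter   = nginx-probes
port     = http,https
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400
```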
I have automatic updates enabled and once in a while I scan with Nessus. Also I have backups. Stuff dying or me breaking it is a much greater risk than getting hacked.
I agree - I don’t expose anything to the internet other than the WireGuard endpoint.
I’m only hosting services that my immediate family need to access, so I just set up WireGuard on their devices, and only expose the services on the LAN.
I used to expose services to the internet, until one of my #saltstack clients was exploited through a very recent vulnerability I hadn’t yet patched (it had only been a week or so since it was announced). I was fortunate that the exploit failed due to the server running FreeBSD: the crontab entry to download the next malicious payload failed because wget wasn’t available on the server.
That’s when I realised: minimise the attack surface. If you’re not hosting services for anyone in the world to access, don’t expose them for everyone in the world to exploit.
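For the WireGuard-on-their-devices setup, a minimal server-side wg0.conf sketch (keys, addresses, and the subnet are placeholders):

```
[Interface]
Address    = 10.66.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one entry per family member's device
PublicKey  = <device-public-key>
AllowedIPs = 10.66.0.2/32
```

On each device, AllowedIPs would list the home LAN subnet, so only that traffic goes over the tunnel and the services never need to be exposed publicly.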
TBF, if you want, you can have a bastion server that is whitelisted by IP solely to stream your content from your local server. It's obviously a pivot point for attackers, but it takes a level of effort that 99% of them would ignore unless they really wanted to target you. And if you're that high-value a target, you probably shouldn't be opening any ports on your network, which brings us back to your original solution.
I, too, don't expose things to the public, because I can't afford the safer/more obfuscated solutions. But I do think there are reasonable measures that can be taken to expose your content to a wider audience if you wanted to.
My new strategy is to block EVERY port except WireGuard. This doesn't work for things you want to host publicly ofc, like a website, but for most self-hosted stuff I don't see anything better than that.
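With ufw that's about as short as it gets, assuming WireGuard's common default port of 51820/udp:

```
ufw default deny incoming
ufw default allow outgoing
ufw allow 51820/udp
ufw enable
```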
Default block for incoming traffic is always a good starting point.
I'm personally using crowdsec with good results, but I still need to add more to it, as I keep seeing failed attacks that should be blocked much sooner.
I expose some stuff through IPv6 only with my Synology NAS (I am CGNATed) and I have always wondered if I still need to use fail2ban in that environment...
My Synology has an auto block feature that from my understanding is essentially fail2ban; what I don't know is whether such a feature works for all my exposed services or only Synology's own.
I'd be surprised if it works for custom services. Fail2ban has to know what's running and has to have access to its log file to know what a failed authentication request looks like. The best you can do without log access is to rate-limit new TCP connections. But you should still know what service is behind the port, because 5 new SSH sessions per minute per IP can be reasonable, while 5 new HTTP/1.0 connections likely can't even load a single HTML page.
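For the rate-limiting approach, a classic sketch with iptables' recent module, using SSH and 5 new connections per minute as example values (assumes port 22 is otherwise accepted later in the chain):

```
# Drop a source that has already opened 5 new connections to port 22 in the last 60 seconds
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
         -m recent --name sshlimit --update --seconds 60 --hitcount 5 -j DROP
# Otherwise record the attempt and let the rest of the ruleset decide
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name sshlimit --set
```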