I've been running my server without a firewall for quite some time now; I have a Piped instance and Snikket running on it. I've been meaning to set up UFW but I've been too lazy to do so. Is it something I really need, or is going without it a huge security vulnerability? I can only SSH into my server from my local network, and I have to use a VPN if I want to SSH in from outside, so I'd say my server's pretty secure, but not the furthest I could take it. Opinions please?
IMHO, security measures are necessary. I have a tendency to go a bit heavy on security because I really hate having to mop up after a breach, so the more layers I have, the better I feel. Most of the breaches I've experienced were not some dude in a smoky, dimly lit room, wearing a hoodie and clacking away at a keyboard while confidently announcing 'I'm in!' or 'Enhance!'. Most are bots, by the thousands. The bots are pretty sophisticated nowadays: they scan for vulnerabilities, probe attack surfaces, and so on. They have an affinity for xmrig too, though those are easy to spot when your server pegs all its resources.
So the couple of days' investment in implementing a good, layered security defense, plus the time it takes to monitor those defenses, is worth it to me and lets me sleep better. To each their own. Not only are breaches a pain in the ass, they have serious ramifications and can have legal consequences, such as a case where your server becomes a hapless zombie and is orchestrated to attack other servers. So even on the self-hosted side of things, security measures are required, I would think.
It takes about 5 minutes to set up UFW which would be the absolute minimum, I would think.
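For reference, a minimal baseline that default-denies inbound traffic and keeps SSH reachable looks something like this (assuming SSH is on the default port 22; adjust if yours differs):

```shell
# Deny everything inbound by default, allow all outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Keep SSH open BEFORE enabling, so you don't lock yourself out
sudo ufw allow ssh          # equivalent to: sudo ufw allow 22/tcp

# Turn the firewall on and confirm the ruleset
sudo ufw enable
sudo ufw status verbose
```

From there you add one `ufw allow` line per service you actually expose, which is most of the "5 minutes".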
I like to run ufw on all my machines but I'm also a tinfoil-hat wearing wacko who believes that no computer should ever really be trusted. Just trusted enough to do specific tasks.
If someone somehow busts into one of my VLANs, at least the other machines on that net will still have some sort of protection.
You have a firewall. It’s in your router, and it is what makes it so that you have to VPN into the server. Otherwise the server would be accessible. NAT is, effectively, a firewall.
Should you add another layer, perhaps an IPS or deny-listing? Maybe it’s a good idea.
Assuming it's not a 1:1 NAT, it does make for a functional unidirectional firewall. A pure router, in the sense of simply offering a gateway to another subnet, doesn't do much, but the typical home router as most people think of it is doing source NAT for multiple devices to reach out to the internet, and without port forwarding it effectively blocks off traffic from the outside in.
One thing that hasn't been said in this thread is the following:
Do you trust your router? Do you have an ISP that can probe your router remotely and access it? In those cases, you absolutely need a firewall.
Absolutely. Even if your ISP is firewalling, never trust that they will maintain it, and some of the cheapshit routers they use are awful. Use your own router and put it in the ISP router's DMZ.
If your router is set up to only allow in the ports with a service hanging off them, like SSH, then a firewall won't add anything your router doesn't already provide.
On the flip side, if you're running any kind of directly accessible server, like a VPS or a dedicated box, then a firewall is required.
Protecting your server from other things on your local network might also be something you want to do; think IoT stuff getting popped and being used to attack other things on the network.
I only bind applications to ports on the Internet facing network interfaces that need to be reachable from outside, and have all other ports closed because nothing is listening on them.
A firewall in this case would bring me no further protection from external threats, because all those ports have to be open in the firewall too.
But Linux comes with a firewall built in, so I use it even if it is not strictly needed given my strict port management regime for my services.
And a firewall has the added benefit of limiting outgoing network traffic to only allowed ports/applications.
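That "nothing is listening" regime is easy to audit, for example:

```shell
# Show every TCP listener with the owning process;
# an address of 0.0.0.0 or [::] means "all interfaces"
ss -tlnp

# Same audit for UDP listeners
ss -ulnp
```

Anything bound to `127.0.0.1` is local-only; anything on the wildcard address is reachable from every interface and deserves a second look.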
You don't, provided you have an upstream gateway that does the firewalling for you, provided you don't have open WiFi, provided you use a reverse proxy, provided you have sane network settings all around, provided you run Linux (or similar).
Even better if you are behind CGNAT.
Provided you know what you are doing.
On the other hand, setting up a firewall in a safe way is no easy task either.
I use OPNsense on top of my home network, given all the above "provided"s.
Before that, I never ran a firewall and never had an issue. I was always behind CGNAT, though.
In your case: no need for a fw if you can trust your local network.
Generally: services can have bugs - reverse proxy them. Not everybody needs to access the service - limit access with a firewall. Limit brute-force/word-list attempts - MFA / fail2ban.
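For the brute-force point, UFW has a built-in rate limit you can use alongside (or before setting up) fail2ban, assuming SSH on the standard port:

```shell
# "limit" allows the connection but starts denying an IP that has
# attempted 6 or more connections within the last 30 seconds
sudo ufw limit ssh/tcp
```

fail2ban then covers the slower, distributed attempts that a simple per-IP rate limit won't catch.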
My personal advice: lock it down to permit only what needs access, regardless of how much you trust the network.
Treat each device as if it's been compromised and the attacker on it is now trying to move laterally. Example scenario: had you blocked all devices except your laptop and phone from reaching your server, your server couldn't be hacked via a compromised cloud-connected HVAC panel.
I lock down everything and grant access only to devices that should have access. Then on top of that, I enable passwords and 2FA on everything as if it were public... Nothing I self host is public. It's all behind my network firewall and router firewall, and can only be accessed externally by a VPN.
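A sketch of that per-device lockdown with UFW (the IPs and port here are hypothetical; substitute your own devices, ideally given static DHCP leases):

```shell
# Allow a service only from specific trusted devices...
sudo ufw allow from 192.168.1.10 to any port 443 proto tcp   # laptop
sudo ufw allow from 192.168.1.11 to any port 443 proto tcp   # phone

# ...and explicitly reject everyone else on that port.
# UFW evaluates rules in order, so the allows above match first.
sudo ufw deny 443/tcp
```

The compromised-HVAC-panel scenario dies at the last rule: it is on the network, but it never matched an allow.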
I use OpenWRT on my network and each server I have is on its own VLAN. So in my case, my router is the firewall to my servers. But I do have on my todo list to get the local firewalls working as well. As others have said, security is about layers. You want an attacker to have to jump multiple hurdles.
Why did you put each server in its own vlan? You now have a bunch of separate broadcast domains that need a router to move traffic between them. Switching is much faster since it is done in hardware most of the time.
Mainly for security reasons. Both servers have some limited exposure to the internet. Are you saying doing it that way has performance implications? I haven't noticed any problems; it's all as fast as before, when everything was on the same LAN.
That depends. If you have exposed services, you could use some features of the firewall to geoip restrict incoming requests to prevent spam from China and Russia and whatnot.
If you don't have any services running on a publicly accessible port, then what would the firewall protect?
You should, yes. I run a firewall (I usually use ufw) on all of my Internet-connected devices, since all of my devices run Linux. There's not really any good reason not to in 2025.
But is there a good reason to run one on a server? Any port that's not in use won't allow traffic in. Any port that's in use would be added to the firewall exception anyway.
The only reasons I can think of to use a firewall are:
some services aren't intended to be accessible - with containers, this is really easy to prevent
your firewall also does other stuff, like blocking connections based on source IP (e.g. block Russia and China to reduce automated cyber attacks if you don't have users in Russia or China)
Be intentional about everything you run, because each additional service is a potential liability.
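On the container point, publishing a port on loopback only keeps the service reachable by a local reverse proxy while making it invisible to the rest of the network (`nginx` here is just a stand-in image):

```shell
# Without the 127.0.0.1 prefix, -p 8080:80 would bind on all
# interfaces; with it, only local processes can reach the port
docker run -d --name example-web -p 127.0.0.1:8080:80 nginx
```

The reverse proxy then terminates TLS and forwards to `127.0.0.1:8080`, so only the proxy's port is ever exposed.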
Because it's easy to accidentally run services, or to set something up temporarily and forget you left it running. With UPnP able to automatically/dynamically open ports, a firewall is just another layer of protection. You can also configure firewalls to silently drop packets or to log what they drop, and if an application's new version starts listening on new ports, you'd have to manually allow them. Maybe you want one part of an application accessible through the firewall but not another part.
Plus, like you said, country blocking is another feature which personally I think is nice to have, and there are also other features too like being able to throttle connections, especially with things like fail2ban.
It's just another layer of protection, and it ensures that everything you run is deliberate.
Is it directly exposed over the Internet? If you only port forward the VPN on your router, I wouldn't worry about it unless you're worried about someone else already on your LAN.
And even then, it's really more like an extra layer of security against accidentally running something exposed publicly that you didn't intend to, or maybe you want some services to only be accessible via a particular private interface. You don't need a firewall if you have nothing to filter in the first place.
A machine without a firewall that doesn't have any open ports behaves practically the same from a security standpoint: nothing's going to happen. The only difference is the ports showing as closed vs. filtered in nmap: without a firewall the server answers probes with a rejection (a TCP RST), while a firewall can silently drop them and send no response at all, but that's it.
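You can see that distinction with a quick scan (the hostname is a placeholder for your own server):

```shell
# In the output's STATE column:
#   open     - something is listening and answered
#   closed   - the host replied with a TCP RST (no firewall in the way)
#   filtered - no reply at all (a firewall silently dropped the probe)
nmap -p 22,80,443 server.example.com
```

Either way, a closed or filtered port lets nothing in; the firewall mainly changes what a scanner learns about you.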
You do not even need a port-based firewall when the server is open on the internet.
When you configure your software so that no unnecessary ports are open on the internet-facing interface, a port-based firewall provides zero additional security.
A port-based firewall has the benefit that you can lock everything down to the few ports you actually need, and do not have to worry about misconfigured software.
For example, something like Docker circumvents UFW anyway, and I know people who had open ports even though they had UFW running.
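The reason Docker bypasses UFW is that it installs its own iptables rules, which are consulted before UFW's. The supported place to filter published container ports is the DOCKER-USER chain (the interface name and subnet below are assumptions for the example):

```shell
# Drop traffic to published container ports arriving on the WAN-facing
# interface unless it comes from the local subnet. Docker guarantees
# DOCKER-USER is evaluated before its own forwarding rules.
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```

The other fix is simpler: publish container ports on `127.0.0.1` only, so Docker never opens them to the network in the first place.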
I just went down this road and I'd say it is worth it, if only for the learning.
I've set up per-application counters in nftables, and a Python script sends them as SVG graphs to my Glance dashboard.
The result is I can monitor my whole network per application, and the best part is it all adds up very well, so I know there is no 'unknown' outgoing or incoming traffic on my machine.
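A rough sketch of that counter setup (the table, chain, and counter names are purely illustrative, and the port set stands in for one "application"):

```shell
# A dedicated table with a named counter and an output-hook chain
sudo nft add table inet monitor
sudo nft add counter inet monitor web_out
sudo nft 'add chain inet monitor out { type filter hook output priority 0 ; policy accept ; }'

# Count (without blocking) all outbound HTTP/HTTPS traffic
sudo nft 'add rule inet monitor out tcp dport { 80, 443 } counter name web_out'

# Dump counters as JSON, ready for a script to turn into graphs
sudo nft -j list counters
```

One named counter per application's port set, and the JSON output is what a Python script can poll and render for a dashboard.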