I don't think I can agree with that, and I'm a pretty agreeable chap.
In the days when people actually cared about HTML layout and readability, FP spammed everything hugely and inserted a lot of terrible cruft. Inventing zillions of new <style> tags for everything, even when the user just wanted to italicise a word. Use an <i> tag? No! We'll invent a whole new style class and embed it in the headers.
A few years ago I rather stupidly agreed to take over hosting of a website for someone who was dying. It had been written with FP and it took me months to de-cruft it using a lot of regexp and scripting. (Some 8,000 images and around 2,000 .html files.)
For a server OS, consider things like stability and ease of upgrading between major versions.
Debian does both of those things extremely well.
If you're playing around with changing distros and your data is valuable, I'd try and find somewhere to back it up to, myself.
It's only true if it's enforced, isn't it?
OK - and what sort of CPU load do they have?
htop will also show the CPU bars and the breakdown of that - whether it's pure CPU or iowait, which is when the CPU can't do anything because it's waiting on disk or network.
And how's your memory usage looking?
I'm guessing you've already turned it off and on again. If not, seriously, do that. It works more often than it doesn't for random weirdness.
Run 'htop' and sort by CPU (it's a friendlier and better version of 'top'). That'll show you which processes are using the most CPU.
Whilst you're in there, check the free memory. If that's low, or swap usage is high, then use htop to sort by memory usage to find what's using the most.
If you see processes you don't recognise, hit Google and find out why. It's very unlikely they're malicious, but it's far less common on Linux than on Windows to have random processes doing unknown stuff. If something is using a lot of CPU or memory, there'll be a reason. It might be a dumb reason, but you will be able to find it out.
And then, once you know which process is the guilty one (if it is that, and it's not critical), you can stop it with systemctl and narrow down what's afoot.
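If it helps, here's roughly what all of that looks like at the command line - the service name is just a placeholder, not something that will exist on your box:

```
htop                      # press P to sort by CPU, M to sort by memory, F6 for the sort menu
free -h                   # free memory and swap at a glance
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head    # one-shot list of the top CPU offenders
systemctl status example.service                 # what is it, and why is it running?
sudo systemctl stop example.service              # stop it if it turns out not to be critical
```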
It's for the best, really.
Gosh, I wonder what stirred them up?
Exactly. Yet another truly awful something is about to happen that'll get buried under his new patio.
YES!
UK lawmakers - please take note also. It's not just in cities: we find them jammed up in our country lanes too, and regularly crossing the centre line on B-roads.
Before this year, the thought of America arbitrarily blocking its European allies from things like American cloud services would have seemed extremely unlikely. It would make no sense; the damage to America and its GDP would far outweigh any political benefit.
All of those reasons still hold true, but I absolutely assure you, European governments and companies all over have that possibility firmly in their risk portfolios now. America tells Microsoft to immediately not only stop selling products in Europe, but to disable those already in use? Ditto Google. Ditto Apple. Ditto all the hundreds of IT hardware producers that are American. Want to cripple a foreign government that uses MS Office? Remotely disable it. Job done. Sure, it would be illegal, but America's government has no respect for law.
(Even before this, several European governments were using open source (Germany, France, Austria, Portugal - there's a list), but this is less about idealism and more about protecting themselves from the unpredictable, as well as not trusting America with their data any more. Everything like this can only be seen as non-Americans distancing themselves from America every way they can, and with good reason.)
If you know, you know.
Known 35 years next month. Married 35 years in November.
After our kid was born she said I smelled different and she was repulsed by me.
Oh, man, that's brutal.
Others have answered the runtime and load question very well already.
I have three other points.
-
Batteries degrade over time. Over-speccing your UPS means more likelihood that things will hold up in three years' time, as the quoted capacity is for new batteries. Plus, not running your UPS at 100% capacity reduces its stress. Again, more reliable.
-
You can get a much better quality UPS by buying a second-hand one without batteries off eBay and replacing them yourself, typically for a fraction of the cost of buying new. Plus you know you have new batteries. A UPS is something where quality genuinely matters. I've had to carry a cheap and badly made UPS out of an office whilst it was on fire, so now I spec more carefully. (And ensure they're metal-bodied!)
-
Consider what you NEED to power. What sort of power cuts are you expecting? Does it matter if something goes down?
I UPS my servers and my main desktop, but not my routers, nor my wifi or IoT things. My internet provider also goes out when there's a cut (I'm on a mesh system so rely on neighbours, who will typically also be down), and I can't do much without power anyway, but it keeps the disks spinning. We typically get very short automated outages here of less than 10s (yesterday was a bad day - we had 9 within 2 hours).
Because Musk has turned it into somewhere that hate speech is not only tolerated, but encouraged.
Lemmy is literally the antithesis of X, no wonder you're being downvoted.
Why are you cross posting content from a hate site?
Stolen? The employees were paid, weren't they?
The Sustainable Development Report 2024 tracks the performance of all 193 UN Member States on the 17 Sustainable Development Goals.

Under this methodology of all 193 UN Member States – an expansive model of 17 categories, or “goals,” many of them focused on the environment and equity – the U.S. ranks below Thailand, Cuba, Romania and more that are widely regarded as developing countries.
In 2022, America was 41st. Interesting to see where it will be after this term of office, which looks set to be working against many of these aims.


On display at the Stromness museum. Carved from whalebone and believed to be a child's doll.
Was discovered at the famous Skara Brae site, and then spent years forgotten in a box at the museum before being rediscovered.
https://www.bbc.co.uk/news/uk-scotland-north-east-orkney-shetland-36526874
I host a few small, low-traffic websites for local interests. I do this for free - and some of them are for a friend who died last year but didn't want all his work to vanish. They don't get many views, so I was surprised when I happened to glance at Munin and saw that my bandwidth usage had gone up a lot.
I spent a couple of hours working to solve this and did everything wrong. But it was a useful learning experience and I thought it might be worth sharing in case anyone else encounters similar.
My setup is:
Cloudflare DNS -> Cloudflare Tunnel (because my residential ISP uses CGNAT) -> Haproxy (I like Haproxy, and amongst other things it alerts me when a site is down) -> separate Docker containers for each website. All on a Debian server living in my garage.
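For anyone curious how the tunnel end of that hangs together, it's roughly this shape. The hostnames, ports and tunnel ID here are made up and my real config differs, but the idea is that cloudflared's ingress rules hand everything to Haproxy, which then routes to the right container:

```
# /etc/cloudflared/config.yml - illustrative sketch, not my actual file
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: forum.example.org
    service: http://localhost:80   # Haproxy frontend on the same box
  - service: http_status:404       # catch-all for anything unmatched
```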
From Haproxy's stats page, I was able to see which website was gathering attention. It's one running PhpBB for a little forum. Tailing Apache's logs in that container quickly identified the pattern and made it easy to see what was happening.
It was seeing a lot of 404 errors for URLs, all coming from the same user agent, "claudebot". I know what you're thinking - it's an exploit-scanning bot - but a closer look showed it was trying to fetch normal forum posts, some of which had been deleted months previously, and also robots.txt. That site doesn't have a robots.txt, so that was failing. What was weird is that it was requesting at a rate of up to 20 URLs a second, from multiple AWS IPs - and every other request was for robots.txt. You'd think it would take the hint after a million times of asking.
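If you want to tally it up rather than just eyeballing a tail, something like this does the job - the log path and the standard "combined" log format are assumptions, so adjust for your own image:

```
# count requests per user agent in the container's Apache access log
awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
```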
Googling that UA turns up that other PhpBB users have encountered this quite recently - it seems to be fascinated by web forums and absolutely hammers them with the same behaviour I found.
So - clearly a broken and stupid bot, right? Rather than being specifically malicious. I think so, but I host these sites on a rural consumer line and it was affecting both system load and bandwidth.
What I did wrong:
- In Docker, I tried quite a few things to block the user agent, the country (US-based AWS, and this is a UK regional site), and various IPs. It took me far too long to realise why my changes to .htaccess were failing - the phpbb docker image I use mounts the website's root directory internally, ignoring my mounted volume. (My own fault - it had been too long since I set it up for me to remember that only certain sub-dirs were mounted in.)
- Having figured that out, I shelled into the container and edited that .htaccess, but it wouldn't have survived restarting/rebuilding the container, so it wasn't a real solution.
Whilst I was in there, I created a robots.txt file. Not surprisingly, claudebot doesn't actually honour what's in there, and still continues to request it ten times a second.
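For reference, this is the sort of thing I mean - treat both as sketches rather than exactly what's in my container. The .htaccess rule needs mod_rewrite available in the image, and as noted, the robots.txt gets cheerfully ignored anyway:

```
# .htaccess - refuse anything identifying itself as claudebot
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} claudebot [NC]
RewriteRule .* - [F,L]
```

```
# robots.txt - politely ask it to go away (it doesn't listen)
User-agent: claudebot
Disallow: /
```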
- Thinking there must be another way, I switched to Haproxy. This was much easier - the documentation is very good. And it actually worked: blocking by user agent (and yep, I'm lucky this wasn't changing) worked perfectly.
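In case it's useful to anyone else, the Haproxy side boils down to a couple of lines in the frontend - the ACL name is mine, so check the syntax against your Haproxy version:

```
acl is_claudebot hdr_sub(User-Agent) -i claudebot
http-request deny if is_claudebot
```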
I then had to leave for a while and the graphs show it's working. (Yellow above the line is requests coming into haproxy, below the line are responses).
Great - except I'm still seeing half of the traffic, and that's affecting my latency. (Some of you might doubt this, and I can tell you that you're spoiled by an excess of bandwidth...)
- That's when the penny dropped and the obvious occurred to me. I use Cloudflare, so use their firewall, right? No excuses - I should have gone there first. In fact, I did, but I got distracted by the many options and focused on their bot-fighting tools, which didn't work for me. (This bot is somehow getting through the captcha challenge even when bot fight mode is enabled.)
But their firewall has an option for user agent. The actual fix was simply to add a rule for this in the WAF for that domain.
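For anyone hunting for it: it's a custom WAF rule with the action set to Block, and an expression along these lines (worth double-checking the field names against Cloudflare's rule builder rather than taking my word for it):

```
(lower(http.user_agent) contains "claudebot")
```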
And voila - no more traffic through the tunnel for this very rude and stupid bot.
After 24 hours, Cloudflare has blocked almost a quarter of a million requests by claudebot to my little PhpBB forum, which barely gets a single post every three months.
Moral for myself: Stand back and think for a minute before rushing in and trying to fix something in the wrong way. I've also taken this as an opportunity to improve haproxy's rate limiting internally. Like most website hosts, most of my traffic is outbound, and slowing things down when it gets busy really does help.
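The rate limiting I mean is along these lines - the numbers and names here are illustrative rather than copied from my config, so tune the thresholds to your own traffic:

```
frontend web
    # track per-source request rates and turn away anything too chatty
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
```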
This obviously isn't a perfect solution - all claudebot has to do is change its UA, and since it comes from AWS it's pretty hard to block any other way. One hopes it isn't truly malicious. It would be quite a lot more work to integrate Fail2ban for more bots, but it might yet come to that.
Also, if you write any kind of web bot, please consider that not everyone who hosts a website has a lot of bandwidth, and at least have enough pride to write software good enough not to keep doing the same thing every second. And, y'know, keep an eye on what your stuff is doing out on the internet - not least for your own benefit. Hopefully AWS really shafts claudebot's owners with some big bandwidth charges...
EDIT: It came back the next day with a new UA and an email address linking it to anthropic.com - it's the Claude 3 AI bot, so it looks like a particularly badly written scraper for AI training.