I'm doing kinda the same thing with my NAS: md raid1 for the SSDs, but only snapraid for the big data drives (mostly because I don't really care if I have to re-download my Linux ISO collection, so snapraid plus mergerfs is like, sufficient for that data).
Also using Ubuntu instead of Debian, but that's mostly because the box was first built six years ago; I'd 100% go with Debian if I were doing it now.
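For anyone curious what the snapraid + mergerfs half looks like in practice, here's a minimal sketch; the disk names, mount points, and single-parity layout are all hypothetical, so adjust for your own drives:

```
# /etc/snapraid.conf (hypothetical single-parity, two-data-disk layout)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab entry pooling just the data disks (not parity) via mergerfs
/mnt/disk* /mnt/pool fuse.mergerfs allow_other,category.create=mfs,fsname=pool 0 0
```

Then a cron'd `snapraid sync` (plus the occasional `snapraid scrub`) keeps parity current. Anything written between syncs is unprotected, which is exactly why this only makes sense for data you can afford to re-download.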
KOSA is pretty gross. It basically gives the government (and, by extension, whoever is in charge of said government) the ability to say "this is harmful to children, and you're legally liable if children find it" about literally ANY topic they want.
I'm sure the reasons that's bad don't really need much explanation.
What's going to happen is not that those things will be banned for children; they'll just be flat-out banned for everyone. Corporate "trust and safety" teams making these rules do not deal in shades of grey: if there's a material risk to the company, then fuck it, it's not allowed. Don't want kids talking about marijuana? Banned. Abortion? Banned. Drag? Banned. LGBT people in general? Banned. (Source: about a decade of doing that work.)
This is not so much a slippery slope as a greased-up slip-n-slide they want to push everyone down.
It's hilariously annoying, but to address your points:
- There's nothing in any of the service logs.
- The notifications come from services that have external monitoring, but it's not always the same service that flaps.
- The local monitoring (which resolves via the same DNS records and connects through the same reverse proxy) doesn't flap at all.
- It hits sites behind Cloudflare, sites not behind it, and one served via an Argo tunnel, so it doesn't seem specific to CF.
- The 503s come from Cloudflare indicating it can't reach the backend, which makes me think network issue again; non-CF sites just show timeout errors.
I don't think it's resource-related; it's a 10850K with 64GB of RAM, and it's currently using, uh, 3% CPU and about 15GB of RAM, so there's more than enough idle headroom to handle even a substantial traffic spike (which I don't see any indication of in the logs anyway).
It's gotta be some incredibly transient network issue, but it's so transient that I'm not sure how to actually pin down what happens when it breaks, since it's "fixed itself" by the time I can get near enough to something to take a look.
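One way I'd try to catch it in the act: leave a dumb high-frequency prober running (ideally one copy on the box and one somewhere external) so every blip gets timestamped whether or not I'm at a keyboard. A minimal sketch in Python; the URLs are placeholders and the interval/timeout values are guesses at what would catch a flap this short:

```python
#!/usr/bin/env python3
"""Dumb probe loop: hit each URL every few seconds and log anything that
isn't a fast 200, so transient failures get timestamps to correlate with
other logs. URLs, interval, and timeout are placeholders."""
import time
import urllib.request
from datetime import datetime, timezone

URLS = [
    "https://cf-site.example.com/",      # hypothetical site behind Cloudflare
    "https://direct-site.example.com/",  # hypothetical site straight to the proxy
]
INTERVAL = 5  # seconds between sweeps
TIMEOUT = 4   # seconds before we call it a hang

def probe(url: str) -> str | None:
    """Return a description of the problem, or None if the URL looks fine."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT) as resp:
            status = resp.status
    except Exception as exc:  # timeouts, resets, DNS failures; HTTP 5xx lands here too
        return f"FAIL {exc!r}"
    elapsed = time.monotonic() - start
    if status != 200 or elapsed > 1.0:
        return f"status={status} elapsed={elapsed:.2f}s"
    return None

while True:
    for url in URLS:
        problem = probe(url)
        if problem is not None:
            print(f"{datetime.now(timezone.utc).isoformat()} {url} {problem}", flush=True)
    time.sleep(INTERVAL)
```

If the external copy logs failures while the local one stays clean, that's another point for "network path" over "services", and the timestamps give you something to line up against the proxy and kernel logs after the fact.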
Yeah, I hope they do, but I'm hardly confident they will.
Part of ASUS's (and most everyone else's) problem is that they don't do support themselves; a third party they contract with does.
So you get the stupid shit that happened here, where the third-party company tries to squeeze extra money out of you for repairs you don't need, because that goes straight to their bottom line.
It's some fucked-up priorities: ASUS wants the cheapest vendor, and the cheapest vendor is going to look for any way to make money beyond the contracted repair rates.
I grew up in the Glorious People's Republic of Texas and was in high school in the 90s. We had a variety of projectile-based clubs: rifle, trap shooting, and archery.
I may or may not have learned how to bow hunt as part of a school club.
However, uh, yeah, I don't think I'd give a bunch of kids guns at this point either.
So here's the thing, really: there are a lot of companies that make good hardware.
The problem is there's not a single remaining AIB that has above shit-tier support if something goes wrong. They're all fucking awful to deal with, slow, and just suck. See: the recent ASUS support kerfuffle, except it's not just ASUS so much as every vendor in those same spaces.
EVGA is missed because their warranty support team was fucking stellar in a universe of otherwise wet diapers.
I'd probably actually use mscp, which is scp but multi-threaded, and it's shockingly faster at moving a ton of little files, which I assume this mostly is.
I also tend to prefer screen/tmux over nohup, with verbose output so I can keep an eye on shit and/or see failures, but that's just personal preference and doesn't really matter for performance/reliability/anything.
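mscp itself does this properly by spreading transfers across multiple SSH connections; purely to illustrate why parallelism wins on piles of small files, here's a crude thread-pool version using plain scp. The host and paths are hypothetical, it forks one scp per file, and it assumes the remote directory tree already exists, so treat it as a sketch of the idea rather than a tool:

```python
#!/usr/bin/env python3
"""Toy illustration of parallel small-file transfer: serial scp pays the
per-file latency one file at a time; a thread pool pays it 8 at a time.
Host/paths are hypothetical; real mscp is far more efficient than this."""
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("/tank/stuff")                # hypothetical source tree
DEST = "user@backup-host:/backup/stuff"  # hypothetical destination
WORKERS = 8                              # parallel transfers

def copy_one(path: Path) -> int:
    """scp a single file, preserving times/modes; returns the exit code."""
    rel = path.relative_to(SRC)
    # Assumes the matching remote directory already exists.
    return subprocess.run(
        ["scp", "-p", str(path), f"{DEST}/{rel}"],
        capture_output=True,
    ).returncode

files = [p for p in SRC.rglob("*") if p.is_file()]
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    failures = sum(1 for rc in pool.map(copy_one, files) if rc != 0)
print(f"{len(files) - failures}/{len(files)} copied", file=sys.stderr)
```

The win here comes almost entirely from overlapping the per-file round trips; a real tool like mscp also reuses connections instead of paying an SSH handshake per file, which is where more of the speedup lives.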
Minor niggle: they're legally compelled to work in the best interests of the shareholders, which usually, but not always, means seeking profit at all costs.
But, in general, I don't disagree; I was merely pointing out that people keep getting suckered by pretty words and meaningless promises of change, and then don't bother to put actual legal requirements behind any of it.
The mistake is looking at a CEO going Trust Me Bro, and trusting them. See: frog and scorpion story.
Honestly tech at this point wouldn't bother growing tulips, they'd just scour the planet to find every tulip on earth so that they could steal them and then use those to make new tulips without ever having invested in tulips at all.
Everything a corporation does that's not outright trying to fuck you out of your time or money is 100% a scam they're trying to pull to convince you they care.
I really wish people would stop falling for it, because, well, there's never going to be real progress made unless there's the force of law behind things like DEI.
It's shocking how the techbro company lifecycle is an almost exact copy of various forms of scam, right down to the first people in getting all the money on the way out.
It's less what's in FUTO's license than what's NOT in it.
The main problem with those cute little licenses is that if a right is not EXPLICITLY granted, you don't have it.
That license is more a list of thou-shalt-nots than an outline of your rights to own and use the software: literally half of it is a list of things you cannot do.
It also doesn't require you to provide source code for your modifications, nor does it require you to make them available AT ALL.
It also, at no point, says anything whatsoever about source code access; it merely says "the software," which could mean anything they want it to mean.
So basically it's a license telling you that you have a license to their software and what you cannot do with it, with zero requirement that ANYONE share the source code.
I have a mix of shucked, new, and used drives in my home server.
WD Reds out of some USB enclosures that are pushing 7 years old, some new Exos drives that are pushing 4, and some refurbed Exos drives that are pushing 2 years now.
Zero issues, but I'm also running them as basically stand-alone drives with mergerfs and snapraid. I don't really care about 99% of the data, since I can just, like, download all the ISOs again, but in x265-encoded versions.
7 drives, zero failures, though I'm expecting the 8TB Reds to start dying any minute now.
I'm team Plasma, but mostly just because every time I touch Gnome it feels like I'm using a really bad copy of OS X that they got bored of copying halfway through and said fuck it, good enough.
Granted, yes, you can tweak it and blah blah blah, but Plasma ships complete and functional right out of the box, while Gnome feels more incomplete the more I use it.
That was more of a personal story than an analytical assessment of more open vs closed platforms.
I'd say, though, that the person who doesn't care about performance AND doesn't care about price is a rare person, and the current pricing (which I'm entirely sure is driven by having to be in bed with IBM to do this at all) puts it out of reach of essentially anyone who might be interested but doesn't have a very compelling reason to need a POWER-based system.
Yeah, that's what he means.