That's a fair assessment. I'll admit to having a severe case of doomerism when it comes to tech lately, and the lengths tech bros will go to to monetize everything have me skeptical there's any sort of protocol or technology that could be made bro-resistant for more than a short period of time.
EEE (embrace, extend, extinguish) is pretty prevalent and has been standard practice for these tech companies for a long time. See: Meta and Threads for a recent example.
Oh sorry; my goal here was for individual metering. I've got an Enphase solar system, so the Envoy is already doing whole-house monitoring.
I'd like to be able to identify loads and ultimately lower them to stay under what the solar panels are generating, but that needs data I mostly don't have, and specific equipment to actually turn things on and off.
I knew #1 would be a Dodge Ram even before I clicked and viewed the image.
Stay classy, Dodge drivers.
Too bad the inflation number they're using is exactly what the whole 'lies, damned lies, and statistics' saying is talking about.
Gemini protocol
IDK, but I don't think the problem is that any particular application protocol is bad so much as it's capitalists gonna capitalist, and they've shit all over everything in the Quest to Make a Buck.
A new protocol, if it ever becomes as widely adopted, will just see the same vultures swoop in and strip-mine any value they can find there, too.
That's just some Unicode stuff (Punycode): the domain name is in non-Latin characters, so that's how you represent it where Unicode isn't properly supported. Doesn't mean anything malicious.
Well, I have a new favorite Lemmy client.
Yeah, the plan was for the in-wall relays. I'm in the US, and if I read the specs properly they'll do 16 A at 120 V, which is also where my breakers would trip anyway, so it probably shouldn't matter.
I'm wanting to add a bunch of energy monitoring stuff so I can both track costs, and maybe implement automation to turn stuff on and off based on power costs and timing.
I'm using some TP-Link-based plugs right now which are like, fine, but I'm wanting to add something like 6 to 10 more monitoring devices/relays.
Anyone have experience with a bunch of shelly devices and if there's any weird behavior I should be aware of?
Assume I have good enough wifi to handle adding another 10 devices to it, but beyond that any gotchas?
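Not a gotcha so much as a plus: once they're on the network you can pull the monitoring data locally without any cloud dependency. A rough sketch, assuming Gen2-style Shelly devices with power metering (the `Switch.GetStatus` RPC endpoint and the `apower` field are from their local API as I understand it; the IPs are made up):

```shell
# Sketch only: poll instantaneous power readings from Shelly Gen2 relays
# over the local RPC API. IPs below are placeholders; power-metering
# models (e.g. Plus 1PM) report "apower" in watts in the JSON response.
status_url() {
    echo "http://$1/rpc/Switch.GetStatus?id=0"
}

# Set SHELLY_IPS to your actual device addresses to poll them, e.g.
#   SHELLY_IPS="192.168.1.50 192.168.1.51"
for ip in ${SHELLY_IPS:-}; do
    curl -s --connect-timeout 2 "$(status_url "$ip")"
    echo
done
```

From there it's easy to feed the JSON into whatever is doing the cost tracking, or just point Home Assistant at the devices and let its integration handle the polling.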
Eh, scriptable content was probably fine.
Techbros going 'holy shit, we should make EVERYTHING a website!' was the curse that doomed us.
I could have been a little more clear: I don't think the whole must-compete-or-forget-it mindset makes any damn sense.
I'm more than happy to use software that does what I want/need (which, more and more, is simply just not fucking spying on, trying to sell things to, or otherwise annoying me) even if it's not like, the most bestest version of whatever.
I think it's less that it's "impossible" but rather that it's expensive.
Honestly, we've in general shoved too much shit into the browser that's not strictly related to just browsing websites.
And you "have to" support all the layers and layers and layers of added stuff, or you can't "compete".
But, at the same time, making a good-enough browser that mostly works and isn't completely enshittified and captured by corpo big tech interests is a very worthy goal, and I 100% support what they're doing.
Because most people don't care and just want to play the latest $GAME_NAME_HERE?
And I mean, Nintendo has already sued people into what's essentially slavery and nobody said shit, so I don't know what the fuck will get people's attention.
Looks like Debian and Ubuntu have shipped patches, but I'm not seeing them show up in the RHEL-derivatives just yet, but I'm sure that'll be soon(TM).
Honestly it feels like they're trying to get away from being just a file sync platform, and are pushing for more corpo feature sets to compete with G Suite or O365.
Which I mean is great: that's exactly what I needed and why I use it - it let me ditch almost all of my Google services and move it all to selfhosted.
But I bet it also causes incentives to prioritize fixes and features that are focused on that, and pushes stuff like 'make the android sync app work like every other file sync app in history' to the bottom of the list.
Nope, that curl command says 'connect to the public ip of the server, and ask for this specific site by name, and ignore SSL errors'.
So it'll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn't a public IP, and ignore the SSL error that happens when you try to do that.
If there's a private site configured with that name on nginx and it's configured without any ACLs, nginx will happily return the content of whatever is at the server name requested.
Like I said, it's certainly an edge case that requires you to have knowledge of your target, but at the same time, how many people will just name their, as an example, vaultwarden install as vaultwarden.private.domain.com?
You could write a script that'll recon through various permutations of high-value targets and have it make a couple hundred curl attempts, coming up with a nice clean list of reconned and possibly vulnerable targets.
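Something like this, say. Pure sketch: the IP, domain, and name list are all invented, and it defaults to just printing the commands it would run rather than actually probing anything.

```shell
# Hypothetical recon sketch: permute likely service names against a target
# domain and probe a single public IP with forged Host headers, keeping
# only the HTTP status code per name. Everything here is made up.
TARGET_IP="${TARGET_IP:-203.0.113.10}"      # placeholder (TEST-NET-3)
DOMAIN="${DOMAIN:-private.example.com}"
NAMES="vaultwarden nextcloud grafana proxmox homeassistant jellyfin"

# Expand the candidate service names into full hostnames.
gen_targets() {
    for n in $NAMES; do
        echo "$n.$DOMAIN"
    done
}

# Dry run by default; set DRY_RUN=0 to actually send the requests.
for host in $(gen_targets); do
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "curl -sk -o /dev/null -w '%{http_code}' --header 'Host: $host' https://$TARGET_IP/"
    else
        code=$(curl -sk -o /dev/null -w '%{http_code}' \
                    --connect-timeout 3 --header "Host: $host" \
                    "https://$TARGET_IP/")
        echo "$host $code"
    fi
done
```

Anything that comes back with a 200 (or even a 401) instead of the default vhost's response is a confirmed-valid server name, which is exactly the recon an attacker needs.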
Just tested that and uh, yeah, what the hell? Not something my workflows need, but that's a shocking oversight considering damn near everything else 100% does that.
Yeah, no shit. Even my local news, which is a top-10 market and has actual money to spend, has half of its shit sourced from fucking Facebook and Twitter. The amount of 'a thing happened today!' that's fucking Instagram video is just amazing.
Can't even afford to send someone out with a camera to take a picture anymore.
That's the gotcha that can bite you: if you're sharing internal and external sites via a split horizon nginx config, and it's accessible over the public internet, then the actual IP defined in DNS doesn't actually matter.
If the attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it's got internal-only DNS by including a Host header for that domain in their request, for example with curl like so:
curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k
Admittedly this requires some recon, which means 99.999% of attackers are never even going to get remotely close to doing this, but it's an edge case that's easy to defend against with ACLs, and you probably should when doing split-horizon configurations.
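The ACL side is just a couple of lines per internal server block. A sketch of what I mean (the server name and address ranges are examples; adjust to your actual LAN):

```nginx
# Example only: lock an internal vhost to RFC 1918 source addresses so a
# forged Host header arriving from the internet gets a 403 instead of content.
server {
    listen 443 ssl;
    server_name secret.local.mydomain.com;

    allow 192.168.0.0/16;   # your actual LAN range(s)
    allow 10.0.0.0/8;
    deny  all;

    # ... ssl_certificate, root/proxy_pass, etc. ...
}
```

A catch-all default_server that does `return 444;` for names you haven't explicitly configured also helps, since then scanning unknown hostnames gets the connection dropped instead of a default site to fingerprint.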
Ugh, not the best marketing for Nextcloud to have a public share not work, lol. It seems like 25% of people just can't see them but they work for everyone else so who knows.
Anyway, have a pastebin instead: https://pastebin.com/zPyvgxYX
Not saying you're wrong, but what doesn't work right? I haven't noticed any behavior that seems wrong to me. I usually interact with Nextcloud via the Nextcloud section that gets added by the client in the file picker/file manager on the OnePlus Nord I'm using.
I've been running a BBS off and on since the mid-90s, and have tried a variety of methods to do so: OS/2 on real hardware, DosBox on Linux, a VM running OS/2, and more modern software that runs fine on modern Windows without needing to deal with any of that.
![Arca OS, a MiniPC, and running a BBS](https://lemmy.sdf.org/pictrs/image/4bb06731-05b1-4d33-b5e0-aa0ea1ea8b3f.png?format=webp&thumbnail=256)
Saw an older post asking about ArcaOS and BBS stuff, and since I actually just did a rebuild of mine doing exactly that on newer hardware, figured I'd write about all the stupid shit I had to deal with and how to configure the OS in a blog and post it here if anyone is interested.