Trump plans to turn himself in Thursday at Fulton County jail
No, Beehaw defederated your instance. The open-source community on lemmy.ml someone else already mentioned is your best bet.
Permanently Deleted
I feel like you're combatively advocating for a specific vision rather than collecting and processing feedback as your OP suggests. At any rate, you don't seem to be understanding what I was trying to say at all... but it's not something I'm going to fight about with someone who questions whether I know what a multi-reddit is and dismisses client-side techniques as nonsense without seeming to understand why they were being discussed in the first place.
I'll leave with these thoughts, do with them what you will:
- I'm not interested in any multireddit feature that reduces sub privacy. I'd consider it a net loss for lemmy.
- On Reddit, multi-reddits are personal in nature. A comparable personal multireddit for Lemmy doesn't require any interaction with federation or any privacy changes.
- I realize that a shared super-community feature is frequently requested on Lemmy aimed at addressing duplication of communities across instances. I don't think that's more than superficially similar to actual multireddits, and I don't think it's a good idea because it creates moderation problems that are far worse than the community duplication problems it purports to address.
Permanently Deleted
What you've described is one way. It could also be a filtered view of the subscribed/all feed, which provides a single API call that can return material from multiple communities. I'm not suggesting that a client-side-only solution is a GOOD solution. But from an information-flow perspective, I'm suggesting that multireddits are a "local" function. They are so local that they're possible without server-side support at all, and certainly local enough not to require representation in the federated feed... which is a more significant change with potential impacts on other federated projects like kbin and Mastodon... and shouldn't require relaxing privacy constraints in any case.
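Since "possible without server-side support at all" might sound abstract, here's a minimal sketch of the idea, assuming Lemmy's v3 HTTP API (`/api/v3/post/list` with `type_=Subscribed`). The multi definitions and community names are made up for illustration, and they live only on the device:

```python
import requests

# Purely local multireddit definitions; these never leave the device.
MULTIS = {
    "selfhosting": {"selfhosted@lemmy.world", "linux@lemmy.ml"},
}

def fetch_multi(multi: str, jwt: str) -> list[dict]:
    # One ordinary API call for the whole subscribed feed...
    # (auth passing varies by Lemmy version; a query param is shown here)
    resp = requests.get(
        "https://lemmy.world/api/v3/post/list",
        params={"type_": "Subscribed", "limit": 50, "auth": jwt},
        timeout=10,
    )
    resp.raise_for_status()
    wanted = MULTIS[multi]
    # ...then filter it client-side into the requested multi.
    return [
        pv for pv in resp.json()["posts"]
        if "{}@{}".format(
            pv["community"]["name"],
            pv["community"]["actor_id"].split("/")[2],
        ) in wanted
    ]
```

The server only ever sees an ordinary subscribed-feed request, which is the information-flow point: nothing new gets federated and no privacy constraint has to move.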
The Beehaw admins made this choice, and documented their rationale here: https://beehaw.org/post/567170
Permanently Deleted
Anyway, what's the feedback on the privacy issue with allowing any user to have read-only access to your community subscribe list...
I wouldn't want this in exchange for multi-reddits. You can infer, to a degree, the communities someone subscribes to from their comment activity, but as it stands one can choose to lurk privately, and this would eliminate that... silently, for existing users, in the absence of some big series of announcements to make it well known.
Why are multi-reddits a thing that involves federation at all? Multi-reddits as they exist on Reddit itself could be implemented entirely client-side, the server side stuff just syncs the behavior of multiple client apps. Why does the concept of a multi-reddit need to extend outside of the user's instance?
Nutbutter sort of covered it.
- Tailscale creates a virtual network.
- That network can be (and is by default) private in that no one you don't allow can join, and in that respect it's similar to your home network. You can join your laptop, desktop, and phone to your tailnet... but you probably cannot join your Chromecast or smart television (they don't publish Tailscale clients for those devices).
- If you configure Jellyfin to listen on your tailnet and not on the Internet... then you can access Jellyfin from anywhere using a device that is connected to your tailnet, but attackers on the Internet cannot access Jellyfin without first accessing your tailnet, which is hard to do.
The security/convenience tradeoff of Tailscale is pretty good if you want to access a service from anywhere, but only from your own devices and only from supported operating systems (Linux, Windows, macOS, Android... not sure about iOS). It is another networking layer, which can be mind-bending... but as much as such a layer can be easy to use, Tailscale is as easy as any of them.
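For the "listen on your tailnet and not on the Internet" step, one hedged way to do it on a Linux host is a firewall rule pinning Jellyfin's default port to the Tailscale interface. The interface name `tailscale0` and port 8096 are the usual defaults, but verify them on your own system:

```sh
# Allow Jellyfin only via the tailnet interface; rule order matters in ufw,
# so the allow rule must be added before the blanket deny.
sudo ufw allow in on tailscale0 to any port 8096
sudo ufw deny 8096
# Clients then reach Jellyfin at the tailnet address printed by:
tailscale ip -4
```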
However, Tailscale's backend is not open-source. They may not log all the data passed through, but they certainly can look at it.
This second sentence is nonsense though.
- Tailscale is end-to-end encrypted; Tailscale cannot quietly see your traffic.
- Tailscale COULD, under the default settings, surreptitiously join a node to your tailnet. If you're super paranoid, they provide a way to disable this, but it makes Tailscale much less convenient to use: https://tailscale.com/kb/1226/tailnet-lock/
- Tailscale is phenomenally transparent about security and holds itself to WAY higher standards than most self-hosters: https://tailscale.com/security/.
- Tailscale clients are open source, and they employ the author of Headscale, an open-source implementation of the Tailscale control protocols.
There is very little to fear from Tailscale as a provider, and they support the headscale project if you want to go that route (which I do... but not because I am concerned about Tailscale's integrity or security posture).
This is a crosspost of a post in a self-hosted community that isn't mine. You'll either have to at-mention OP to get their attention here, or comment in the other post they made.
But I can cover this particular question as well. The "trick" Duo uses is that it sends the 2FA request out of band to the Duo app. So the flow goes like this:
- Android app tries to connect to Jellyfin, gets a login prompt.
- Android app collects username and password from user and sends it to Jellyfin as normal.
- Jellyfin takes the username and password and forwards them to the LDAP server to find out if the user is authorized.
- The LDAP server checks the username and password against its local DB and they check out... but it doesn't respond to Jellyfin yet.
- The LDAP server asks the Duo API for 2FA verification.
- The Duo service sends a push notification to the Duo app, which prompts the user to confirm whether they really are trying to log into Jellyfin right now. User taps "yeah, lemme in to Jellyfin".
- The Duo app tells the Duo service it's go time.
- The Duo API tells the LDAP server that the 2FA check passed.
- The LDAP server tells Jellyfin the username/password lookup was successful.
- Jellyfin tells the app the login was successful and things go normally from here.
So the trick with Duo, unlike say TOTP codes, is that the app is none the wiser that 2FA is happening. It thinks it just sent a regular username and password to the slowest damn Jellyfin server in the world, one that takes 30s to decide if the login is good. But as long as it doesn't time out the login, the 2FA happens completely transparently to Jellyfin and the app... with the verification happening in a separate app and being managed by the LDAP server, the Duo servers, and the Duo app.
So yeah, apps should work as long as they can handle very slow logins.
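To make the out-of-band step concrete, here's a minimal Python sketch of the proxy-side logic, not anyone's actual implementation: `check_password` is a hypothetical stand-in for the LDAP server's local credential lookup, and the keys are placeholders, but the Duo calls use Duo's real `duo_client` SDK:

```python
import duo_client

duo = duo_client.Auth(
    ikey="DIXXXXXXXXXXXXXXXXXX",          # placeholder integration key
    skey="placeholder-secret-key",        # placeholder secret key
    host="api-xxxxxxxx.duosecurity.com",  # placeholder API hostname
)

def handle_bind(username: str, password: str) -> bool:
    """Runs when Jellyfin forwards a username/password for verification."""
    if not check_password(username, password):  # hypothetical local DB check
        return False  # bad credentials: fail fast, no push is ever sent
    # Credentials check out, but don't answer Jellyfin yet: send a push
    # and block until the user approves or the request times out.
    response = duo.auth(factor="push", username=username, device="auto")
    return response["result"] == "allow"  # only now does the bind succeed
```

The blocking `duo.auth(...)` call is exactly why the app sees a "slow" login: the bind response doesn't go back to Jellyfin until the push is resolved one way or the other.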
This is a great approach, but I find myself not trusting Jellyfin's preauth security posture. I'm just too concerned about a remote unauthenticated exploit that 2fa does nothing to prevent.
As a result, I'm much happier having Jellyfin access gated behind Tailscale or something similar, at which point brute-force attacks against Jellyfin become impossible in normal operation and I don't sweat 2FA much anymore. This is also 100% client-compatible, since Tailscale is transparent to the client, and it protects against brute force because direct network communication with Jellyfin isn't possible. And of course, Tailscale has a very tightly controlled preauth attack surface... essentially none if you use the free/commercial Tailscale, and even self-hosting Headscale I'm much more inclined to trust their code as security-conscious than Jellyfin's.
Fair enough, sounds like you have a well-considered use case for Kuma specifically. Good luck, I don't have much to offer on your OP question.
I'm mostly in the pro-written-word camp myself, but I have sought out video tutorials in cases where written docs seem to assume something I don't know. When I'm learning something new, a written doc might have a 3-word throwaway clause like "... add a user and then...". But I've never added a user and don't know how. If it's niche open-source software with a small dev team, this may not be covered in the docs either. I'll go fishing for videos, and just seeing that they go to a web UI or a config file or whatever sets me on the path to figure out the rest myself.
That is to say, video content that shows someone doing a thing successfully often includes unspoken visual information that the author doesn't necessarily value or even realize is being communicated. But the need to do the thing successfully on-screen involves documenting many small/easy factoids that can easily trip someone inexperienced up for hours.
I'm as annoyed as anyone when I want reference material and find only videos, and I generally prefer written tutorials as well. But sometimes a video tutorial is the thing that gets me oriented enough to understand the written material I wasn't ready to process previously.
Edit: The ubiquity of video material probably has little to do with its usefulness though, and everything to do with how easy it is to monetize on YouTube.
This isn't exactly an answer to your question, but an alternative monitoring architecture that sidesteps this problem entirely is to run Netdata on each server you run.
- It appears to collect WAY more useful data than Uptime Kuma, and requires basically no config. It also collects data on Docker containers running on the server, so you automatically get per-service metrics as well.
- Health probes for several protocols, including ping and HTTP, can be custom-defined in config files if you want that (see the example below).
- There's no cross server config or discovery required, it just collects data from the system it's running on (though health probes can hit remote systems if you wish).
- If any individual or collection of services is down, I see it immediately in their metrics.
- If the server itself is down, it's obvious and I don't need a monitoring system to show a red streak for me to know. I've never wasted more than a minute differentiating between a broken service and a broken server.
This approach needs no external monitoring hosts. It's not as elegant as a remote monitoring host that watches everything from a third-party perspective, but it also has the benefit of not false-positiving because the monitoring host went down or lost its network path to the monitored host... Netdata can always see what's happening because it's right there when it happens.
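As an example of the custom probes mentioned above, Netdata's go.d `httpcheck` collector takes job definitions like the following (the file path is typically /etc/netdata/go.d/httpcheck.conf; the job names and URLs here are illustrative assumptions):

```yaml
jobs:
  - name: jellyfin            # per-service HTTP probe, local to this host
    url: http://127.0.0.1:8096/health
  - name: remote_site         # probes can hit remote systems too, if you wish
    url: https://example.org
```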
Those of you who are married, how do you go about privacy if your wife or husband does not care?
I wouldn't say that my partner "doesn't care", but they take a much more pragmatic view than I which results in more exposure. In general, we do the following:
- To a first approximation, they decide what apps and services they use. It's not a monarchy. They'll ask for feedback when comparison shopping, but often the answer is "every dominant ecosystem in this space is terrible, the privacy respecting options don't meet your requirements, this option is 5% worse and this one is 5% better... glhf".
- For social media accounts that share posts about our nuclear family, we come to broad consensus on the privacy settings and practices. There's give and take here, but I make space to use dominant sharing apps and they make space to limit our collective exposure within reason. If I have a desire to "harden" the privacy settings on a service, it's on me to put in the effort to craft the proposed settings changes and get their buy in on the implications.
- I have many fewer privacy-raiding accounts than they do. I both benefit from transitive access to the junk they sign up for, and pay a cost in my own privacy by association. This just is what it is. The market for partners that align perfectly with my own views is basically zero though, and honestly I probably wouldn't put up with my shit even if I could find one.
- If I can self-host a competitive option for a use-case that I'm happier with... they'll give it the old college try. But it has to actually be competitive or they'll fail out of the system and fall back to whatever works for them. If we can figure out what's not working we'll sometimes iterate together, but sometimes it's just not good enough and we go back to something I like worse.
It's basically like navigating any other conflict in values. You each have to articulate what your goals are, and make meaningful compromise on how to achieve something that preserves the essentials on both sides. As a privacy outlier, sometimes one also needs to be able to hear "I want to do normal shit and not feel bad about it" and accept it. But if we do want to reach for outlier privacy practices in some specific area, it's on us to break that desire down into actionable steps in realistic directions at a sustainable pace and to not ignore the impacts to our partners of the various tradeoffs we're proposing. Privacy is often uncomfortable and we need to acknowledge the totality of what we're asking for when we ask our partners to accommodate our goals there.
The headline of the article is just "The History of the Modern Graphics Processor", though. OP is having a fever dream with that post title; it has nothing to do with the article title or with the article.
Did the government invent OP to make us question Betteridge's law of headlines? Has the law of headlines become too dangerous to ignore?
It really does look very much like a parking meter of the time, and the joke doesn't really make sense unless it's a parking meter. It's a rent-seeking/capitalism gag.
- The caveman on the left is creating something transformatively useful, something that once finished will change his life with how useful it is. Aka, he's inventing the wheel.
- The caveman on the right is creating something that will act as a small but annoying tax on the work of the caveman on the left... doing nothing of intrinsic value, but making the real invention a little bit less useful and helpful by charging the very first wheel for parking. Aka, he's inventing useless, self-absorbed bureaucracy, mankind's second most significant invention after the wheel.
The gag establishes the relationship between the two cavemen as a doer on the left and a bureaucratic leech on the right. If the object on the right is a scale, there's kind of nothing going on here. Not that there aren't Far Side comics where nothing is going on, but this one has a gimmick.
- If a service supports sqlite, I often will use that option. It provides everything a self-hoster needs from a DB with basically no operational overhead.
- If I do need a proper RDBMS (because the software I'm using doesn't support sqlite), I'm going to use...
  - A single Postgres container.
  - Configured with multiple logical "databases" (the container for schemas and tables), one DB for each app connecting.
I do this because I'm always memory-constrained and the RDBMS is generally the most memory-hungry part of any software stack. By sharing one DB process across all the apps that need it, I get the most out of my DB cache memory, etc. And by using multiple logical DBs, I get good separation between my apps, and they're straightforward to migrate to a truly isolated physical DB if that's ever needed... but it never has been.
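For anyone curious what the one-time setup looks like, here's a hedged sketch using the standard psycopg2 driver against the container's admin account; the app names and passwords are placeholders:

```python
import psycopg2
from psycopg2 import sql

APPS = {"nextcloud": "change-me", "gitea": "change-me"}  # app -> password

conn = psycopg2.connect(
    host="localhost", dbname="postgres",
    user="postgres", password="admin-password",  # placeholder credentials
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    for app, pw in APPS.items():
        # One login role and one logical database per app, owned by that role.
        cur.execute(sql.SQL("CREATE ROLE {} LOGIN PASSWORD {}").format(
            sql.Identifier(app), sql.Literal(pw)))
        cur.execute(sql.SQL("CREATE DATABASE {} OWNER {}").format(
            sql.Identifier(app), sql.Identifier(app)))
```

Each app then connects to its own logical database with its own role, so apps can't see each other's tables even though they share one Postgres process and its cache.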
... advertisement and push they did on sites like reddit...
The lemmy world admins advertised on Reddit? Can you link an example?
... their listing on join-lemmy.org...
Until recently, EVERY lemmy instance was listed on join-lemmy.org.
And with the name Lemmy.world they did nothing to dissuade anyone from thinking that.
They run a family of servers under the .world TLD, including at least Mastodon, Lemmy, and Calckey instances. They're all named similarly.
I also saw nothing from .world not claiming to be the bigger instance(super lemmy)
They ARE the biggest instance, but that happened organically. It's not based on any marketing claims from the admin team about being a flagship/super/mega/whatever instance. People just joined, and the admins didn't stop them (nor should they). It's not a conspiracy to take over lemmy. It's just an instance that... until recently... happened to work pretty well when some were struggling.
I think the issue is that .world has put itself forward as some sort of super lemmy.
Citation needed. All the admins of lemmy world ever purported to do was host a well-run general-purpose (aka not topic-oriented) lemmy instance. It was and remains that, and part of being a well-run general purpose instance is managing legal risk when a small subset of the community generates an outsized portion of it.
Being well run meant that they scaled up and remained operational during the first reddit migration wave. People appreciated that, but continuing to function does not amount to a declaration of being a super lemmy.
World also has kept signups open through good times and, more recently, bad ones. Other instances at various times shut down signups or put irritating steps and purity tests in the way. Keeping signups open is a pretty bare-minimum bar for running a service though; it is, again, not a declaration of being a super-lemmy.
Essentially lemmy world just... kept working (until recently, when it has done a pretty poor job of that). I dunno where you found a declaration that lemmy world is a super-lemmy, but it's not coming from the lemmy world admins; it's likely randos spouting off.
OP is claiming that they agree with lemmy world's defederation choices driven by CSAM, which is unquestionably nonsense. Lemmy world admins have made several in depth posts explaining defederation decisions and none of them had anything to do with CSAM. In some jurisdictions, it would likely be illegal to give such an explanation as it would amount to creating a pointer to a source of CSAM that hasn't yet been taken down. By and large, these things are reported directly to law enforcement and cleaned up quietly, without showing up in modlogs... and in many jurisdictions the law REQUIRES handling CSAM in precisely that fashion in order to prevent it from being archived before it's taken down.
Is there a non-zero amount of CSAM in the Fediverse? Sadly yes. Once you achieve a certain scale, people do all the things... even the bad ones. This research paper (from Stanford, it's reputable and doesn't include or link to CSAM) discusses finding, in a sample of 320k Mastodon posts, over 100 verified samples of CSAM and something like 1k-3k likely adjacent posts (for example that use associated keywords). It's pretty likely that somewhere on Lemmy there are a non-zero number of such posts, unfortunately. But moderators of all major instances are committed to taking appropriate steps to respond and prevent reoccurrence.
Additionally, blahaj.zone defederated from lemmynsfw over the adorableporn community. The lemmynsfw admins take reports of CSAM very seriously, and the blahaj admins stopped short of accusing them of hosting actual CSAM. But they claimed that models of verified age "looked too young" and that the community was courting pederasts. These claims were largely baseless, but there was a scuffle and some of the secondary and tertiary discussion threw around terms like CSAM loosely and incorrectly.
I think OP is probably hearing echoes of these kinds of discussions 3rd hand and just not paying attention to details. There's certainly no well-known and widely federated CSAM communities, and all responsible admins would take immediate action if anything like that was found. CSAM doesn't factor into public federation decisions, because sources of CSAM can't be discussed publicly. Responding to it is part of moderation at scale though, and somewhere some lemmy admin has probably had to do so.
Why would you use LVM to configure the RAID-1 devices? Btrfs supports raid1 natively.
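For reference, the native route is a one-liner at mkfs time, or a convert on an existing filesystem; the device paths and mountpoint below are examples:

```sh
# Create a new btrfs filesystem with raid1 for both data and metadata:
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb

# Or add a second device to an existing single-device filesystem and convert:
btrfs device add /dev/sdb /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```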
11,263 lbs, huh? It's not a kind estimate, but not unrealistic either.