That’s my hope. Still, from where I live, I can only hope my monetary contributions are used to affect that.
This poll tracking shows Harris barely ahead in national polls. This millennium, Republicans have won the presidency in 2000, 2004, and 2016.
In 2000 and 2016, the Democratic candidate won the popular vote.
Winning the popular vote doesn’t mean shit. The electoral college is what matters.
That same NYT poll link lists 9 tossup states: Wisconsin, Michigan, Pennsylvania, Arizona, Georgia, Minnesota, North Carolina, Nevada, and Virginia.
You’ll notice all but the first three are in alphabetical order. That’s because all but the first three don’t have enough polling to make a prediction. Of those first three: a statistical tie in Wisconsin and Michigan with a Trump lead in Pennsylvania.
If you include Kennedy, Harris is ahead by 1% in Wisconsin and Pennsylvania but still tied in Michigan.
National polling trends are going in the direction I want, but they really don’t matter.
I write this from a state whose electoral college votes have never gone to a Democrat in my lifetime and won’t before my death. I’ll be voting for Harris, but that vote is one of those national votes that won’t actually help my preferred candidate.
The only way I can help is via monetary donation.
And if you’re a Harris voter in a solidly blue state, your vote means as much as mine does: fuck all. Yes, it actually makes it to the electoral college, but, like mine, it’s a foregone conclusion. You should be donating money too and hoping it’s used wisely to affect those swing states.
Under the CMB method, it sounds like the calculation gives the same expansion rate everywhere. Under the Cepheid method, they get a different expansion rate, but it’s the same in every direction. Apparently, this isn’t the first time it’s been seen. What’s new here is that they did the calculation for 1,000 Cepheid variable stars. So, they’ve confirmed that an already known discrepancy isn’t down to something weird about the few they’d looked at in the past.
So, the conflict here is likely down to our understanding of either the CMB or Cepheid variables.
Except it’s not that they are finding the expansion rate is different in some directions. Instead they have two completely different ways of calculating the rate of expansion. One uses the cosmic microwave background radiation left over from the Big Bang. The other uses Cepheid stars.
The problem is that the Cepheid calculation comes out much higher than the CMB one. Both show the universe is expanding, but they give radically different numbers for that rate of expansion.
So, it’s not that the expansion isn’t spherical. It’s that we fundamentally don’t understand something well enough to nail down what that expansion rate is.
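To put rough numbers on it, Hubble’s law says recession velocity scales linearly with distance, so the two methods disagree like this. (The H0 values below are approximate, from my memory of the published results, so treat this as illustrative only.)

```
# Hubble's law: v = H0 * d (velocity in km/s, distance in megaparsecs)
H0_CMB = 67.4      # km/s/Mpc, roughly, from fitting the cosmic microwave background
H0_CEPHEID = 73.0  # km/s/Mpc, roughly, from the Cepheid/supernova distance ladder

d = 100  # a galaxy 100 megaparsecs away

for label, h0 in (("CMB", H0_CMB), ("Cepheid", H0_CEPHEID)):
    print(f"{label}: {h0 * d:.0f} km/s")

# CMB: 6740 km/s vs Cepheid: 7300 km/s -- roughly an 8% disagreement,
# far bigger than either method's stated error bars.
```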
As a first book, I think Children of Time is much better than Shards of Earth. I enjoyed both series but would say the third book in each was the weakest. The Final Architecture series had a slightly stronger third entry.
And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.
Ideally, you’ve got a learned individual using AI to process data more efficiently, but one who is smart enough to ignore or toss out the crap and knows to review that output carefully with a critical eye. I suspect the reality is that most of those individuals using AI will just pass its output along uncritically.
I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.
Thanks. Very interesting. I’m not sure I see such a stark contrast pre/post 9-11. However, the idea that the US public’s approach to the post-9-11 conflict would have an influence makes sense and isn’t something I’d ever have considered on my own.
I’m a guy approaching 60, so I’ll start by saying my perception may be wrong. That could be because the protest songs from the late 60’s and early 70’s weren’t songs I heard live on the radio; they were the successful ones that got replayed. More likely, it’s because music is much more fractured today than what I was exposed to on the radio growing up. Thus, I’m simply not exposed to the same type of protest songs that still exist.
Whatever the reason, I feel the zeitgeist of protest music was very different in the first decade of my life compared to the last.
I’m curious to know why. My conspiratorial thoughts say that it’s down to the money behind music promotion being very different over those intervening decades, but I suspect it’s much more nuanced.
So, why are there fewer protest songs? Alternatively, why am I not aware of recent ones?
Me too, but I’d put Usenet in there before Slashdot.
Spock, Uhura, Chapel, heck even M’Benga don’t make it a prequel, but a lieutenant Kirk does?
The South. Just below Indiana, the middle finger of the South. And I say this as a Hoosier for much of my life.
As a guy responsible for a 1,000-employee O365 tenant, I’ve been watching this with concern.
I don’t think I’m a target of state actors. I also don’t have any E5 licenses.
I’m disturbed by the opaqueness of MS’ response. From what they have explained, it sounds like the bad actors could self-sign a valid token to access cloud resources. That’s obviously a huge concern. It also sounds like the bad actors only accessed Exchange Online resources. My understanding is they could have done more with a valid token. I feel like the fact that they didn’t means something’s not yet public.
I’m very disturbed by the fact that it sounds like I’d have no way to know this sort of breach was even occurring.
Compared to decades ago, I have a generally positive view of MS and security. It bothers me that this breach was a month in before the US government notified MS of it. It also bothers me that MS hasn’t been terribly forthcoming about what happened. There’s likely no need to mention that it also bothers me I’m so deep into the O365 environment that I can’t pull out.
Nice job. Packet loss will definitely cause these issues. Now, you just need to find the source of the packet loss.
In your situation, I’d first try to figure out if it is ISP/Internet before looking inside either network. I wouldn’t expect it to be internal at these speeds. Though, did you get CPU/RAM readings on the network equipment during these tests? Maxing out either can result in packet loss.
I’d start with two pairs of packet captures when the issue happens: endpoint to endpoint and edge router to edge router. Figure out whether the packet loss is happening in only one direction. That is, are all the UK packets reaching DE but not all the DE packets making it back? You should be able to narrow in on a TCP conversation with dropped packets. Dropped packets aren’t ones that a system never sent; they’re ones that a system never received. Find some of those and start figuring out where the drop happened.
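To be concrete, here’s the sort of paired capture I mean. Interface names, addresses, and filenames are placeholders for whatever fits your setup:

```
# On the UK endpoint, capture everything to/from the DE endpoint's IP:
tcpdump -i eth0 -w uk_side.pcap host 203.0.113.10

# On the DE endpoint, the mirror image, filtering on the UK endpoint's IP:
tcpdump -i eth0 -w de_side.pcap host 198.51.100.20

# Repeat the same idea on each edge router, then compare the files.
```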
Just curious if you’ve had the chance to dig into this and can report anything back?
If the bandwidth numbers you’ve described are accurate, I’d start looking at CPU and RAM usage on the network devices. The Fortigates are going to be doing extra work to handle the VPN. I wouldn’t expect an IPSEC VPN on a Fortigate to top out at 10 Mbps, but if it’s doing a lot of other work, it’s possible. ACLs on the Cisco devices? Those carry the potential of CPU/RAM exhaustion too. Hopefully, you have remote monitoring on all the network devices and can just look at the history from when these transfers were happening.
If nothing obvious turns up there, I’d try packet captures while this is happening, perhaps starting on the system doing the ssh and on one or two others experiencing issues. What are you seeing? Evidence of dropped packets? High latency? If dropped packets, start capturing the same traffic on the network devices it’s flowing through.
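Once you have a capture, Wireshark’s analysis flags will point at loss symptoms directly. A minimal sketch with tshark (the capture filename is a placeholder):

```
# List packets Wireshark classifies as retransmissions or duplicate ACKs,
# both classic symptoms of packets being dropped somewhere in the path:
tshark -r transfer.pcap -Y "tcp.analysis.retransmission || tcp.analysis.duplicate_ack"
```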
I’m the opposite. I had my subreddits curated to ones that supplied good deals discussion for posts and good articles for links. For link posts, I primarily read the linked article and ignored the discussion. Here, I’ve been doing both.
RBLs are nothing more than a way to block problematic servers. And some of those problems are nothing more than a missing rDNS record.
Does the GPL cover having to give redistribution rights to the exact same code used to replicate a certain build of a product?
It does, and very explicitly and intentionally. What it doesn’t say is that you have to make that source code available publicly, just that you have to make it available to those you give or sell the binary to.
What Red Hat is doing is saying you have the full right to the code, and you have the right to redistribute the code. However, if you exercise that right, we’ll pull your license to our binaries and you lose access to code fixes.
That’s probably legal under the GPL, though smarter people than me are arguing it isn’t. However, if those writing GPLv2 had thought of this type of attack at the time, I suspect they’d have made it a violation.
Yeah, runaway global warming might not happen. Plant monocultures would begin to disappear. New invasive species wouldn’t be introduced, though existing ones might have an easier time for a bit. Major thoroughfares wouldn’t create barriers to migration. Dams might take centuries to collapse. But I think humans going extinct might have one of the biggest impacts.
I believe you are correct. Any paying Red Hat customer consuming GPL code has the right to redistribute that code. What Red Hat seems to be suggesting is that if you exercise that right, they’ll cut you as a customer, and thus you no longer have access to bug fixes going forward.
I suspect it’s legal under the GPL. I’m certain it violates the spirit of the GPL.
Ok, this is not going to be a well-formulated question, because the concerns behind it are nebulous in my own head.
Some assumptions I have that clearly inform the question that follows: I believe commercial, state, and other actors have sophisticated methods of influencing what I see on social media and thus, in part, what I think. I also believe that someone more willing to believe in the types of conspiratorial ideas I’ve just expressed is more likely to be manipulated by the information they’re exposed to. And, yes, I fully appreciate the irony of those beliefs.
My child is adult enough that belief patterns I encourage are very unlikely to become deep patterns. That is, I’d have to work to indoctrinate my son, and he’d actively resist if my indoctrination was outside of societal norms.
He didn’t grow up exposed to the social media I suspect children are exposed to now.
How does a parent inoculate a child to the influence of social media without also creating a mindset willing to believe in a nebulous “them” that controls things—a mindset, I believe, that makes a person more likely to be controlled?
So, I’ve been self-hosting for decades, but on physical hardware. I’ve had things like MythTV and an Asterisk VoIP system, but those have been abandoned for years. I’ve got a web server, but it’s serving static content that’s only viewed by bots and attackers.
My mail server, which has been active for more than two decades, is still in daily use.
All of this makes me weird in the self-hosted community.
About a month ago, I put in a beefy system for virtualization with the intent of branching out my self-hosting. I primarily considered Proxmox and xcp-ng. I went with xcp-ng, mostly because it seems to have more enterprise features. I’m early enough in my exploration that switching isn’t a problem.
For those of you further along with a home-lab hypervisor, what did you go with and why? Right now, I’m pretty agnostic. I’m comfortable with xcp-ng but have no problem switching. I’m especially interested in strongly negative views of one or the other, so long as you explain why.
TL;DR: old guy wants logs and more security in docker settings. Doesn’t want to deal with the modern world.
I’m on the sh.itjust.works lemmy instance. I don’t know how to reference another community thread so that it works for everyone, so my apologies for pointing at sh.itjust.works, but my thoughts here are inspired by https://sh.itjust.works/post/54990 and my attempts to set up a Lemmy server.
I’m old school. I’m in my mid-50’s. I was in academia as a student and then an employee from the mid-80’s through most of the 90’s. I’ve been in IT in the private sector since the late 90’s.
That means I was actively using irc and Usenet before http existed. I’ve managed publicly facing mail and web servers in my job since the 90’s. I’ve run personal mail and web servers since the early 00’s. I even had a static HTML page that was the number one Google hit for an obscure financial search term for much of the 2000’s. The referer IPs and search terms could probably have been mined for data.
On the work side, I’ve seen multiple email account compromises. (I’d note zero when it was on-premise Lotus Notes. All of the compromises came after moving to O365. Those stopped for years once we moved to MFA, but this year we’ve seen two where the bad actors were able to MitM the MFA. That said, I don’t regret no longer supporting an on-prem Domino server: https://m.youtube.com/watch?v=Bk1dbsBWQ3k )
I’ve also seen a sophisticated vendor typo-squatting email, combined with an internal email compromise, cost us significant cash.
Other than email compromise, I’m not aware of any other intrusions. (There are two kinds of companies: those that know they’ve been hacked and those that don’t). I am friends with some IT people in a company where they were ransomwared. I still believe they have a tighter security stack than we do.
I’m paranoid about security because, like Farmers, I’ve seen a thing or two. At work, we keep logs for a year, dumped into a SIEM designed to make it unlikely bad actors can get into it even if they take over AD or VMware. My home logging is less secure but still extensive. The idea is that even if I’m hit, I hope I have the logs to help me understand how, and how extensively.
I still have public websites at home, but they don’t contain any content that matters. The only traffic they see is attack attempts and indexers that will index them and then shove them down into oblivion. I’m fine with that.
I still run a mail server at home. It’s mostly used so all my unique email addresses (sh.itjust.works@foo.com) can get forwarded to my personal O365 instance. If I need to reply using a unique address, I use alpine in an ssh session.
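For the curious, the forwarding side is nothing exotic. A minimal sketch, assuming a Postfix setup (my actual config differs, and the domain and destination addresses here are placeholders):

```
# /etc/postfix/virtual -- one unique address per site, all landing in one mailbox
sh.itjust.works@foo.com   realme@example.com
somevendor@foo.com        realme@example.com

# main.cf needs:  virtual_alias_maps = hash:/etc/postfix/virtual
# then run:       postmap /etc/postfix/virtual && postfix reload
```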
Long prologue to explain my experience playing with a Lemmy instance this weekend. I’ve got an xcp-ng instance in the home lab and used it to get a Lemmy docker instance running. It’s not yet exposed to the outside world.
I’m new to docker. I’m new to Lemmy. I’m new to Nginx. (See the “old school” in the title.) At work and at home, I deal with Apache. I’ve got custom mod_rewrite rules and mod_security in place to deal with many attacks. I’m comfortable making the tweaks on both for websites that break because of some rule.
I’ve tried putting an Apache proxy in front of my xcp-ng Lemmy instance, but it won’t work because Lemmy assumes an initial contact via HTTP/1.1 with a 101 status code to push to HTTP/2.0. Apache can proxy either but not both. And Lemmy isn’t happy if the initial connection is HTTP/2.0.
I’m also uncomfortable with my lack of knowledge regarding Nginx. I don’t know how to recreate my mod_rewrite rules, and I don’t think there’s an equivalent to mod_security.
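To illustrate the translation work I’m facing, here’s a hypothetical Apache rule of the kind I run and my rough, untested guess at an Nginx equivalent:

```
# Apache: forbid /admin unless the client is on the local net
#   RewriteCond %{REMOTE_ADDR} !^10\.0\.0\.
#   RewriteRule ^/admin - [F]

# Nginx: the same intent, expressed as a location block
location /admin {
    allow 10.0.0.0/24;
    deny  all;
}
```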
Worse, I don’t see an easy way to retain docker logs. Yes, I can likely use volumes in a docker-compose.yml to retain them, but it’s far from clear what path that would be.
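For what it’s worth, the compose-level sketch I’ve pieced together so far suggests the logging driver options, rather than a volume, are the knob (service and image names here are placeholders for whatever the compose file actually uses):

```
services:
  lemmy:
    image: dessalines/lemmy        # placeholder image name
    logging:
      driver: json-file            # the default; files end up under
                                   # /var/lib/docker/containers/<id>/<id>-json.log
      options:
        max-size: "10m"            # rotate each log file at 10 MB
        max-file: "10"             # keep ten rotated files per container
    # Alternatively, driver: syslog with a syslog-address option can ship
    # logs to a central box, which fits my keep-logs-somewhere-safer instinct.
```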
I know all of these are solvable concerns with some effort, but I suspect few put in that effort.
How do all of you who run containers in a home lab sleep at night knowing all that log data is ephemeral unless you take special effort? How do you sleep knowing the sample configs you are using in containers have little security built in?
It’s not even June 12 for me, yet I suspect many subreddits went dark based on UTC.
I moved to Reddit during the Digg migration. Thus, I got the default subscriptions from back in the day. Over the years, I’ve unsubscribed from things I felt were crap, and I’ve added a number of subreddits.
Already, many have gone dark. My old.reddit.com homepage looks much different than normal, and I know that a few subreddits that do still show have announced they’ll go dark. I assume they are US-based and timing it locally.
I’ve spent more time in the Lemmy fediverse than on Reddit since joining, but I’ve spent time on both.
I’ll admit to cynical skepticism of the impact of the darkening. I still don’t think it will make a difference in Reddit policy, but I now believe it will have a larger impact on Reddit traffic than I imagined.
I still expect it to produce no change in Reddit’s attitude, or really in Reddit users.
I signed up and am currently logged in via an iPad. I wanted to browse and post on a computer. I’ve tried multiple browsers and incognito modes. With all of them, when signing into sh.itjust.works, I get nothing but the spinning button after clicking login.
I’m not sure if it’s some capacity issue, or if Lemmy doesn’t allow the same user logged in via multiple browsers.
I’m a bit scared to logout and see if that’s the case.
Anyone have any insight?
So, in thinking about how bad actors might manipulate Lemmy, I have some questions.
In this scenario, I’m an entity that wants to influence social media: a government, a corporation, a collection of dedicated degenerates; pick your boojum. I see this growing Lemmy thing. I figure it’s not a serious threat, but in case I’m wrong, I’d like to be positioned to influence things via what people see. I want to be able to upvote or downvote posts.
If I’ve got a decent budget, I’d spin up a bunch of new Lemmy instances and encourage signups when there’s this mad rush from Reddit. I’d want as many real users as I can get. I’d also create a bunch of sock puppet accounts on all of my instances. I’d probably have some of them post and comment.
If Lemmy attains critical mass, I’d be able to use those sock puppets to upvote/downvote posts I want to influence.
I (now the OP, not the hypothetical bad actor) imagine this is hard to defend against. I also imagine federation is all or nothing. That is, either you federate everything from a server or you federate nothing.
Are there granular federation options, like allowing post federation but ignoring upvote/downvote federation?
Yes, I’m certain I could find answers to all these questions via research, but I’m coming here as part of the Reddit diaspora. My guess is there’s a benefit to others like me in having this discussion.
I can vaguely understand the federation concept: the idea that my account is hosted at an individual Lemmy server and that other servers trust that one to validate my account. What’s the network flow like? I’m posting this to the asklemmy community on lemmy.ml, but I’m composing it on the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?
Assuming there’s a mass influx of redditors, what does it look like as things fail? I’m assuming some servers can keep up under the load and some can’t. If sh.itjust.works goes down under the load, can I still browse other servers? Or, do those servers think I should have some token from sh.itjust.works, because my cookies say I’m still logged in, and I can’t even do that?
Are there easy mechanisms to allow me to grab my post history?
I’m assuming most (all?) Lemmy servers are hosted in home labs. The idea of Lemmy excites me, but the growth pain that could be coming scares me. Anybody using a CDN in front of their servers? That could be good, but with unconstrained growth, it could be costly, which is very bad.
I can imagine lots of different worst-case scenarios, but I’m curious what those of you who run servers imagine as the best case. Manageable growth that just builds more vibrant communities, even if it can’t ever lead to the breadth and variety of Reddit?
Also, for those running servers, have any of you experienced issues during this growth? What scares you?