Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage.
It’s the first time a figure has been put on the incident, and it suggests it could be the worst cyber event in history.
The glitch came from a cyber-security company called CrowdStrike, which sent out a corrupted software update to its huge number of customers.
Microsoft, which is helping customers recover, said in a blog post: "We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices."
All I know is that I had to personally fix 450 servers myself, and that doesn't include the workstations that are probably still broken and will need to be fixed on Monday.
Thankfully I had cached credentials and our servers aren't BitLockered. The majority of the servers had iLO consoles, but not all. Most of the servers are on virtual hosts, so once I got the failover cluster back, it wasn't that hard just working my way through them. But the hardware servers without iLO required physically plugging in a monitor and keyboard to fix, which is time-consuming. Ten of them took a couple of hours.
I worked 11+ hours straight. No breaks or lunch. That got our production domain up and the backup system back on. The dev and test domains are probably half working. My boss was responsible for those and he's not very efficient.
So for the most part I was able to do the work from my admin PC in my office.
For the majority of them, I'd use the Windows recovery menu that they were stuck at to make them boot into safe mode with network support (in case my cached credentials weren't up to date). Then I'd start a cmd prompt and type out that famous command:
Del c:\windows\system32\drivers\crowdstrike\c-00000291*.sys
I'd auto-complete the folders with Tab and the five zeros... Probably gonna have that file name in my memory forever.
Edit: one painful self-inflicted problem is that my password is a 25-character random LastPass-generated password. But somehow I managed it and never typed it wrong. Yay for small wins.
For instance, people’s flights were cancelled because of this, forcing them to stay in hotels overnight. I’m sure there are many other examples.
For businesses, a lot of them are hiring IT companies (consultants, MSPs, VARs, and whoever the hell else they can get) at a couple to a few hundred bucks an hour per person to get boots on the ground to fix it. Some of them have everyone below the C levels with any sort of technical background doing entry level work so there's also lost opportunity cost.
I was in that industry for a long time and still have a lot of colleagues there. There's a guy I know making almost $200k/yr out there at desks trying to help fix it. He moved into an SRE role years ago so that's languishing this week while he's going desk to desk and office to office with support staff and IT contractors.
At least two large companies have an API where they're paying for a pile of compute that's currently seeing only a small fraction of its usual use. Companies are paying to use those APIs but can't.
I don't know if there's a good way to actually figure out how much this is costing because there are so many variables. But you can bet there are a few people at the top funneling that money directly to themselves, never to be seen again.
For some of these systems, like medical equipment that should be as secure as possible, I don't understand why they are not running OpenBSD... And more broadly, most of the world depending on one OS and its ecosystem is just a path to disasters (this one, WannaCry, spying by three-letter agencies...).
There are a lot of misunderstandings about what happened.
First, the ‘update’ was to a data file used by the CrowdStrike kernel components (specifically ‘Falcon’). While this file has a ‘.sys’ name, it is not a driver; it provides threat-definition data. It is read by the Falcon driver(s), not loaded as an executable.
Microsoft doesn’t update this file; CrowdStrike’s user-mode services do that, and they do it very frequently as part of their real-time threat detection and mitigation.
The updates are essential. There is no opportunity for IT to manage or test these updates other than blocking them via external firewalls.
The Falcon kernel components apparently do not protect against a corrupted data file, or the corruption in this case evaded that protection. This is such an obvious vulnerability that I am leaning toward a deliberate manipulation of the data file to exploit a discovered flaw in their handling of malformed input. I have no evidence for that, other than that resilience against malformed data input is very basic software engineering and CrowdStrike is a very sophisticated system.
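To illustrate the resilience point, here is a toy sketch of defensive parsing for a length-prefixed definition file. The real Falcon channel-file format is undocumented and the parsing happens in closed-source kernel code, so every name and format detail below is hypothetical; the point is only that a parser should reject malformed input rather than assume it is well-formed:

```python
import struct

MAGIC = b"DEFS"  # hypothetical magic bytes identifying a definition file


def load_definitions(data: bytes) -> list[bytes]:
    """Parse a toy definition file: 4-byte magic, 4-byte little-endian
    record count, then length-prefixed records. Every field is checked
    against the actual file size before it is trusted."""
    if len(data) < 8 or data[:4] != MAGIC:
        raise ValueError("bad header")
    (count,) = struct.unpack_from("<I", data, 4)
    records, offset = [], 8
    for _ in range(count):
        if offset + 4 > len(data):
            raise ValueError("truncated record header")
        (length,) = struct.unpack_from("<I", data, offset)
        offset += 4
        if offset + length > len(data):
            raise ValueError("record overruns file")
        records.append(data[offset:offset + length])
        offset += length
    return records
```

A kernel-mode consumer would additionally need to fail open on a bad file (skip it and keep the machine running) rather than crash, since a crash in the kernel takes the whole system down.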
I’m more interested in how the file got corrupted before distribution.
The post by David Weston, vice-president, enterprise and OS at the firm, says this number is less than 1% of all Windows machines worldwide, but that "the broad economic and societal impacts reflect the use of CrowdStrike by enterprises that run many critical services".
The company can be quite accurate about how many devices were disabled by the outage, as it collects performance telemetry from many of them over their internet connections.
The tech giant - which was keen to point out that this was not an issue with its software - says the incident highlights how important it is for companies such as CrowdStrike to use quality-control checks on updates before sending them out.
"It’s also a reminder of how important it is for all of us across the tech ecosystem to prioritize operating with safe deployment and disaster recovery using the mechanisms that exist," Mr Weston said.
The fallout from the IT glitch has been enormous and was already one of the worst cyber incidents in history. The number given by Microsoft means it is probably the largest ever cyber event, eclipsing all previous hacks and outages. The closest to this is the WannaCry cyber-attack in 2017, which is estimated to have impacted around 300,000 computers in 150 countries.
There was a similar costly and disruptive attack called NotPetya a month later. There was also a major six-hour outage in 2021 at Meta, which runs Instagram, Facebook and WhatsApp.
But that was largely contained to the social media giant and some linked partners.
The massive outage has also prompted warnings by cyber-security experts and agencies around the world about a wave of opportunistic hacking attempts linked to the IT outage. Cyber agencies in the UK and Australia are warning people to be vigilant to fake emails, calls and websites that pretend to be official. And CrowdStrike head George Kurtz encouraged users to make sure they were speaking to official representatives from the company before downloading fixes.
"We know that adversaries and bad actors will try to exploit events like this," he said in a blog post.
Whenever there is a major news event, especially one linked to technology, hackers respond by tweaking their existing methods to take the fear and uncertainty into account. According to researchers at Secureworks, there has already been a sharp rise in CrowdStrike-themed domain registrations - hackers registering new websites made to look official, to potentially trick IT managers or members of the public into downloading malicious software or handing over private details.
Cyber-security agencies around the world have urged IT responders to use only CrowdStrike's website to source information and help. The advice is mainly for IT managers, who are the ones affected by this as they try to get their organisations back online. But individuals might be targeted too, so experts are warning people to be hyper-vigilant and to act only on information from official CrowdStrike channels.
The bug seems to have only affected certain Linux kernels and versions. Of course no one cared, because it didn't simultaneously take out hospital systems and airline systems worldwide to an extent that you'd only think you'd see in movies.
Linux has comparative advantages in being so diverse. Since there are so many different update channels, it would be hard to pull off such a large outage, intentionally or unintentionally. Yet, if we imagine a totally equivalent scenario of a CrowdStrike update causing kernel panics in most Linux distributions, this is what could be done:
Ubuntu, Red Hat, and other organizations that make money from supporting and ensuring the reliability of their customers' systems would be on the case to find a working configuration as soon as they found out it's not an isolated incident or user error.
If one of them found a solution, it would likely be quickly shared with other organizations and adapted.
The error logs, and the inner workings of the kernel and where it fails, are clearly available to admins, customer-support personnel and tech nerds, so they aren't fully at the mercy of the maintainers of the proprietary blobs (both Microsoft and CrowdStrike for Windows, but only CrowdStrike for Linux) to determine the cause and the potential solutions available.
The Linux internet-facing component updates can be rolled back and inspected/installed separately from the CrowdStrike updates. The buggy Microsoft Azure update and the CrowdStrike update landing on the same day muddied the waters as to what exactly went wrong in the first several hours of the outage.
There's more flexibility to adjust the behaviour of the kernel itself, even in a scenario where CrowdStrike was dragging its feet. Emergency kernel patches could simply ignore panics caused by the identified faulty configuration files, at least as a potential temporary fix.