Fault in CrowdStrike caused airports, businesses and healthcare services to languish in ‘largest outage in history’
Services began to come back online on Friday evening after an IT failure that wreaked havoc worldwide. But full recovery could take weeks, experts have said, after airports, healthcare services and businesses were hit by the “largest outage in history”.
Flights and hospital appointments were cancelled, payroll systems seized up and TV channels went off air after a botched software upgrade hit Microsoft’s Windows operating system.
It came from the US cybersecurity company CrowdStrike, and left workers facing a “blue screen of death” as their computers failed to start. Experts said every affected PC may have to be fixed manually, but as of Friday night some services started to recover.
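The widely reported per-machine fix was to boot each affected PC into Safe Mode or the Windows recovery environment and delete the faulty CrowdStrike channel file. The directory and the `C-00000291*.sys` file pattern below match CrowdStrike's published workaround, but the helper function itself is only an illustrative sketch, not a vendor tool:

```python
import glob
import os

def remove_bad_channel_files(driver_dir):
    """Delete faulty Falcon channel file(s) matching C-00000291*.sys.

    CrowdStrike's published workaround: boot into Safe Mode or WinRE,
    then remove these files from C:\\Windows\\System32\\drivers\\CrowdStrike.
    """
    removed = []
    for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
        os.remove(path)
        removed.append(path)
    return removed
```

The catch, and the reason recovery took so long, is that a machine stuck in a boot loop cannot run this itself: someone has to stand at each box.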
As recovery continues, experts say the outage underscored concerns that many organizations are not well prepared to implement contingency plans when a single point of failure such as an IT system, or a piece of software within it, goes down. But these outages will happen again, experts say, until more contingencies are built into networks and organizations introduce better back-ups.
I mean, Microsoft themselves regularly shit the bed with updates, even with Defender updates. It's the nature of security: the software has to have that kind of access to stop legitimate malware. That's why these kinds of outages happen every few years. This one just got so much coverage because of the banking and airline issues. And I'm sure future outages will continue to get similar coverage.
But the CrowdStrike CEO was also at McAfee in 2010 when they shit the bed and shut down millions of XP machines, so it seems like he needs a different career...
The problem is the monoculture. We are fucking addicted to convenience and efficiency at all costs.
A diverse ecosystem, if a bit more work to manage, is much more resilient, and wouldn't have been this catastrophe.
Our technology is great, but our processes suck. Standardization. Just in time. These ideas create incredibly fragile organizations. Humanity is so short sighted. We are screwed.
I’m not sure you can blame the CEO. As much as I despise C-level execs this seems like a failure at a much lower level. Now the question of whether this is a culture failure is a different story because to me that DOES come from the CEO or at least that level.
This happened to me in December 2022/January 2023. Pretty similar problem. Just a regular Windows update caused it. Weirdly it didn't affect everyone (and I'm not on any sort of beta channels). Installing KB5021233 keeps causing BSOD 0xc000021a.
After installing KB5021233, there might be a mismatch between the file versions of hidparse.sys in C:\Windows\System32 and C:\Windows\System32\drivers (assuming Windows is installed to your C: drive), which might cause signature validation to fail when cleanup occurs.
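That mismatch is easy to check for: if the two copies of hidparse.sys differ byte-for-byte, the machine is in the state that triggers the failure. A minimal sketch, assuming you can read both paths (the helper function is hypothetical; the two locations are the ones named above):

```python
import hashlib
import pathlib

def driver_copies_match(path_a, path_b):
    """Return True if the two driver files are byte-identical.

    A version mismatch between System32\\hidparse.sys and
    System32\\drivers\\hidparse.sys was the reported trigger for
    the 0xc000021a blue screen after KB5021233.
    """
    def digest(p):
        return hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
    return digest(path_a) == digest(path_b)
```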
How difficult would it be for companies to have staged releases or oversee upgrades themselves? I mostly use Linux, but upgrading is a relatively painless process, and logging into remote machines to trigger an update is no harder. Why is this something an independent party should be able to do without end-user discretion?
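Staged (ring-based) releases are the standard mitigation the commenter is describing: push the update to a small canary ring first, verify the fleet is still healthy, then widen. A minimal sketch, where the function names and ring fractions are illustrative rather than any vendor's actual mechanism:

```python
import math

def staged_rollout(hosts, deploy, health_check, rings=(0.01, 0.10, 1.0)):
    """Push an update ring by ring, halting if any ring looks unhealthy.

    deploy(host) applies the update to one machine; health_check(hosts)
    returns True if the already-updated hosts are still working.
    """
    done = 0
    for fraction in rings:
        ring_end = math.ceil(len(hosts) * fraction)
        for host in hosts[done:ring_end]:
            deploy(host)
        done = max(done, ring_end)
        if not health_check(hosts[:done]):
            return done, False  # halt: only `done` hosts got the update
    return done, True
```

With even a 1% canary ring and a simple boot-health check, a faulty update would stop after a handful of machines instead of reaching the whole fleet.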
Crowdstrike was near ubiquitous because it was the best tool out there.
I understand the reason for it, but that ubiquity comes with potential dangers, as we saw on Friday. But, no, I don't think the solution is "five different cyber security solutions" at every site. However, different cyber security solutions for different industries might not be such a bad idea. Or, I suppose the root of the problem might be the ubiquity of the OS. Should every PC be running the same jack-of-all-trades, master-of-none OS?
It’s not the best tool out there. It’s the laziest one that works. It’s perfectly possible to securely operate without a rootkit hacked into your kernel.
Modern approaches involve running an eBPF module on rootless, immutable images that are scanned at build time. My org is PCI, SOC 2, and HITRUST compliant, and we didn't go down, because we would never take such a lax approach as handing off responsibility for security to a third party. The trade-off is that your heads of compliance and security need to actually learn things and work hard to push alternatives with auditors and consultants, and most companies instead put an MBA who can't critically think their way out of an empty room at the helm.
I'm actually pretty excited to go to work on Monday.
We have spent the past few years hardening our security and simplifying our critical systems. One way of doing that was to move as much off Microsoft as possible.
And since I've been on vacation for the past week, I'm either going to walk into a nightmare shit show or everyone is going to be cheering that we are fully operational since we don't depend on Microsoft.
Following up, our partner/affiliate sites were down. Each partner connects to us to submit data, and half were government contracts that were down. It didn't affect our systems, but it affected how we provide services to them.
This is why "they are the biggest" isn't a good reason to pick a vendor. If all these companies had been using different providers, or even different OSes, it wouldn't have hit so many systems simultaneously. This is a result of too much consolidation at all levels, and one issue with the Microsoft OS monopoly.
The issue, in this case, is more about Crowdstrike's broad usage than Microsoft's. The update that crippled everything was to the Crowdstrike Falcon Sensor software, not to the OS.
Funnily enough, they had a similar issue with an update to the Linux version of the software a few months ago that didn't have these broad-reaching consequences, largely due to the smaller Linux user base. This is starting to look like a pattern, and there will need to be some serious process changes at CrowdStrike to prevent things like this in the future.