CrowdStrike Explains What Led to Last Week’s Global Tech Outage

Tsing

The FPS Review
Staff member
CrowdStrike, the American cybersecurity company now infamous for causing last week's global tech outage, which delayed flights and disrupted services around the world, has published an update on its website explaining what caused all of those Windows blue screens and what the company will do going forward (e.g., stability testing, additional validation checks) to prevent such an incident from happening again.

See full article...
 
No matter how much you try, **** will happen.

How you respond is more important than never making a mistake.

But it's not like this is the wisdom of the ages or anything everyone doesn't already know. :)

I'm not connected to the IT world well enough to know how or what happened, other than that a corrupted file was involved, or what CrowdStrike's response was. Anyone up for edumacating me?

Thoughts and opinions are good too. :)
 
I don't even care... you know what caused this. Forced NON-TIERED UPDATES.

Don't be the next NORTON security, CrowdStrike.
 
As someone who manages thousands of VMs, I can say we don't use a single product with forced updates. Every update provided to us goes through our own QC processes in a non-production environment.

We've passed on a lot of good software due to forced updates. Any company or IT department that uses products like that is asking for problems. As we've seen here.
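To make that concrete, here's a minimal, purely hypothetical sketch of the kind of gate I mean, in Python: a vendor update only becomes eligible for production after it has passed our own QC in a non-production environment. The names (UpdateCandidate, mark_validated_in_nonprod, etc.) are illustrative, not any vendor's API.

```python
# Hypothetical promotion gate: vendor updates are never pushed straight to prod;
# they must first pass internal QC in a non-production environment.
from dataclasses import dataclass


@dataclass(frozen=True)
class UpdateCandidate:
    product: str
    version: str


# Updates that have passed our QC suite in non-prod.
validated_in_nonprod: set[UpdateCandidate] = set()


def mark_validated_in_nonprod(update: UpdateCandidate) -> None:
    """Record that an update passed internal QC in the non-production environment."""
    validated_in_nonprod.add(update)


def can_deploy_to_production(update: UpdateCandidate) -> bool:
    """Only builds we validated ourselves are allowed into production."""
    return update in validated_in_nonprod


if __name__ == "__main__":
    candidate = UpdateCandidate(product="endpoint-agent", version="7.16.1")
    print(can_deploy_to_production(candidate))   # False: not yet validated
    mark_validated_in_nonprod(candidate)         # passes our non-prod QC
    print(can_deploy_to_production(candidate))   # True: cleared for prod
```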
 
THIS... SOO MANY TIMES THIS!!! Aggressive updating will lead to problems. It might prevent a few... but is the price worth it? There are SO MANY other ways to secure endpoints besides a practice that could destroy them!
 
As someone who dealt with this crap, I'll tell you that this line is a load of BS: "We quickly identified the issue and deployed a fix, allowing us to focus diligently on restoring customer systems as our highest priority."

CrowdStrike pulled the initial problematic "C-00000291-00000000-00000029.sys" file and released a new one that wouldn't create more system outages, but given the state it had put systems in, that did nothing to restore customer systems that had already crashed. Recovery had to be done manually in the vast majority of cases.
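For anyone who didn't have to do it: the published workaround was essentially a per-machine cleanup, booting into Safe Mode or the recovery environment, deleting the faulty channel file from the CrowdStrike driver directory, and rebooting. Here's a rough sketch of that step in Python, for illustration only; in practice it was done by hand or with boot media, not with a script you could push to a machine that wouldn't boot.

```python
# Illustrative sketch of the per-machine cleanup, following the publicly
# documented workaround: delete the faulty "C-00000291*.sys" channel file(s)
# from the CrowdStrike driver directory, then reboot normally.
import glob
import os

CROWDSTRIKE_DIR = os.path.expandvars(r"%WINDIR%\System32\drivers\CrowdStrike")
BAD_CHANNEL_PATTERN = "C-00000291*.sys"  # the faulty channel file family


def remove_bad_channel_files(directory: str = CROWDSTRIKE_DIR) -> list[str]:
    """Delete any matching faulty channel files and return the paths removed."""
    removed = []
    for path in glob.glob(os.path.join(directory, BAD_CHANNEL_PATTERN)):
        os.remove(path)
        removed.append(path)
    return removed


if __name__ == "__main__":
    deleted = remove_bad_channel_files()
    print(f"Removed {len(deleted)} channel file(s):")
    for path in deleted:
        print(" ", path)
```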

What they don't tell you is how awful that was for larger organizations that use checkout systems for privileged accounts, which were also affected. Oh, and let me tell you how awesome it is when a local administrator account gets out of sync with that system. (Spoiler: it's a pain in the ***.) If you have a Citrix-like environment, you had to repair whatever systems the organization depends on for tools and bookmarks for navigating your infrastructure. You need to know the URLs for vCenters and the addresses for things like HP OneView, iLO access, and iDRACs.

People who didn't have to deal with this **** have no idea how bad it really was. This taking place on a Friday lessened the impact, as it gave companies time to recover their systems over the weekend. Larger organizations often have redundant datacenters in multiple locations. CrowdStrike created a "worst case" scenario by rendering that type of redundant datacenter design irrelevant, since every site got the same bad update at the same time.
 
CrowdStrike needs to:

1. Book out cruise ships at the nearest port for every large customer base they have.
2. Fly said customers out for cruises where they will be shown how this issue happened and what steps will be taken to make sure it never happens again.
3. Rotate through their entire customer base one week at a time.

Useless 10 dollar gift cards are a f u to the people having to solve this and to the billions in damages this caused and is still causing.
 