As someone who dealt with this crap, I'll tell you that this line is a load of BS: "We quickly identified the issue and deployed a fix, allowing us to focus diligently on restoring customer systems as our highest priority."
CrowdStrike pulled the problematic "C-00000291-00000000-00000029.sys" channel file and released a new one that wouldn't crash more systems, but because of the state the bad file had already put machines in, that did nothing to restore customer systems. Recovery had to be done manually in the vast majority of cases.
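For reference, the per-machine fix basically came down to booting into Safe Mode or the recovery environment, deleting that channel file from the CrowdStrike drivers folder, and rebooting. Here's a rough sketch of that step in Python just to show what it amounted to; in reality people did this from a recovery command prompt, and the path assumes a default Windows install on C:.

```python
# Illustrative sketch only: the manual per-machine remediation boiled down to
# removing the bad channel file from the CrowdStrike drivers directory while
# booted into Safe Mode / WinRE, then rebooting normally.
from pathlib import Path

CS_DRIVERS_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_bad_channel_file(drivers_dir: Path = CS_DRIVERS_DIR) -> list[str]:
    """Delete any C-00000291*.sys channel files and report what was removed."""
    removed = []
    for f in drivers_dir.glob("C-00000291*.sys"):
        f.unlink()  # needs admin rights; machine must be out of the boot loop
        removed.append(f.name)
    return removed

if __name__ == "__main__":
    print(remove_bad_channel_file())  # reboot after this
```

Trivial on one laptop. Now do it on thousands of servers and endpoints, many of them with BitLocker recovery keys you have to look up first.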
What they don't tell you is how awful that was for larger organizations that use checkout systems for privileged accounts, which were also affected. Oh, and let me tell you how awesome it is when a local administrator account gets out of sync with that system. (Spoiler: it's a pain in the ***.) If you have a Citrix-like environment, you also had to repair whatever systems the organization depends on for the tools and bookmarks used to navigate your infrastructure. You needed to know the URLs for vCenters and the addresses for things like HP OneView, iLO access, and iDRACs.
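If nothing else, the lesson is to keep an offline break-glass list of those management endpoints somewhere that doesn't depend on the environment you're trying to fix. A tiny sketch of what I mean (all hostnames here are made up):

```python
# Hypothetical example of an offline "break-glass" inventory of management
# endpoints, kept outside the Citrix/tools environment that just went down.
BREAK_GLASS_ENDPOINTS = {
    "vcenter":    ["https://vcenter01.example.internal"],
    "hp_oneview": ["https://oneview.example.internal"],
    "ilo":        ["https://ilo-esx01.example.internal"],
    "idrac":      ["https://idrac-db01.example.internal"],
}

def dump_plaintext(path: str = "break_glass.txt") -> None:
    """Write the list to a plain file you can stash on a USB stick or printout."""
    with open(path, "w") as f:
        for kind, urls in BREAK_GLASS_ENDPOINTS.items():
            for url in urls:
                f.write(f"{kind}\t{url}\n")

if __name__ == "__main__":
    dump_plaintext()
```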
People who didn't have to deal with this **** have no idea how bad it really was. The fact that it happened on a Friday lessened the impact, since it gave companies the weekend to recover their systems. Larger organizations often have redundant datacenters in multiple locations. CrowdStrike created a "worst case" scenario by taking out every site at once, rendering that kind of redundant datacenter design irrelevant.