CrowdStrike Update Crisis: Impact, Causes, and Prevention
CrowdStrike's Falcon platform is a leading endpoint detection and response (EDR) solution, deployed on a wide range of systems, including point-of-sale terminals and ATMs.
What’s the Hype about?
On July 19, 2024, CrowdStrike rolled out a faulty sensor configuration update for Windows systems, causing widespread system crashes and the notorious “blue screen of death” (BSOD). This issue affected Windows hosts with Falcon sensor versions 7.15 and 7.16, while Mac and Linux systems remained unaffected. Although this is not a security breach or cyber-attack, it is a significant complication arising from a routine software update.
The incident is expected to be one of the largest ‘cyber’ events ever in terms of its impact. A diverse range of sectors, including airlines, financial institutions, food and retail chains, hospitals, hotels, news organizations, railway networks, and telecom companies, have been hit. Consequently, CrowdStrike’s shares dropped by 15% in U.S. premarket trading.
What’s the technicality behind it?
The Falcon sensor is configured through content updates called channel files. Channel files are stored in C:\Windows\System32\drivers\CrowdStrike\ and are named with a “C-” prefix followed by a unique identifier number. The issue affects Channel File 291, which controls how the sensor evaluates named pipe execution on Windows systems. The faulty update introduced a logic error, causing operating system crashes.
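As a small illustration of the naming convention (the exact suffix format after the prefix is an assumption; only the C-00000291 prefix and the *.sys wildcard come from the advisory), the channel number maps onto a zero-padded file-name prefix, which is why the affected files are matched as C-00000291*.sys:

#include <stdio.h>

/* Illustrative only: shows how a channel number such as 291 maps onto the
 * zero-padded "C-00000291" file-name prefix mentioned above. The characters
 * after the prefix vary per build, hence the wildcard C-00000291*.sys. */
int main(void) {
    int channel = 291;
    char prefix[16];

    snprintf(prefix, sizeof prefix, "C-%08d", channel);
    printf("Channel File %d -> files matching %s*.sys\n", channel, prefix);
    return 0;
}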
What to Do to Understand the Problem
Why Reverse Engineering?
- Identify the root cause: pinpoint the exact code or configuration change that introduced the error.
- Develop a patch or workaround to restore functionality.
- Understand how the sensor evaluates named pipe execution.
What Reverse Engineering Revealed
- The problem likely starts with the C-00000291*.sys channel files and how they are processed by the csagent.sys driver.
- The system crash was caused by a null pointer dereference in the csagent.sys driver: the driver tried to read memory through an invalid (null) pointer, and in kernel mode such an access is fatal (see the sketch after this list).
- Deleting the files matching C-00000291*.sys prevents the crashes; these channel files are the suspected culprits.
- The Falcon sensor runs as a kernel-mode driver, and the kernel is the core of the operating system that manages drivers; an unhandled fault there brings down the whole system rather than a single application.
- Numerous null bytes (runs of empty data) in the file suggest it may have been corrupted during packaging, writing to disk, or final processing.
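To make that failure mode concrete, here is a minimal user-mode C sketch. It is not CrowdStrike's code; the structure, field name, and file layout are invented for illustration. It shows how a parser that trusts an offset read from a content file can end up following a null pointer when the file arrives full of zero bytes, and how a simple validation check rejects the input instead of crashing.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Purely illustrative sketch -- NOT CrowdStrike's actual code.
 * Models how a parser that trusts an offset read from a data file can end up
 * dereferencing an invalid/null pointer when the file is full of zero bytes. */

typedef struct {
    uint32_t entry_offset;   /* offset to a rule entry inside the file (invented field) */
} channel_header;

const char *lookup_rule(const uint8_t *file, size_t file_size) {
    const channel_header *hdr = (const channel_header *)file;

    /* A corrupted file full of null bytes yields entry_offset == 0.
     * Without this check, the code below would read through an address it
     * never validated -- in kernel mode that is a fatal page fault (the BSOD). */
    if (hdr->entry_offset == 0 || hdr->entry_offset >= file_size) {
        return NULL;                       /* reject malformed input */
    }
    return (const char *)(file + hdr->entry_offset);
}

int main(void) {
    uint8_t good[64]   = {0};
    uint8_t broken[64] = {0};              /* all null bytes, like the suspect channel file */

    ((channel_header *)good)->entry_offset = 8;
    strcpy((char *)good + 8, "named-pipe rule");

    printf("good file   -> %s\n", lookup_rule(good, sizeof good));
    printf("broken file -> %s\n",
           lookup_rule(broken, sizeof broken) ? "ok" : "rejected (would have crashed)");
    return 0;
}

In kernel mode there is no such safety net: an equivalent unchecked read inside csagent.sys faults at an invalid address, and Windows halts the machine with a BSOD rather than risk further corruption.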
Temporary Steps to Restore Functionality
Workaround Steps for Individual Hosts:
1. Reboot the Host
- Reboot the computer to download the fixed channel file.
- Use a wired (Ethernet) connection instead of Wi-Fi; it is typically faster and more reliable for downloading the fix.
2. If the Host Crashes Again
- Boot into Safe Mode or Windows Recovery Environment. (Using a wired network and Safe Mode with Networking can help).
3. Delete the Problematic File
- Go to the CrowdStrike directory: Default path = C:\Windows\System32\drivers\CrowdStrike
- In the Windows Recovery Environment, the command prompt starts on its own X: drive (X:\windows\system32), so switch to the OS volume first.
- Ensure you are on the correct drive (usually C:): type C: and press Enter, then type cd \Windows\System32\drivers\CrowdStrike and press Enter.
- Find and delete any file matching C-00000291*.sys (for example, with del C-00000291*.sys).
- Do not delete or change any other files or folders.
4. Cold Boot the Host
- Shut down the computer completely, then turn it back on from the powered-off state.
Workaround Steps for Public Cloud or Virtual Environments:
- Detach the OS Disk Volume: Disconnect the operating system disk from the affected virtual server.
- Create a Backup: Make a snapshot or backup of the disk volume to protect against any unintended changes.
- Attach to a New Virtual Server: Connect the disk volume to a new virtual server.
- Delete the Problematic File (a scripted sketch of this step appears after this list)
- Go to the CrowdStrike directory on the attached volume:
- Path: the equivalent of %WINDIR%\System32\drivers\CrowdStrike, under the drive letter assigned to the attached disk (on the rescue server, %WINDIR% itself points at that server's own Windows installation)
- Find and delete any file matching C-00000291*.sys.
- Disconnect the disk volume from the new virtual server.
- Reconnect the fixed disk volume to the original impacted virtual server.
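Where many detached volumes need the same fix, the deletion step can be scripted. Below is a hedged sketch, not an official CrowdStrike utility: a small Win32 C program that assumes the impacted OS disk is attached to a healthy Windows server under a hypothetical drive letter E: and that it runs with administrator rights. TARGET_DIR is an assumption to adjust for your environment, and the volume should be snapshotted first, as described above.

#include <windows.h>
#include <stdio.h>

/* Hypothetical cleanup helper -- a sketch only, not an official CrowdStrike tool.
 * Assumes the impacted OS volume is attached to a healthy machine as drive E:
 * (adjust TARGET_DIR for your environment) and that the process runs elevated. */

#define TARGET_DIR "E:\\Windows\\System32\\drivers\\CrowdStrike\\"
#define PATTERN    TARGET_DIR "C-00000291*.sys"

int main(void) {
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA(PATTERN, &fd);

    if (h == INVALID_HANDLE_VALUE) {
        printf("No C-00000291*.sys files found under %s\n", TARGET_DIR);
        return 0;
    }
    do {
        char path[MAX_PATH];
        snprintf(path, sizeof path, "%s%s", TARGET_DIR, fd.cFileName);
        if (DeleteFileA(path)) {
            printf("Deleted %s\n", path);
        } else {
            printf("Failed to delete %s (error %lu)\n", path, GetLastError());
        }
    } while (FindNextFileA(h, &fd));
    FindClose(h);
    return 0;
}

The same approach could be reused on an individual host once it stays up long enough to run it, but in the Windows Recovery Environment the file is more simply removed by hand with del, as described in step 3 of the previous section.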
Impact of This Incident on Organizations and Individuals:
In the wake of the outage, malicious actors are exploiting trust in the CrowdStrike brand to spread false information and steal sensitive data.
Tactics Used:
a. Phishing Emails: Fraudsters posing as CrowdStrike support, urging you to click on malicious links.
b. Phone Calls: Impersonating CrowdStrike staff to extract personal or financial information.
c. Fake Researchers: Claiming to have evidence linking your technical issues to a cyberattack, offering “solutions” for a fee.
d. Script Selling: Offering scripts that allegedly automate recovery from technical issues.
Please be vigilant and aware of the following suspicious domains:
- crowdstrike.phpartners[.]org
- crowdstrike0day[.]com
- crowdstrikebluescreen[.]com
- crowdstrike-bsod[.]com
- crowdstrikeupdate[.]com
- crowdstrikebsod[.]com
- www.crowdstrike0day[.]com
- www.fix-crowdstrike-bsod[.]com
- crowdstrikeoutage[.]info
- www.microsoftcrowdstrike[.]com
- crowdstrikeodayl[.]com
- crowdstrike[.]buzz
- www.crowdstriketoken[.]com
- www.crowdstrikefix[.]com
- fix-crowdstrike-apocalypse[.]com
- microsoftcrowdstrike[.]com
- crowdstrikedoomsday[.]com
- crowdstrikedown[.]com
- whatiscrowdstrike[.]com
- crowdstrike-helpdesk[.]com
- crowdstrikefix[.]com
- fix-crowdstrike-bsod[.]com
- crowdstrikedown[.]site
- crowdstuck[.]org
- crowdfalcon-immed-update[.]com
- crowdstriketoken[.]com
- crowdstrikeclaim[.]com
- crowdstrikeblueteam[.]com
- crowdstrikefix[.]zip
- crowdstrikereport[.]com
To protect yourself, verify the source of any communication, do not click on suspicious links, contact only the official support channels provided directly by CrowdStrike, and stay vigilant. If you suspect you’ve been targeted, report it immediately to your IT department and to CrowdStrike support.
Future Prevention Recommendations
- Establish and enforce governance practices with clear guidelines and accountability frameworks.
- Develop and regularly test well-defined change management processes and efficient rollback procedures to ensure smooth updates and quick recovery from issues.
- Thoroughly assess the security and reliability of cloud service providers to ensure robust and secure infrastructure.
- Use reliable metrics for the ongoing maintenance and optimization of the technology stack.
- Establish incident response and disaster recovery plans, and hold regular drills to ensure preparedness and efficiency.
- Maintain clear communication with vendors for issue resolution and conduct rigorous testing of vendor-provided fixes before deployment.
- Conduct post-incident reviews and take corrective measures to prevent future occurrences and enhance overall resilience.


