As Tanium’s Chief Security Architect, Ryan Kazanciyan has witnessed the internal response processes at dozens of businesses around the world in the wake of the WannaCry ransomware attack. His biggest takeaway? Institutionalized resistance to change – not simple neglect – is one of the biggest reasons so many companies were left vulnerable. Here’s what you can do about it.
As we watch the circle of blame expand in the wake of the WannaCry ransomware attack, it’s natural to question why victims were so slow to implement readily available protections. A patch that would have prevented this outbreak had been available for two months. The malware exploited a Windows feature that had been deprecated for four years. What’s often overlooked is that institutionalized resistance to change – not simple neglect – is one of the biggest reasons so many companies were left vulnerable.
Over the past week, I’ve had the opportunity to glimpse the internal response processes at dozens of businesses around the world. I saw all-too-frequent disconnects between perception and reality.
Let’s start with the most glaring disconnect: many organizations were operating under the assumption that the majority of their systems had been patched. Instead, they discovered the percentage of susceptible computers in their fleets was in the double digits. Others had no way of validating whether systems had been rebooted, or whether other mitigating configuration changes had been put into place or taken effect. And still others remained hesitant to “pull the trigger” on fixes because they weren’t sure they’d be able to quickly detect any unintended consequences, or to reverse changes if necessary.
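Validating that assumption doesn’t require anything exotic: at its core, it’s a cross-check of each host’s installed-hotfix inventory against the set of KBs that deliver the fix. The sketch below is a minimal, hypothetical illustration — the hostnames, the placeholder KB identifiers, and the inventory format are all assumptions (the real MS17-010 KB list varies by Windows version), not a substitute for querying a proper systems-management platform:

```python
# Minimal sketch: flag hosts whose installed-hotfix inventory contains none of
# the KBs that deliver a required fix. All hostnames and KB IDs below are
# illustrative placeholders, not real patch identifiers.

def find_unpatched(inventory, satisfying_kbs):
    """inventory: {hostname: set of installed KB IDs}.
    satisfying_kbs: KB IDs, any one of which delivers the fix
    (e.g. a security-only update or a monthly rollup)."""
    return sorted(
        host for host, installed in inventory.items()
        if not (installed & satisfying_kbs)  # no overlap => still vulnerable
    )

inventory = {
    "hr-laptop-01": {"KB0000001", "KB0000002"},
    "db-server-02": {"KB0000003"},   # patched via a rollup
    "kiosk-xp-07":  set(),           # nothing installed
}
satisfying = {"KB0000002", "KB0000003"}

unpatched = find_unpatched(inventory, satisfying)
pct = 100 * len(unpatched) / len(inventory)
print(unpatched)           # → ['kiosk-xp-07']
print(f"{pct:.0f}% susceptible")  # → 33% susceptible
```

The hard part in practice is not this set arithmetic but getting a trustworthy, current inventory from every endpoint — which is exactly where the organizations described above fell short.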
The situation is not unique to the WannaCry attack. Even as annual worldwide IT security spending approaches $100 billion in 2017, businesses continue to struggle with basic systems management tasks, like patching, that are critical to so-called security hygiene. To understand why this is the case, we first have to examine how most IT organizations operate.
Over the past decade, software developers have embraced the DevOps movement and so-called agile methodologies. These approaches emphasize iterative and fast development, testing, and delivery of solutions in the enterprise. However, engineering and change control processes in many realms of IT and security operations have remained comparatively archaic, and stubbornly resistant to evolving. It’s a clash of philosophies: rapid, incremental, automated change versus methodical, planned-and-vetted broad strokes.
There are very real, practical reasons why IT and security operations are resistant to change. Most IT infrastructure is a hodgepodge of legacy and modern technology jerry-rigged to work together over time. According to Frost & Sullivan, the average organization runs four to six operating systems on endpoints and four to seven different operating systems on servers. Hundreds of business-critical applications – some more sensitive to OS-layer changes than others – may run atop these platforms. And the operational responsibility for managing all of this infrastructure is often highly siloed, with end-user computing, server administration, and application oversight all divided among numerous teams that don’t always move at the same cadence. Adding contractors and outsourcers into the mix introduces further complexity. And no individual or team wants to be the one whose patching efforts bring down a crucial system.
When business requirements mandate uninterrupted system availability, it’s easy to understand why companies take a conservative tack on certain types of IT changes. And, to be fair, modern change control processes all offer provisions for handling emergencies. But even when the urgency of the WannaCry incident demanded the flexibility to make changes, many organizations remained limited by their security operations and systems management technology. This creates a vicious cycle: absent the technical capability to reliably and quickly effect change and monitor outcomes at scale, people add “drag” in the form of more deliberation, process, and consensus. This further impairs their agility and limits the effectiveness of the solutions they’ve invested in.
Unfortunately, the patch that could have stopped WannaCry from spreading had the “perfect” blend of attributes to slow its uptake:
It affected nearly all versions of Windows, which means more system roles and types requiring testing.
It required a reboot (not all patches do), necessitating schedule coordination to minimize business impact.
It was not released for Windows XP and Server 2003 until after the WannaCry outbreak. While both operating systems are end-of-life, they remain present in many environments. Their numbers are dwindling – the NHS reported only around 5% of its inventoried hosts ran XP, for example – but those systems are often used to support critical legacy devices.
Against the backdrop of these technical and procedural hurdles, it’s no surprise WannaCry had such a devastating impact, particularly across verticals such as healthcare.
The frightening truth is WannaCry could have been much, much worse. As of this writing, the malware “only” infected a few hundred thousand systems. In contrast, Conficker (which exploited a similarly widespread Windows vulnerability in 2008) spread to over 10 million computers. Many organizations were simply lucky enough to escape the crosshairs of the initial attack campaigns. And yet our dependence on IT systems in 2017 means the effects of this attack were felt far more acutely than those of the attacks that came before it.
IT leaders need to seize this opportunity to drive modernization of the processes and technologies underpinning systems management and security operations – not simply focus on improving attack detection and response. When future attacks inevitably evade our first lines of defense, an agile approach to security hygiene can keep small-scale, scattered infections from becoming the next global epidemic.
About the author: In his role as Tanium’s Chief Security Architect, Ryan Kazanciyan brings more than 14 years of experience in incident response, forensic analysis, and penetration testing. Ryan oversees the design and roadmap for Tanium’s Threat Response offerings, and leads the Tanium Endpoint Detection and Response (EDR) team. Prior to joining Tanium, Ryan oversaw investigation and remediation efforts at Mandiant, partnering with dozens of Fortune 500 organizations affected by targeted attacks. Ryan has trained hundreds of incident responders as an instructor for Black Hat and the FBI’s cyber squad. He is a contributing author for “Incident Response and Computer Forensics 3rd Edition” (McGraw-Hill, 2014). Ryan also works as a technical consultant for the television series “Mr. Robot”, where he collaborates with the writers and production team to design the hacks depicted in the show.