
Looking Inward for the Best Threat Data

Organizations cannot afford to remain wholly dependent on a lifeline of one-size-fits-all data feeds in order to detect attacks. Tanium’s Chief Security Architect Ryan Kazanciyan offers guidance on how to find the best threat data to address your organization’s unique needs.



Ask any CISO to identify his or her top priorities and you will consistently hear one item at the top of the list: making better use of threat data. Indeed, threat intelligence has evolved into a multi-billion-dollar chunk of the cybersecurity industry, and a focal point of numerous cybersecurity policy and legislative efforts, such as the Cybersecurity Information Sharing Act of 2015.

But one critical issue is often overlooked: with limited resources and a multitude of threat indicators, where can organizations find the best intelligence about their networks? I spoke on this topic at the Center for Strategic and International Studies’ Cyber Disrupt Summit in Washington, DC, last week.

Sharing threat data has value and limitations—and it’s important for organizations and policymakers to understand what these are, how to account for them, and what additional strategies to prioritize.

The limits of threat data

The threat data shared between organizations has three key, interconnected limits:

  • It’s easy for attackers to change their digital “footprints.” The threat indicators typically shared among organizations (network addresses, file hashes, DNS names and URLs) are brittle and can be changed quickly; a short sketch after this list illustrates the point. The most valuable information, such as patterns of attack and tradecraft, is often held in “walled gardens” by security vendors, maintained in formats that are rarely interchangeable among disparate organizations or technologies.
  • It’s hard to validate the quality of threat data. Threat-sharing programs focus on getting indicators to the relevant consumers as quickly as possible rather than on validating them. Meanwhile, few organizations have the capability or resources to perform ongoing qualitative assessments of the threat data they’re ingesting, or of the manner in which their network and endpoint monitoring tools consume and parse it. This leads to higher rates of false negatives and false positives, insufficient context to make informed decisions at scale, and the risk of misattributing threats. The problem is particularly challenging as attackers increasingly co-opt legitimate network and endpoint resources to disguise their activities.
  • You’re always playing catch up. You may learn new techniques and detection methods, but it’s always after attackers have already introduced their tactics into the field.
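
To make the brittleness concrete, here is a minimal sketch of what atomic-indicator matching amounts to in practice. It assumes a hypothetical feed of hashes, IPs and domains; the class, function name and values are illustrative placeholders, not any vendor’s API or schema.

    # Minimal sketch of exact-match indicator lookup against a hypothetical feed.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Indicator:
        kind: str   # e.g. "sha256", "ipv4", "domain"
        value: str  # the literal observable shared in the feed

    # Hypothetical entries ingested from an external sharing program.
    FEED = {
        Indicator("sha256", "de4db33f" * 8),        # placeholder file hash
        Indicator("ipv4", "203.0.113.42"),          # documentation-range IP
        Indicator("domain", "bad-c2.example.net"),  # placeholder C2 domain
    }

    def matches_feed(kind: str, value: str) -> bool:
        # Exact-match lookup is all an atomic indicator supports: recompiling
        # the payload, rotating the domain, or moving to a new IP changes the
        # value and silently evades the check.
        return Indicator(kind, value.lower()) in FEED

    print(matches_feed("domain", "bad-c2.example.net"))   # True: exact hit
    print(matches_feed("domain", "bad-c2a.example.net"))  # False after a one-character change

Everything in that tuple is under the attacker’s control and can be rotated at negligible cost, which is why patterns of attack and tradecraft, where they are shared at all, are so much more valuable.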

Complementing external threat data by looking inward

The foundation of any security program must be understanding your own network and readily distinguishing “normal” activity from truly malicious anomalies. In practice, the background noise of most large, heterogeneous networks can make this extremely challenging. The first step in taming the problem is to apply technology and processes which can continuously identify all the assets you’re defending. The next step is to automatically enumerate and group these assets based on the dozens of critical attributes which define their attack surface.

Many organizations still struggle with the first half of this effort. For example, in the Federal space, several agencies undergoing Phase 1 of the Continuous Diagnostics and Mitigation (CDM) program have found they’ve underestimated their number of devices by 200%-300%. Across the public and private sectors, it’s common to see reliance on out-of-date systems inventories and configuration management databases (CMDBs) populated with stale data. Such outdated inventories and databases fail to provide the system context needed to adequately assess and monitor risk.

The characteristics affecting a system’s susceptibility to compromise include its installed applications and services; the users who authenticate to it and from it; its network interfaces; the data it stores; and its key configuration and security settings. Automatically categorizing groups of systems based on these constantly changing attributes forms the basis for a dynamic inventory that can truly serve as the foundation for detection-and-response efforts.
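
As a rough illustration of what such a dynamic inventory could look like, the sketch below re-derives group membership from an endpoint’s most recent attribute snapshot. The Asset fields, hostnames and grouping rules are assumptions made for the example, not a product schema.

    # Sketch of attribute-driven asset grouping from a periodically refreshed snapshot.
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        hostname: str
        applications: set = field(default_factory=set)
        interactive_users: set = field(default_factory=set)
        listening_ports: set = field(default_factory=set)
        network_interfaces: set = field(default_factory=set)  # e.g. subnets or VLANs
        stores_regulated_data: bool = False
        security_agent_healthy: bool = True

    def classify(asset):
        # Re-derive group membership from the current snapshot. Because groups are
        # recomputed whenever a fresh snapshot arrives, the inventory tracks the
        # live environment rather than a point-in-time CMDB export.
        groups = set()
        if 3389 in asset.listening_ports or "OpenSSH Server" in asset.applications:
            groups.add("remote-access-exposed")
        if asset.stores_regulated_data:
            groups.add("high-value-data")
        if any(user.endswith("-admin") for user in asset.interactive_users):
            groups.add("admin-logon-target")
        if not asset.security_agent_healthy:
            groups.add("monitoring-gap")
        return groups

    snapshot = Asset(
        hostname="fin-db-07",
        applications={"PostgreSQL 14"},
        interactive_users={"svc-backup", "jdoe-admin"},
        listening_ports={5432},
        stores_regulated_data=True,
    )
    print(sorted(classify(snapshot)))  # ['admin-logon-target', 'high-value-data']

The specific rules matter less than the principle: membership is computed from attributes that change constantly, so detection and response workflows always operate against the current attack surface.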

A blended approach to threat detection

Modern threat detection requires a balanced approach. For all its limitations, externally sourced threat data, if applied with sufficient scope and visibility across an environment, can automatically identify and contextualize a subset of malicious activity. Specialized endpoint and network security monitoring tools can layer on detection mechanisms for broader patterns of attack. Such tools complement the practice of deriving intelligence from your own environment and automatically detecting changes that introduce anomalies or affect the risk profile of your assets.
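
Tying the two threads together, a blended detection pass might look something like the sketch below: external indicators are checked first, then the event is compared against a baseline derived for the asset group it belongs to. The event shape, baseline contents and helper names are illustrative assumptions that build on the earlier sketches.

    # Sketch of a blended triage pass: external feed first, internal baseline second.
    from dataclasses import dataclass

    @dataclass
    class Event:
        hostname: str
        process: str
        sha256: str = ""  # would feed an external indicator lookup such as matches_feed above

    # Internally derived baseline: processes routinely observed per asset group.
    # In practice this would be learned from the environment, not hand-written.
    BASELINE = {
        "high-value-data": {"postgres", "pg_dump", "sshd"},
    }

    def triage(event, groups, ioc_hit):
        # A confirmed indicator match is high confidence, so it alerts immediately.
        if ioc_hit:
            return "alert: known-bad indicator"
        # Otherwise, flag behavior that is unusual for this asset's group(s).
        expected = set()
        for group in groups:
            expected |= BASELINE.get(group, set())
        if groups and event.process not in expected:
            return "investigate: process outside baseline for this asset group"
        return "no finding"

    event = Event(hostname="fin-db-07", process="powershell.exe")
    print(triage(event, {"high-value-data"}, ioc_hit=False))
    # -> investigate: process outside baseline for this asset group

Neither path is sufficient alone: the feed catches what is already known, while the baseline catches what is merely unusual for the assets you have already enumerated.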

We can no longer continue along the path we’ve been on for the past 20 years, wholly dependent on a lifeline of one-size-fits-all data feeds from security vendors to detect attacks. Fortunately, policymakers and businesses alike are starting to realize this and to pursue processes and technologies that empower organizations to truly understand and defend their own assets.

If you’re interested in learning more about this topic, view my full remarks at the Center for Strategic and International Studies’ Cyber Disrupt Summit in Washington here (from the 6:28-6:48 mark).


About the Author: In his role as Tanium’s Chief Security Architect, Ryan Kazanciyan brings more than 14 years of experience in incident response, forensic analysis and penetration testing. Ryan oversees the design and roadmap for Tanium’s Threat Response offerings, and leads the Tanium Endpoint Detection and Response (EDR) team. Prior to joining Tanium, Ryan oversaw investigation and remediation efforts at Mandiant, partnering with dozens of Fortune 500 organizations affected by targeted attacks. Ryan has trained hundreds of incident responders as an instructor for Black Hat and the FBI’s cyber squad. He is a contributing author for “Incident Response and Computer Forensics 3rd Edition” (McGraw-Hill, 2014). Ryan also works as a technical consultant for the television series “Mr. Robot”, where he collaborates with the writers and production team to design the hacks depicted in the show.

