To address questions about where to invest finite resources and how to measure success, our cybersecurity team aimed to develop a mechanism to identify gaps and assign corresponding risk to those gaps. In doing so, we discovered a variety of ways the MITRE ATT&CK™ framework can be applied. We’re sharing our learnings here for you to consider as you look to implement the framework to meet your own cybersecurity needs. In the second of this two-part series, we detail how we began improving our detection capabilities using the framework.
In part one of this two-part series, we shared the three elements of the MITRE ATT&CK™ framework we found most valuable. Having used those elements to identify a path forward, we then tactically started improving our detection capabilities using MITRE ATT&CK™ as a punch list. Simultaneously, we wanted to take a step back and better understand our current state so we could begin to develop metrics, assess risks, and incorporate MITRE ATT&CK™ into our detection planning and goals. Here are the steps we took, which we believe you can also apply in your own organizations.
Like our customers, and probably most organizations, our security team uses a suite of tools, some of which provide us with out-of-the-box detection capabilities. Rather than trying to understand and document someone else’s detection logic, especially when most out-of-the-box detection mechanisms are not exposed to customers, we recommend starting with something well known. For us, and many of you, this might be internally developed detection logic that monitors log events within a SIEM.
Unfortunately, there isn’t (yet) a magic solution that lets you import your detection and map it to the ATT&CK™ components in a pretty interface. There are, however, tools available for interacting with the ATT&CK™ API to export data in different formats and views, including CSV.
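As one example of the export step, MITRE publishes the full Enterprise matrix as a STIX 2 JSON bundle in its `cti` GitHub repository. The sketch below (the helper names and field selection are our own illustration, not official tooling) flattens the `attack-pattern` objects in such a bundle into CSV rows:

```python
import csv
import io

def extract_techniques(bundle):
    """Pull technique rows out of a STIX 2 ATT&CK bundle (a parsed dict).

    Assumes the layout MITRE publishes in its `cti` repository: techniques
    are `attack-pattern` objects whose ATT&CK ID lives in an external
    reference with source_name "mitre-attack".
    """
    rows = []
    for obj in bundle.get("objects", []):
        if obj.get("type") != "attack-pattern":
            continue
        ext_id = next(
            (r["external_id"] for r in obj.get("external_references", [])
             if r.get("source_name") == "mitre-attack"),
            "",
        )
        tactics = ", ".join(
            p["phase_name"] for p in obj.get("kill_chain_phases", [])
        )
        rows.append({
            "technique_id": ext_id,
            "name": obj.get("name", ""),
            "tactics": tactics,
            # Keep only the first line of the (often long) description.
            "description": obj.get("description", "").split("\n")[0],
        })
    return rows

def techniques_to_csv(bundle):
    """Render the extracted rows as CSV text, ready to paste into a sheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["technique_id", "name", "tactics", "description"]
    )
    writer.writeheader()
    writer.writerows(extract_techniques(bundle))
    return buf.getvalue()
```

In practice you would fetch the bundle JSON from the repository (or the ATT&CK API) and feed the parsed dict to `techniques_to_csv`, then paste or import the result into your spreadsheet.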
Creating a mapping can be tedious and time-consuming, so as a team we defined the basic requirements for the information we wanted to capture, in hopes we wouldn’t have to go back through and add more to it later. Here are the components we chose to map:
We began with a common approach we have seen others take: using a Google Spreadsheet to get started quickly and for ease of collaboration.
While we had lofty ideas of a complicated system to track the mapping, we wanted to make sure we didn’t try to boil the ocean. By starting small, we were able to make iterative refinements to our process and start showing a return on investment for our time right away. Having that ironed out, we envisioned expanding our mapping to other detection capabilities and partner solutions we use.
To keep the mapping as modular as we could, we separated the MITRE techniques, a list of detection components, and the mapping between the two into different sheets. In the event MITRE updates or adds new techniques or we do the same to our detection, we only need to make adjustments in one place. Figures 1-4, below, illustrate how we set up each of the spreadsheets.
Figure 1 shows the “ATT&CK™ Matrix” sheet of the Google Spreadsheet. Here, we used ATT&CK™’s API to generate a CSV of each technique, with some additional context, and pasted it in. In this case, we included the technique name, ID, and description, as well as additional details such as detection methods and corresponding tactics. This is where we will update or add techniques as MITRE releases them.
Figure 1: ATT&CK for Enterprise techniques
Figure 2 shows our “Detection” template, where we track and maintain our detection capabilities. Similar to the “ATT&CK™ Matrix” in Fig. 1, we included fields to help us identify key components of the detection: the type, name, category, platform, and description. We have redacted the actual detection components from this screenshot.
Figure 2: Tanium Detection
Figure 3 shows the mapping between the “ATT&CK™ Matrix” in Fig. 1 and the “Detection” template in Fig. 2. We chose two values we do not expect to change often as our “primary keys”: the ATT&CK™ technique ID and our detection name. These are the only two values we add to the “Mapping” sheet manually.
Figure 3: ATT&CK™ for Enterprise techniques to Tanium Detection Mapping
Figure 4 shows how we used the vertical lookup (VLOOKUP) formula to populate additional columns based on our two primary key fields, pulling further context into the mapping.
Figure 4: ATT&CK™ for Enterprise techniques to Tanium Detection Mapping Details
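As an illustration (the column positions here are hypothetical; adjust them to your own sheet layout), a cell in the “Mapping” sheet might pull in the technique name with:

```
=VLOOKUP($A2, 'ATT&CK Matrix'!$A:$D, 2, FALSE)
```

The exact-match flag (FALSE) is worth keeping: a mistyped technique ID then surfaces as #N/A rather than silently matching the nearest row.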
By taking small, iterative steps in mapping detection capabilities to MITRE ATT&CK™, our team identified gaps, changed prioritization, and measured and reported success.
With a collaborative mapping initiative underway, we designed visualizations to identify deficiencies in our detection by operating system and by stage of the adversary lifecycle. With these gaps catalogued, we assigned risk ratings and estimated the effort required to close them.
We started generating reports based solely on frequencies. While these reports obviously didn’t tell a complete story, they helped draw our attention to three potentially problematic areas:
Figure 5: Example Tactic & Technique Gap Report (Randomized Numbers)
Figure 6: Example Operating System Gap Report (Randomized Numbers)
With these reports in hand, and the ability to generate more, we are able to assess our gaps and prioritize based on risk and effort. Furthermore, we can use these reports to communicate change to leadership.
By building a process to map between our detection and MITRE ATT&CK™, we laid a solid foundation to build upon.
While we still have a lot of initial mapping left to do until we reach a maintenance state, we have already seen a return on our investment with basic reporting. As the mapping continues to evolve, we will not only build more reports to address more questions, we’ll also expand the ways we are using the mapping.
For the past few months, we have been improving our ability to store and maintain our mapping data while also experimenting with new visualizations and uses for the data. Here are three things we learned:
Focusing on mapping internally developed detection capabilities to MITRE ATT&CK™ made the most sense as a starting place for us. However, as we expand, we find ourselves mapping thousands of different detection components from various tools to more than 100 techniques. In most cases, this is a many-to-many relationship. While our spreadsheet was a suitable place to start, it is now getting tough to handle. Between the number of different mappings and the various contributors, we are starting to notice minor data quality issues, such as duplicate mappings and formatting inconsistencies. It is cumbersome to flip between multiple sheets, and we waste time copying and pasting content from one field to another. Further, some visualizations and reports we want to build on the data, particularly those showing linkage, are challenging to develop and maintain within the constraints of a spreadsheet.
Having built our spreadsheet in a modular format, manipulating or working with the data in another medium was straightforward. By saving each sheet as a CSV and applying some light Python, we were able to import our detection, techniques, and mapping into a SQLite database. The SQLite database was easy for us to pass around and provided an easier way to manipulate the data, albeit requiring some scripting experience.
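A minimal sketch of that import, using only the standard library; the table and column names below are illustrative assumptions, not a prescribed schema, and each argument is any file-like CSV source (an open exported file, in practice):

```python
import csv
import sqlite3

def load_mapping_db(techniques, detections, mapping):
    """Load the three exported sheets into an in-memory SQLite database."""
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE technique (technique_id TEXT PRIMARY KEY, name TEXT, tactic TEXT);
        CREATE TABLE detection (name TEXT PRIMARY KEY, platform TEXT, description TEXT);
        CREATE TABLE mapping (
            technique_id TEXT REFERENCES technique(technique_id),
            detection_name TEXT REFERENCES detection(name),
            UNIQUE (technique_id, detection_name)  -- guards against duplicate mappings
        );
    """)
    for table, src, width in (("technique", techniques, 3),
                              ("detection", detections, 3),
                              ("mapping", mapping, 2)):
        rows = list(csv.reader(src))[1:]  # skip the header row
        placeholders = ",".join("?" * width)
        con.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    con.commit()
    return con
```

The UNIQUE constraint on the mapping table is one reason the move paid off: the duplicate-mapping problem from the spreadsheet era is rejected at insert time instead of discovered later.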
With the data sitting in a database, we have begun experimenting with a web application front end to visualize and pivot between the data, aid in data quality and validation, and make it even quicker to add or update mappings without the need for scripting skills. Further, by linking to the MITRE ATT&CK API, the application can notify us of new techniques and automatically import them.
Measuring coverage beyond frequency analysis has proven to be challenging. In the initial stages, we relied on frequency analysis to identify and prioritize those MITRE ATT&CK™ techniques where we had little or no detection capability. As our detection capabilities expand, the detection mapped to MITRE ATT&CK™ techniques will also grow, but high frequency does not necessarily indicate we have fully and adequately addressed a technique. As such, we are starting to experiment with different mechanisms to look within a technique for gaps and to quantify coverage, including indicators that distinguish higher-fidelity detection from lower-fidelity detection.
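One such mechanism can be sketched as a saturating weighted score. The fidelity labels and weights below are purely illustrative assumptions, not part of ATT&CK or any product:

```python
# Illustrative fidelity weights -- the labels and values are assumptions.
FIDELITY_WEIGHT = {"high": 1.0, "medium": 0.5, "low": 0.2}

def coverage_score(fidelities, cap=1.0):
    """Score one technique's coverage from the fidelity of its mapped detections.

    The score saturates at `cap`, so a pile of low-fidelity detections
    cannot masquerade as full coverage the way a raw count would.
    """
    return min(cap, sum(FIDELITY_WEIGHT.get(f, 0.0) for f in fidelities))
```

Under this sketch, one high-fidelity detection scores a technique as covered, while a couple of low-fidelity detections score well below the cap, surfacing the within-technique gaps that a frequency report hides.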
When we design detection, we also create test cases. We execute these test cases when the detection is created, and we periodically re-run them manually over time. We supplement these test cases with red team exercises, not only to exercise the narrow cases designed for a single detection, but to identify which attacker behaviors we detected effectively and which we missed.
Now that we have incorporated MITRE ATT&CK™ as a detection framework, we can shift focus to designing scripts and test cases that simulate different MITRE ATT&CK™ techniques. By leveraging the power of the Tanium platform, we can also automate and randomly distribute these tests throughout our environment on a recurring basis and begin to introduce a continuous testing process for our security program. Cyb3rWard0g already has a great start with the ThreatHunter-Playbook, as does Uber with its adversary simulation tool Metta. Both projects take approaches to aid in the development and testing of detection aligned to the MITRE ATT&CK™ framework and can be leveraged as part of a continuous testing program.
We have an ambitious year ahead of us. If you haven’t seen it yet, I recommend reading John Wunder’s post on What’s Next for ATT&CK™. We are really excited about the improved tools and APIs, particularly the move to Unfetter and the build-out of the ATT&CK™ Navigator.
Additional Insight from PricewaterhouseCoopers (PwC): We have worked closely with our partners at PwC for over four years to build EDR service offerings for our customers. PwC has also provided more information about the application of the MITRE ATT&CK™ framework. They discuss how they leveraged Tanium Signal to enhance their detection capabilities and explain the importance of orchestration in building a strong EDR practice. Learn more by visiting the following links:
About the author: In his role as Principal Security Engineer with Tanium’s cybersecurity team, Mike Middleton focuses on security operations, threat detection, incident response, and automation. Prior to joining Tanium, Mike worked for a hedge fund based in the northeast U.S., where he helped establish an external threat security program. Prior to that, Mike worked in professional services, where he conducted forensics and incident response investigations.