Each year seems to bring cybersecurity professionals the next New Shiny Thing that promises to solve every one of their woes. The list is impressive. Artificial intelligence. Cloud computing. Zero-trust architecture. Passwordless logins.
Turns out, there isn’t any one thing that improves cybersecurity. There is a collection of things. But what are they? Do cybersecurity pros know? Ask a dozen and you’ll get a dozen different answers. Wade Baker studies those answers and will share insights from a survey of 5,100 practitioners at RSA Conference 2022.
Baker is a partner and co-founder at Cyentia Institute, which produces data-driven cybersecurity research and services. He will be one of two panelists exploring security practices and outcomes in his RSA 2022 panel discussion, “What (Actually, Specifically) Makes Security Programs EVEN MORE Successful?” Along with Wendy Nather, head of advisory CISOs at Cisco, Baker will dive into results from Cisco’s recent Security Outcomes Study, Volume 2, which found, among other things, a strong correlation between an organization’s IT architecture and its security resilience. The talk is set for June 7, 1:15 p.m. to 2:05 p.m. PT, at the Moscone Center in San Francisco. Baker gave Endpoint a front-row glimpse of the findings.
Tell me about this cybersecurity research. How did this global study begin?
I love the idea of this study. I can’t remember exactly when or who landed on the notion of studying cybersecurity outcomes. Ultimately, we asked ourselves if we could indeed measure outcomes. First, let me rewind to volume one of the study.
We surveyed almost 4,800 IT and security professionals from 25 countries. For a security survey, it was huge. The survey focused on 11 outcomes by first asking security leaders: What do you want your security program to do?
We had varying responses but condensed them to 11 outcomes. They’re high-level and include: enabling the business, avoiding major incidents and losses, maintaining compliance, managing costs, and so forth. We asked respondents how they achieved those outcomes and whether they were struggling or knocking it out of the park. Then we asked about 25 different security practices, basic stuff like threat detection, incident response, vulnerability management, and the like.
That sounds so very easy: Just do these things we know are good for you.
Right, all that stuff that you just do: Just do it. So, then we came back this year with volume two of the research, surveying 5,100 practitioners in 27 countries. Instead of 25 practices, we focused on the five from the previous study that drove the most cybersecurity success.
These included: proactively updating your technology, using technologies that worked well together, fast incident response, fast recovery, and early and accurate threat detection. We wanted to understand what makes these five so unique. So we dug into them and asked lots of questions.
For instance, we wanted to know if organizations were really modernizing their tech stack or were just spending a ton of money on upgrading all the time. Were they primarily cloud based? What we found was that organizations that had a platform strategy with their IT and security technologies seemed to achieve better outcomes.
That’s interesting. So, what is improving the outcomes there? Is it simply moving to platforms?
Organizations that had to build their own integrations among systems struggled more with their outcomes. I think that makes sense. Visibility turns out to be a good thing. With threat detection, we found that about half of respondents are successful when they have strong identity management across their security programs and systems. So good visibility into threats and assets is critical for doing good things.
We also looked at people, processes, and technology. That essential trio. We wondered: Are any of these more or less important to security outcomes? We asked respondents to rate each of these separately. Respondents who said their organizations were weak in all three did not have good outcomes. Those that said they were strong in all three jumped to the 92nd percentile of successful outcomes. For those in the middle, with strength in just one, we found no statistically significant difference based on which of the other two they strengthened next.
That was interesting. It told me that if a company has technology as its strength, they can build from that. They don’t have to convert to a people-strong program or vice versa. They can start from their base of strength and then build the next best thing. As long as they’re trying to improve all of them, their route doesn’t matter.
That’s fascinating because in business class, we were told good process doesn’t trump good people, but it can help make up for less skilled people.
Didn’t pan out that way. But we were exploring that by asking if automation was a substitute for people. Such as, if one can automate processes, can that get them farther than people, or could it replace people? We knew the answer would be no.
Only 36% of organizations that said they have weak people resources and zero automated processes rated their security operations as strong. However, 73% of those who said they have strong tech staff in security operations and didn’t have automation rated their security as strong.
It’s neat that organizations with a high level of automation, but weak people, had about the same strength of program as organizations with strong people resources but poor automation.
What surprised you most in the survey findings?
We asked about threat intelligence and how extensively respondents and their organizations use threat intelligence. I thought it hilarious that 84% of the organizations that say, “We don’t use threat intelligence at all,” rated their security operations as very strong. But then, among those that do use threat intelligence, only 45% rated their security as very strong. So their ratings drop almost in half.
They start realizing that they don’t know the cyber risks they don’t know.
Exactly. One set is working from a state of ignorance. Everything looks good. They are not incorporating specific threats and have no idea what they look like, so they’re not seeing them. That must mean everything is good.
How should CISOs and their security teams improve their people, processes, or technology?
There is a lot in the data that gives them the freedom to build on strengths rather than having to conform to the only-one-way-forward mentality. That freedom to choose is a big help for security programs that are struggling.
We hear “This is the new paradigm you need to follow to make everything wonderful” a lot. But we haven’t achieved that one thing yet. Maybe there is no one thing. There is a bunch of little things that incrementally improve a security program.
If we can start measuring those little things and get some evidence behind them, we can start putting those on the roadmap. I hope that people take something like that away from this.
The findings seem to show that no matter how good an organization’s cyber hygiene, there are aspects of securing the enterprise that are out of their control, like the maturity of the tech stack and the quality of system integration.
Yes, I think so, too. It sort of puts the emphasis on getting outside ourselves and working with other groups within the organization, since they may own the pieces that are critical to your success as a security leader. If you know that a lot of that is bound up in the IT you’re securing, then you probably want to work closely with the CIO and make plans to improve that tech stack together.
Are there certain things that make sense to measure? I can imagine that if teams and CIOs just start measuring things, they’re going to measure many wrong things.
To a certain extent, the 11 program outcomes that we talk about in these reports are an excellent place to start. They’re not measures or metrics. But they are objectives that security programs want to achieve. Then the question becomes: How can our program best support these outcomes? That’s a good place to start.