
Fight Fire With Fire: 3 Major Ways AI Fuels Your Cybersecurity Arsenal

With security threats rising beyond the capacity of human analysts, it’s time to harness the power and speed of AI to detect and contain cyberattacks.


In 2019, a cyberattack on a UK-based energy firm used AI to mimic the CEO’s voice, tricking a staffer into transferring $243,000 to a fraudulent account. A 2021 cyber espionage campaign targeted international telecom companies with AI-generated phishing emails. And last year, hackers used AI to inject fake video streams into the biometric verification process of crypto exchange Bitfinex, ultimately netting themselves $150 million worth of digital assets.

Cyberattacks deployed with AI are only becoming more sophisticated and evasive with each passing day.

The good news? The power of AI cuts both ways, and an increasing number of enterprises are exploring opportunities to deploy AI (and its subfield, machine learning) in their own cyberdefenses – fighting fire with fire, you might say.


Today, more than two-thirds (69%) of enterprises believe AI is necessary for cybersecurity because threats are rising to levels beyond the capacity of cyber analysts, Deloitte finds.

While the prevalence of AI in cybersecurity programs is still in its relative infancy, the potential benefits are clear: AI has the ability to process vast amounts of data, recognize patterns quickly, and make informed decisions, helping organizations identify vulnerabilities and threats, minimize or eliminate threats, and respond more quickly, says Maria Schwenger, co-chair of AI Governance and Compliance Initiatives at the Cloud Security Alliance (CSA). “AI – and GenAI – are not just helping us protect cybersecurity,” she says. “They’re helping us build a new, resilient world with new, resilient systems.”

As organizations begin to explore applications of AI in cybersecurity programs, experts say the following areas hold great promise.

1. How AI cybersecurity tools will improve vulnerability testing

Software engineers strive to write secure code, but sometimes mistakes happen. They might inadvertently introduce vulnerabilities by using improper error handling or not validating user inputs; complex systems might make it challenging for them to anticipate all potential security vulnerabilities; or software engineers might face tight deadlines to deliver new features quickly, leading to shortcuts or compromises in code quality and security.


“Plus,” says Nick Merrill, research scientist and director of the Daylight Lab at the UC Berkeley Center for Long-Term Cybersecurity, “software engineers know surprisingly little about security and how to look for vulnerabilities in the software they write.”

Traditionally, when vulnerabilities are reported in the wild, developers are responsible for finding the bugs and patching them. This can be challenging and tedious, requiring them to navigate through many files and modules to identify the root cause of a bug, or to replicate the specific conditions or scenarios to understand the bug and create a solution. With the use of AI, however, organizations could improve the speed and efficiency with which they can detect and remediate potential vulnerabilities in code, creating a more secure environment, Merrill says.

[Read also: Ultimate guide to AI cybersecurity – benefits, risks, and rewards]

AI-powered tools could, for example, scan codebases to identify potential vulnerabilities, analyzing patterns to detect common risks such as SQL injection and cross-site scripting. AI models could also be trained on large datasets of known vulnerabilities to identify similar patterns in new code, thereby revealing previously unknown vulnerabilities or zero-day exploits.
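To make the idea concrete, here is a minimal, purely illustrative sketch of pattern-based scanning. The rules and rule names are assumptions for demonstration; real AI-assisted scanners combine signals like these with models trained on known-vulnerable code.

```python
import re

# Hypothetical, deliberately simple rules of the kind a scanner might
# encode. Real tools use far richer static and learned analysis.
RULES = {
    "possible SQL injection (string-built query)":
        re.compile(r"execute\(\s*[\"'].*\+|execute\(\s*f[\"']"),
    "possible XSS (unescaped HTML write)":
        re.compile(r"innerHTML\s*=|document\.write\("),
}

def scan(source: str):
    """Return (line_number, finding) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for finding, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, finding))
    return findings

risky = 'cursor.execute("SELECT * FROM users WHERE id=" + user_id)'
print(scan(risky))  # flags line 1 as a possible SQL injection
```

A learned model would generalize beyond fixed regexes, but the workflow is the same: surface suspect lines for developers before the code ships.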

“This saves time and effort because then security teams don’t have to spend time retroactively once something has been reported in the wild to then find the bug and patch it,” he says. “Empowering developers to solve security problems would be a huge win these days.”

2. How AI cybersecurity tools will empower threat detection

Identifying potential security threats at an early stage helps prevent unauthorized access to sensitive data, intellectual property, and other valuable assets that make up an organization’s “crown jewels.” This helps organizations avoid costly data breaches, financial losses, and reputational damage.


At many organizations, security analysts are responsible for manually monitoring system logs, network traffic logs, and application logs for suspicious activity that may indicate a security breach. This process can be time-consuming and taxing for individuals, CSA’s Schwenger says. “It can be difficult to identify threats quickly, especially if it’s a very sophisticated threat that a human eye can miss,” she says. “With human analysts, a person can process only so much data, and it’s easy to miss certain patterns. But AI is really good at discovering patterns that we may miss.”

Because AI can analyze vast amounts of data, it can be used to establish a baseline of normal behavior for systems, networks, and users. By detecting deviations or anomalies, AI can help to identify potential security threats, such as unauthorized access attempts, unusual network traffic, or abnormal user behavior, Schwenger says.
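The baseline-and-deviation idea can be sketched in a few lines. This toy example uses a simple z-score over hypothetical daily login counts as a stand-in for the far more sophisticated models real systems use:

```python
from statistics import mean, stdev

def find_anomalies(history, current, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    historical baseline -- a toy stand-in for model-based detection."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in current if abs(x - mu) > threshold * sigma]

# Hypothetical daily login counts for one user account
baseline = [42, 38, 45, 40, 43, 39, 41, 44, 40, 42]
today = [41, 43, 190]  # 190 logins is far outside normal behavior
print(find_anomalies(baseline, today))  # → [190]
```

Production systems model many signals at once (traffic volume, access times, geography), but the principle is identical: learn what “normal” looks like, then surface what isn’t.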

[Read also: The 3 biggest GenAI threats (plus 1 other risk) and how to fend them off]

“This is a massive improvement in threat analysis because it can discover those hidden parameters and hidden anomalies in the data quicker, which may have otherwise been missed,” she says. “This gives you scalability because you’re automating tedious tasks and getting real-time information that you can pass to your security engineers, which helps you work faster and be more agile.”

3. How AI cybersecurity tools will accelerate threat containment and response

When a threat has been detected or a security incident has occurred, moving quickly to rectify the situation is crucial. “It’s all about speed when it comes to threats, compromises, breaches, and ransomware attacks,” says Adam Levin, author of Swiped: How to Protect Yourself in a World Filled With Scammers, Phishers and Identity Thieves, and co-host of the What the Hack podcast. “You need to be in a position to move as quickly as possible to plug the hole and stop the problem so you can begin working on the solution. The faster you can contain the threat, the faster you can defend against it.”


Traditional methods of threat containment and response rely heavily on manual intervention. When a security incident occurs, for example, analysts must manually identify the affected systems, isolate compromised assets, and implement containment measures. Security analysts will manually review security alerts, logs, and forensic data to understand the scope of the incident, then work to patch systems or reset compromised credentials. These processes take time and introduce opportunities for human error, which may further delay resolution.

[Read also: 3 considerations to make sure your AI is actually intelligent]

With AI, however, algorithms can automatically assess the severity and impact of the threat, identify which assets are impacted, and even orchestrate response actions, Schwenger says. This may include automatically isolating infected endpoints, blocking malicious traffic, or turning off compromised services.
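The assess-then-orchestrate loop can be sketched as follows. The scoring weights, thresholds, and action names here are illustrative assumptions, not any vendor’s actual playbook:

```python
def assess_severity(alert):
    """Toy scoring that weights indicators a detection model might surface."""
    score = 0
    if alert.get("known_malware_signature"):
        score += 50
    if alert.get("lateral_movement"):
        score += 30
    score += min(alert.get("failed_logins", 0), 20)
    return score

def respond(alert):
    """Map assessed severity to automated containment actions."""
    severity = assess_severity(alert)
    actions = []
    if severity >= 70:
        actions += ["isolate_endpoint", "block_source_ip"]
    if severity >= 40:
        actions.append("revoke_session_tokens")
    if severity > 0:
        actions.append("notify_analyst")
    return severity, actions

alert = {"known_malware_signature": True, "lateral_movement": True}
print(respond(alert))
# → (80, ['isolate_endpoint', 'block_source_ip',
#         'revoke_session_tokens', 'notify_analyst'])
```

Note that even this automated flow ends with a human in the loop (`notify_analyst`), reflecting the oversight point Schwenger makes below.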

“This really helps to support your security teams to make informed decisions and respond to an incident because AI can give you these insights and make recommendations – it takes out all the blind guesswork,” she says. “And in the future, there’s great potential for GenAI, too, which could be used to generate reports and summaries after an incident, prepare answers, and help keep stakeholders informed.”

While the potential of AI in security is significant, Schwenger is quick to note the enduring need for, and value of, humans in any security program. “AI is only as good as the data it’s based on, trained on, and analyzing. Nothing can replace the human expertise and oversight, which is something that will always be needed,” she says.

Kristin Burnham

Kristin Burnham is a freelance journalist covering IT, business technology, and leadership.
