
What D-Day Can Teach Us About AI and Cyberattacks

This anniversary of the Allied invasion of Normandy should remind us that warfare, especially cyberwarfare, is always evolving. The good news? Surprise attacks aren’t always so surprising.

Perspective

When I think about how artificial intelligence (AI) will impact cyberwarfare, I think about the Allied invasion of Normandy. Or at least I should, as I was reminded this morning on this, the 79th anniversary of that fateful day.

The thing that has always struck me about that sneak attack on June 6, 1944, is that it wasn’t so surprising. We’d announced our entry into the war years before and had been steadily building up our munitions and mobilizing troops. The assault itself involved nearly 160,000 Allied troops (73,000 from the U.S., 83,000 from Great Britain and Canada, among other troops). The Germans had known the attack was coming for months. They just weren’t sure where or when.

The same can be said for AI on the battlefield. And by battlefield, I mean any attackable beach, hill, Maginot Line, munitions depot, or computer network near you.


AI-enabled cyberattacks are coming, no question. Launched by a nation-state, cyber gang, or some former employee with a gripe. And it’s up to everyone—military experts, politicians, enterprise leaders, and any of us with a laptop or cellphone—to be ready.

Just as Normandy marked a decisive turning point in World War II, heralding the eventual defeat of the Nazi regime and the start of a new world order, the entrance of AI into our military planning will have an extraordinary impact.

“Today we are undergoing the most significant and most fundamental change in the character of war,” said Army Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, on a podcast for the Eurasia Group Foundation in March. “This time,” he added, “[it’s] being driven by technology.”

Milley and Secretary of Defense Lloyd J. Austin III spoke Tuesday at a service at the Normandy American Cemetery and Memorial in France, where more than 9,000 service members are buried just above Omaha Beach. Milley spoke of the valiant sacrifice made by the soldiers who stormed those cold, chaotic beaches.

What he didn’t mention is that the coming change will affect businesses as much as bunkers.

Do we need a “Geneva Convention” on AI?

AI—whether you’re fearing it or drooling over it—is all the buzz these days. A consortium of scientists has called for a pause in AI development, concerned that technologies like OpenAI’s ChatGPT are being dumped into the public domain in a mad dash for profits and power before we fully understand their potential. Scientists admit we don’t know exactly how AI works—or how it might (make that will) evolve of its own accord.

Today we are undergoing the most significant and most fundamental change in the character of war [and it’s] being driven by technology.

Gen. Mark A. Milley, chairman of the U.S. Joint Chiefs of Staff

“AI could be as dangerous as nuclear weapons,” wrote U.S. Rep. Seth Moulton (D-Mass.), a former Marine who served in Iraq (and was twice decorated for valor), in an attention-getting op-ed that ran in The Boston Globe last month.

Moulton has called for a “Geneva Convention on AI,” so that world leaders can try to establish some rules of the road before the technology’s worst-case scenarios become a reality. He discussed the idea in a “Future of Defense Task Force Report” he co-authored in 2020, and he says he’s frustrated that the Pentagon “has done almost nothing” in the three years since.

[Read also: What businesses need to know about Biden’s national cybersecurity strategy]

Enterprise leaders may not be invited to any future AI Geneva Convention—though maybe they should be, given the stakes—but there are ways they can fortify their own technological infrastructure, starting now.

Turing, technology, and what enterprise leaders should know

Technology played a key role in the plans for D-Day. Germany’s state-of-the-art cipher machine, Enigma, once considered unbreakable, encoded the military’s most sensitive communications—until Alan Turing, building on the earlier work of Polish cryptologists, led British codebreakers in cracking it, an effort that helped lay the foundation for the modern computer.

AI could be as dangerous as nuclear weapons.

Seth Moulton, U.S. Representative (D-Mass.)

That code-cracking—depicted in The Imitation Game, with Benedict Cumberbatch as Turing—came early in the war, but in a controversial move, the Allies kept it a secret, which allowed them to keep listening in on German military traffic and decrypting vital information. Messages intercepted and decoded in the run-up to D-Day gave the Allies precise, near-real-time visibility into the locations of German fighting units in and around Normandy. And on D-Day itself, Allied commanders monitored German communications, which often gave them quicker and more accurate reports on the invasion’s progress than their own channels did.

That kind of visibility and the evolving mindset it ushered in are essential for today’s enterprise leaders who are combating a tech war all their own.

First, visibility, as in endpoint visibility: Knowing the number and location of all desktops, laptops, tablets, servers, and other endpoints, and the speed with which they are being (or need to be) patched, is a key element in any robust cybersecurity strategy, as important as knowing the location of soldiers on every hill in France was back in the 1940s. It’s your starting point. You can’t move on to more sophisticated strategies without that first step.
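To make that first step concrete, here is a minimal sketch of an endpoint-visibility check (a hypothetical illustration, not a description of any particular product) that reads an asset-inventory export and flags devices whose last recorded patch is older than a policy threshold. The file name, column names, and 30-day cutoff are illustrative assumptions; in practice this data would come from an endpoint-management platform rather than a hand-maintained CSV.

```python
import csv
from datetime import datetime, timedelta

# Illustrative policy threshold; tune to your own patch-management SLAs.
MAX_PATCH_AGE_DAYS = 30

def find_stale_endpoints(inventory_path):
    """Return endpoints whose last recorded patch is older than the cutoff.

    Assumes a CSV export with hypothetical columns: hostname, device_type,
    and last_patched (an ISO date such as 2023-05-14).
    """
    cutoff = datetime.now() - timedelta(days=MAX_PATCH_AGE_DAYS)
    stale = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            last_patched = datetime.fromisoformat(row["last_patched"])
            if last_patched < cutoff:
                stale.append((row["hostname"], row["device_type"], last_patched.date()))
    return stale

if __name__ == "__main__":
    for hostname, device_type, patched in find_stale_endpoints("endpoints.csv"):
        print(f"{hostname} ({device_type}): last patched {patched}, overdue")
```

Even a toy report like this makes the point: before any more sophisticated defense, you need a current, complete list of what you own and how far behind it is.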

[Read also: Here’s your Benchmarking 101, or why it pays to know how your cybersecurity stacks up]

As for mindset? The myth of Enigma’s unbreakability, and the hubris that fueled that misguided assumption, played as much a role in Germany’s eventual defeat as all the tanks, bombs, and bullets. No technology is impregnable. Thinking otherwise (see also “Titanic, unsinkable”) is a recipe for disaster.

That’s a useful reminder for today’s C-suite execs and board members who are (or should be) hearing their security leaders talk about the need for increasing tech budgets and amping up defenses. Some enterprise chiefs may still think we can absolutely prevent cyberattacks if we just find the right technology. Savvy business leaders now accept that it’s not a matter of if but when attacks will happen.

But there are effective ways—through robust threat hunting and incident-response plans, to name just two—to limit the damage.

Slow down

When it comes to integrating AI into business systems, learning to walk before we run is also prudent. But slowing down doesn’t mean stopping.

We must meet today’s challenges with our full strength—soldier and civilian alike.

Lloyd J. Austin III, U.S. Secretary of Defense

Moulton, for instance, endorses a pause, but nothing like across-the-board restrictions.

“There’s a lot of AI development that we don’t want to slow down because we want to get cures for cancer as quickly as we can,” Moulton said recently in a radio interview for NPR’s Morning Edition.

But regulations are essential, he notes, if we’re considering AI’s use in warfare, or its ability to promulgate disinformation at an alarming rate. “These are the places where I think Congress needs to focus its regulatory oversight, not to just try to regulate AI overall, but just prevent the worst-case scenarios from happening,” he said.

[Read also: We spoke to one of the key architects of our new national cyber strategy, who says “persistent engagement” will be good for business]

So how likely is a Geneva Convention for AI?

“We had a lot of nuclear arms agreements during the Cold War,” Moulton told Politico in a recent interview. “The Geneva Conventions were negotiated with a lot of tensions in the world. I think that this is hard, but it’s absolutely worth trying.”

If there is such a convention, I vote for inviting enterprise leaders to help figure this out.

“We must meet today’s challenges with our full strength—soldier and civilian alike,” said Austin Tuesday, in Normandy, referring to the vulnerability of democracy. “If the troops of the world’s democracies could risk their lives for freedom, then surely the citizens of the world’s democracies can risk our comfort for freedom now.”

He wasn’t speaking of AI, per se. But… he kind of was.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men's Health, Newsday, Los Angeles Times, CNN.com, and Barrons.com, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast—listen here.
