
Misinformation / Disinformation, Part 2 – Bringing Old-School Cybersecurity to a New Fight

Bad actors are using inaccuracies, lies, and sophisticated deepfakes to target organizations. Previously, we examined why enterprises of all sizes need to take this seriously. Here, we review some traditional cybersecurity strategies that can help thwart today’s newest, AI-fueled influence operations.

Perspective

Think your customers and employees can spot online scams, deepfakes, and deliberate misinformation and disinformation? Recent studies show people overestimate their ability to discern truth from fiction – and it’s likely to get worse as AI becomes more sophisticated.

This should have enterprise leaders concerned — and turning to their chief information security officers. Those CISOs, it turns out, have some old tricks that can counter these new AI-fueled threats.

First, the research: A Citi survey released in November asked 2,432 online respondents how well they can spot a financial scam. Ninety percent said they’re good at it, even though more than a quarter (27%) had already fallen prey to such schemes.


Another study, published in June, found that people are more likely to believe disinformation crafted by AI than fake statements written by humans. The margin is small – the 697 people surveyed were 3% less likely to spot AI-generated fake tweets than human-written lies – but significant given the predicted deluge of such convincing content.

“The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares,” University of Zurich researcher Giovanni Spitale, Ph.D., who led the study, told MIT Technology Review back in June.

Six months later, he’s still apprehensive.

“I am not worried by technology itself but by its potential to be abused,” he tells Focal Point via email. Spitale, a self-described tech enthusiast and “big nerd” (he started programming on an Olivetti 386 in 1993, when he was six), worries that governments and laws are simply too slow to keep pace with disruptive technologies like the large language models (LLMs) that power ChatGPT.

The EU’s AI Act, passed this summer, is “already old,” he observes. And that leaves individuals and enterprises exposed.

Misinformation, disinformation — whatever you call it, it’s here

Experts have been warning for years about AI’s ability to enhance the frequency and quality of misinformation (mistaken statements), disinformation (outright lies), and malinformation (truths deliberately twisted or taken out of context for malicious purposes), and we’re now seeing it unfold.

I am not worried by technology itself but by its potential to be abused.

Giovanni Spitale, Ph.D., disinformation researcher, University of Zurich

Scammers are increasingly using AI to “turbocharge” fraud, warned Federal Trade Commission chair Lina Khan in June. Pindrop, which monitors billions of calls at many large U.S. banks, noted an uptick this year in deepfake scams where AI was used to simulate customer voices to try to transfer funds.

Small-to-medium-size businesses (SMBs) are just as much at risk as big multinationals. Negative online reviews – trumped up by disgruntled customers or employees to sully a brand’s reputation, or concocted by cyber gangs to extort money – are already a problem for SMBs. Ottawa Perogies discovered this in 2021, when a cybercriminal flooded Facebook and Google with one-star reviews after the family-run takeout restaurant ignored demands for money. Orders drastically declined within days, and it took weeks for the eatery to mend its online reputation. That same year, businesses lost an estimated $152 billion worldwide due to fake reviews, according to a University of Baltimore study. Experts warn AI will only increase those types of attacks.

The good news is that enterprise leaders can rely on traditional cybersecurity methods – tried-and-true tactics that CISOs and other security pros have advocated for years – to thwart such threats. “For the time being, the best defense we identified is having a playbook not only to respond to incidents but to assess their likelihood – but things will happen,” Spitale concedes.

Fair enough. The following security tactics can’t inoculate an organization completely, but they can blunt a heckuva lot of the damage:

1. Rewrite incident response playbooks – and replay them

Though originally designed to handle technical crises like data breaches, incident response plans can and should be revised to manage misinformation and disinformation crises. This means adding steps to your playbook to quickly spot inaccurate info – things like deepfake videos and audio clips, rogue hashtags that incorporate your brand name, and other nefarious online content – determine its source, and launch an assertive communications strategy to counteract its effects. (More on that later.)
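To make “rewrite and replay” concrete, here’s one way to express a misinformation addendum as code rather than a static document, so it can be versioned, diffed, and walked through in tabletop exercises. This is a minimal sketch only; the phases, owners, actions, and time targets below are illustrative assumptions, not a standard playbook schema or any vendor’s format.

```python
# A misinformation addendum to an incident response playbook, expressed
# as data so it can be version-controlled and replayed in drills.
# All phases, owners, actions, and SLA targets are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PlaybookStep:
    phase: str       # "detect", "attribute", or "respond"
    action: str      # what the responder actually does
    owner: str       # team accountable for the step
    sla_hours: int   # target time to complete

MISINFO_PLAYBOOK = [
    PlaybookStep("detect", "Triage brand-mention alerts for deepfake video/audio and rogue hashtags", "threat hunting", 2),
    PlaybookStep("attribute", "Trace originating accounts and amplifiers; preserve evidence", "threat intel", 6),
    PlaybookStep("respond", "Launch counter-messaging / prebunking across owned channels", "communications", 12),
    PlaybookStep("respond", "File takedown requests with hosting platforms", "legal", 24),
]

def replay(playbook: list[PlaybookStep]) -> None:
    """Walk the playbook in order, as a tabletop exercise would."""
    for step in playbook:
        print(f"[{step.phase:>9}] {step.owner}: {step.action} (target: {step.sla_hours}h)")

if __name__ == "__main__":
    replay(MISINFO_PLAYBOOK)
```

Replaying the list in a quarterly drill keeps it honest: if an owner or time target no longer matches reality, the exercise surfaces that gap before a real incident does.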

If someone in your online tribe is sharing fake news, then you feel pressure to share it as well, even if you don’t know whether it’s false or true.

Matthew Asher Lawson, Ph.D., assistant professor of decision sciences, INSEAD

An updated incident response playbook will help you to respond quickly and efficiently when bad actors knowingly or unwittingly steer your brand narrative off course.

That latter scenario – lies spread by unwitting accomplices – is worth underscoring. Research on some 60,000 social media users published this year by the American Psychological Association showed that most people who spread fake news don’t have a specific axe to grind against a person or group; they simply want to fit in.

[Read also: 5 steps to a rock-solid incident response plan]

“If someone in your online tribe is sharing fake news, then you feel pressure to share it as well, even if you don’t know whether it’s false or true,” noted lead researcher Matthew Asher Lawson, Ph.D., an assistant professor of decision sciences at INSEAD, a business school in France.

2. Add deepfakes to phishing tutorials

Cybercriminals are using AI to manipulate actual images and recordings or to create wholly fake content. This summer, for instance, some 50 fake videos posted on TikTok, Facebook, and YouTube showed Elon Musk in “interviews” with various TV personalities – Gayle King, Tucker Carlson, Bill Maher – touting a fake investment scheme. They racked up tens of thousands of views.

Phishing attacks are also expected to evolve, with “deepfakes enabling near-perfect impersonation of trusted figures, making a significant leap from the usual method of replicating writing style or mimicking email design,” writes Neil Lappage, a CISO and security advisor at firms like C5 and ITC Secure, in a recent article for the digital trust organization ISACA. Deepfake video and audio can be used to impersonate IT staff or executives, targeting unsuspecting employees or even biometric systems that use voice and facial recognition software.

“Employees should be familiar with standard procedures for responding to suspected manipulated media and understand the mechanisms for reporting this activity within their organization,” advises a joint report published by the NSA, the FBI, and CISA.

Updating your workforce training programs also demonstrates an enterprise’s commitment to keeping pace with evolving cybersecurity risks – something you can and should convey to your cyber insurance agents, who look for demonstrable signs of an enterprise’s cyber vigilance.

[Read also: CISO success story – how LA County trains (and retrains) workers to fight phishing]

For more information on how to spot deepfakes and what your training should include, check out the tips published by SANS, the University of Washington’s Center for an Informed Public, and MIT.

3. Improve DEX – a.k.a. playing the long game

Bolstering your Digital Employee Experience (DEX) is perhaps a less obvious but potent way to fight misinformation and disinformation. Taking time today to enhance DEX through user-friendly interfaces, work tools, and seamless digital communication channels can pay off in several ways:

In general, employee satisfaction is a significant measure of a company’s overall brand reputation. When a fake news story strikes and casts a shadow on a firm, consumers will be more skeptical of the smear if the firm is renowned for the way it treats its workers.

[Read also: For DEX that delivers, check out this solution brief on how to boost worker productivity and satisfaction from one platform]

Inside the workplace, happy employees are more productive, less stressed, and less likely to become insider threats. Those who feel valued will be less inclined to share false information, more engaged in the training programs noted above, and more likely to report misinformation and disinformation when they see (or suspect) it.

4. Empower threat hunting teams to look out – and outward – for “unknown unknowns”

Threat hunts, devised to look inward and search for unprotected entry points within a computer network, can also look to the outside world to identify patterns and detect anomalies related to a brewing misinformation or disinformation crisis.

Threat hunters are trained in the art of exploring “unknown unknowns,” those unidentified vulnerabilities or unanticipated risks that are ripe for exploitation. In the realm of mis- or disinformation, these could include advanced, not-yet-invented deepfake capabilities, novel dissemination channels, or a sudden surge in negative social media mentions that link your brand or executives to a political or social issue your firm has never trafficked in before. At the very least, they can spot an oncoming wave of unearned negative commentary when it’s just an early trickle.

Armed with social media monitoring tools, threat hunters can spot common patterns and tactics used by known threat actors and alert communications, PR, and marketing teams when an uptick in negative messaging about your company is detected. Such monitoring can’t stop an attack, but it can give your spokespeople a few days or hours of advance warning to craft a “prebunking” campaign – counter-messaging that gets out to the public before a flood of lies swamps the internet.
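To illustrate the kind of early-warning check that monitoring can feed, here is a minimal sketch that flags a spike in daily negative-mention counts against a trailing baseline. It assumes the counts already come from whatever monitoring tool you use; the function name, sample data, and z-score threshold are all hypothetical starting points, not any product’s API.

```python
# Flag a surge in negative brand mentions against a trailing baseline.
# Counts are assumed to come from a social media monitoring tool; the
# 3-sigma threshold is an illustrative default, not a standard.
from statistics import mean, stdev

def mention_spike(daily_counts: list[int], threshold: float = 3.0) -> bool:
    """Return True when today's count sits far above the recent baseline."""
    if len(daily_counts) < 8:        # need about a week of history plus today
        return False
    baseline, today = daily_counts[:-1], daily_counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:                   # flat history: any jump above it is notable
        return today > mu
    return (today - mu) / sigma > threshold

# Hypothetical week of negative mentions, then a sudden surge.
negative_mentions = [12, 9, 14, 11, 10, 13, 12, 87]
if mention_spike(negative_mentions):
    print("Alert comms/PR: negative mentions spiking; start prebunking prep.")
```

Even a crude baseline like this can buy the hours of warning described above; in practice you would segment counts by topic and platform before alerting.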

When dealing with misinformation and disinformation, speed is critical. Researchers have found that proactive prebunking is more persuasive and encourages consumers to be more skeptical of lies than after-the-fact debunking of fake news.

5. Apply the rules of effective crisis communications

Once false info goes viral, the fundamental principles of crisis communication – being swift, transparent, and empathetic – should guide an organization’s response.

If the company has a history of operating without empathy and transparency, it becomes much easier for people to believe unfounded assertions about your brand.

Jeff Pollard, VP and principal analyst, Forrester

Prompt action and transparency are crucial. Last year, a federal jury convicted former Uber CISO Joe Sullivan of lying about and covering up a 2016 hack in which thieves stole the data of some 57 million users. Other firms have fared no better in the court of public opinion when delays in going public have, well, gone public.

“The world tends to be pretty forgiving if you’re upfront about things,” Jenai Marinkovic, CISO at Tiro Security, told Focal Point in an article on successful communication strategies last spring.

Translating engineer-ese into clear, plainspoken messages that customers (and board members) can understand is also vital when responding to an info attack, as it demonstrates empathy for stakeholders. Empathy humanizes a brand and is one of the seven levers of trust in Forrester’s trust imperative, a set of guidelines to help enterprises build enduring bonds with customers and stakeholders. It’s also something any firm can nourish and encourage starting today.

[Listen also: In this episode from our new podcast, Let’s Converge, we discuss a recent ISACA survey that shows the unexpected cybersecurity benefits when enterprises take “trust” seriously]

Make no mistake: Just like other cyberattacks, AI-fueled misinfo/disinfo attacks are not a matter of if but when. So establishing empathy in business operations and communications is vital, writes Jeff Pollard, a vice president and principal analyst at Forrester, in a blog post earlier this year. “History matters when it comes to defending against reputational damage from deepfakes,” he explains.

“If the company has a history of operating without empathy and transparency,” he notes, “it becomes much easier for people to believe unfounded assertions about your brand.”


TO LEARN MORE

Check out the first installment in this two-part series, which offers more tips on how to fight misinformation and disinformation and explains why enterprises cannot count on third parties or legal authorities to make this right.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men’s Health, Newsday, Los Angeles Times, CNN.com, and Barrons.com, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast.
