Insulin is not free. But a single tweet claiming that it was may have cost shareholders billions of dollars.
When a parody of Eli Lilly’s Twitter account promoted free insulin last November, the intriguing fiction went viral and sent the drugmaker’s stock price tumbling nearly 5 percent, even denting the stocks of other insulin makers Novo Nordisk and Sanofi.
The person behind the tweet did it to make a point about corporate greed and the potential consequences of Twitter’s then-new scheme giving a blue “verified” check to anyone willing to spend $8 a month. But he also unintentionally demonstrated something else: the growing power of misinformation (inaccurate statements made by mistake or through carelessness) and disinformation (lies deliberately told) to take down companies and shake entire industries.
When false information goes viral, it can deal severe financial and reputational blows, and many organizations are only just beginning to figure out how to deal with this new reality.
“Businesses need to recognize that their reputations can pivot on a dime,” says Alan Jagolinzer, professor of financial accounting at Cambridge Judge Business School, and director of the Centre for Financial Reporting and Accountability. “That’s especially true when dealing with a belligerent actor with a strong following.”
Spreaders of misinformation and disinformation can take the form of short sellers hoping to manipulate a company’s stock price, or businesses attempting to sow doubt about a competitor’s products. It might be individuals pursuing personal vendettas, opportunists taking advantage of crises to build their audiences, or nation-states attempting to foment chaos and confusion.
Brands can be damaged and business leaders can find themselves at personal risk, thanks to false rumors or AI-generated deepfakes—with Tesla, online retailer Wayfair, and Mexican cement supplier Cemex among the recent high-profile victims.
And no business is immune. While attacks on multinational corporations grab most of the headlines, false information has impacted small and medium-size businesses from plumbers to plastic surgeons, often in the form of negative “revenge” reviews on sites like Yelp and Google Reviews. A 2022 study by the National Bureau of Economic Research found that fake negative reviews can cause small businesses to raise their prices by 12 percent to make up for lost sales.
Misinformation (and disinformation) made easy
Launching an information attack on a company is much easier than phishing or trying to hack into a computer network to plant ransomware, stresses Lisa Kaplan, CEO and founder of misinformation mitigation platform Alethea.
“You don’t need to be technically savvy to launch a disinformation campaign the same way you do in order to successfully access big banking systems,” she says. “And while state actors tend to save their best new tricks for geopolitical moments, people who are financially or ideologically motivated are increasingly targeting corporations.”
Alethea’s AI-based Artemis platform is part of an emerging field of technological solutions designed to detect and defeat information attacks. Acting as a virtual analyst, it collects information across the internet and social media, identifies known actors and common narratives, and looks for signs of coordinated attacks. By providing an early warning that an attack may be underway, such platforms give companies time to respond before false narratives spread.
Attacks may be timed around critical events, such as an upcoming IPO. Getting out in front of a potentially damaging story and taking charge of the narrative is key, says Kaplan. A company can warn its customers and other stakeholders how to identify the misinformation that’s on its way, a technique known as “prebunking.” Studies show that prebunking is more effective than debunking, or fact-checking, false information after it has spread.
“If you’re catching the early warning signals that something is about to happen, you can change the narrative by saying, ‘Here’s accurate information’,” Kaplan says. “You’re able to get to your stakeholders first, and sometimes being first helps you to win.”
Deepfakes are on the rise
Thanks to generative AI, misinformation is about to get much worse, says Jevin West, associate professor at the University of Washington and director of the university’s Center for an Informed Public.
“The ability of large language models to generate high-quality messages very quickly is making it much harder for companies to respond to false rumors,” says West. “Rumors spread really fast, especially with deepfakes and synthetic media. If companies aren’t already worried about this, they should be.”
The damage from AI-generated misinformation can have a ripple effect. When an online ad for dental insurance plans featuring a deepfake version of Tom Hanks appeared earlier this year, it didn’t just diminish Hanks’ reputation; it also hurt legitimate insurers offering similar products, as well as companies the Oscar-winning actor might actually be in business with.
“It reduces the public’s level of trust across the entire industry,” West says.
The ability to generate deepfakes has gotten dramatically better in just a couple of years, says Rijul Gupta, founder and CEO of DeepMedia, which markets a platform that can detect synthetic media (fakes produced by AI).
“Eighteen months ago, you might have needed 10 minutes of someone’s actual face and voice in order to generate a fake,” he says. “Now you need much less. I can almost guarantee you that there will be publicly available applications that can take five minutes of someone’s face and voice and make them say anything.”
DeepMedia’s DeepIdentify platform reverse-engineers audio and video to determine if they were assembled by generative AI. The company works with the U.S. Department of Defense and major social media platforms to identify synthetic media, and is planning to release an enterprise version of its platform later this year.
“If a deepfake of the CEO of a Fortune 500 company gets posted on TikTok, our AI-containment solutions will search for it and determine whether it’s fake,” says Gupta. “If it’s an AI manipulation, we’ll reach out to the platform and request they take it down.”
The deepfakes problem is likely to get much worse. Nina Schick, author of Deep Fakes: The Coming Infocalypse, has predicted that as much as 90 percent of the content on the internet could be synthetically generated by 2026.
Defending against the inevitable
Experts agree that organizations cannot rely on third parties or legal authorities to step in and fix the misinformation problem for them.
Kaplan, for one, notes that recent cuts to trust and safety teams at major social platforms put the onus on organizations to take the lead in policing their own reputations. “Having early insight into what’s being said out there leads to smarter business decisions,” she says.
Not that the tech sector is (or should be) completely off the hook. Technology companies must do their part to establish consumer trust and reestablish it when it is lost. Establishing ethical frameworks that govern the creation and use of new technology is a significant first step that all tech firms, from startups to megabrands, need to take right now. That means baking transparency and accountability into designs from the start.
As for those outside the tech sector? Jagolinzer urges more action to hold the platforms accountable. He says large corporations should get behind legislation like the EU’s Digital Services Act, and business leaders need to be more vocal about bringing their concerns directly to media organizations.
“If I were leading a major corporation, I’d be all over the media companies, saying, ‘If you can’t get a handle on the information flowing through your infrastructure, you’re placing us all at risk’,” he says.
Misinformation is not a problem organizations can afford to ignore, adds UW’s West. It will take a coordinated effort among cybersecurity teams, legal counsel, and media relations personnel to watch for attacks on a company’s reputation, using social media listening, detection technology, and incident-response plans.
“You need to invest in a team that’s paying attention to this stuff,” he says. “Because things are going to get a lot worse before they get better.”
TO LEARN MORE
Check out the second installment in this two-part series, which reveals how updated phishing tutorials, threat hunts and even better digital employee experience (DEX) strategies can be effective at countering this new threat: