
Cyber Whistleblowing Is Putting Security Chiefs in the Hot Seat

A 2022 law may spawn the next generation of security whistleblowers. Greater transparency and shared accountability can help make them less necessary.

Perspective

When Twitter hired Peiter “Mudge” Zatko as its new security lead in November 2020, he was warmly embraced by then-CEO Jack Dorsey, who had personally recruited him.

As a member of the legendary L0pht hacker collective in the 1990s, Mudge helped invent the concepts of ethical (“white hat”) hacking and responsible disclosure of vulnerabilities. He had worked for the Defense Advanced Research Projects Agency (DARPA) and Google before joining Twitter.

After a devastating hack four months earlier allowed attackers to take over accounts belonging to former President Barack Obama and Tesla CEO Elon Musk, among others, the company needed someone with Mudge’s towering reputation to restore its credibility. But less than two years later, the former security chief was testifying before Congress about Twitter’s “grossly negligent” approach to security, after company leaders allegedly ignored his many warnings.


As Mudge discovered, the path from security savior to whistleblower is slippery and steep. For a high-profile executive to go public about unaddressed security vulnerabilities is exceedingly rare. But that may soon change with federal cybersecurity reporting rules about to go into effect.

The floodgates open to whistleblowing

In March 2022, the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) was signed into law. Once final rules take effect, the act will require critical infrastructure organizations to report significant cyber incidents to the Cybersecurity and Infrastructure Security Agency (CISA) within 72 hours and ransomware payments within 24 hours.

There are times when I’ve put my badge on the table and said, ‘This is not happening on my watch.’

Malcolm Harkins, chief security and trust officer, Epiphany Systems

According to legal experts, the act opens the door to a new class of whistleblower: employees who know their organization failed to report a breach to CISA in a timely manner could bring a lawsuit under the False Claims Act.

The SEC has also proposed new rules requiring publicly traded companies to disclose “material cybersecurity incidents” within four business days of determining that an incident is material, and the agency already offers significant financial incentives to employees who report organizations that fail to comply. As a result of these new requirements, the number of employees declaring whistleblower status is expected to increase.

It doesn’t have to be this way. By embracing a culture of transparency and shared accountability, organizations can create fewer circumstances that lead to whistleblowing, while also doing a better job of managing risk.

CISOs feel the squeeze

The pressure on CISOs to suppress information about serious cyber risks is all too common, notes Malcolm Harkins, chief security and trust officer for Epiphany Systems, and a former chief security and privacy officer at Intel.

In late 2020, Harkins posted a poll on the Pulse.QA platform (now Gartner Peer Insights) asking security professionals if they’d ever been asked to “whitewash” real security risks. Of the more than 100 respondents, nearly 60% said yes. Although the survey was unscientific and had a small sample size, it reflects a growing reality among cybersecurity leaders, Harkins says.

“We have all experienced integrity-related challenges that have tested our resolve,” he wrote in an essay published earlier this year. “We could anticipate when pressure was likely to arise…. We could smell it, feel it like a sixth sense.”

When confronted with unethical pressure from above, security leaders have four options: They can push back harder and try to change their boss’s mind. They can manage the risk as best they can and hope to avert disaster. They can turn in their badges and find more security-aware employers. Or they can go public with the news, alerting regulators while also endangering their careers.

“There are times when I’ve put my badge on the table and said, ‘This is not happening on my watch,’” says Harkins. “Personally, I would do that before ever becoming a whistleblower. If you think the company is making a material mistake, you should be willing to leave.”

Security is a team sport

Security professionals deal with hundreds of potentially serious vulnerabilities each day, says Steve Zalewski, former CISO for Levi Strauss and now a security adviser with S3 Consulting. They have to decide which ones present a big enough threat to the business and what to do if the CEO or the board refuses to respond appropriately.

CISOs are like firefighters: They run toward the problem, expecting to be treated like heroes. But more and more they’re realizing that’s not how their company is treating them.

Steve Zalewski, security adviser, S3 Consulting

“Often you’re put in the position of accepting a lot more risk than you’re comfortable with but have no way of managing that risk,” he says. “CISOs are like firefighters: They run toward the problem, expecting to be treated like heroes. But more and more they’re realizing that’s not how their company is treating them.”

And sometimes security pros make the wrong decision.

In November 2016, hackers breached a database at Uber, accessing the records of 57 million riders and drivers. The attackers threatened to make the data public unless Uber paid them $100,000. After discussing the situation with Uber’s in-house security lawyer and then-CEO Travis Kalanick, chief security officer Joseph Sullivan agreed to pay the attackers through the company’s bug bounty program. He then failed to report the incident to the Federal Trade Commission (FTC), which was already investigating the company for an earlier breach.

[Read also: Why bug bounty hunters are earning huge payouts]

Last October, a federal jury found Sullivan guilty of obstructing the FTC’s investigation and of concealing a felony. As of publication time, a date for a sentencing hearing had not been set.

“Joe made a mistake based on bad advice from others in the company, and in hindsight I think he’d do things differently,” says Rob Chesnut, former general counsel and chief ethics officer at Airbnb, who served on Uber’s Advisory Board on Safety in 2016 and now consults on legal and ethical issues. “But is it fair that Joe is taking sole responsibility, in a criminal case, for the company’s failure? I don’t think so.”

Sullivan’s conviction captured the attention of the cybersecurity community, notes Chenxi Wang, founder and managing general partner of venture capital firm Rain Capital, and an investor in multiple security startups.

“There’s been a lot of discussion around who in an organization has the responsibility to make decisions that have cybersecurity implications,” she says. “Where does the burden fall? The CISO should not be the only one [responsible].”

[Read also: Companies are scrambling to find CISOs amid rise in hacking threats]

Wang says organizations need to establish a process for making key decisions about how cyber incidents are handled, and for determining which other executives are accountable and must be part of that process. It starts with the general counsel, adds Zalewski.

“General counsels (GCs) are a CISO’s best friend, because they’re the only ones who are legally obligated to put the best interests of the company first,” he says. “When you bring the GC into the discussion, the board has an obligation to listen to you, and you have a written record of what happened.”

Culture is key

Ultimately, whether a company becomes an unintentional breeding ground for whistleblowers boils down to corporate culture. Do employees feel comfortable speaking truth to power? Are business leaders willing to listen?

“If you’re in a company where you’re just supposed to salute and execute, you’re going to have more of these issues,” notes Harkins. “If employees lack the right to disagree and challenge, you’ll see more of these ethical dilemmas.”

The key is encouraging transparency and communication from the bottom up and the middle out, notes Marene Allison, who is set to retire as CISO for Johnson & Johnson.

[Read also: How CISOs can talk cyber risk so that CEOs actually listen]

Since the passage of the Sarbanes-Oxley Act in 2002, publicly traded companies have been required to maintain ethics hotlines where employees can anonymously report potential wrongdoing. Allison says every company needs a way for employees to sound the alarm when they see something wrong.

“Bad news early is good news,” she says. “If a company wants to be transparent, it needs to provide an opportunity for employees and third-party contractors to comment on practices. Yes, you’re going to get the person who wants to be on the 5 o’clock news, but you’ll also get the person who sees something and is trying to tell you but no one will listen.”

Security leaders have a moral imperative to go public if a cyber incident could end up endangering lives, argues Harkins. But in less dire circumstances, a more common response is to walk away. He advises security pros to document everything first and share it with the general counsel and other top executives. That tactic can create a paper trail of accountability in case regulators come knocking.

A gray line

The threats are stark and impossible to ignore. But cybersecurity exists in varying shades of gray. Whistleblower scenarios can arise when there’s a misalignment between how security leaders and the rest of the C-suite perceive risk.

The best way to address whistleblowers is to create an environment where they’re not needed.

Rob Chesnut, former general counsel and chief ethics officer, Airbnb

For decades, cybersecurity has been sold using fear, and security pros can get a bit overzealous when talking about risk, says Harkins. This stance can help when seeking budget approval, but it can also damage a CISO’s credibility when the cataclysm they predicted fails to materialize.

Business leaders, on the other hand, may be unable to fully grasp how a vulnerability affects the company’s bottom line. And while it’s relatively easy to measure the damage that a serious incident causes, it’s much harder to measure the value of the investment that prevents the incident from ever happening. It’s the CISO’s job to make that case without overhyping it.
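One back-of-the-envelope way to frame that case is with annualized loss expectancy (ALE) and return on security investment (ROSI), two widely taught formulas for putting risk and risk reduction in dollar terms. The sketch below uses purely hypothetical figures and simply shows how the arithmetic works; it is not a model drawn from the experts quoted here.

```python
# Illustrative only: hypothetical figures, not data from this article.
# ALE (annualized loss expectancy) and ROSI (return on security investment)
# are common back-of-the-envelope formulas for framing cyber risk in dollars.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected loss per incident times expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Risk reduction minus the control's cost, relative to the control's cost."""
    return (ale_before - ale_after - control_cost) / control_cost

# Hypothetical scenario: a breach would cost $2M and is expected once every
# four years; a $150K control is assumed to cut the likelihood in half.
before = ale(2_000_000, 0.25)    # $500,000 per year
after = ale(2_000_000, 0.125)    # $250,000 per year
print(f"ALE before control: ${before:,.0f}")
print(f"ALE after control:  ${after:,.0f}")
print(f"ROSI: {rosi(before, after, 150_000):.0%}")  # ~67% under these assumptions
```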

“Many businesses don’t understand cybersecurity; they just know they need to have it,” says Zalewski. “CISOs need to help companies understand the true magnitude of the risks they are undertaking.”

[Read also: To use cyber metrics well, show boards the money]

But it’s impossible to protect every asset equally. Companies can be doing everything aboveboard from a security perspective and still get breached. How they respond matters most, and uniform rules for how to respond have yet to be defined.

“Just because a company has a data breach doesn’t mean it’s a bad company,” says Chesnut. “Are they transparent about what happened and do they seem sincerely committed to rectifying the problem? The best way to address whistleblowers is to create an environment where they’re not needed.”

Dan Tynan

Dan Tynan is an award-winning journalist whose work has appeared in Adweek, Fast Company, The Guardian, Wired, and too many other publications to mention.
