
The Surgeon General’s Social Media Warnings Ignite Controversy – and an Opportunity for Business Leaders

Dr. Murthy’s new demand for warning labels is more than just a way to protect kids. It provides an opportunity for enterprise leaders and board members to discuss where their organization sits in the social media landscape, and the ways that puts their brand and employees – of any age – at risk.


The U.S. surgeon general’s urgent new proposal for warning labels on social media might not go anywhere – it has met with skepticism and would require congressional action – but it has jump-started a cultural conversation that enterprise leaders shouldn’t ignore.

Dr. Vivek Murthy issued a groundbreaking proposal Monday to put Surgeon General warning labels on social media platforms to protect children’s safety online amid the nation’s growing youth mental health crisis.

While this might be easy to dismiss as merely a kid issue – relevant to families, educators, and businesses that target young people – the proposal gives C-suite leaders in every sector an opportunity to examine how their organization operates in the social media landscape and the risks that exposure presents to their brand.


“Yes, indeed, this should have ripple effects [in the business realm and beyond],” says Kathleen M. Carley, Ph.D., a professor of computer science at Carnegie Mellon University.

As director of the school’s Center for Computational Analysis of Social and Organizational Systems (CASOS), Carley employs artificial intelligence and machine learning techniques to examine the influence of bots, trolls, and other forces operating throughout the social media landscape, forces that can influence the behavior and well-being of adults as well as children.

Given how past Surgeon General warning labels on tobacco and alcohol products have influenced social behavior, enterprise leaders would be wise to consider the potential business impacts from a new social media label. At the very least, boards should start asking some serious questions – which, let’s face it, they probably haven’t – about their organization’s use of social media in general and the ways to best prepare for coming regulation.

Because, labels or no labels, that regulation is coming.

Social media warnings and the power of reminders

As thorns in the side of Big Tech go, this is Murthy at his thorniest yet. His new demand for warning labels comes a little over a year after his major health advisory highlighting young people’s rampant use of social media, its addictive nature, and the “profound risk of harm” to kids.


“A surgeon general’s warning label would regularly remind parents and adolescents that social media has not been proved safe,” he wrote in his recent op-ed for The New York Times. Evidence from tobacco studies, he continued, shows that “warning labels can increase awareness and change behavior.” Of course, in that case, the labels were part of a multi-pronged effort that included things such as mass-media campaigns and reduction of smoking on TV shows, Carley tells Focal Point.

[Read also: Warning labels? TikTok bans? New regs are coming and here are 3 ways to prep]

On the social media front today, supporting legislation has already been advanced in some states and in Congress, such as the Kids Online Safety Act, which would require platforms to take reasonable measures to prevent harm, and the Protecting Kids on Social Media Act, which would prohibit anyone under 13 from setting up a social media account. “There is consensus across both parties that more action must be taken,” writes Gabriel R. Sanchez, Ph.D., a senior fellow at the Brookings Institution, in a report on family support for warning labels.

And social media warning labels for kids will affect businesses… how, exactly?

It starts with concern for kids, something we can all agree on. But it won’t stop there.


Demands are ramping up for more guardrails on online content for users of all ages, over concerns about misinformation and disinformation – which the World Economic Forum cites as the world’s biggest short-term risk. Braced for the rapid deployment of generative AI (GenAI) technology, policymakers worldwide are exploring watermarking and other ways to flag AI-generated content. China has taken steps to ban unidentified AI content, President Biden’s executive order last fall strongly urged industry to create and use watermarks (Google and OpenAI have signed on to develop them), and the EU’s new AI Act obliges AI providers and users to do the same.

“Warning labels generally reduce belief and sharing of falsehoods,” writes Cameron Martel, a Ph.D. student at the Massachusetts Institute of Technology’s Sloan School of Management, and co-author of a review of warning-label research. That’s the good news. The not-so-good? In an email to Focal Point, Martel explained that the effectiveness of labels depends on how they’re implemented. Various factors play a role, including how visible the label is, how easy it is to scroll past or avoid, and what information it actually provides consumers.

One thing is becoming increasingly clear: Today’s regulation-free social media realm is unsustainable. Besides concerns about national security (TikTok ban, anyone?), fraud attempts using deepfake audio and video enabled by AI are inundating enterprises, up 2,137% in the last three years. Businesses both big (Tesla, online retailer Wayfair, and in a recent experiment, Chase Bank, among others) and small have been damaged by bad actors posting fake content or scamming employees via social engineering. Hackers’ annual take from social media-related cybercrime? An estimated $3.25 billion, according to one study.

[Read also: With cyber-villains weaponizing AI at an alarming rate, we asked a real-life Marvel superhero how he uses AI to fight back – here’s his CISO success story]

Although mandatory watermarks and warning labels remain a long way off, experts agree that such guardrails are inevitable. And anyone who recalls the early days of the internet knows we’ve been here before.

Just consider how we dealt with man-in-the-middle attacks, notes Tim Morris, chief security adviser at Tanium, a leading cybersecurity solutions provider (and publisher of this magazine). To assure consumers that they were actually accessing their bank when they logged in to their account, we set up https, signing certificates, and a now-standard infrastructure. “The same is going to have to happen with content,” says Morris. “We’ll have some version of the Good Housekeeping Seal of Approval, just as we do now with software, with websites.”
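Morris’s https analogy can be made concrete. The chain-of-trust check that reassures a consumer they have really reached their bank is available in a few lines of standard-library Python – a minimal sketch, with the hostname purely illustrative:

```python
import socket
import ssl

def fetch_certificate(hostname: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server's validated certificate."""
    # create_default_context() loads the system's trusted root CAs and
    # enforces both chain-of-trust and hostname verification -- the
    # "signing certificates and now-standard infrastructure" Morris describes.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        # The handshake inside wrap_socket() raises
        # ssl.SSLCertVerificationError on an untrusted or mismatched cert,
        # before any application data is exchanged.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# Example (requires network access):
# cert = fetch_certificate("example.com")
# print(cert["subject"], cert["notAfter"])
```

A successful call returns the parsed subject, issuer, and validity dates; a forged or expired certificate never gets that far. Content provenance – Morris’s “Good Housekeeping Seal” for posts and media – would need an analogous verification layer.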

Questions about social media that boards should start asking now

Not all these measures will come to pass. “But boards are going to have to be forward-looking,” says Morris. “Every business is going to have to deal with this kind of regulation in some kind of way.”


With that in mind, enterprise leaders and board members should start to address where their organization sits in the social media landscape, and the ways in which this puts them at risk. So gather department heads and other stakeholders who oversee cybersecurity, marketing, communications, and compliance. Here are some questions to ask:

1. Who do we serve, and who visits us online?

“I see this as an extension of KYC – know your customer,” says Morris. “Boards ought to know who their customers are, who they’re advertising to, and who they need to protect. They have a corporate-citizenship responsibility to do the best job they can to figure that out.”

2. How much do we rely on social media to conduct business?

Organizations that utilize social media for marketing and customer engagement may need to adjust their strategies to align with the new landscape. This adjustment might involve increased investment in alternative marketing channels or adopting new approaches to user interaction and data management.

3. What compliance tools do we have in place?

Businesses must stay informed about evolving laws and ensure that their practices are compliant with these changes. This may involve revising data privacy policies, updating user agreements, and ensuring compliance with new laws. Staying ahead of regulatory changes is crucial to avoid legal pitfalls and sustain consumer trust.

[Read also: Learn how prioritizing IT compliance boosts your bottom line — and ignoring it can potentially ruin your business]

4. How do we monitor social media for potential cybersecurity threats?

Keep on the lookout for misinformation campaigns or social engineering attacks, in which cybercriminals connect with employees over social media and try to induce them to divulge proprietary information. This includes investing in tools and technologies that can help identify and mitigate these risks.
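Monitoring doesn’t have to start with expensive tooling; even simple heuristics catch the most common impersonation pattern – a lookalike “official” account asking for credentials. A hypothetical sketch (the brand name, handles, and posts are all invented for illustration):

```python
import re

# Invented sample of posts mentioning the company; in practice these would
# come from a brand-monitoring feed or a platform data export.
POSTS = [
    {"author": "acme_support_helpdesk", "text": "DM us your login to fix your account"},
    {"author": "jane_doe", "text": "Loving the new Acme release!"},
    {"author": "acme-giveaway", "text": "Click to verify your password and win"},
]

# Two cheap signals: a handle that mimics the brand, and language that
# asks for credentials. Real tooling layers many more signals on top.
CREDENTIAL_CUES = re.compile(r"\b(login|password|verify|credential)\b", re.I)
BRAND = "acme"

def flag_suspicious(posts):
    """Return authors whose posts look like brand-impersonation phishing."""
    flagged = []
    for post in posts:
        looks_official = BRAND in post["author"].lower()
        asks_for_creds = bool(CREDENTIAL_CUES.search(post["text"]))
        if looks_official and asks_for_creds:
            flagged.append(post["author"])
    return flagged

print(flag_suspicious(POSTS))  # -> ['acme_support_helpdesk', 'acme-giveaway']
```

Both impersonator accounts trip the filter while the genuine customer post passes – the kind of triage that lets a security team focus human attention where it matters.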

5. Why aren’t we using multifactor authentication (MFA)?

Seriously? Still? Alas, it’s true. “So few people use MFA or employ strong privacy settings,” writes Chris McGowan, a principal of information security professional practices at ISACA, the global organization of IT professionals, in a report on social media’s threats to the workplace. Implementing strong authentication measures and regularly updating security protocols are critical steps to take to prevent social engineering attacks.
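Part of what makes the low MFA uptake so frustrating is how cheap the mechanism is. The one-time codes generated by authenticator apps follow RFC 6238 (TOTP), which fits in a few lines of standard-library Python – a sketch, verified against the RFC’s published test vector:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password (the math behind
    most authenticator apps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32 below),
# at T=59 seconds the 8-digit code is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))
```

The server and the authenticator app share only the secret; each side recomputes the code independently, so nothing reusable crosses the wire – which is exactly why a phished password alone stops being enough.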

[Read also: Looking for the best practices to combat social engineering attacks? Here’s your comprehensive guide]

6. How are we educating employees about cybersecurity threats related to social media?

Employee training is imperative, but even cyber-savvy workers unwittingly disclose a wealth of information that attackers can use, especially as they triangulate data from various online sources and platforms. An innocent post noting a pet’s name, made by an employee on their social media platform of choice, can be used to guess workplace passwords or security questions; news of, say, an upcoming class reunion could give a threat actor enough detail to impersonate a classmate, gain a target’s trust, and leverage it for malicious purposes.

And it’s not just workers’ social media channels. An organization’s own social media profile can yield staff names, titles, contact info, and other data that accelerate intelligence-gathering and help craft convincing phishing or business email compromise attacks, notes McGowan.

Regular, repeated employee training – reinforced with materials like posters and newsletters – is a reliable way to raise and maintain awareness. By understanding the potential risks associated with social media and taking proactive steps to mitigate the threats, organizations can better protect themselves and their stakeholders. No matter what their age.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men's Health, Newsday, and the Los Angeles Times, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast—listen here.
