Forget ChatGPT Hearings and Meta Fines—the Surgeon General Is Big Tech’s Biggest Threat
Soft-spoken Surgeon General Vivek Murthy, with his advisory on social media’s harmful effects on kids, turns up the volume on the need for Big Tech regulation.
While we saw ChatGPT and Meta held to account by Congress and regulators this past week, Big Tech now has to answer to the soft-spoken U.S. Surgeon General Vivek Murthy, who issued an urgent warning today about social media’s harmful effects on kids.
Watch out, all you tech sector Goliaths—here comes David.
Though Murthy’s new Advisory on Social Media and Youth Mental Health acknowledges that social media offers some benefits, it warns there are “ample indicators” that social media can have “a profound risk of harm to the mental health and well-being of children and adolescents.”
The report states in clear language what every parent and teacher can tell you: Social media use by young people is nearly universal, with up to 95% of young people ages 13 to 17 reporting using a social media platform and more than a third admitting they use social media “almost constantly.” The advisory cites harmful content, like violent and sexual material, and the ways social media can amplify bullying and harassment; it also notes how social media compromises sleep and quality time with family and friends.
“We are in the middle of a national youth mental health crisis,” Murthy noted, “and I am concerned that social media is an important driver of that crisis—one that we must urgently address.”
Today’s findings from the surgeon general, while not definitive—he admits there’s still a lot we don’t know about social media—feel more forceful and far-reaching than the European Union’s record-busting fine issued Monday against Meta, or last week’s loudly hyped congressional hearing on ChatGPT. It’s a Teddy Roosevelt moment: Speak softly and carry a big advisory.
The doctor is in
In the report, and in interviews delivered throughout the day, Murthy laid the responsibility for that urgent action on the doorsteps of Big Tech—whom he criticized for withholding possibly vital data that could help doctors, researchers, and others more fully understand the health impacts of social media on kids (and adults, for that matter)—and Congress, which he says hasn’t done enough.
“This can’t be left up to technology companies alone,” he said in a radio interview on New York radio station WNYC Tuesday morning. “We need our lawmakers to set helpful standards.”
Driving the point home, he tweeted, “Nothing’s more important to parents than keeping our kids safe.”
The kids are (not) all right
Nothing’s more important to legislators, too. Or so it would seem, given the rumblings of the current Congress, which appears to be enlisting kids in its ongoing fight with Big Tech.
Lawmakers are at work on bipartisan bills like the Kids Online Safety Act, introduced this month by Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), which would require social media platforms to provide minors with options to protect their personal information, disable addictive product features, and opt out of certain algorithm recommendations. Another bill, the Clean Slate for Kids Online Act, reintroduced earlier this year, would require that websites delete data collected from children under 13 upon request.
[Read also: Huge fines are a wake-up call to prioritize data security]
“Legislators tend to respond very quickly to most things related to children and students,” says Doug Thompson, director of technical solutions engineering and chief education architect at the leading cybersecurity firm Tanium.
Policymakers certainly have not had much success in instituting regulations or otherwise reining in tech companies’ power via antitrust- or privacy-related efforts.
Fines and deepfakes grab headlines
Monday’s big Big Tech news—a record $1.3 billion fine levied against Meta by the European Union for transferring EU users’ personal information to the U.S.—might have sounded like a significant blow to the corporate monolith formerly known as Facebook, until you read the fine print.
The case, which highlights the imbalance between the EU’s strict consumer-privacy laws and America’s lax limits, has dragged through the courts for 10 years and is far from over. Meta immediately said it would appeal, which delays any ultimate pain. And EU and U.S. negotiators have been at work on a data-transfer agreement, which may be announced as early as this summer and could render this particular legal finding moot.
If that happens, score one for Goliath.
Last week, OpenAI CEO Sam Altman testified before Congress about the safety concerns swirling around his company’s much-hyped ChatGPT, a free chatbot that uses generative-AI technology to create remarkably human-sounding written content at record speed. As hotseats go, Altman’s was lukewarm.
While congressional hearings on tech in years past have been decidedly more cringey than consequential, marked by political grandstanding, last week’s tête-à-tête was less a showdown, more show-and-tell. Blumenthal, chair of the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a deepfake recording of his own voice, created by AI technology that examined recordings of his floor speeches and recited ChatGPT-written opening remarks. It sounded a little stilted, but real enough.
[Read also: Hackers use ChatGPT lures to spread malware on Facebook]
The rest of the hearing was reined in and downright polite. Altman, for his part, came off as reasonable, likable, and candid, acknowledging fears expressed by scientists that today’s little-understood AI technology, produced at breakneck speed, could bring “significant harm to the world,” and that “if this technology goes wrong, it can go quite wrong.” Never mind that within weeks of its November debut, ChatGPT was already being used by cyber gangs to create phishing emails and malware. Altman even said the “R” word, endorsing the creation of a regulatory agency that would license AI systems and block models that could “self-replicate and self-exfiltrate into the wild.”
There was no significant comeuppance for this executive or his wildly popular product, despite all the talk of regulation. Another Goliath win.
“I think people realize you can’t put the genie back in the bottle,” says Thompson. “The legislators don’t know enough about ChatGPT to know what regulations to put in place even if they choose to do so.”
Back at the Department of Health and Human Services, however, Murthy may prove more skilled with a slingshot. He told interviewers today of his 5-year-old daughter, who recently asked Mom and Dad if she could post a photo on social media.
“We were stunned,” he said, his voice just above a whisper.
You could hear it—the fear and concern shared by countless parents who didn’t think they’d have to face this issue so soon.
The moment felt…tipping-point-ish.
If Big Tech regulation is coming—and clearly, it is—it won’t come with a bang. Or a creepy deepfake. It’ll be quieter.