Can We Actually Pause AI? 1,100 Experts (and, yeah, Elon Musk) Say Yes

Forget that celebrity techies like Musk and Steve Wozniak have signed a new AI moratorium letter. Thousands of dedicated computer scientists are concerned about AI’s rapid (too rapid?) development, and you should be, too.

Perspective

Elon Musk and Steve Wozniak got the name-checks in yesterday’s headlines about the open letter calling for a six-month moratorium on the development of artificial intelligence (AI) technology. But 1,100 others—hard-working computer scientists, tech geeks, and other knowledgeable experts—also signed on to sound the alarm about AI, and another major AI research group followed up Thursday with its own formal objection to GPT-4 technology.

So it’s well worth paying close attention, no matter what you think of Twitter/Tesla/SpaceX owner Musk or Apple co-founder Wozniak.

The letter, released by the nonprofit Future of Life Institute, called for AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5).”

GPT-4 debuted earlier this month. It is the latest iteration of the language technology behind the AI chatbot ChatGPT, which OpenAI launched in November, generating a firestorm of buzz, headlines, and social media posts. Within weeks, more than 100 million people had signed up to use the free, automated tool, a generative AI program that lets users create written content that is comprehensive, culled from countless online sources, and so conversational it looks and sounds as if a human wrote it.

Other firms have been racing similar chatbots to market, and enterprises and industries of all stripes are feverishly looking for ways to integrate these tools into existing networks and business models, even exploring AI in cybersecurity.

“The last thing that we want is to lean so heavily on the technology and into the hype that we forget [the reality],” says Krystal Jackson, a junior AI fellow at Georgetown University’s Center for Security and Emerging Technology (CSET). Jackson, interviewed yesterday for “Let’s Converge,” a new podcast produced by cybersecurity software maker Tanium and due to launch this spring, noted that such chatbots are fed and trained on information found online.

[Listen to the podcast: Obsessed with ChatGPT? Here’s the Hype and the Hope]

“Some of the information on the internet is good,” she says. “Some of it’s really bad and we need to keep a premium on critical thinking.”

What is GPT-4, exactly, and should I be concerned?

Since their recent debut, chatbots like ChatGPT have been tested in a variety of ways. So far, they can pass graduate-level exams, build code for websites, even craft the perfect pickup line based on profiles in a dating app. They may also increase cyberattacks: By making it so much easier for just about anybody to create their own malware, phishing emails, and other dangerous content, this new technology will put enterprises at increased risk of ransomware attacks, insider threats, and cyberwarfare.

Just a few months ago, GPT-3.5 passed a simulated law school bar exam, scoring in the bottom 10% of the class. GPT-4 also passed the exam, OpenAI reported, scoring in the top 10%.

These bots are smart. And getting smarter. And we’re not prepared for what comes next, the signatories to this public letter warn.

[Read also: Yes, ChatGPT will turbocharge hacking—and help fight it, too]

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” the letter warns.

For this reason, they say, we need to pause. Which should make all of us pause.

Is a pause realistic?

The race to develop ever more powerful AI models is accelerating at an unprecedented rate. Many may feel that nothing can be done to stop the progression of science and technology, but the idea that scientific advances can't be slowed or regulated flies in the face of reality.

Regulatory bodies like the U.S. Food and Drug Administration (FDA), which approves the sale and marketing of new drugs, and the U.S. Department of Agriculture (USDA), which oversees the safety of commercial food supplies, were established in part to provide guidelines for manufacturers. We all need to know and agree on basic rules of the road before racing down any highway.

When it comes to AI, we're all flying down the Autobahn. And even that famed superhighway has speed limits in places.

[Read also: ChatGPT will make cybercrime easier for the less experienced. It’s time to get inside the mind of the elusive teenage hacker]

It’s not so unreasonable, note the signatories, for companies like Microsoft, Google, and others to design new tech with a shared understanding of ethical guidelines and safety provisions, much the way researchers who study cloning have agreed to not—for the moment, at least—clone human babies, no matter how profitable or inevitable that advance may seem.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter continues. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

Consider this a chance to establish some speed limits, and erect a few signs that warn “Slippery when wet.”

Whether AI makers like Musk—who has warned about the unfettered and extensive use of AI—actually follow the rules or speed past those signs in their Teslas remains to be seen. But that doesn’t mean the signs and limits aren’t worth creating.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men's Health, Newsday, Los Angeles Times, CNN.com, and Barrons.com, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast—listen here.
