3 Big Business Takeaways From Biden’s AI Executive Order

If you’re racing to capitalize on AI’s revolutionary technology, don’t let FOMO steer you wrong. Take these steps to avoid risk.

Perspective

Among the blur of developments on the artificial intelligence (AI) front last week, President Joe Biden’s sweeping executive order could be a crucial roadmap – or a speed bump – for businesses racing to capitalize on the revolutionary technology.

Issued just before the first-ever global AI safety summit, in London, Biden’s order outlines AI safety and security standards, new consumer and worker protections, and anti-bias safeguards to be developed by industry and federal agencies. Think of it as a set of guardrails to harness and maximize the benefits of AI while mitigating the risks, a bold (if somewhat belated) step forward in governing AI in the U.S.

“This executive order may help enterprise leaders and board members slow down before going all-in on AI,” says Safia Kazi, a privacy professional practices principal at ISACA, an international professional organization focused on IT governance. “It will require enterprises to think about the risks associated with using AI.”

Going slow doesn’t mean stopping. On the contrary, the executive order seeks to rev up AI analysis at federal agencies, lighting a fire under government institutions that usually lag developments in high tech. For example, it directs the Department of Health and Human Services to develop a “strategic plan” for AI within the year. Will that actually happen? Hard to say, but it’s a much-needed push.

Meanwhile, business leaders looking to get a jump on their competitors have the opposite problem: Many are deploying AI at breakneck speed, unaware of hairpin turns that may lie ahead.

ChatGPT launched just 11 months ago and other generative-AI-fueled chatbots soon followed, igniting a consumer craze and transforming workplaces. As enterprises race to adopt the nascent technology, business leaders are succumbing to FOMO (fear of missing out) on overdrive. This is especially troubling in fields like healthcare, where doctors and hospital administrators have been rushing to purchase AI-enhanced medical tools – used to interpret tests, diagnose diseases, and deliver therapy – that have not undergone the repeated testing the government usually requires for such devices, as a new Politico report points out.

The impacts of AI deployment are tough to predict, given that the companies creating these products closely guard the algorithms that control their devices, the report explains. This proprietary “black box” protects intellectual property but leaves business and security leaders who utilize these tools unable to know just how safe or effective they may (or may not) be.

This is where Biden’s new order may provide some clarity. It outlines a slew of potential problems and pressing concerns that all of us must consider as we adopt and adapt to new AI technologies.

We suggest – for now – focusing on these three steps that organizations can take to proceed responsibly in the AI revolution.

1. Take data privacy seriously

The problem: Unlike other countries, the United States lacks a comprehensive national law that protects data privacy.

What the executive order says: Biden’s new order calls on Congress to pass data-privacy legislation and promote privacy-preserving technologies.

“It is vital to consider privacy when developing and leveraging AI tools, but some enterprises neglect privacy in conversations about emerging technologies,” says Kazi in an exclusive interview with Focal Point. “It can be too easy for companies to engage in predatory data-processing practices because of the absence of privacy-related laws.”

What business and security leaders can do now: Listen to privacy professionals. A workforce skilled in information security and data privacy is more important than ever.

“Privacy professionals who struggle to convey the importance of privacy to their executive leaders can use this executive order to show that data privacy is a national (and international) priority,” says Kazi.

[Listen also: On our Let’s Converge podcast, ISACA’s Safia Kazi covers the wave of fines, headlines, and reputational damage coming to brands that ignore data privacy]

2. Hire, don’t fire (at least for now)

The problem: Job displacement, the likelihood and scale of which remain hotly debated.

“I recently heard from an engineering professor who was involved in the development of GPT-4 itself,” recalls Larry Godec, the former chief information officer (CIO) at First American Financial and a trusted AI adviser to some of the world’s largest enterprises. “He argued that it won’t replace jobs, it’ll simply make humans better.”

Godec disagrees. Speaking in an exclusive interview with Tanium, a leading cybersecurity and business-transformation firm (which owns this magazine), Godec explained that the implementation of AI will inevitably result in displaced jobs and disrupted industries.

“In the next one or two years, we’re likely to see a real revolution, not seen since the inception of the internet,” he says. “The expectation is that it will be as big as, if not bigger than, the internet. People will lose jobs and people will have to figure out – very quickly – different jobs or career paths.”

How painful that transition will be is yet to be seen. One thing we do know: Layoffs pose a threat to cybersecurity and can lead to an uptick in insider risks.

What the executive order says: The order seeks to develop a set of best practices to “mitigate the harms and maximize the benefits of AI” and to produce a report on AI’s expected impacts on the labor market, identifying ways the government can support workers facing labor disruptions.

What business and security leaders can do now: Focus on hiring new security workers with AI experience (and train the ones you have), so you can implement AI strategies that are cost-effective, accurate, and safe.

Now is not the time to streamline or thin your security team.

“You’ll need to figure out what to do with all the disparate data you have and whether you can build a [large-language] model to deal with that,” Godec explains. “To augment that, firms and their employees will need to acquire more skills in teaching or prompting the models.”
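
To make that concrete, here is a minimal, hypothetical sketch of the prompting skill Godec describes: a small helper that folds records from disparate systems into one structured prompt for a large-language model. The record fields and the send_to_model() call are illustrative placeholders, not any particular vendor's API.

```python
# A minimal, hypothetical sketch of "prompting the models" over disparate data:
# fold records from different systems into one structured prompt. The field
# names and the send_to_model() call are illustrative placeholders only.
def build_reconciliation_prompt(records: list[dict]) -> str:
    """Turn heterogeneous records into a single instruction-style prompt."""
    lines = []
    for i, record in enumerate(records, start=1):
        fields = "; ".join(f"{key}={value}" for key, value in sorted(record.items()))
        lines.append(f"{i}. {fields}")
    return (
        "You are helping an analyst reconcile records pulled from different systems.\n"
        "Summarize the items below and flag any fields that conflict.\n\n"
        + "\n".join(lines)
    )

if __name__ == "__main__":
    sample = [
        {"source": "crm", "customer": "Acme Corp", "status": "active"},
        {"source": "billing", "customer": "Acme Corp", "status": "past_due"},
    ]
    prompt = build_reconciliation_prompt(sample)
    print(prompt)
    # response = send_to_model(prompt)  # hypothetical call to whichever model your firm adopts
```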

[Read also: The key to making the most of emerging AI technology is to know what it may never learn – which is why you’ll still need your IT pros]

3. Test your AI tools for bias

The problem: AI tools are only as good as the data they’re trained on, and when biased individuals (and that means all of us) design these tools and deliver that data, the tools will unwittingly replicate and deepen discrimination, bias, and other inequities.

What the executive order says: Amplifying strategies put forth in Biden’s Blueprint for an AI Bill of Rights and a previous executive order directing agencies to combat algorithmic discrimination, the new order asks federal agencies to address AI bias through training and technical assistance and by prosecuting AI-related civil rights violations. (They’ve already gotten serious about that last one. See below.)

What business and security leaders can do now: Test, test, and retest your AI tools and models. “There’ll be a clear need for more rigor when it comes to things like security protocols that all code must go through, [and] AI will need its own quality controls,” Godec says.
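
For teams wondering what one such quality control might look like, here is a minimal sketch, assuming nothing more than a simple table of screening decisions: it computes each group's selection rate and flags any group whose rate falls below four-fifths of the highest-rate group, the EEOC's well-known screening heuristic. The data, group labels, and helper names are illustrative assumptions, not a specific vendor's tooling or a compliance standard.

```python
from collections import defaultdict

# Hypothetical screening results: each applicant has a demographic group and
# a yes/no decision produced by an AI hiring tool. Data and field names are
# illustrative only.
decisions = [
    {"group": "age_under_40", "selected": True},
    {"group": "age_under_40", "selected": True},
    {"group": "age_under_40", "selected": False},
    {"group": "age_40_plus", "selected": True},
    {"group": "age_40_plus", "selected": False},
    {"group": "age_40_plus", "selected": False},
]

def selection_rates(records):
    """Share of applicants selected within each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += int(r["selected"])
    return {group: selected[group] / totals[group] for group in totals}

def impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group.

    Ratios below 0.8 are commonly flagged for review (the EEOC's
    "four-fifths" screening heuristic); this is a starting point for a
    bias audit, not a legal determination.
    """
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

rates = selection_rates(decisions)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```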

Business leaders who eagerly implement AI without understanding (and testing for) its flaws will pay the price. Federal agencies have stated unequivocally that companies can be held liable for discriminatory and other harms caused by AI tools they deploy, even if they didn’t develop those tools. Case in point: iTutorGroup, which in August settled a lawsuit brought by the Equal Employment Opportunity Commission (EEOC) on behalf of more than 200 job applicants. While denying wrongdoing, the tutoring consortium agreed to pay $365,000 to resolve charges that its AI-powered hiring selection tool automatically rejected women applicants over age 55 and men over 60. It is the commission’s first bias lawsuit involving AI, and it doesn’t sound like it will be the last.

“Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies,” said EEOC chair Charlotte A. Burrows in a press release.

[Read also: What the tech sector can learn from TikTok, and other findings from a new Deloitte report on ethics and technology]

Having AI-trained IT and security staffers – people who can conduct diverse tests of AI tools and bias audits (now required of employers in New York City as of this summer) – will help protect organizations from similar lawsuits and public relations disasters. Which means whatever you end up paying those AI-trained folks, it’s money well spent.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men's Health, Newsday, Los Angeles Times, CNN.com, and Barrons.com, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast—listen here.
