
What the Tech Sector Can Learn From TikTok: Trust Is Everything

Experts say that trust in the tech sector depends on responsible computing. A new Deloitte report recommends ways to bake ethics directly into our technology.

Perspective

Life hasn’t been kind to social media giant TikTok recently. First, Congress grilled it over its privacy practices. Then the UK government fined it £12.7 million (about $15.8 million) for using children’s personal data without parental consent. The list of governments—including the US, EU, Canada, New Zealand, and Australia—that ban its use by employees continues to grow. And in the US, users supporting a TikTok ban outnumber those who don’t by 2-to-1.

TikTok isn’t alone. Scrutiny of the tech sector is at an all-time high. And hating on Big Tech is arguably the one thing Republicans and Democrats can agree on these days. “It is time for 2023,” said Sen. Amy Klobuchar (D-MN) on NBC News, when discussing recent bipartisan legislative efforts to rein in Big Tech. “Let it be our resolution that we finally pass one of these bills.”

In a word, it comes down to trust. For tech companies, that resource is in short supply. Regaining it is now mission-critical.


The problem: Trust requires ethical awareness, and most companies—tech and nontech alike—lack an ethical framework related to technology, according to Deloitte’s inaugural State of Ethics and Trust in Technology report, released in December.

It surveyed almost 1,800 professionals across eight sectors (including technology, financial services, healthcare, and government) on ethical approaches to emerging technologies such as autonomous vehicles, quantum computing, and augmented or virtual reality. It found that 87% of respondents either lacked ethical principles governing the development and use of emerging technology within their organizations or were unaware of any.

While this finding applies to enterprises across the board, it is of vital importance to tech companies, now operating under heightened scrutiny in a tough economy.

Trust is the new currency in an increasingly competitive environment, says Rozita Dara, associate professor of computer science at the University of Guelph in Ontario, Canada. “Trust gives organizations a competitive edge,” she says.

Time for ethical frameworks

Ethical gaps show up in different ways. TikTok’s manifested in its alleged misuse of data and in the app’s addictive qualities, which critics worry ensnare users. A Harvard University study revealed racial bias in facial recognition technology developed by IBM and Microsoft. And observers have identified gender bias in Google Translate and in an Amazon AI-powered recruiting system (which the company eventually scrapped).


Trust in a company depends on its commitment to the ethical use of technology. And that goes double for companies that actually create that technology.

“We set up an expectation for what this technology is going to be and what it’s going to do,” says Yasemin J. Erden, assistant professor at the University of Twente’s Digital Society Institute in Enschede, Netherlands. In promoting the tech to users, there’s a risk that the maker or vendor won’t fully capture its implications. “All of that impacts on the trust that people have in the technology.”

We are seeing this play out now with AI. In March, more than 1,100 computer scientists and other tech luminaries, including Elon Musk, signed an open letter asking all AI labs to pause the training of any models more powerful than GPT-4, the large language model from research company OpenAI that powers its controversial ChatGPT service.

[Read also: Yes, ChatGPT will turbocharge hacking—and help fight it, too]

Erden feels uncomfortable discussing the specifics of GPT’s latest version because the makers have not been transparent enough. “We don’t know exactly how it’s doing what it’s doing,” she warns, echoing complaints from other AI experts. “So then how can we really assess the claims that are made about it?” (The fact that this little-understood chatbot was quickly co-opted by cyber threat actors to create malware and other dangerous content doesn’t help matters either.)

Big Tech’s best practices

Standards and policies governing the technology industry are coming; there’s no question about that. But until they take effect, it is critical that tech firms enact their own sets of ethical principles, which can be used both internally (to guide the development of trustworthy new technologies) and externally (to win over consumers).


Deloitte’s report serves as a useful primer. It advises that company leaders meet with the actual teams completing the work. And to get the conversation started, it offers a seven-part framework to help diagnose the “ethical health” of a tech company’s products and services. According to Deloitte’s technology trust ethics framework, any new technologies employed by a tech firm should be:

  • Transparent: Users should understand and be able to inspect how the technology helps make decisions.
  • Safe and secure: Users should be protected from harm, and their data should not be used or stored beyond its stated and intended purpose or without the express approval of the user.
  • Private: The technology should respect user privacy.
  • Responsible: The technology should be sustainable, humane, and serve a common and social good.
  • Robust and reliable: The technology should produce accurate, consistent results.
  • Accountable: It should be clear who is responsible for the technology’s use.
  • Fair and impartial: The technology should be designed to treat everyone fairly.

Defining the principles is just one part of the challenge. The other is executing them, points out Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. “How do you actually make these things happen in the context of a corporation when you’re creating new products?” he asks.
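
One way to start answering Green’s question is to turn principles into artifacts the development process can actually check. Below is a minimal, hypothetical sketch in Python, not drawn from Deloitte’s report or any vendor tool, of a release checklist that blocks shipping until every pillar has a named, accountable reviewer; the class, pillar keys, and product name are all illustrative assumptions.

    from dataclasses import dataclass, field

    # The seven pillars of the framework, as machine-checkable keys.
    PILLARS = [
        "transparent", "safe_and_secure", "private", "responsible",
        "robust_and_reliable", "accountable", "fair_and_impartial",
    ]

    @dataclass
    class ReleaseReview:
        product: str
        signoffs: dict = field(default_factory=dict)  # pillar -> reviewer

        def sign_off(self, pillar: str, reviewer: str) -> None:
            if pillar not in PILLARS:
                raise ValueError(f"unknown pillar: {pillar}")
            self.signoffs[pillar] = reviewer

        def ready_to_ship(self) -> bool:
            # Block release until every pillar has a named reviewer.
            return all(p in self.signoffs for p in PILLARS)

    review = ReleaseReview(product="recommendation-engine-v2")
    review.sign_off("private", "a.chen")
    print(review.ready_to_ship())  # False until all seven pillars are signed off

The point is not the code but the pattern: a principle becomes enforceable the moment someone has to put their name next to it before a product ships.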

The Markkula Center has a toolkit meant for engineers and technology designers to help tackle this process. It will soon release a handbook on applying technology ethics.

[Read also: More companies are practicing “privacy by design” to prioritize data security—and avoid hefty fines. Here’s why you should, too]

Dara cites ethics by design as a foundational best practice. It bakes ethics into the development of a product or service from the beginning, acknowledging its entire ecosystem, including the users’ rights and interests. It’s like adding “Eth” to the DevSecOps team. Developers must consider ethics as they test the functionality of a tool, evaluating its reliability and applicability.
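
To make that concrete, here is a minimal, hypothetical sketch of what an ethics-by-design check might look like inside an ordinary test suite; the model outputs, group labels, and tolerance threshold are all invented for illustration.

    import numpy as np

    def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
        """Difference in positive-outcome rates between two demographic groups."""
        rate_a = predictions[groups == "a"].mean()
        rate_b = predictions[groups == "b"].mean()
        return abs(rate_a - rate_b)

    def test_hiring_model_parity():
        # In a real pipeline these would come from the model under test,
        # run against a held-out evaluation set with demographic annotations.
        predictions = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # 1 = "advance candidate"
        groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        # Fail the build if outcomes diverge too sharply between groups.
        assert demographic_parity_gap(predictions, groups) <= 0.25

    test_hiring_model_parity()

Run alongside functional tests, a failing parity check stops a biased model the same way a failing unit test stops a buggy one.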

Who’s getting it right?

As Deloitte’s report points out, the application of specific ethical principles might vary across different technologies. AI has different implications, users, and technical characteristics than, say, quantum computing, blockchain, or virtual reality.

Ethicists should drill down on the specifics based on a company’s individual parameters—which raises the question: Among the companies exploring this new ethical landscape, are any doing it right?

Erden nominates Mozilla, the California-based software maker that is part foundation, part corporation.

“They’re really transparent—they’re clear about what their aims are, what their limitations are, what they’re doing, and what they’re changing,” she says. “I think there’s a lot of respect for their platforms, like Firefox, and their ambitions.”

Mozilla also invests heavily in engagement, Erden notes. It has demonstrated its commitment to ethical technology in its manifesto, with initiatives such as its Responsible Computing Challenge, its educational material on how to navigate ethical issues in the tech industry, and now its Responsible AI Challenge. These just scratch the surface.

[Read also: Lacking an ethical framework is a business risk. Here are three other pressing risks—and ways to reduce them]

Green co-authored a World Economic Forum case study on ethics at IBM and cites Big Blue as a leader here. The company established an AI ethics board in 2018, published its own principles for trust and transparency, and supported them with five pillars of trust, advocating values like privacy and fairness. It has also donated tools to the open-source community to help with “adversarial robustness”—that is, defending AI against misuse by attackers.
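
To illustrate what that term means in practice, the sketch below crafts a simple adversarial perturbation against a toy linear classifier, in the style of the fast gradient sign method. It is written from scratch for illustration and does not reflect the API of IBM’s actual open-source tooling.

    import numpy as np

    # Toy logistic-regression "model": weights and bias are hard-coded here;
    # in practice they would come from training.
    w = np.array([2.0, -1.0])
    b = 0.1

    def predict_proba(x: np.ndarray) -> float:
        """Probability the model assigns to the positive class."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = np.array([0.5, 0.2])  # a benign input the model classifies as positive

    # For a linear model, the gradient of the score with respect to the input
    # is just w, so an FGSM-style attack steps against the sign of w.
    eps = 0.35
    x_adv = x - eps * np.sign(w)

    print(round(predict_proba(x), 2))      # ~0.71: confidently positive
    print(round(predict_proba(x_adv), 2))  # ~0.46: a small nudge flips the label

Robustness tooling automates exactly this kind of probing at scale, so defenders find a model’s weak spots before attackers do.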

He wrote a similar analysis on Salesforce, which in 2018 developed its Office of Ethical and Humane Use of Technology, and another on Microsoft.

Of course, no company is perfect, which is why many experts in the thorny field of ethics try to avoid issuing blanket approval.

“All we can do is assess individual practices, individual technologies, and individual steps,” concludes Erden. It’s an iterative process that evaluates actions on their ethical merits and should, in theory, encourage companies to keep striving to improve, product by product, service by service. “There’s no end to that.”

Danny Bradbury

Danny Bradbury is a journalist, editor, and filmmaker who writes about the intersection of technology and business. He has won the prestigious BT Information Security Journalism Award, including for Best Cybercrime Feature.
