
Artificial Intelligence: Thinking Beyond Artificiality with Larry Godec

Q&A

Artificial intelligence (AI) is the hot topic of the moment, so we asked Tanium Executive Advisory Board member Larry Godec for his thoughts on generative AI in general and its better-known applications, such as ChatGPT. Larry is the former CIO of First American Financial and a trusted advisor on AI topics to some of the world’s largest enterprises.

How is the business world currently coping with AI?

Godec: Well, I think many firms are realizing that generative AI can read unstructured data in ways you simply couldn’t before with things like robotic process automation. That opens the door for enterprises to build large language models (LLMs) that manage, predict, and make business recommendations for real-world use cases. I know from my consultancy work with some businesses that you can come up with almost a hundred use cases in just a few days. But, at the end of the day, it still comes down to the data that powers these LLMs.
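
To make that contrast with rule-based automation concrete, here is a minimal sketch of the kind of task Godec describes: an LLM pulling structured fields out of a free-text customer note. It assumes the OpenAI Python SDK with an API key in the environment; the model name, field names, and the example note are purely illustrative, not anything Godec specifies.

```python
# Minimal sketch: extracting structured fields from unstructured text with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

unstructured_note = (
    "Customer called on March 3rd about a water leak at 42 Elm St. "
    "Policy number HO-99812. Wants an adjuster visit before Friday."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Extract policy_number, incident_type, address, and "
                "requested_action from the user's note. Reply with JSON only."
            ),
        },
        {"role": "user", "content": unstructured_note},
    ],
)

print(response.choices[0].message.content)
# e.g. {"policy_number": "HO-99812", "incident_type": "water leak", ...}
```

The same note would defeat a rules-based RPA bot unless every phrasing had been anticipated in advance, which is the gap Godec is pointing to.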

Tanium Executive Advisory Board member Larry Godec

So, when it comes to AI, where should firms put their energies?

Godec: First, it’s about coming up with use cases and then deciding how to prioritize them based on business value. Then it’s about identifying the appropriate toolsets and models to solve those business problems and working out how to put them into production. That could mean integration with existing legacy systems through open APIs, or standalone, purpose-specific models that need to take security and privacy into account.

What implications does this have from a security standpoint?

Godec: Once you’ve identified the models, you need to bring them behind the firewall or into a private instance in the cloud. You need to make sure that the data you load in and the models you create are proprietary to you and secure.
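
For readers wondering what “behind the firewall” can look like in practice, below is a minimal sketch of one common pattern: hosting an open-weight model on infrastructure you control with the Hugging Face transformers library, so prompts and proprietary data never leave your environment. The library, model name, and prompt are assumptions for illustration, not anything Godec specifies.

```python
# Minimal sketch: running an open-weight model locally so data stays in-house.
# Assumes the transformers library is installed and the model weights have been
# downloaded into the private environment (no calls to a third-party API).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
)

prompt = (
    "Summarize the key obligations in this clause: The vendor shall remediate "
    "critical vulnerabilities within 72 hours of notification."
)

result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```

The trade-off is that you now own hosting, patching, and access control for the model itself.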

Can firms, realistically, look to replace humans with AI?

Godec: Well, a consulting organization I know in this space is getting those kinds of questions from clients. One client asked if they could replace a lead developer, who was leaving, with ChatGPT. The response was a resounding “OMG, don’t you dare!”

I think the general feeling on the consultancy side, and to some extent among clients too, is that generative AI and ChatGPT aren’t ready – yet – to replace a lead developer. They fear security vulnerabilities, data inconsistencies, and performance issues, so they are advising that another lead developer be recruited and given AI tools to make productivity gains. So, for example, you might not need another junior developer, and you make savings there.

So, what’s your view? Is AI too immature right now?

Godec: I recently heard from an engineering professor at an event who was involved in the development of GPT-4 itself. He argued that it won’t replace jobs; it’ll simply make humans better. I actually question that general viewpoint based on my experience. Take, for example, students currently using ChatGPT to write entrance exams for law school. ChatGPT does it better than they can. Those scenarios will inevitably result in displaced jobs. And it will disrupt entire industries.

In the next one or two years, we’re likely to see a real revolution, the kind not seen since the inception of the Internet. The expectation is that it will be as big as, if not bigger than, the Internet. But people will lose jobs, and they will have to figure out – very quickly – different jobs or career paths.

So how might that play out, using your example, in the legal profession?

Godec: There are estimates that about 60% of what a lawyer does day-to-day can now be done by ChatGPT. So why would you go to law school in the first place? It may not be able to litigate, but it could prepare some of the briefs that lawyers currently write and handle more information in real time. It makes you sit back and think about the kind of disruption that could occur. By all means, get your law degree. But maybe rethink becoming a paralegal, which is such a research-heavy role.

So what opportunities, professionally, does this pose for IT teams?

Godec: We may see a focus on making development teams leaner, but there’ll be much more emphasis on testing the components of AI tools and models. There’ll be a clear need for more rigor when it comes to things like security protocols that all code must go through. AI will need its own quality controls.

What are the strategic steps that businesses should take?

Godec: Bring consultants in to talk about your business processes and workflows, to help ideate generative AI use cases, and then to implement automation. Collectively, you’ll need to figure out what to do with all the disparate data you have and whether you can build a model to deal with it. To augment that, firms and their employees will need to acquire more skills in teaching and prompting the models.


Check out our eBook: Beat the Big Tech Blues, A CIO’s guide to right-sizing business operations.

Tanium Staff

Tanium’s village of experts co-writes as Tanium Staff, sharing their lens on security, IT operations, and other relevant topics across the business and cybersphere.
