How a Barclays CISO Manages the Regulators
Focal Point presents Barclays CISO Andy Piper in the first of our sneak peeks at interviews with stand-out IT and security leaders from the new book The Interconnection of People, Process and Technology.
It was good news for startups in AI and deep tech last month when Barclays announced the launch of a new “Innovation Hub” in London, a tech-firm accelerator fueled by Barclays Eagle Labs, the bank’s incubator ecosystem, and created in partnership with a host of tech giants, including Microsoft and Nvidia.
As one of the UK’s leading financial institutions, Barclays is used to sitting in this powerful yet precarious position, financing innovation in controversial new industries like AI while maintaining its role as a risk advisor to firms across the UK and the world. And CISO Andy Piper is a key player in this high-stakes balancing act.
As CISO, he helps identify and oversee the AI and automation the bank actually uses. That means establishing systems and protocols for bank facilities in 40-plus countries and answering to more than 74 regulators.
To be sure, the financial services industry has used algorithmic trading for more than a decade, so it’s relatively comfortable and experienced with AI – as long as it’s secure, transparent, and well-regulated, Piper notes. Still, his enthusiasm has its limits. Specifically, he worries about how AI is reshaping the cybersecurity landscape in distinct and dramatic ways.
Piper is one of 23 stand-out IT and security leaders who share their real-world experiences and perspectives in the new book The Interconnection of People, Process and Technology, based on in-depth interviews with these industry executives from around the globe.
In the following excerpt – and a few others to come in the next few weeks – Focal Point gives you a taste of the top-level expertise you’ll find in the book, which was produced by Chief Disruptor, a UK-based membership community for business and tech innovators, in partnership with Tanium, a leading provider of cybersecurity solutions and autonomous endpoint management (and publisher of this magazine).
(This interview has been edited for space and clarity.)
ANDY PIPER. . .
On the most important skills a CISO needs to have in this era when advances – AI, automation, quantum tech, etc. – are coming fast and furious:
The skill I find most useful is the ability to communicate well, because a huge aspect of my role is talking to regulators or boards – and they’re not techies.
It’s essential for a CISO to be able to explain complex cybersecurity topics to non-technical stakeholders in terms they understand. They don’t necessarily grasp super-granular cyber jargon, nor should they. Boards care about how cybersecurity affects strategic goals, risk, and business operations. My role is to explain [the risk] and line it up against their priorities.
[Read also: Boards and brand reputation – 7 cyber steps to boost investor and consumer confidence]
On the impact of compliance regulations on financial services:
One of the key things I spend my day doing is managing regulators – over 74 at the last count. Each has unique cybersecurity requirements, some overlapping and others conflicting. Managing and staying compliant with all of them is a full-time job. Even incident reporting varies, with some regulators expecting notification in minutes, others in hours, and some when there’s a meaningful update.
We have horizon-scanning processes that look ahead to when these regulations may change. We ask: What is the change, and how do we modify our technology to be ahead of it? If a regulator is proposing a white paper that may one day become a regulation, we make sure we understand it and hopefully steer it in a direction that is useful to us. We’re also required to meet with regulators on a quarterly, six-month, or annual basis. Communicating effectively with regulators is a key skill in itself. Again, it’s about translating technical realities into accessible, accurate narratives.
[Read also: How to comply with the EU’s AI Act – start with your risk level]
Regulation is a huge, huge – I almost said burden. It’s not a burden. It’s… it’s a challenge, something we have to manage, and if we don’t do it properly, it could have a massive impact on the organization.
On the problem with AI chatbots:
Generative AI has exploded in usage, and that pace is creating a gap – we’re all playing catch-up. It’s in everything now. Almost every vendor I speak to says, “Hey, we’ve got a new AI chatbot.” In a lot of cases, that’s a problem for me. We often have to disable vendor tools that use AI because we can’t verify their security posture, which ties directly into compliance. So speed is posing a significant challenge at the moment.
Then there are the two sides of it: There’s “How do you secure AI?” and “How do you use AI to secure other things?” We’re asking tough questions: Where does user input go, is it stored, is it used to train models, and can we trust its output? We don’t want sensitive data like PII [personally identifiable information] or [proprietary brand data] going into tools we can’t fully control or audit. Conversely, AI helps us to sift through billions of daily security events to identify real threats, highlighting the operational value of automation.
[Read also: Seeing is believing – how enterprises are using AI to improve cybersecurity]
Equally, from a threat perspective, a lot of the things we’ve trained our workers to look for are gone. AI-generated phishing is nearly perfect – the grammar, the tone, and the structure. This is rendering traditional email-threat training obsolete. Model poisoning is another real threat, with malicious actors influencing how AI systems behave.
On updating the reputation of security teams:
My aspiration is to redefine the perception of security, to move away from the old idea of the CISO as “the House of No.” I want my team to be seen as enablers, not blockers. That means working in partnership with the business, aligning our goals and strategies. The default answer to the business should be “Yes, with the right controls in place.”
TO LEARN MORE:
Check out the insights from 22 other IT and security leaders at WPP, Spotify, Unilever, NHS Digital, and other organizations in The Interconnection of People, Process and Technology.