Securing the AI revolution in banking, insurance and asset management
20th January 2026 • Park Plaza Victoria, London, UK
Banks are at the forefront of AI experimentation and adoption, but how are they securing it and what are the pitfalls?
The frontier challenge in cybersecurity
Artificial intelligence is no longer confined to proofs of concept or innovation labs. In financial institutions across the world, it is moving into production and being embedded in core processes: trading, surveillance, fraud and financial crime detection, and compliance. This breadth of deployment means the attack surface is no longer confined to a single system or department. AI is everywhere, and so are its risks.
Some of these systems are built in-house, but many are sourced from vendors or built on open-source frameworks. Some run in tightly controlled bank environments, while others rely on cloud infrastructure outside direct bank control. What unites them is a simple truth: every new AI initiative represents not only innovation, but also a fresh attack surface.
For security leaders, the challenge is stark: how do you secure these systems, ensure compliance, and maintain resilience when the technology itself is evolving faster than the controls designed to protect it?
Banks face a cluster of common issues when attempting to secure AI. The first is model integrity and supply chain risk. Many AI models are obtained from vendors or open-source communities. How can they be evaluated from a security perspective?
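As a concrete illustration, one baseline control is to pin and verify the integrity of any externally sourced model artifact before it is ever loaded. The sketch below uses a hypothetical vendor file name and a placeholder digest; it illustrates the idea rather than any specific institution's control.

```python
# Minimal sketch: verify a third-party model artifact against a pinned
# SHA-256 digest before loading it. The file name and digest are
# illustrative placeholders, not a real vendor artifact.
import hashlib
from pathlib import Path

PINNED_DIGEST = "0" * 64  # placeholder for the digest published by the vendor

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_digest

model_path = Path("vendor_fraud_model.bin")  # hypothetical artifact path
if model_path.exists() and verify_artifact(model_path, PINNED_DIGEST):
    print(f"{model_path} verified; safe to load")
else:
    print(f"{model_path} missing or failed integrity check; refusing to load")
```

In practice this sits alongside provenance records, licence checks and behavioural testing, but the refuse-to-load decision is the part most security teams can automate first.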
The second is data confidentiality. AI thrives on data, but in a bank that data includes client records, trading flows, internal communications, and sensitive HR files. Risks include prompt injection, model inversion attacks, and accidental leakage of confidential information. Even synthetic data can raise concerns about re-identification or inadvertent bias.
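By way of illustration, a minimal outbound filter might redact obvious client identifiers from model output before it leaves the bank's environment. The patterns below are illustrative placeholders; a production control would rely on dedicated DLP tooling with far broader coverage.

```python
# Minimal sketch: scrub obvious client identifiers from model output.
# Patterns are illustrative only and far from exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"), "[REDACTED-IBAN]"),  # IBAN-like strings
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),      # email addresses
    (re.compile(r"\b\d{16}\b"), "[REDACTED-CARD]"),                        # 16-digit card-like numbers
]

def scrub(model_output: str) -> str:
    """Apply each redaction pattern to the model's raw output."""
    for pattern, replacement in REDACTIONS:
        model_output = pattern.sub(replacement, model_output)
    return model_output

print(scrub("Client john.smith@example.com holds account GB82WEST12345698765432"))
```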
A third challenge is adversarial manipulation. Unlike traditional software, AI models can be tricked through carefully crafted inputs. Fraud engines can be nudged to misclassify transactions. Trade surveillance systems can be coaxed into ignoring abusive patterns. In practice, this means adversaries can attack not just the infrastructure around the model, but the model itself.
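To make the idea concrete, the toy sketch below shows the kind of test an internal red team might run against its own model: small, plausible changes to a flagged transaction's features are searched until a simple fraud classifier no longer flags it. The synthetic data, two-feature model and greedy search are purely illustrative assumptions, not a real attack technique or production system.

```python
# Minimal sketch: nudge a flagged transaction's features until a toy fraud
# model stops flagging it, illustrating adversarial manipulation of the model
# itself rather than the surrounding infrastructure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: feature 0 ~ amount, feature 1 ~ velocity.
X_legit = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
X_fraud = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(500, 2))
X = np.vstack([X_legit, X_fraud])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X, y)

# A transaction the model currently flags as likely fraud.
tx = np.array([[2.5, 2.5]])
print("original fraud probability:", model.predict_proba(tx)[0, 1])

# Greedy search: take small steps in whichever direction lowers the fraud
# score the most, until the transaction slips below the alert threshold.
perturbed = tx.copy()
for _ in range(50):
    if model.predict_proba(perturbed)[0, 1] < 0.5:
        break
    candidates = [perturbed + step for step in
                  ([[0.1, 0.0]], [[-0.1, 0.0]], [[0.0, 0.1]], [[0.0, -0.1]])]
    perturbed = min(candidates, key=lambda c: model.predict_proba(c)[0, 1])

print("perturbed transaction:", perturbed,
      "fraud probability:", model.predict_proba(perturbed)[0, 1])
```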
Finally, there is the question of resilience. If an AI system goes down, critical processes can halt: surveillance alerts are missed, payments are delayed, customer interactions fail. Banks must design fallback processes and “kill-switches” that allow continuity in the event of an outage or compromise.
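A minimal sketch of such a fallback, assuming a hypothetical scoring service and an environment-variable kill-switch (both names are illustrative), might look like this:

```python
# Minimal sketch: wrap an AI scoring service in a kill-switch with a
# rules-based fallback, so alerts keep flowing if the model is disabled
# or unreachable.
import os

def model_score(transaction: dict) -> float:
    """Stand-in for a call to the AI scoring service; may raise on outage."""
    raise TimeoutError("scoring service unavailable")  # simulate an outage

def rules_fallback(transaction: dict) -> float:
    """Simple deterministic rules used when the model is switched off or down."""
    return 0.9 if transaction["amount"] > 10_000 else 0.1

def score(transaction: dict) -> float:
    if os.environ.get("AI_KILL_SWITCH", "0") == "1":
        return rules_fallback(transaction)        # deliberate switch-off
    try:
        return model_score(transaction)
    except Exception:
        return rules_fallback(transaction)        # outage or compromise

print(score({"amount": 25_000}))  # prints 0.9 via the fallback path
```

The point is not the specific rules but that the degraded path is designed, tested and rehearsed before it is needed.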
Perhaps the most pressing new concern is the rise of agentic AI. Unlike traditional models, which generate outputs in response to inputs, agentic AI can act. It can call APIs, execute workflows, move money, approve trades, or reconfigure systems. In other words, it is not just making predictions — it is taking actions. Today, few banks have the sandboxing, kill-switches, or human-in-the-loop safeguards required to stop rogue agents instantly.
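One illustrative shape for such a safeguard is a policy gate that blocks high-risk actions until a human signs off. The action names, thresholds and approval flag below are assumptions made for the sketch, not features of any particular agent framework.

```python
# Minimal sketch: require human approval before an agent's high-risk
# actions are executed; everything else proceeds automatically.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"move_funds", "approve_trade", "reconfigure_system"}

@dataclass
class ProposedAction:
    name: str
    amount: float = 0.0

def requires_human(action: ProposedAction) -> bool:
    """High-risk action types, or any payment above a set limit, need sign-off."""
    return action.name in HIGH_RISK_ACTIONS or action.amount > 50_000

def execute(action: ProposedAction, human_approved: bool = False) -> str:
    if requires_human(action) and not human_approved:
        return f"BLOCKED: '{action.name}' queued for human review"
    return f"EXECUTED: {action.name}"

print(execute(ProposedAction("move_funds", amount=120_000)))
print(execute(ProposedAction("fetch_statement")))
```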
So, what do organisations need to do to integrate AI-driven processes into existing controls and governance frameworks?
How do identity and access management (IAM), privileged access management (PAM), threat detection, operational resilience and governance frameworks need to be adapted?
How can existing security stacks be configured to cope with the threats AI can introduce, and what new tools may be necessary to augment traditional security and resilience solutions?