Agenda

Presentations already confirmed include:

►Conformity Will Not Save You: AI Risk Beyond the EU AI Act

Geoffrey Taylor, Information Security Officer, Nordea Asset Management

Your assessment said Low Risk. Is it really?

  • The EU AI Act requires organisations to classify their AI systems and demonstrate conformity. Conformity is similar to compliance — it is binary, a yes or a no at a point in time. It cannot calibrate impact when the unexpected occurs.
  • On 24 April 2026, an AI agent deleted an entire company's production database in nine seconds. It was running the best model available, configured with explicit safety rules. When asked to explain itself, it produced a written confession: "I violated every principle I was given."
  • This session applies the Assume. Design. Test. framework to AI governance — shifting the question from "are we compliant?" to "how could we be impacted?" — and gives attendees a practical lens for assessing where their governance ends and their exposure begins.

►Securing Cloud Platforms at Scale

Laura Good, Cloud Security Architect, Lloyds Banking Group

  • Challenging legacy security ways of working that don’t scale with rapid cloud adoption.
  • Creating security approaches that scale across hundreds of internal teams.
  • What it actually takes to move security from a blocker to an enabler in practice.

►Panel Discussion: Customer Data & AI: Control, Exposure, and Proof

Simon Brady, Event Chairman
Sam Hubery, BISO, Fidelity International

  • As organisations adopt AI, where are you seeing customer data most commonly interact with these tools, and how are you improving visibility over time?
  • What controls or approaches are proving most effective in practice for preventing customer data being exposed to AI tools — and where are you still seeing challenges?
  • Are you allowing any use of third-party or public AI tools (such as ChatGPT) with customer data, and what specific safeguards make that acceptable?
  • Can you demonstrate that customer data is properly controlled within AI systems?

►Rise of Autonomous Attacks (Live Mythos-Style Hack)

Manit Sahib, Ethical Hacker & Former Head of Penetration Testing & Red Teaming, Bank of England

  • AI is no longer just assisting attackers; it is operating as one. In this session, Manit Sahib (former Head of Penetration Testing & Red Teaming at the Bank of England) shows how autonomous AI agents are now running the reconnaissance and exploitation phases of real-world attacks, and what that means for boards, CISOs, and red teams in 2026.
  • This is a first-hand look at how agentic offensive AI works in practice, driven by intent, not step-by-step instruction.
  • Live on stage, an AI agent will run reconnaissance against a controlled target, identify exploitable assets, and demonstrate the early stages of a kill chain in real time.
  • Manit will then walk through real-world findings from recent engagements including critical vulnerabilities discovered by AI agents that automated scanners (Tenable, Qualys, Nessus) had missed for over 18 years.
  • The session closes with what defenders need to know: why traditional, control-based security models are structurally insufficient against goal-driven autonomous attackers, and the three specific actions every CISO should be taking before this becomes the default attacker model.