Agenda

Presentations already confirmed include:


►The Challenge of Securing AI on a Global Scale

Yair Kler, Vice President, Security Architecture, DHL Group

  • Learn how CISOs can enable responsible AI adoption, advancing innovation without resorting to a blanket “no” while preserving strong security foundations
  • Understand how FOMO‑driven AI adoption is pressuring enterprises into rapid, high‑risk decisions that bypass established best practices
  • Address emerging AI‑specific threat classes, such as prompt injection, and understand why they create new risks that remain difficult to mitigate
  • Recognise how long‑standing security challenges, such as secrets management and identity lifecycle governance, are re‑emerging with greater complexity in AI‑driven environments

►Beyond Human Identity: Securing AI Agents

Ioan Nascu, GenAI Security Assurance specialist, Citi

  • Agentic Identity: Why AI agents require fundamentally different approaches from traditional Identity and Access Management (IAM)
  • Emerging Threat Landscape: The unique risks that arise when intelligent agents become threat actors or targets
  • Beyond Human Capacity: How oversight mechanisms work when AI agents outnumber, outpace, and outlast human administrators

►Designing Trusted AI: Secure-by-Default Architectures and AI-Enhanced SOC in Practice

Daniyal Naeem, Distinguished Engineer, Principal Security Authority - AI, BT

  • How to design secure-by-default architectures for agentic AI systems, grounded in clear security policies and operational standards
  • A practical MCP secure reference architecture
  • Real-world use cases for AI-augmented SOC operations
  • Key risk considerations when operationalising AI in security environments

►Invisible Leaks: The Hidden Risks of Chatting with AI

Manit Sahib, Ethical Hacker & Former Head of Penetration Testing & Red Teaming, Bank of England

  • AI Privacy Risks: How tools like ChatGPT, Claude, and Copilot can end up knowing more about you than your best friend (and never forget a thing), and the hidden dangers of casually sharing information with AI
  • When Small Details Add Up: Why a few “harmless” details can combine to paint a full picture, and how scattered information can reveal sensitive data without you realising
  • The Myth of Security: Why AI models aren’t as secure as we might think, and how attackers can trick them into spilling information
  • Simple, Practical Steps: How employees can keep personal and company data safe, and how organisations can reduce AI-related risks before they grow

►Panel Discussion: Buying AI Without Buying Risk

Simon Brady, Event Moderator
Robert Cooper, IT Security Engineering Lead, easyJet
Ali Shepherd, Director of Cyber & Operational Resilience (CISO), FCA
Natalia Shevchuk, SVP, AI Security Architect, Citi
Jon Harrison, Tech Lead, Local AI Division, Ministry for Housing, Communities & Local Government

  • When you’re buying an AI product, what’s the first security concern that comes to mind?
  • When an AI vendor says their product is “secure,” what do you actually want to hear from them?
  • What’s the fastest red flag that makes you pause or stop an AI purchase?
  • What’s one question every procurement team should ask before signing an AI contract?
  • If you could give one piece of advice to someone buying AI today, what would it be?

►Panel Discussion: Who Owns AI Risk? And How Do We Stay Compliant?

Simon Brady, Event Moderator
Jonathan Armstrong, Partner, Punter Southall Law 
Orlando Fernandez, Senior Technical Specialist in the Recovery, Resolution & Resilience team, Prudential Policy Directorate, Bank of England (BoE)
Adaora Ezennia, GRC Lead, THG PLC
Paul Jerram, Compliance and Responsible AI Officer, Keolis UK & Ireland

  • In practice, who owns AI risk in your organisation — and is that ownership clearly defined at executive level?
  • How are you ensuring AI use across the business stays aligned with existing regulatory obligations?
  • What visibility do you have over third-party, embedded & shadow AI tools — and how does that impact your compliance posture?
  • If asked to evidence AI governance to the board or a regulator tomorrow, how confident would you be?

►Humans are the weakest link? Think agAIn

Etay Maor, VP Threat Intelligence and Founding Member of Cato CTRL, Cato Networks

  • Challenge the current approach to AI agent security
  • Demonstrate risks and attacks against agents
  • Look into the dark web and criminal forums to see what threat actors are saying

Education seminars


Every Guardrail Everywhere All At Once


Donato Capitella, Principal Consultant, Reversec

This talk shares practical lessons learned from hands-on testing of real-world generative AI application use cases, focusing on how security failures emerge when LLMs are integrated into production systems.

Attendees will learn:

  • The most common risks identified in LLM-enabled applications and the guardrails that are frequently missing
  • What "good" looks like for LLM guardrails, and how those guardrails can be evaluated in practice
  • How to leverage guardrails in production as detection and response signals to defend against persistent attackers