Agenda
| Time | Session | Speaker(s) |
| --- | --- | --- |
| 08:00 - 08:50 | Breakfast networking and registration | |
| 08:50 - 09:00 | Chairman's welcome | |
| 09:00 - 09:20 | ► The Challenge of Securing AI on a Global Scale | Yair Kler, Vice President, Security Architecture, DHL Group |
| 09:20 - 09:40 | ► Breaking the Backbone: Measuring Real LLM Security Failures in AI Agents | Sam Watts, Product Lead for AI Agent Security, Check Point Software Technologies |
| 09:40 - 10:00 | ► To Build Taller Walls, or Stronger Gates? Identity Is a Battlefield, but Privileges Are the Gatehouse | Max Berg, Principal Solutions Engineer, BeyondTrust |
| 10:00 - 10:20 | ► Beyond Human Identity: Securing AI Agents | Ioan Nascu, GenAI Security Assurance Specialist, Citi |
| 10:20 - 11:00 | ► Education Seminar 1 (delegates will be able to choose from a range of topics) | |
| 11:00 - 11:30 | Networking break | |
| 11:30 - 11:50 | ► Panel Discussion: Who Owns AI Risk? And How Do We Stay Compliant? | Simon Brady, Event Chairman |
| 11:50 - 12:10 | ► AI vs AI: Navigating the New Era of the Cyber Battlefield | Céleste Manenc, Senior Corporate Sales Engineer, CrowdStrike |
| 12:10 - 12:30 | ► Navigating AI Risks: Practical Risk Management Strategies for Security and Compliance Teams | Patrick Sullivan, VP Innovation and Strategy, A-LIGN & Tom McNamara, CEO, Atoro |
| 12:30 - 12:50 | ► Trust, Then Autonomy: A New Framework for Evaluating Agentic AI in Security | Bobby Filar, Head of AI, Sublime Security |
| 12:50 - 13:30 | ► Education Seminar 2 (delegates will be able to choose from a range of topics) | |
| 13:30 - 14:30 | Lunch and networking | |
| 14:30 - 14:50 | ► Designing Trusted AI: Secure-by-Default Architectures and AI-Enhanced SOC in Practice | Daniyal Naeem, Distinguished Engineer, Principal Security Authority - AI, BT |
| 14:50 - 15:10 | ► Social Engineering Attack Chain: A New Standard for Unified Defense | Daniel Oxley, Senior Engineer, Doppel |
| 15:10 - 15:30 | ► Humans Are the Weakest Link? Think Again | Etay Maor, VP Threat Intelligence and Founding Member of Cato CTRL, Cato Networks |
| 15:30 - 16:10 | ► Education Seminar 3 (delegates will be able to choose from a range of topics) | |
| 16:10 - 16:40 | Networking break | |
| 16:40 - 17:00 | ► Invisible Leaks: The Hidden Risks of Chatting with AI | Manit Sahib, Ethical Hacker & Former Head of Penetration Testing & Red Teaming, Bank of England |
| 17:00 - 17:30 | ► Panel Discussion: Buying AI Without Buying Risk | Simon Brady, Event Chairman |
| 17:30 | Chairman's closing remarks | |
| 17:30 - 18:30 | Drinks reception & networking | |
Education seminars
Every Guardrail Everywhere All At Once
Donato Capitella, Principal Consultant, Reversec
This talk shares practical lessons learned from hands-on testing of real-world generative AI application use cases, focusing on how security failures emerge when LLMs are integrated into production systems.
Attendees will learn:
- The most common risks identified in LLM-enabled applications and the guardrails that are frequently missing
- What "good" looks like for LLM guardrails, and how those guardrails can be evaluated in practice
- How to leverage guardrails in production as detection and response signals to defend against persistent attackers
How to Translate Your AI Policy into Enforceable Security Controls
Brett Ayres, CTO, Teneo & Palo Alto Networks
This session shows how to translate written AI usage guidelines into real, enforceable security outcomes using Palo Alto Networks’ AI security capabilities.
Attendees will learn:
- How to map AI policy intent directly to technical controls across users, apps, and data
- How to identify the right tools to detect shadow AI and policy violations
- How to enforce guardrails on GenAI access and agents without killing productivity
- How to gain continuous visibility into AI-driven data exposure and misuse
- How to move from "policy on paper" to measurable, auditable AI governance
Securing the agentic age: A practical guide
Sam Watts, Product Lead for AI Agent Security, Check Point Software Technologies
Attendees will learn:
- Practical guidance on securing agent adoption across organisations at scale
- Best practices and hard-won lessons from securing production AI use and deployments across Fortune 100 enterprises and globally important companies
- What production AI security looks like in AI applications used by millions of customers every day
Your AI deployments are outpacing your defences: how to protect AI systems in production
James Sherlow, Global VP of Sales Engineering, Wallarm
When AI comes into your organisation, APIs multiply fast. Every agent, RAG pipeline, and third-party integration creates new endpoints that your security team often doesn't know exist. This session cuts through the theory to address the three operational challenges that matter most: finding the APIs you don't know about, protecting them without slowing down development, and reporting risk in terms leadership understands.
Attendees will learn:
- the most common areas of vulnerability, exploits and breaches
- how to find the APIs that attackers can already see but you can't
- ways to quantify the security debt your AI initiatives are accumulating
From Prompts to Permissions: The new data risk model for AI
Bradley Bosher, Sales Engineer Manager, Varonis
AI assistants and agents change how data is accessed, inferred, and exposed. New attack techniques exploit prompts, retrieved content, and permissions inside trusted systems, bypassing traditional controls.
Attendees will learn:
- Emerging AI-driven data risks
- Why discovery alone isn’t enough
- How security teams can apply controls that scale with AI adoption
- Why building a defensible AI security strategy is key for EU AI Act compliance
We Have Built for a World That No Longer Exists
John Wood, Leader, Next-Gen Application Security, Contrast Security
AI is accelerating both code creation and attack capability beyond the limits of traditional application security models.
Attendees will learn:
- AI-driven development has collapsed deployment cycles while scanning programmes remain structurally slower than the code they are meant to assess
- AI-assisted attackers iterate at machine speed, rendering signature-based detection increasingly ineffective
- The result is a widening exposure gap: more unreviewed code in production, and more adaptive exploitation targeting it
Navigating AI Risk: The Security Mythos, the Ecosystem, and the Anatomy of a Breach
Ryan Rubin, Senior Managing Director - Cyber Security, Privacy and AI Security, & Ahsan Qureshi, Managing Director - Cyber Risk Advisory and AI Security, Ankura
The rush to adopt AI has created a dangerous "mythos": the belief that applying standard frameworks and delegating risk solely to the CISO are enough to keep organisations safe. While there is no shortage of theoretical frameworks, organisations are struggling to secure the actual, complex AI ecosystem at the rapid pace of adoption. This ecosystem spans infrastructure, applications, the Model Context Protocol (MCP), and third-party packages.
Furthermore, as Shadow AI evolves into autonomous agentic risks, the threat landscape is shifting under our feet. When this ecosystem fails, it fails fast, and the resulting AI breach behaves, scales, and must be remediated completely differently from traditional cyber incidents. Join us to explore practical, overarching risk strategies that move beyond theoretical checklists and prepare your entire business for the new reality of AI threats.
Attendees will learn:
- Shattering the AI Mythos: Why standard frameworks and treating the CISO as the sole "magic bullet" fall short.
- Securing AI: Identifying real-world vulnerabilities across the Ecosystem.
- The Evolution of Shadow AI: Uncovering and managing hidden, autonomous agentic risks within your environment.
- Why an AI Breach is Different: Understanding the unique anatomy, forensic challenges, and response strategies for AI-specific incidents.
- Holistic Risk Management: Building a cross-functional defence that moves past "AI governance" to practical resilience.
Agentic SOC: Data rules
Georges Bossert, Co-founder, Chief Technology and Product Officer, Sekoia.io
Agentic SOCs are becoming inevitable, yet autonomy built on weak data leads to fragile decisions. In this session, Georges Bossert (CTPO) will demonstrate why “data rules” and how to achieve reliable autonomy through a three-layer model: events, context, and cyber threat intelligence (CTI).
Attendees will learn:
- How to apply a three-layer model (events, context, CTI) to build reliable autonomous SOC capabilities
- How to design and implement TTP-guided runbooks for detection and response
- The key guardrails required for safe autonomy, including traceability, confidence scoring, and stop conditions
- How to deploy these capabilities while preserving data sovereignty, from cloud to on-prem and air-gapped environments without third-party exposure.
The AI Genie is out of the bottle
James Derbyshire, VP, Strategic Partnerships, Harmonic
The AI genie is out of the bottle, and it's now acting autonomously. With 240 AI tools per company and agents embedded across every workflow, the workforce has outpaced IT governance entirely. James Derbyshire draws on data from 22 million enterprise AI prompts to explore how agentic AI is reshaping work, where the new security risks lie, and why the answer isn't to block the genie, but to govern it.
Attendees will learn:
- Why agents change everything. They trigger, reason, act, and repeat without asking permission. That breaks every risk model built for the previous generation of AI tools.
- When agents go rogue. Deleted inboxes. A 13-hour AWS outage. A sandbox escape mining crypto. One in eight reported AI breaches now involves an autonomous agent.
- How to govern without blocking. Blanket bans failed. Learn what security-mature organisations are doing to let their workforce move fast without the exposure.