Agenda
Presentations already confirmed include:
►Invisible Leaks: The Hidden Risks of Chatting with AI
Manit Sahib, Ethical Hacker & Former Head of Penetration Testing & Red Teaming, Bank of England
- AI Privacy Risks: How tools like ChatGPT, Claude, and Copilot can end up knowing more about you than your best friend (and never forget a thing) & The hidden dangers of casually sharing information with AI
- When Small Details Add Up: Why a few “harmless” details can combine to paint a full picture & How scattered information can reveal sensitive data without you realising
- The Myth of Security: Why AI models aren’t as secure as we might think & How attackers can trick them into spilling information
- Simple, Practical Steps: For employees: how to keep personal and company data safe & For organisations: reducing AI-related risks before they grow
►Panel Discussion: Buying AI Without Buying Risk
Simon Brady, Event Moderator
Robert Cooper, IT Security Engineering Lead, easyJet
- When you’re buying an AI product, what’s the first security concern that comes to mind?
- When an AI vendor says their product is “secure,” what do you actually want to hear from them?
- What’s the fastest red flag that makes you pause or stop an AI purchase?
- What’s one question every procurement team should ask before signing an AI contract?
- If you could give one piece of advice to someone buying AI today, what would it be?
►Designing Trusted AI: Secure-by-Default Architectures and AI-Enhanced SOC in Practice
Daniyal Naeem, Distinguished Engineer, Principal Security Authority - AI, BT
- How to design secure-by-default architectures for agentic AI systems, grounded in clear security policies and operational standards
- A practical MCP secure reference architecture
- Real-world use cases for AI-augmented SOC operations
- Key risk considerations when operationalising AI in security environments
►Panel Discussion: Who Owns AI Risk? And How Do We Stay Compliant?
Simon Brady, Event Moderator
Jonathan Armstrong, Partner, Punter Southall Law
- In practice, who owns AI risk in your organisation — and is that ownership clearly defined at executive level?
- How are you ensuring AI use across the business stays aligned with existing regulatory obligations?
- What visibility do you have over third-party, embedded, and shadow AI tools — and how does that affect your compliance posture?
- If asked to evidence AI governance to the board or a regulator tomorrow, how confident would you be?