David Willis-Owen
David Willis-Owen is the founder of AIBlade - the first blog and podcast focussed solely on AI Security. AIBlade has reached the top 200 Technology podcasts in the UK, and producing this has allowed David to gain deep technical knowledge on attacking and defending AI. David is an experienced presenter and has delivered over 20 talks on a variety of cybersecurity topics, both internally as a JP Morgan Security Engineer and externally as an Independent Security Researcher. Additionally, he has authored insightful articles for CIISec. In his spare time, David enjoys kickboxing, learning Spanish, and responsibly disclosing vulnerabilities to large organizations such as OpenAI.
Session
Indirect Prompt Injection (IPI) is a fascinating exploit. As organizations race to capitalize on the hype surrounding AI, Large Language Models are being increasingly integrated with existing back-end services. In theory, many of these implementations are vulnerable to Indirect Prompt Injection, allowing cunning attackers to execute arbitrary malicious actions in the context of a victim user. In practice, IPI is poorly understood outside of academia, with few real-world findings and even fewer practical explanations.
This presentation seeks to bridge the gap between academia and industry by introducing the Indirect Prompt Injection Methodology - a structured approach to finding and exploiting IPI vulnerabilities. By analyzing each step, examining sample prompts, and breaking down case studies, participants will gain insights into constructing Indirect Prompt Injection attacks and reproducing similar findings in other applications.
Finally, the talk will cover IPI mitigations, elaborating on why this vulnerability is so difficult to defend against. The presentation will provide practical knowledge on securing LLM applications against IPI and highlight how this exploit poses a major roadblock to the future of advanced AI implementations.