MITRE ATLAS - exploring AI vulnerabilities
2024-12-14, Rookie track 1

Just how vulnerable are the AI models we see popping up every other week? We've all heard of "jailbreaking" LLMs, but that's just the tip of the iceberg.
The rapid adoption of AI technologies opens the door to a myriad of attacks.
In this talk, we go over the MITRE Adversarial Threat Landscape for AI Systems (ATLAS for short) framework, and delve into some case studies exposing some of the most worrying AI attacks of recent years.


This is a talk about the MITRE ATLAS framework.
I'll first discuss how the ATLAS framework is built on top of the ATT&CK framework, before delving into some key differences with respect to vulnerabilities and attack vectors specific to what MITRE calls "AI-Enabled Systems".
I'll walk you through two case studies, one involving a 'good' actor and the other a 'bad' one, and show how the ATLAS framework makes investigation easier.
Finally, I'll show you how you can protect your organization against AI attacks by applying the 25 mitigations documented in the framework, which cover a range of vulnerabilities.


Please confirm that I am a first-time speaker and have not spoken in public, and will not before the BSides London event date (14th December 2024):

Yes

My name is Arthur Frost and I work for Flutter Intl. as a contractor on the blue team.
I am also studying for my MSc in Cyber Security at Leeds Beckett, and I am part of the Leeds Ethical Hacking Society, where I help with CTFs.
My interests are varied, but within security I am particularly interested in zero-days, how enterprise environments can defend themselves against advanced threats, and how AI can be leveraged both defensively and offensively.
Beyond that, I am interested in economics, history, geopolitics, literature, and languages: I am fluent in English and Russian, and I am also learning Japanese, so if you speak it, connect with me!