Nazeef Khan
With a Master's from the University of Warwick, Nazeef stays at the forefront of offensive security techniques. He holds multiple industry-recognized certifications, including the Certified Red Team Operator (CRTO), HTB Certified Penetration Testing Specialist (CPTS), and Practical Network Penetration Tester (PNPT).
A dedicated learner, Nazeef actively contributes to the cybersecurity community by sharing his knowledge through public talks and technical discussions/blogs, inspiring others to explore the field. His expertise spans across various domains, including Red team Operations, and AI security.
Session
The use of Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), is rapidly increasing across various sectors, bringing significant advancements in automating tasks, enhancing decision-making, and improving user interactions. However, this growing reliance on LLMs also introduces substantial security challenges, as these models are vulnerable to various cyber threats, including adversarial attacks, data breaches, and misinformation propagation. Ensuring the security of LLMs is essential to maintain the integrity of their outputs, protect sensitive information, and build trust in AI technologies.
This talk will examine the security vulnerabilities that are inherent in Large Language Models (LLMs), with a particular focus on injection techniques, client-side attacks such as Cross-Site Scripting (XSS) and HTML injection, and Denial of Service (DoS) attacks. Through the simulation of these attack vectors, the study assesses the responses of various pre-trained models like GPT-3.5 Turbo and GPT-4, revealing their susceptibility to different forms of manipulation.
The talk will also underscore the critical risk of these vulnerabilities, especially when exploited in a real-time corporate environment, where they can lead to significant disruptions, unauthorized access, data theft, and compromised system integrity.