As organizations rapidly integrate Large Language Models (LLMs) into their infrastructure, traditional security practices often fall short. The probabilistic nature of Generative AI introduces unique vulnerabilities—such as prompt injection, jailbreaking, and model supply chain attacks that standard WAFs and static analysis tools simply cannot catch.
In this presentation, we will dissect the "AI Attack Surface" specifically for engineers deploying LLMs. We will move quickly past high-level theory into practical, hands-on defense strategies that DevSecOps teams can implement immediately.
Key topics include:
- The Anatomy of an AI Attack: A breakdown of how indirect prompt injection can turn an LLM into a "confused deputy," forcing it to exfiltrate data or execute unauthorized code.
- Open Source Red Teaming: A deep dive into automating vulnerability scanning. We will demonstrate how to use open-source frameworks like Garak (Generative AI Red-teaming) and PyRIT (Python Risk Identification Tool) to stress-test models against known jailbreaks before they go to production.
- Defensive Architectures: How to implement "Guardrails" effectively. We will review architectural patterns for sanitizing inputs and validating outputs to block malicious prompts without adding unacceptable latency.
- Supply Chain Hygiene: Methods to verify model integrity (hashing, signing) and ensure you aren't pulling compromised weights or pickled malware from Hugging Face repositories.
This session is designed for DevOps engineers, ML practitioners, and security professionals who need actionable steps to secure their AI workloads today.



