Organizations have invested in GPU resources that can do all kinds of cool new things. And they learned from the last 10 years of security practice, right? Right!?! Oh, no.

As GPUs become critical infrastructure for AI workloads, rendering farms, and scientific computing, we're watching security history repeat itself with a hardware twist. Multi-tenancy in cloud GPU environments brings all the isolation nightmares we thought we solved with containers and CPU virtualization — except now there's shared memory, DMA channels, and firmware to worry about.

The attack surface keeps expanding. GPU drivers rival the Linux kernel in complexity but get a fraction of the scrutiny. Kubernetes can schedule GPU workloads, but can it prevent one tenant's model training from snooping on another's? CUDA workloads get near-direct hardware access through a privileged driver stack, GPU virtualization adds another layer of "trust us," and nobody's quite sure who's responsible when the firmware hasn't been patched since 2019.
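To make the Kubernetes point concrete, here's a minimal sketch using the official Kubernetes Python client and the NVIDIA device plugin's `nvidia.com/gpu` extended resource (both assumptions about your stack): requesting a GPU is a one-line resource limit, and nothing in that request says a word about memory scrubbing, MIG partitioning, or tenant isolation.

```python
# Sketch only: assumes the official `kubernetes` Python client and the NVIDIA
# device plugin (which exposes GPUs as the extended resource "nvidia.com/gpu").
# The image name is a hypothetical placeholder.
from kubernetes import client, config

def gpu_pod(name: str, image: str) -> client.V1Pod:
    """Build a minimal pod spec that requests a single GPU."""
    container = client.V1Container(
        name=name,
        image=image,
        resources=client.V1ResourceRequirements(
            # One whole device; the spec expresses no sub-GPU or memory isolation.
            limits={"nvidia.com/gpu": "1"}
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    pod = gpu_pod("tenant-a-training", "example.com/tenant-a/trainer:latest")
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The scheduler happily places this next to another tenant's workload; everything that matters for isolation lives below the spec, in the driver, the device plugin, and the hardware.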

Meanwhile, organizations are racing to deploy GPU infrastructure without basic security controls. No monitoring for unusual GPU behavior. No isolation between workloads. Researchers are literally reading cryptographic keys from GPU memory across VM boundaries, and most security teams don't even know their organization has GPUs.
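Even a crude watchdog beats nothing. Below is a minimal sketch of that missing monitoring, assuming the nvidia-ml-py bindings (imported as `pynvml`); the utilization threshold and the `alert()` hook are placeholders, not a detection policy.

```python
# Minimal sketch of baseline GPU monitoring, assuming the nvidia-ml-py package
# (import name `pynvml`). Thresholds and alert() are placeholders.
import time
import pynvml

UTIL_THRESHOLD = 90   # % sustained utilization worth a second look
POLL_SECONDS = 30

def alert(msg: str) -> None:
    print(f"[gpu-watch] {msg}")  # stand-in for a real alerting pipeline

def watch() -> None:
    pynvml.nvmlInit()
    try:
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        known_pids: set[int] = set()
        while True:
            for idx, h in enumerate(handles):
                util = pynvml.nvmlDeviceGetUtilizationRates(h)
                if util.gpu >= UTIL_THRESHOLD:
                    alert(f"GPU {idx} at {util.gpu}% utilization")
                # Flag compute processes we haven't seen before, for example a
                # cryptominer landing on a box nobody is watching.
                for p in pynvml.nvmlDeviceGetComputeRunningProcesses(h):
                    if p.pid not in known_pids:
                        known_pids.add(p.pid)
                        alert(f"new compute process pid={p.pid} on GPU {idx}")
            time.sleep(POLL_SECONDS)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    watch()
```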

This talk will cover the new threat landscape, real-world attack vectors, and practical approaches to securing GPU infrastructure before your expensive compute cluster becomes someone else's cryptomining operation. It's also why security professionals will have a job for the next 10 years.