Christian is a well-rounded technologist with experience in infrastructure engineering, systems administration, enterprise architecture, tech support, advocacy, and product management. He is passionate about open source and about containerizing the world one application at a time. He is currently a maintainer of the Argo Project and OpenGitOps, and is Co-Chair of ArgoCon. He focuses on GitOps practices, DevOps, Kubernetes, network security, and containers.
Presentations
Are You Ready for AI?: A Guide to Running AI Workloads Smoothly and Securely
With the rise in popularity of AI applications, enterprises are diving headfirst into development without the proper foundations in place. AI workloads are resource-intensive, and many enterprises lack robust infrastructure to handle them efficiently. These workloads often involve sensitive data, large-scale data movement, and high-performance compute nodes that require secure communication between components. Network security in Kubernetes is no longer limited to isolating services: it now includes protecting model-training pipelines, securing inter-node traffic, and enforcing policies that ensure data confidentiality and compliance. The most common challenges enterprises face while developing AI applications are over-provisioning resources, an old-school infrastructure setup (a VM-only mindset), and insufficient security to mitigate cybersecurity risks and protect their data.
In this session, we’ll walk through best practices for building optimal AI infrastructure with k0rdent and Kubernetes, leveraging Cilium to maintain stringent data security and compliance.
We’ll cover:
- Why over-provisioning resources is such a common error with AI infrastructure, and best practices for efficient use of resources for maximum ROI.
- How to move beyond a VM-only mindset to a more modern, Kubernetes/bare-metal–aware platform that can keep up with AI teams’ needs.
- How to safeguard AI workloads without sacrificing the scalability that makes Kubernetes effective in the first place.
With a proper, strong infrastructure foundation, AI workloads run smoothly, securely, and without the usual operational overhead. Whether you’re a platform engineer, an AIOps engineer, or someone who wants to get into AIOps, you’ll benefit from this talk on best practices for creating optimal and secure AI infrastructure.
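As one concrete illustration of the policy-based isolation discussed above, a CiliumNetworkPolicy that restricts which components may talk to a training workload might look like the following sketch (all names, labels, and ports are illustrative, not taken from the session materials):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: restrict-training   # hypothetical policy name
  namespace: ml             # hypothetical namespace
spec:
  endpointSelector:
    matchLabels:
      app: model-training   # pods this policy protects
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: data-pipeline  # only the data pipeline may reach the trainers
    toPorts:
    - ports:
      - port: "8080"        # illustrative service port
        protocol: TCP
```

Because Cilium policies default-deny once a rule selects an endpoint, inter-node traffic to the training pods from anything other than the labeled pipeline is dropped.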
Progressive Delivery with Argo Rollouts
In this hands-on workshop, attendees will learn the basics of using Argo Rollouts to progressively roll out apps. In this lab, you will learn how to:
* Review the Rollouts specification
* Use Rollouts to perform a blue-green deployment
* Update the image with continuous integration and observe the blue-green progressive delivery in action
* Enhance the blue-green rollout with integrated testing by applying an AnalysisTemplate
* Update the image again and review how the AnalysisTemplate works
* Use Rollouts to perform a canary deployment with progressive delivery
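A blue-green strategy like the one practiced in this workshop can be sketched as a minimal Argo Rollouts manifest (the app name, image, and Service names are placeholders, not the workshop's actual lab values):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app                      # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example/demo-app:1.0   # placeholder image
  strategy:
    blueGreen:
      activeService: demo-app-active    # Service receiving live traffic
      previewService: demo-app-preview  # Service exposing the new version for testing
      autoPromotionEnabled: false       # require manual promotion after verification
```

Updating `image` creates the "green" ReplicaSet behind the preview Service; promoting the Rollout switches the active Service over, which is the behavior the integrated `AnalysisTemplate` steps then automate.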
GitOps best practices using Argo CD
Kubernetes is a declarative-first platform where manifests written in YAML describe what resources should exist in the cluster and on the nodes. Yet many new users rely on imperative processes, running `kubectl apply` either manually or through CI, to define and change the state of their cluster. This approach is known as the “push” model, and while it works initially, it does not scale as more users adopt the platform. Without a shared understanding of the desired state, it’s impossible for teams to collaborate and make changes safely.
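The declarative, pull-based alternative can be sketched as an Argo CD Application that points the cluster at a Git repository as the single source of truth (the repo URL, path, and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git  # placeholder repo
    targetRevision: main
    path: apps/demo-app       # directory of manifests in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true             # delete resources removed from Git
      selfHeal: true          # revert drift made outside Git
```

With this in place, the desired state lives in Git where teams can review it, and the controller inside the cluster pulls and reconciles changes rather than having external actors push them.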
Pipelines: Everything Everywhere All At Once
When we talk about CI/CD, we often think of it as an end-to-end, linear process. With modern cloud native computing, this ceases to be the case. The reality is that your pipelines are hyperdimensional, with many branches that can each have their own hyperdimensions. This becomes a problem when dealing with GitOps workflows: leading open-source GitOps platforms offer little to accommodate modern deployment pipelines. This talk will zero in on an alternative: patterns that address these challenges while remaining true to established GitOps principles.
Stop using kubectl, and use Git instead! - Hands-on GitOps workshop using Argo CD and Helm
Kubernetes is a declarative-first platform where manifests written in YAML describe what resources should exist in a cluster. Many new users rely on imperative processes, running `kubectl apply`, to define and change the state of their cluster. This approach is known as the “push” model, and while it works initially, it does not scale as more users adopt the platform. Actors outside of the cluster should not be able to make changes directly. GitOps practices provide a declarative approach to defining and managing the state of Kubernetes.
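Since this workshop pairs Argo CD with Helm, the Git-managed desired state can be sketched as an Application whose source is a Helm chart in the repository (the repo URL, chart path, and values file are hypothetical, not the workshop's actual materials):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-helm-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git  # placeholder repo
    targetRevision: main
    path: charts/demo-app     # Helm chart stored in Git
    helm:
      valueFiles:
      - values-prod.yaml      # hypothetical environment-specific values
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated: {}             # sync on every Git change
```

Instead of running `kubectl apply` or `helm upgrade` by hand, you change a value or chart version in Git, and Argo CD renders the chart and reconciles the cluster to match.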



